One Easy Strategy for the Database Enterprise

Comments: 1 Comment
Published on: September 8, 2015



Welcome to the second Tuesday of the month. And in the database world of SQL Server and the SQL Server community, that means it is time for TSQL2SDAY. This month the host is Jen McCown (blog / twitter), half of the MidnightDBA team, and the topic that she wants us to write about is: “Strategies for managing an enterprise”. Specifically, she wants to know “How do you manage an enterprise? Grand strategies? Tips and tricks? Techno hacks? Do tell.”

This month I will just be doing a real quick entry. I have been more focused on my 60 Days of Extended Events series, so I was looking for something that ties into that series really well, won't necessarily be covered in the series, but might still work as an "Enterprise" worthy topic.

So, what I decided to land on was the system_health session.


Wait, isn’t the system_health session one of those things that is configured per Instance?

Yes it is!

The system_health session is a default Extended Events session that has been running by default on every instance of SQL Server since SQL Server 2008 (keyword being default). Whether you want it to be running or not is an entirely different conversation. But by default it is running.
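A quick way to verify on any given instance is to compare the defined sessions against the currently running sessions. A minimal sketch:

-- Is the system_health session defined, and is it currently running?
SELECT ses.name,
       CASE WHEN dxs.name IS NULL THEN 'stopped' ELSE 'running' END AS run_state
FROM sys.server_event_sessions AS ses
LEFT JOIN sys.dm_xe_sessions AS dxs
       ON dxs.name = ses.name
WHERE ses.name = N'system_health';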

There is a small problem with that default though, and it lives in the 2008 and 2008 R2 flavors of SQL Server. The default behavior is that the session only dumps the events to the ring buffer. And if you are only dumping the events to the ring buffer, you can imagine this is not entirely that useful. Why? Well, the ring buffer is just a memory target and is considerably more volatile than writing the event session data out to a file. One need not try terribly hard to see why this can be frustrating (unless of course you didn't even know it was there).

So what to do to help push this in a more enterprise friendly direction? The answer is to add a file target, as was done in the 2012 (and up) flavors of SQL Server. The entire system_health session is defined in u_tables.sql (the backup script of the session deployed to the install directory).
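The full definition is rather lengthy, so here is a minimal sketch of just the file target addition for 2008/2008 R2. The file path, sizes, and rollover count are my own assumptions – adjust them for your environment:

-- Add a file target to the existing system_health session on 2008/2008 R2.
-- On these versions the file target is package0.asynchronous_file_target and
-- it requires a metadata file alongside the event file.
ALTER EVENT SESSION system_health
ON SERVER
ADD TARGET package0.asynchronous_file_target
(SET filename = N'C:\Database\XE\system_health.xel',      -- assumed path
     metadatafile = N'C:\Database\XE\system_health.xem',  -- assumed path
     max_file_size = 5,        -- MB per file
     max_rollover_files = 4);  -- rollover files to retain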

Now, with all of the session data going out to disk, you can also schedule a scraper to copy the files to a central log folder on the network. Unfortunately, placing the files directly on a UNC share (via mapped drive or via UNC naming) does not work in 2008 or R2. I have a few more configurations to run on that still, but it doesn’t look good.

At least by dumping the session data to an event file, you are closer to an enterprise worthy solution. Just remember to do it!

One last thing. After you alter the system_health session, make sure you start it again.
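Something along these lines should do it:

-- Restart the session so the new file target begins collecting.
ALTER EVENT SESSION system_health
ON SERVER
STATE = START;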


Compressing Encrypted Backups

A common requirement, whether it be based out of pure want or truly out of necessity, is to make a large, encrypted database backup file much smaller.

This was a knock against Transparent Data Encryption in its earlier days (circa SQL Server 2012). If TDE was enabled, then a compressed backup (though compression was available) was not really an option. Not only did compression fail to make a TDE-enabled database backup smaller, it occasionally made it larger.

This was a problem.  And it is still a problem if you are still on SQL 2012. Having potentially seen this problem, amongst many others, Ken Wilson (blog | twitter) decided to ask us to talk about some of these things as a part of the TSQL Tuesday Blog party. Read all about that invite here.

Encrypted and Compressed

Well, thankfully Microsoft saw the shortcoming as well. With SQL Server 2014, MS released some pretty cool changes to help us encrypt and compress our database backups at rest.

Now, instead of a database backup that could potentially get larger due to encryption and compression combined, we have a significant hope of reducing the encrypted backup footprint to something much smaller. Here is a quick example using the AdventureWorks2014 database.

In this little exercise, I will perform three backups. But before I can even get to those, I need to ensure I have a Master Key set and a certificate created. The encrypted backups will require the use of that certificate.
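A minimal sketch of that prep work might look like the following (the certificate name and password are illustrative assumptions):

-- Sandbox only: create a database master key and a certificate for backup encryption.
USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'Us3AStr0ng!Passphrase';
GO
CREATE CERTIFICATE BackupEncryptCert
    WITH SUBJECT = N'Certificate for encrypted backups';
GO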

Do this in a sandbox environment please. Do not do this on a production server.

In the first backup, I will attempt to back up the AW database using both encryption and compression. Once that is finished, a backup that utilizes the encryption feature only will be done. And the last backup will be a compression only backup. If all goes well, the three backups should show the space savings and encryption settings of each backup. The compressed and encrypted backup should also show savings equivalent to the compression only backup.
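A sketch of those three backups (the file paths are assumptions):

-- 1) Compression plus encryption
BACKUP DATABASE AdventureWorks2014
    TO DISK = N'C:\Database\Backup\AW_CompEncr.bak'
    WITH COMPRESSION,
         ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptCert);

-- 2) Encryption only
BACKUP DATABASE AdventureWorks2014
    TO DISK = N'C:\Database\Backup\AW_EncrOnly.bak'
    WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptCert);

-- 3) Compression only
BACKUP DATABASE AdventureWorks2014
    TO DISK = N'C:\Database\Backup\AW_CompOnly.bak'
    WITH COMPRESSION;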

With that script executed, I can query the backup information in the msdb database to take a peek at what happened.
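A sketch of such a query, with column aliases chosen to match the results discussed below:

-- Compare size and encryption details for the three backups.
SELECT bs.database_name,
       bs.backup_start_date,
       bs.backup_size / 1048576.0            AS BackSizeMB,
       bs.compressed_backup_size / 1048576.0 AS CompBackSizeMB,
       bs.key_algorithm,
       bs.encryptor_type
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = N'AdventureWorks2014'
ORDER BY bs.backup_start_date DESC;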

This should produce results similar to the following:


Looking at the results, I can see that the compression only backup and the compression with encryption backup show very similar space savings. The compression only backup dropped to 45.50MB and the compression with encryption backup dropped to 45.53MB. Interestingly, the encryption only backup showed a larger CompBackSizeMB (compressed_backup_size) – which, for that backup, is simply the actual size on disk.

At any rate, the compression now works with an encrypted backup and your backup footprint can be smaller while the data is protected at rest. Just don’t go using the same certificate and password for all of your encrypted backups. That would be like putting all of your eggs in one basket.

With the space savings available in 2014, and if you are using SQL 2014, why not use encrypted backups?

SQL Server and Defaults

What is that default setting?

SQL Server does a fantastic job of having numerous settings predefined for you at the server level and at the database level. Additionally, the OS has a bunch of settings that are predefined. These are the notorious default settings. Or, as some would say: "set it and forget it" settings. Sadly, the "set it" part never really happens – except during the install.

Today is the second Tuesday of the month, and that means it is TSQL Tuesday. As I mentioned in a previous article, this month the topic is hosted by Andy Yun (blog | twitter). Andy has chosen to have everybody talk about default settings in SQL Server. You can read everything Andy has said in his invite for the month.


Defaults Defaults Defaults

I could ramble on about database settings that get changed, but I have already addressed the changing of database settings – here. While I did address the changing of the settings, I did not address that it is an awesome thing to use to find out who might be changing those default settings (at the database level) that you have already set and optimized for the instance.

Or I could go on about how the OS has various settings that are far less than optimal for SQL Server. A good example would be that tree hugger setting that I talked about last month. But rather than do that, I will advise that you just read that article – here.

Or we could belabor the point about a fantastic setting that equates to NOLOCK. But that isn't a default setting, and one would need a pretty good reason to change the default isolation level to READ UNCOMMITTED. So, just read a bit about the coolness you can see in the execution plans when the NOLOCK directive is used – here.

There are just so many wonderful default settings within SQL Server (or within the Windows OS) that could be discussed or that have already been discussed.

About all of those settings, I will say this. Database Settings are not “One Size Fits ALL.” But that is what a default setting is trying to do. If you find default settings within a database, then you should probably evaluate the setting and make sure it is set appropriately.

What default to discuss then?

Rather than talk about a plethora of settings that could / should be changed, I want to speak about one I doubt most would consider. This is the default setting that can only be adjusted from between the ears!

This is a default behavior I have noticed in many DBAs. I have heard various names for this behavior, and most are probably quite accurate. Changing this default is one thing that will lead to a longer, more fulfilling career. Let me see if I can describe the behavior well enough.

Often times, companies will purchase monitoring software to help alert on problems within the environment. This is a good thing. A DBA can't be available 24x7x365. So the software should be configured and tuned for each database to allow for different tolerance/alert thresholds. This is a good thing too – if it is done. When done properly, the DBA can get a good night's rest as well as feel confident the environment is running properly.

If the software is not properly tuned and configured, the DBA will probably rectify that sooner rather than later. This is not the big behavior to change.

What I do see all too often is a complete reliance on the software that is monitoring – to a fault. I have seen hundreds of DBAs just sit and watch the pretty little dials and gauges on the screen of the software and only react when one of the items turns red. A career surely can’t be built off of this kind of behavior.

Rather than sit there and wait for something to fail, why not proactively check the servers? Why not try to build a script repository that will do everything (and more) that the monitoring software can do? While building that repository, think of the skills that will be gained and the knowledge that can be retained across jobs! In addition, I have been in some places where a script repository was able to replace the purchased software, saving hundreds of thousands of dollars per year in software maintenance costs.

One of my favorite statements is that a “Senior DBA should be able to script his own solutions!” Being able to create monitoring scripts to replace that canned app will certainly get you to that next level. It will also get you out of that default behavior of complete reliance on the canned software and imminent career stagnation.

Learn a little and grow your career.

Oh, and there are some great monitoring tools out there. They can provide a great asset to a company – if and when used properly.


Extended Events, Birkenstocks and SQL Server

TSQL Tuesday

I bring you yet another installment in the monthly meme called T-SQL Tuesday.  This is the 67th edition, and this time we have been given the opportunity to talk about something I really enjoy – Extended Events.

Props to Jes Borland (blog | twitter) for picking such an awesome topic. There is so much to cover with extended events, it’s like a green canvas ready for anything you can imagine.



I will save the explanation here for later when hopefully it all ties together for you (well, at least buckles up).


While that is all fun and playful, let’s get down to the serious side now. One of my favorite quick fixes as a consultant is to come in and find that the server is set to “environment friendly” / “green” / “treehugger” mode. You can read more about power saving cpus from my friend Wayne Sheffield here.

That poor old cpu thing has been beat up pretty good. But how can we tell if the server is running in that mode if the only thing we can do is look in SQL Server (can’t install cpu-z, or don’t have adequate permissions on the server to see windows settings – just play along)? Lucky for us there is this cool thing called Extended Events.

In SQL Server we have this cool event called perfobject_processor. This particular event captures some really cool metrics. One such metric is the frequency, which indicates whether the server's power scheme is set to balanced, high performance, or power saver. With that in mind, let's create a session to trap this data and experiment a little with the cpu settings.
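A minimal sketch of such a session follows. The session name and file path are my own assumptions, and the extra CPU information is collected here via the sqlos actions (the exact event payload can vary by version):

-- Trap processor frequency information via the perfobject_processor event.
CREATE EVENT SESSION [ProcessorFrequency]
ON SERVER
ADD EVENT sqlserver.perfobject_processor
    (ACTION (sqlos.cpu_id, sqlos.numa_node_id))  -- extra CPU info, discussed below
ADD TARGET package0.event_file
    (SET filename = N'C:\Database\XE\ProcessorFrequency.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO
ALTER EVENT SESSION [ProcessorFrequency] ON SERVER STATE = START;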

Well, that looks amazingly easy and straightforward. I am telling the session to trap the additional CPU information such as numa_node_id and cpu_id. You can eliminate those if you wish, though they may be beneficial when trying to identify whether there is an issue on a specific processor.

To experiment, I will break out the age old argument provoker – xp_cmdshell. I will use that to cycle through each of the power saving settings and look at the results. Here is the bulk of the script all together.
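A hedged sketch of that script (xp_cmdshell must be enabled; the GUIDs are the standard Windows power scheme identifiers):

-- Cycle through the power schemes via powercfg, pausing between each
-- so the event session can capture data under each scheme.
EXEC xp_cmdshell 'powercfg -setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c'; -- High performance
WAITFOR DELAY '00:00:30';
EXEC xp_cmdshell 'powercfg -setactive a1841308-3541-4fab-bc81-f71556f20b4a'; -- Power saver
WAITFOR DELAY '00:00:30';
EXEC xp_cmdshell 'powercfg -setactive 381b4222-f694-41f0-9685-ff5b8b060df1'; -- Balanced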

And now for the XE Parser.
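Here is a sketch of one; the payload field names (frequency, percent_of_max_frequency) are assumptions based on the metrics discussed in this post:

-- Read the session's event file and shred the XML for the frequency metrics.
SELECT xed.event_xml.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
       xed.event_xml.value('(event/data[@name="frequency"]/value)[1]', 'bigint') AS frequency,
       xed.event_xml.value('(event/data[@name="percent_of_max_frequency"]/value)[1]', 'int') AS pct_of_max_frequency,
       xed.event_xml.value('(event/action[@name="cpu_id"]/value)[1]', 'int') AS cpu_id
FROM (SELECT CAST(event_data AS XML) AS event_xml
      FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\ProcessorFrequency*.xel',
                                           NULL, NULL, NULL)) AS xed;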

If I parse through the extended event after each change of the power scheme, I would be able to see the effect of each scheme change in the event session as well as in a tool such as Resource Monitor. Here is what I was able to see with each of the changes.

Balanced

From Resource Monitor:


And the XE data:


This is my default power scheme. On my laptop, this is ok. For a production SQL server, this will cause problems.

High Performance

From Resource Monitor and the XE data:

Quickly, you should be able to spot that the blue line in the graph and the numeric values from the XE session correlate to the processor giving you everything it has. This is good for SQL Server.

Power Saver


See how that blue line falls off sharply?




Supporting that steep fall in the graph, we can see that the XE session trapped the percent of max frequency at 36%. You might be lucky and attain 36%. Don't be surprised if you see something even lower. Please don't use this setting on a production box – unless you want to go bald.

We can see that we have great tools via Extended Events to help troubleshoot various problems. As I said, this is one of my favorites because it is a very common problem and a very easy fix.

SQL Server is not GREEN! Do not put Birkenstocks on the server and try to turn it into a tree hugger. It just won't work out that well. Set your file servers or your print servers to be more power conscious, but this is something that will not work well on SQL Server.

Final thought. If you have not figured out the Birkenstocks: it is a common stereotype in some areas that environmentalists wear woolly socks and Birkenstocks.

No wool socks were harmed in the making of this blog post!

Monitoring SQL Server

Welcome to the fabulous world of blog parties, SQL Server and what has been the longest running SQL Server related meme in the blogosphere – TSQLTuesday.

This month we are hosted by Cathrine Wilhelmsen (blog | twitter) from Norway. And interestingly, Cathrine has asked us to talk about monitoring SQL Server. Wow! Talk about a HUGE topic to cover in such a short space. Well, let's give it a go.

I am going to try and take this in a bit of a different direction, and we shall see if I have any success with it or not.

Direction the First

Monitoring is a pretty important piece of the database puzzle. Why? Well, because you want to find out that something is happening before the end-users do. Or do you? It is a well established practice at many shops to allow the end-users to be the monitoring solution. How does this work, you ask?

It works by waiting for an end-user to experience an error or some unexpected slowness. Then the user will either call you (the DBA), your manager, the company CEO, or (if you are lucky) the helpdesk. Then, the user will impatiently wait for you to try and figure out what the problem is.

The pros of this solution involve a much lower cost of implementation. The cons, well, we won't talk about those because I am trying to sell you on this idea. No, in all seriousness, the cons of this approach could involve a lot of dissatisfaction, job loss, outages, delays in processing, delays in paychecks, dizziness, fainting, shortness of breath, brain tumors, and rectal bleeding. Oh wait, those last few are more closely related to trial medications for <insert ailment here>.

If you are inclined to pursue this type of monitoring – may all the hope, prayers, faith and luck be on your side that problems do not occur.

New Direction

This methodology is also rather cheap to implement. The risk is relatively high as well, and I have indeed seen this implementation. In this new approach, we require the DBA to eyeball-monitor the databases all day and all night.

At the DBA’s disposal is whatever is currently available in SQL Server to perform the monitoring.  It is preferred that only Activity Monitor and Profiler be used to perform these duties. However, the use of sp_who2 and the DMVs is acceptable for this type of duty.

The upside to this is that you do not incur any additional monitoring cost beyond what has been allocated for the salary of the DBA. This is an easy and quick implementation and requires little knowledge transfer or ability.

The downside here is – well – look at the problems from the last section and then add the glassed over stoner look of the 80s from staring at the monitor all day.

If you have not had the opportunity to use this type of monitoring – consider how lucky you are.  This has been mandated by several companies (yes I have witnessed that mandate).

Pick your Poison

Now we come to a multi-forked path. Every path at this level leads to a different tool set. All of these tools bear different costs and require different levels of knowledge.

The pro here is that these come with lower risk of the suspicious symptoms from the previous two options. The con is that it will require a little bit more grey matter to configure and implement.

You can do anything you would like at this level so long as it involves automation.  You should configure alerts, you should establish baselines, you should establish some level of history for what has been monitored and discovered. My recommendation here is to know your data and your environment and then to create scripts to cover your bases.
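As one small example of the automation that belongs in such a repository, here is a sketch of a SQL Agent alert (the operator name is an assumption):

-- Alert on severity 17 (insufficient resources) and notify an operator.
EXEC msdb.dbo.sp_add_alert
     @name = N'Severity 017 - Insufficient Resources',
     @severity = 17,
     @delay_between_responses = 60,
     @include_event_description_in = 1;

EXEC msdb.dbo.sp_add_notification
     @alert_name = N'Severity 017 - Insufficient Resources',
     @operator_name = N'DBA Team',      -- assumed operator
     @notification_method = 1;          -- email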

One last thought: no matter what solution you decide to implement, you should also monitor the monitor. If the DBA collapses from long hours of eyeball monitoring, who will be there to pick him/her up to resume the monitoring?

If you opt to not implement any of these options, or if you opt to implement either of the first two options, I hope you have dusted off your resume!

Security as a Fleeting Thought

Comments: 6 Comments
Published on: February 10, 2015

Today we have another installment in what is known as TSQL Tuesday.  This month we have an invitation and topic given to us by the infamous Kenneth Fisher (blog | twitter).

Today, the invitation is for us to share our stories on how we like to manage security.  Or at least that is the request that was made by Kenneth.  I am going to take a bit of a twist on that request.  Instead of sharing how I like to manage security, I am going to share some interesting stories on how I have seen security managed.

Let’s just call this a short series on various case studies in how to manage your security in a very peculiar way.  Or as the blog title suggests, how to manage your security as an afterthought.

Case Study #1

We have all dealt with the vendor that insists the user account to be used for their database and application be one of two things: either sa, or a member of the sysadmin fixed server role.  The ensuing discussion with those vendors is always a gem.  They insist the application will break, you as the diligent DBA prove otherwise, and then the senior manager sponsoring the application comes around with a mandate that you must provide the access the vendor is requesting.

Those are particularly fun times.  Sometimes, there is a mutual agreement in the middle on what security can be used and sometimes the DBA just loses.

But what about when it is not a vendor application that mandates such relaxed security for their application and database?  What if it happens to be the development group?  What if it happens to be a developer driven shop and you are the consultant coming in to help get things in order?

I have had the distinct pleasure of working in all of those scenarios.  My favorite was a client that hosted ~700 clients, each with their own database.  There were several thousand connections coming into the server and every single connection was coming in as 'sa'.  Yes, that is correct.  There were no user logins other than the domain admins group on the server – which was also added to the sysadmin security role.  That is always a fun discussion to start and finish – the look of color disappearing from the clients' eyes as they realize the severity of the problem.

Please do not attempt this stunt at home.

Case Study #2

In a similar vein, another one that I have seen far too often is the desire to grant users dbo access within a database.  While this is less heinous than granting everybody sysadmin access, it is only a tad better.  Think about it this way: does Joe from financing really need to be able to create and drop tables within the accounting database?  Does Marie from human resources need to be able to create or drop stored procedures in the HR database?  The answer to both should be 'NO'.

In another environment, I was given the opportunity to perform a security audit.  Upon looking over things, it became very clear what the security was.  Somebody felt it necessary to add [Domain Users] to the dbo role on every database.  Yes, you read that correctly.  In addition to that, the same [Domain Users] group was added to the sysadmin server fixed security role.  HOLY COW!

In this particular case, they were constantly trying to figure out why permissions and objects were changing for all sorts of things within the database environment.  The answer was easy.  The fix is also easy – but not terribly easy to accept.

Please do not attempt this stunt at home.

Case Study #3

I have encountered vendor after vendor that has always insisted that they MUST have local admin and sysadmin rights on the box and instance (respectively).  For many this is a grey area because of the contracts derived between the client and the vendor.

For me, I have to ask why they need that level of access.  Does the vendor really need to be able to back up your databases and investigate system performance on your server?  Is the vendor needed, or even engaged, to troubleshoot your system as a whole?  Or do they just randomly sign in and apply application updates without your knowledge, or perform other "routine" tasks unknown to you?

I have seen vendors change permissions and add back door accounts far too often.  They seldom if ever are capable of providing the level of support necessary when you are stuck with deadlocks by the second or blocking chains that tie up the entire server.  In addition, they are generally unavailable for immediate support when a production halting issue arises in their application – or at least not for a few hours.

This is specifically in regards to application vendors.  They are not your sysadmin and they are not your DBA.  If they must have RDP access or access to the database, put it under tight control.  Disable the account until access is requested.  When a request is made, document why the access is needed.  Then the account can be enabled, monitored, and disabled again after a specified amount of time, along the lines of the sketch below.
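A minimal sketch of that control ('VendorLogin' is a hypothetical login name):

-- Keep the vendor login disabled by default.
ALTER LOGIN [VendorLogin] DISABLE;

-- When a documented request is made, enable it for the agreed window...
ALTER LOGIN [VendorLogin] ENABLE;

-- ...and disable it again once the work is complete.
ALTER LOGIN [VendorLogin] DISABLE;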

Please do not attempt this stunt at home.

This also changes when that vendor happens to be providing you IT functionality and is not specifically tied to an application.  Those relationships are a bit different and do require a little more trust to the person who is acting on your behalf as your IT staff.


I have shared three very dangerous stunts of the kind sometimes portrayed as being performed by professionals.  Do not try this in your environment or at home.  It is dangerous to treat security with so little concern.  Security is not some stunt, and it should be treated with a little more care and attention.

If you find yourself in any of these situations, an audit is your friend.  Create some audit process within SQL Server or on the Local server to track changes and accesses.  Find out what is going on and be prepared to act while you build your case and a plan for implementing tighter security.
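As a starting point, here is a sketch of a server audit that tracks security changes (the names and file path are assumptions):

-- Audit changes to server/database role membership and server principals.
USE master;
GO
CREATE SERVER AUDIT SecurityAudit
TO FILE (FILEPATH = N'C:\Database\Audit\')
WITH (ON_FAILURE = CONTINUE);
GO
CREATE SERVER AUDIT SPECIFICATION SecurityAuditSpec
FOR SERVER AUDIT SecurityAudit
    ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP),
    ADD (SERVER_PRINCIPAL_CHANGE_GROUP),
    ADD (DATABASE_ROLE_MEMBER_CHANGE_GROUP)
WITH (STATE = ON);
GO
ALTER SERVER AUDIT SecurityAudit WITH (STATE = ON);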

Minor ailments and Healthy SQL

Comments: 1 Comment
Published on: January 13, 2015

One of the things that DBAs love to do is keep their servers running and healthy.  A healthy server, after all, is your ticket to a stress free day and a full night's sleep.  Granted, this is not a guarantee, but it sure helps make life easier.

We are always looking for the big ticket items to keep the servers tuned and purring.  But from time to time, like with humans, it’s not Ebola that takes us down but the little sniffles and minor aches and pains that keep us from doing our best.

This month as a part of the TSQL Tuesday party, Robert Pearl (blog | twitter) has asked us to write about Healthy SQL and the things that make SQL go yum.

Nagging Cough

Like that tickle in the back of the throat, that makes you cough now and again, there is an occasional tickle that can bug SQL Server and cause some pain here and there.  I have written about this nagging cough in the past.

The problem with this ailment is that, since it is not always a cold, or always present, it is oftentimes missed and frequently hard to diagnose.  I talked about this previously as one of those things that should be checked from time to time.  This is that nagging synonym* cough.  You can read about it here.

That’s great, we can take a little cough syrup, fix some synonyms and feel a lot better about SQL Server in the morning.

Aching Joints

Ever have one of those trick knees that decides to give out on you out of the blue?  You might be walking up the stairs (or down the stairs) and suddenly the knee is gone and you end up flat on your face.  And it could be great most of the time; it's just that once in a while the knee decides to give a little twinge of pain, fold up, and drop you to the floor.

SQL Server has a similar problem with this next one.  I frequently come across, within SQL Server, a bad knee in the form of a linked server.  Sure, I can hear you saying that linked servers are always bad.  Fair enough!  I have seen linked servers work wonderfully, but most of the time they cause pain.

The type of pain with a linked server I would like to share today is around more of an edge case (like standing on the edge of the stairs with your knee about to give way).

As a good DBA would want to do, you may want to restore your databases to a test environment on a routine basis to ensure the backups worked and that you have a functioning recovery point for your databases in the event of a disaster.  I was working on just that sort of thing for a client recently when I ran into this beautifully pain filled knee.

This customer had not one, not two, not even three instances of this problem.  They had a glorious 492 instances of this problem.  The vendor for this client decided the best thing to do would be to replicate the production database to a separate database to help offload performance.  This other database happens to be on the same server (so no performance offload).  While that is not the most intelligent thing to do, it is not the end of this ailment.

In addition to the replication, there were 492 views that utilized linked servers to UNION ALL data between the two databases.  The data in each table between the two databases (in this case) is the same.  So we have a linked server to UNION ALL this data between two databases on the same server that is replicated.  Wowza!

Now, due to this linked server proliferation in the views to get to data the long way around, when restoring this database to a test environment there is a lot of cleanup work left to be done.  After all, the restore of the database is only a piece of the healthy backup puzzle.  You would want to test the data and the application against it.

Gratefully, this kind of cleanup can be made easy with a simple search and replace while querying sql_modules to find any views or stored procedures that need to be updated to work in the test environment.  Here is an example of such a script to help fix that problem.
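A sketch of that search and replace (the linked server name [PRODSERVER] and the empty replacement are illustrative assumptions):

-- Find modules referencing the linked server and generate updated code.
SELECT OBJECT_SCHEMA_NAME(sm.object_id) AS SchemaName,
       OBJECT_NAME(sm.object_id)        AS ObjectName,
       o.type_desc                      AS ObjectType,
       REPLACE(sm.definition, N'[PRODSERVER].', N'') AS ModCode -- strip the linked server
FROM sys.sql_modules AS sm
INNER JOIN sys.objects AS o
        ON o.object_id = sm.object_id
WHERE sm.definition LIKE N'%\[PRODSERVER\]%' ESCAPE N'\';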

Taking the results from a query such as that, I can either change the views en masse or copy all of the results from the ModCode column and paste them into a new window.  From there, using a regex (to replace all GO statements with GO followed by a newline) or something like SQL Prompt to prettify the code would make it more useful.


Of course, this only helps address the issue with the linked servers in the views.  The same problem would exist with stored procedures.  It is up to the DBA to know which ones need to be changed and which should not be changed.  Just understand that linked servers present yet another nagging symptom that keeps your server from being healthy – and worse, they help keep you from being healthy (remember the lack of sleep they cause).

If you are interested, I also have this article to help you find those pesky linked servers before you start diving down the restore rabbit hole.  The article will help evaluate the scope of linked server use and has a query to help identify linked servers.
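A quick sketch of such a query:

-- List linked servers defined on the instance.
SELECT s.name, s.product, s.provider, s.data_source
FROM sys.servers AS s
WHERE s.is_linked = 1;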

*Funny afterthought: both of these niggles that can decrease the health of your SQL Server have ties back to linked servers.  If you read the links, you will see what I mean.

T-SQL Tuesday 61: A Season of Giving

Comments: No Comments
Published on: December 9, 2014

Tis the season for TSQL Tuesday.  Not only is it that season again, but it is also the Holiday season.

During this season, many people start to think about all of the things for which they are thankful.  Many may start to think about their families and friends.  And many others will focus more of their attention to neighbors and other people in the community.  This is often done regardless of how well you may know the people or in spite of ill feelings that may exist for the people at other times of the year.

Yes, it is a good time of the year.  And to top it off, we may even get to enjoy snow during this time of year while we sip hot cocoa, learn SQL and eat pies of many different sorts.  Yes! It is a glorious time of the year.

I already have a couple of SQL books to read as I cozy up close to the fire with my children near.  (Oh yes, it is never too early to learn SQL.)  A little SQL roasting on the open fire so to speak.  Awesome time of year.

With all that is going well and all the SQL I can be learning, it is also a Season of Giving.  It is probably because of the time of year that Wayne Sheffield was prompted to invite all of us to write about that topic for this month's TSQL Tuesday.  You can read the invite here.

But thinking about that topic and the time of year, I wanted to talk briefly about some ways I know the SQL Community gives back.

Doctors without Borders

A really well known opportunity this past year that helped people give back to the community was hosted by Argenis Fernandez (twitter) and Kirsten Benzel (twitter) – here.  The two of them had this fantastic idea to involve the SQL Family in driving a fundraiser for Doctors without Borders.  They had publicized various goals to make it fun and achieved a lot of those goals.  This was an event I would like to see again, and it was one that accomplished a lot of good.

Christmas Jars

Each Christmas season there is a phenomenon associated with a book called Christmas Jars.  People from all across the country anonymously donate a jar to somebody in the community who may have hit a stretch of hard luck.  In the jar is a variable amount of money for the family to use to help with whatever they need during that time.  You can read more about that here and here.

The Christmas Jars is something that my children do each year.  They find a family somewhere in town and find a way to get the jar to the family anonymously.  The amount of money is never the same, but the intent and love is always the same.  They are doing it to help their neighbors without any publicity.  They know the good that is brought from the love they show to their neighbors.

Watching my children participate in a worthwhile way to give makes me happy.  I hope it is something that will stick with them throughout their lives.


All of that said, the TSQL Tuesday invite asked for what we plan to do in the upcoming year for the SQL community.  This is a really hard topic to answer.  It kind of depends on what opportunities become available in the upcoming year.  I can say this though, I do plan to continue to help and give where I can.

TSQL Tuesday #60: Something Learned This Way Comes

Comments: 5 Comments
Published on: November 11, 2014

It is once again time to come together as a community and talk about a common theme.  This monthly gathering of the community has just reached its 5th anniversary.  Yes, that's right.  We have been doing this for 60 months, or five years, at this point.  That is pretty cool.

This month Chris Yates (blog | twitter) has taken the helm to lead us in our venture to discuss all the wonderful things that we have learned.  Well, maybe not all the things we have learned, but at least to discuss something we have learned.

Here are some details from the actual invite that you can read here.

Why do we come to events, webinars, sessions, networking? The basic fundamental therein is to learn; community. With that said here is this month’s theme. You have to discuss one thing, few things, or many things on something new you’ve learned recently. It could be from a webinar, event, conference, or colleague. The idea is for seasoned vets to new beginners to name at least one thing; in doing so it might just help one of your fellow SQL friends within the community.

The topic is straightforward but can be a bit difficult at times.  It is a pretty good topic to try and discuss.  I know I have been struggling for content for it, which makes it that much better, because it provides a prime example of how to think about and discuss some pretty important things while trying to compile them into a recap of one's personal progress.

Let’s think about the topic for a bit and the timing of the topic.  This comes to us right on the heels of PASS Summit 2014 and in the middle of SQL Intersections in Las Vegas.  We might as well throw in there all of the other things like SQL Saturdays that have been happening leading up to and following those major conferences.

There has been ample opportunity over the past few weeks to learn technical content.  When networking with people there are ample opportunities at these major conferences to also learn about other people and about one’s self.  A good example of that can be seen in a blog post I wrote while attending PASS Summit, which you can read here.

The biggest learning opportunity that evolved from PASS Summit 2014 for me was the constant prodding in various sessions to break out the debugger and become more familiar with what is happening in various cases.  I saw the debugger used in three of the sessions I attended.  There are some great opportunities to learn more about SQL Server by taking some trinket of information from a session and trying to put it to use in your development environment.  This is where learning becomes internalized and gives a deeper understanding.

I hope you have been able to pick up on some tidbit that can be used to your advantage to get a deeper understanding of SQL Server.

T-SQL Tuesday #58 – Security Phrases

Comments: 2 Comments
Published on: September 9, 2014

Today is once again TSQL Tuesday.  This month the event and topic are being hosted by Sebastian Meine (blog | twitter).  You can read all about the topic this month on his blog inviting all to participate.

Despite Sebastian being a real cool kid, I was not too hip to the topic this month.  Not because it is a bad topic or anything; it's just that I really had nothing that seemed to stand out as easy to write for the blog party.

Then all of a sudden, a nice fat, juicy pork chop landed right in my lap.

The Pork Chop

A client requested that I make some changes to a task on a development server for them.  As it happens, the task is a PowerShell script that was being run on a schedule via a Scheduled Task in the Windows "Scheduled Tasks" control panel.  Making the requested change is a no-brainer – or it should have been.

The change was to switch the owner/executor of the scheduled task to the service account for the SQL Server service.  By doing that, the job would be less likely to fail in the future due to an employee leaving the company.

As luck would have it the client DBA happened to know the password for the service account.  When changing the task to use the service account with the supplied password, we soon discovered that the supplied password caused the service account to become locked.  OUCH!

Maybe it was just fat fingered?  Nope, no dice!  As it turns out, the DBA had the incorrect password and did not know the correct one.  Worse, nobody else knew what the correct password was.  Due to this issue, I proposed that the sysadmins and I work together to get the password changed.  That is to be done at a future date.

In addition to this, we decided that the passwords need to be more accurately documented.  These should be stored in an encrypted vault (the application is your choice).  But the mere use of an encrypted vault is far better than the use of a sticky note to document passwords (and I have seen that far too often at client sites).

This is just a short and sweet post for the day.  I think it demonstrates the problems and risks that can arise from bad password management.  In our case, it was at least a Dev server with minimal users.
