Las Vegas UG This Week

Comments: No Comments
Published on: January 5, 2015

What a fun week we have lined up for the folks in Las Vegas.  It is the first full week of January, and there is a huge convention going on near the strip.  As timing would have it, this week is also the perfect time to have our User Group meeting.

What major conference is going on this week, you ask?  Why, it is time for 200,000 people to invade the strip, all of the major convention centers (well, nearly all of them), and every last hotel room (as seen by the tripling in price for hotel rooms starting Tuesday night – unless you go to the Cosmo, where the room rate increased 20x this week).  That conference happens to be CES (the Consumer Electronics Show).

If you happen to be in town for CES, live here in Vegas or are just in town this week, we invite you to come join us at the Las Vegas UG meeting on Thursday.

January SQL Invite

We have the full details for the meeting on our meetup site.  This month we have an interactive session where Wayne Sheffield will walk you through some nifty tips and tricks for Management Studio.  Read the full details on the meetup site and come visit us in person or via our Live Meeting.  We’d love to see you.

Update:

We are swapping presenters this month for the chance to have a presenter in person.  Randy Knight will be in the area and has asked to present.  Thanks to Wayne for being understanding.  Please check the meetup site for more details.

Effects of Max Mem on Plan Cache

While reading a Microsoft article (which can be found here) to make sure I understood what can cause a query plan to be recompiled or removed from the plan cache, I got to thinking a bit.

The article lists the following things that can cause a plan to be removed from cache or to be recompiled.

  • Execution plans remain in the procedure cache as long as there is enough memory to store them. When memory pressure exists, the Database Engine uses a cost-based approach to determine which execution plans to remove from the procedure cache. To make a cost-based decision, the Database Engine increases and decreases a current cost variable for each execution plan.
  • Changes made to a table or view referenced by the query (ALTER TABLE and ALTER VIEW).
  • Changes made to a single procedure, which would drop all plans for that procedure from the cache (ALTER PROCEDURE).
  • Changes to any indexes used by the execution plan.
  • Updates on statistics used by the execution plan, generated either explicitly from a statement, such as UPDATE STATISTICS, or generated automatically.
  • Dropping an index used by the execution plan.
  • An explicit call to sp_recompile.
  • Large numbers of changes to keys (generated by INSERT or DELETE statements from other users that modify a table referenced by the query).
  • For tables with triggers, if the number of rows in the inserted or deleted tables grows significantly.
  • Executing a stored procedure using the WITH RECOMPILE option.

The first item removes a plan from cache, while the rest mark the plan as invalid, forcing a recompile on the next execution of the SQL statement.  But then the question comes: why make the distinction between the two?  Doesn’t removing a plan from cache in essence equate to a recompile further down the road when the query is executed again?  While semantically it is not a recompile, it sure sounds a lot like one.  The query was compiled, invalidated and removed from cache, and then compiled again when executed the next time around.

When thinking about all of this, how can we see that memory pressure can “invalidate” query plans?  That is actually quite simple.  We can see the same net effects with the following experiment.

WARNING: DO NOT DO THIS ON ANY ENVIRONMENT THAT IS NOT A SANDBOX ENVIRONMENT.

In this setup, I have a Sandbox server with a simulated load that keeps plenty of plans in cache and SQL Server is happy with about eight gigabytes of memory allocated to it.  To show that memory can cause a bunch of plans to be invalidated (and removed from the plan cache), I am going to take memory away from SQL Server and in essence simulate a memory leak or memory pressure.
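Something along these lines will do the trick (a minimal sketch; the 256 MB value is illustrative and intentionally abusive, so sandbox only):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Take memory away from SQL Server (illustrative value - this WILL hurt).
EXEC sp_configure 'max server memory (MB)', 256;
RECONFIGURE;

-- Confirm that the new setting is already in use.
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'max server memory (MB)';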

This shows that the settings have taken effect on SQL Server, all without a service restart (despite the myth I keep hearing that a restart is required).

[Image: Max Memory in SQL Server]

Seeing that the memory settings take immediate effect, we can turn our attention to the query that was run.  In the query to experiment with this, I took a count of the number of plans in the plan cache first, then changed the max memory, and then took a count of the number of plans in cache again.  This next image shows the results of those two plan count queries.  It is obvious here that the number of plans in cache was severely impacted by this change in max memory when taking TOO much memory away from SQL Server (this is why you should only do this on a sandbox server).
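Reconstructed as a sketch, the experiment looks something like this (plan counts come from sys.dm_exec_cached_plans; the memory change is the sp_configure script shown above):

-- Count of plans in cache before the change.
SELECT COUNT(*) AS PlanCountBefore
FROM sys.dm_exec_cached_plans;

-- Drop max server memory (see the sp_configure script above).
EXEC sp_configure 'max server memory (MB)', 256;
RECONFIGURE;

-- Count of plans in cache after the change.
SELECT COUNT(*) AS PlanCountAfter
FROM sys.dm_exec_cached_plans;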

[Image: Max Mem effect on Plan Cache]

If you happen to take too much memory away from SQL Server, you could end up in a severe memory pressure situation where connections are limited.  Even trying to increase max memory could be severely hampered due to insufficient resources in the default resource pool.  Here is a blog post that demonstrates the effects on connections due to this error (even as bad as SQL Server not starting up).

Since the problem is in the default resource pool, you could try a connection to the DAC and modify the max memory there (a sketch of that follows below).  If the pressure is severe enough, you may be limited in your ability to connect to the DAC as well.  In that case, you may need to start up in single user mode as referenced in that prior blog post.  Sometimes, it is enough to reduce the number of connections or resource requirements to regain access to the server and change the max memory appropriately.  Suffice it to say, do not run this on a production box unless you are prepared to cause a bit of an outage.  The intent here is solely to share that max memory can have an immediate impact on compiles and plans in cache.
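The DAC attempt amounts to something like this (connect with sqlcmd using the -A switch; the 8192 value is illustrative):

-- From a DAC connection (e.g., sqlcmd -S YourServer -E -A):
EXEC sp_configure 'max server memory (MB)', 8192;  -- illustrative value
RECONFIGURE;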

Happy Holidays

Comments: No Comments
Published on: December 26, 2014

Quick and simple for today.  Happy Holidays to you!

[Image: Happy Holidays]

Murder in Charleston

I am about to set sail on a new venture with my next official whistle stop.  This year has been plenty full of whistle stops and I plan on continuing.  You can read (in full) about previous whistle stops and why they are called whistle stops here.

Suffice it to say at this point that it all started with a comment about a sailing train a few months back.

[Image: Charleston Friend]

Time to sink or sail, so to speak.  SQL Saturday 354 in South Carolina will mark the next attempt at what I hope to be a repeat performance – many times.  I will be tag-teaming with Wayne Sheffield in this all day workshop event.  The session is one of two all day sessions for the event in Charleston, SC.

If you are a DBA or a database developer, this session is for you.  If you are managing a database and are experiencing performance issues, this session is a must.  We will chat with attendees about a horde of performance killers and other critical issues we have seen in our years working with SQL Server.  In short, some of these issues are pure murder on your database, DBA, developer and team in general.  We will work through many of these things and show some methods to achieve a higher state of database Zen.

Description

Join Microsoft Certified Masters, Wayne Sheffield and Jason Brimhall, as they examine numerous crazy implementations they have seen over the years, and how these implementations can be murder on SQL Server.  No topic is off limits as they cover the effects of these crazy implementations from performance to security, and how the “Default Blame Acceptors” (DBAs) can use alternatives to keep the developers, DBAs, bosses and even the end-users happy.

Presented by:

[Image: Wayne Sheffield]
Wayne Sheffield, a Microsoft Certified Master in SQL Server, started working with xBase databases in the late 80’s. With over 20 years in IT, he has worked with SQL Server (since 6.5 in the late 90’s) in various dev/admin roles, with an emphasis on performance tuning. He is the author of several articles at www.sqlservercentral.com, a co-author of SQL Server 2012 T-SQL Recipes, and enjoys sharing his knowledge by presenting at SQL PASS events and blogging at http://blog.waynesheffield.com/wayne

[Image: Jason Brimhall]
Jason Brimhall has more than 10 years of experience with SQL Server and has worked with versions from 6.5 through SQL 2012. He has experience in performance tuning and in high-transaction and large environments.  Jason also has 18 years of experience in IT, working with hardware, the OS, the network and even the plunger (ask him sometime about that). He is currently a Consultant, a SQL Server MVP and a Microsoft Certified Master (MCM). Jason is the VP of the Las Vegas User Group (SSSOLV).

Course Objectives

  1. Recognize practices that are performance pitfalls
  2. Learn how to remedy the performance pitfalls
  3. Recognize practices that are security pitfalls
  4. Learn how to remedy the security pitfalls
  5. Demos Demos Demos – scripts to demonstrate pitfalls and their remedies will be provided
  6. Have fun and discuss
  7. We might blow up a database


There will be a nice mix of real world examples and some painfully contrived examples. All will have a good and useful point.

If you will be in the area, and you are looking for high quality content with a good mix of enjoyment, come and join us.  You can find registration information and event details at the Charleston SQL Saturday site – here.  There are only 75 seats available for this murder mystery theater.  Reserve yours now.

The cost for the class is $110 (plus fees) up through the day of the event.  When you register, be sure to tell your coworkers and friends.

Wait, there’s more…

Not only will I be in Charleston for this workshop, but we will also be presenting as part of the SQLSaturday event on December 13, 2014 (the day after the workshop, which is December 12, 2014).  You can view the available sessions here.

Shameless plug time

I present regularly at SQL Saturdays.  Wayne also presents regularly at SQL Saturdays.  If you are organizing an event and would like to fill some workshop sessions, please contact Wayne, me, or both of us about this session.

Audit Database File Size Changes


Recently I shared an article on how to track the growths and shrinks that occur within database files.  I shared that article here; you should read it before proceeding with today’s article.

Near the end of that article I declared that it was a really good method to track that information within SQL Server 2008.  What if you happen to be on SQL 2012 or SQL 2014 (as of this writing)?  Will it work there too?

Does it Work Past 2008?

It is a good question.  Does that extended event session I shared work in SQL Server 2012 or 2014?  Let’s take a quick look.  I am skipping the setup scripts for that XE session in this article, so you will need to get them from the previous article in order to follow along in this segment.

Before we can investigate if this extended event session will work, let’s take a quick look to confirm that the session is running on the server.  We can do that with a query similar to the following.
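Something similar to this will do (a sketch; swap in whatever you named the session in the previous article — “DBFileSizeChange” here is a stand-in):

SELECT ses.name AS session_name,
       CASE WHEN xs.name IS NULL THEN 'Stopped' ELSE 'Running' END AS run_state
FROM sys.server_event_sessions ses
LEFT OUTER JOIN sys.dm_xe_sessions xs
    ON ses.name = xs.name
WHERE ses.name = 'DBFileSizeChange';  -- stand-in for the session name from the prior article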

Running that query will produce results very similar to the following.

[Image: SessionCheck]

This is good news.  If you noticed, I am querying a couple of views to get this information.  In the server_event_sessions catalog view, I can determine if the event session exists.  With the dm_xe_sessions DMV, I can see whether the session is running by whether it exists in the view: when the session is enabled and running, the DMV returns a record for it; otherwise, it does not.

With a session running, we can now validate if it is running properly by running any script that will cause the files to grow or shrink.  Recall that in the previous article, it was shown that any growth or shrink operation will cause an event to fire with this session.  Here is a sample of the query I am running.
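In sketch form (Sandbox2 is the database from the previous article; the file name and target size are illustrative):

USE Sandbox2;
GO
-- Segment 1: check the file sizes (size is reported in 8 KB pages).
SELECT name, size / 128.0 AS size_mb
FROM sys.database_files;

-- Segment 2: shrink a file, which fires the size-change event.
DBCC SHRINKFILE (Sandbox2, 100);  -- file name and target MB are illustrative

-- Segment 3: check the file sizes again.
SELECT name, size / 128.0 AS size_mb
FROM sys.database_files;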

That query has three distinct segments.  The first is just to check my file sizes.  The second segment performs my file shrink operations.  And the final segment checks the file sizes again.  Here is what the first and third segments would look like on the Sandbox2 database that I used in the previous article.

[Image: filesizes_vold]

With the evidence that we have the session running and that there was indeed a file size change, let’s now check the event session data and confirm whether or not the session is working on this SQL 2014 server.
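Assuming the session from the previous article writes to the ring_buffer target, the check looks something like this:

SELECT CAST(targ.target_data AS XML) AS session_data
FROM sys.dm_xe_sessions xs
INNER JOIN sys.dm_xe_session_targets targ
    ON xs.address = targ.event_session_address
WHERE xs.name = 'DBFileSizeChange'  -- stand-in session name
  AND targ.target_name = 'ring_buffer';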

The preceding is the query I am using to query the session data.  Running that query will produce the following results.

[Image: nodata]

That’s right!  There is no session data, despite the event having occurred and despite the session currently running.  We also know that this session works (we demonstrated it on SQL 2008).  So there must be a bug; something is broken.  Right?

What Now?

Since the extended event obviously no longer works, we are left with few options.  We could always try resorting to the default trace.  After all, I demonstrated that the default trace is already trapping information about file shrinks.  That was discussed in the prior article and here as well.

So, what if we tried to go and capture all of the same information from the default trace?  We could certainly try that.  Assuming that the default trace is still running on the server, this query could get us pretty close.
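A sketch of that attempt (events 92–95 are the data/log file grow and shrink events, and 116 is the DBCC event):

DECLARE @path NVARCHAR(260);
SELECT @path = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT tg.StartTime,
       tg.DatabaseName,
       tg.[FileName],
       te.name AS event_name,
       (tg.IntegerData * 8) / 1024.0 AS change_mb,  -- IntegerData is in 8 KB pages
       tg.TextData                                  -- only populated for event 116
FROM sys.fn_trace_gettable(@path, DEFAULT) tg
INNER JOIN sys.trace_events te
    ON tg.EventClass = te.trace_event_id
WHERE tg.EventClass IN (92, 93, 94, 95, 116)  -- file grows/shrinks plus DBCC
ORDER BY tg.StartTime;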

And this does a fairly decent job of getting the info we seek.  Sadly, though, it does not trap all of the necessary information.  Only the DBCC event (event 116) traps the SQL statement that triggered the event to be recorded in the default trace.  But for the most part it can be a decent swing at getting the information.  Without the SQL statements tied to the event, I’d rather not use it, because it really just shows me how many times the size changed, what time the event occurred, and the size of the change.

There has got to be some other way of getting this to work in extended events.  A good question to ask is “Why does the extended event no longer work?”

A little digging, and one might eventually find a document that can shed some light on the problem.  Reading this document, we can see why the event no longer works.  It has been deprecated.  What?  After one release, they decide to take away a critical piece of information?  How can that be?

Time to back up those findings with something a little more authoritative such as this.  Looking at this article, we see that indeed the event was deprecated.  But wait a minute, the event was not just deprecated, it was also replaced with a new event.  We are in business so let’s do some querying within event sessions.

Back in Business

We probably could have spared some time by checking the available events in SQL Server with this next query.  However, the events used in the previous event session still exist.  If they did not exist, the session creation would have failed.  This can be a bit misleading, so it is good to have the information from Microsoft that the events have been deprecated and merged into a single event.
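That check is a simple query against the XE metadata, along these lines:

SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects o
INNER JOIN sys.dm_xe_packages p
    ON o.package_guid = p.guid
WHERE o.object_type = 'event'
  AND o.name LIKE '%file_size%';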

This produces the desired results with the new event name specified in that Microsoft article.

[Image: 2012filesizeevent]

Based on this information, a rewrite of the extended event session is possible and necessary.  We can update the extended event session that audits when a database file changes in size.  This will look something like the following session.
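A sketch of the rewritten session, using the replacement event (the session name, action and target here are my choices, not gospel):

CREATE EVENT SESSION [DBFileSizeChange2012] ON SERVER
ADD EVENT sqlserver.database_file_size_change
(
    ACTION (sqlserver.sql_text)  -- capture the statement that caused the change
)
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [DBFileSizeChange2012] ON SERVER STATE = START;
GO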

And since I happened to have that session also running at the same time as the shrinkfiles that were run previously in this article, I can go ahead and check to see if anything was captured.  To check the session data, I will use the following query.
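Something like the following, shredding the ring_buffer XML (the field names are the ones I expect on the new event — verify them against sys.dm_xe_object_columns on your build):

;WITH SessionData AS
(
    SELECT CAST(targ.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions xs
    INNER JOIN sys.dm_xe_session_targets targ
        ON xs.address = targ.event_session_address
    WHERE xs.name = 'DBFileSizeChange2012'
      AND targ.target_name = 'ring_buffer'
)
SELECT ev.x.value('(@timestamp)[1]', 'DATETIME2') AS event_time,
       ev.x.value('(data[@name="file_name"]/value)[1]', 'NVARCHAR(256)') AS file_name,
       ev.x.value('(data[@name="size_change_kb"]/value)[1]', 'BIGINT') AS size_change_kb,
       ev.x.value('(action[@name="sql_text"]/value)[1]', 'NVARCHAR(MAX)') AS sql_text
FROM SessionData sd
CROSS APPLY sd.target_data.nodes('RingBufferTarget/event') AS ev(x);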

In this new event for 2012 and beyond, different data is captured.  This means that I have access to better information about what is happening to my database files with regards to size changes (growths and shrinks).

Conclusion

If you happen to be running on SQL Server 2012 or later, you will need to change your event sessions that were tracking file size changes.  It is a bit of an exercise to make the change, and it can be frustrating, but it is well worth it.  The improved data that can be captured will help you better control and oversee the environment.

TSQL Tuesday #60: Something Learned This Way Comes

Comments: 4 Comments
Published on: November 11, 2014

[Image: TSQL2sDay]
It is once again time to come together as a community and talk about a common theme.  This monthly gathering of the community has just reached its 5th anniversary.  Yes, that’s right.  We have been doing this for 60 months, or five years, at this point.  That is pretty cool.

This month Chris Yates (blog | twitter) has taken the helm to lead us in our venture to discuss all the wonderful things that we have learned.  Well, maybe not all the things we have learned, but at least to discuss something we have learned.

Here are some details from the actual invite that you can read here.

Why do we come to events, webinars, sessions, networking? The basic fundamental therein is to learn; community. With that said here is this month’s theme. You have to discuss one thing, few things, or many things on something new you’ve learned recently. It could be from a webinar, event, conference, or colleague. The idea is for seasoned vets to new beginners to name at least one thing; in doing so it might just help one of your fellow SQL friends within the community.

The topic is straightforward but can be a bit difficult at times.  It is a pretty good topic to try to discuss; I know I have been struggling for content for it.  That makes it that much better, because it provides a prime example of how to think about and discuss some pretty important things while trying to compile them into a recap of one’s personal progress.

Let’s think about the topic for a bit and the timing of the topic.  This comes to us right on the heels of PASS Summit 2014 and in the middle of SQL Intersections in Las Vegas.  We might as well throw in there all of the other things like SQL Saturdays that have been happening leading up to and following those major conferences.

There has been ample opportunity over the past few weeks to learn technical content.  When networking with people there are ample opportunities at these major conferences to also learn about other people and about one’s self.  A good example of that can be seen in a blog post I wrote while attending PASS Summit, which you can read here.

The biggest learning opportunity that evolved from PASS Summit 2014 for me was the constant prodding in various sessions to break out the debugger and become more familiar with what is happening in various cases.  I saw the debugger used in three of the sessions I attended.  There are some great opportunities to learn more about SQL Server by taking some trinket of information from a session and trying to put it to use in your development environment.  This is where learning becomes internalized and gives a deeper understanding.

I hope you have been able to pick up on some tidbit that can be used to your advantage to get a deeper understanding of SQL Server.

Summit 2014 – Next Impressions

Comments: 4 Comments
Published on: November 7, 2014

As Summit 2014 begins to wind down, it is time for some more impressions from the week.  The week has been good so far.  It has been very busy and can also be quite a drain, mentally and physically, from everything that has transpired.

If you are interested, I have written about some of my other impressions from the week, here.

Several years ago, I blogged about an incident of plagiarism, in both an original post and a follow-up.  I bring that up not to rehash the negative, but instead to discuss an impression from this week.  If you read the follow-up, you will see that I had a chat, at that time, with Steinar (twitter) about the problem and how to resolve it.  I met Steinar for the first time this week.  And to be honest, I had forgotten about the conflict and had removed the RSS feed since the original domain had gone down.

Anyway, Steinar and I had the chance to chat for the first time face to face this week.  Steinar, in my opinion, is a pretty cool guy that made a simple RSS mistake.  The impression is that he remembered me for how I treated him several years ago and was very appreciative of that.  How cool is that?  I really appreciate the opportunity to chat with him and that something I did left a positive impact on him.

Another opportunity at Summit is to perform random acts of kindness or service.  Much like helping Paul White learn how to use a smart phone, I had the even more rare opportunity to help Kalen Delaney (blog | twitter) out of a sticky situation.  It was a minor but frustrating thing that all of us run into from time to time.  The zipper on her Surface case had become stuck, so I helped her with that.  It’s a little thing, but it is the type of thing that, if you are watching, you will see happening all over the place during the week of PASS Summit.

So, the next time you are at Summit, and while back in your local communities after the week has ended for Summit, keep an eye out for those little acts of kindness.  But at the same time, keep an eye out for those that might be watching you.  What kind of impression are you leaving for them or for the SQL Community?

Ghosts – an eXtrasensory Experience

This is the last article in a mini-series diving into the existence of ghosts and how to find them within your database.

So far this has been a fun and rewarding dive into Elysium to see and chat with these entities.  In the previous articles, we unearthed some means of seeing these things manifest themselves.  You can take a look at the previous articles here.

For this article, I had planned to discuss another undocumented method to look into ghost records and their existence, based on what was said on an MSDN blog.  But after a lot of research, testing, and finally reaching out to Paul Randal, I determined that it won’t work.  So that idea was flushed all the way to Tartarus.

Let it be made very clear that DBTABLE does not offer a means to see the ghosts.  Paul and I agree that the other article that mentioned DBTABLE really should have been referring to DBCC Page instead.

Despite flushing the idea to Tartarus, it was not a fruitless dive.  It was just meaningless for the purpose of showing ghosts via that DBCC command.  I still gained value from the dive!!

All of that said, the remainder of the plan still applies and it should be fun.

Really, at this point what is there that hasn’t been done about the ghosts?  Well, if you are well tuned to these apparitions, you may have received the urge to explore them with Extended Events – sometimes called XE for short.

As has been done in the past, before we board Charon’s boat to cross the River Styx to Hades to find these ghosts in Elysium, one really needs to run the setup outlined here.

With the framework in place, you are now ready to explore with XE.
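A quick peek at the event metadata is all it takes:

SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects o
INNER JOIN sys.dm_xe_packages p
    ON o.package_guid = p.guid
WHERE o.object_type = 'event'
  AND o.name LIKE '%ghost%';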

Look at that! There are several possible events that could help us track these ghosts.  Or at the least we could get to know how these ghosts are handled deep down in the confines of Hades, err I mean the database engine.

[Image: Ghost_XE]

From these possible events, I opted to work with ghost_cleanup and ghost_cleanup_task_process_pages_for_db_packet.  The sessions I defined to trap our ghost tracks are as follows.
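Here is a sketch of those two sessions (bucketing the histogram on database_id is my choice; check the event’s columns in sys.dm_xe_object_columns before leaning on it):

CREATE EVENT SESSION [GhostHunt] ON SERVER
ADD EVENT sqlserver.ghost_cleanup
ADD TARGET package0.histogram
(
    SET filtering_event_name = N'sqlserver.ghost_cleanup',
        source = N'database_id',  -- bucket the cleanup counts by database
        source_type = 0           -- 0 = event column, 1 = action
)
WITH (STARTUP_STATE = ON);
GO

CREATE EVENT SESSION [SoulSearch] ON SERVER
ADD EVENT sqlserver.ghost_cleanup_task_process_pages_for_db_packet
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [GhostHunt] ON SERVER STATE = START;
ALTER EVENT SESSION [SoulSearch] ON SERVER STATE = START;
GO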

You can see there are two sessions defined for this trip down the Styx.  Each session aptly named for our journey.  The first (GhostHunt) is defined to trap ghost_cleanup and sends that information to a histogram target.  The second (SoulSearch) is defined to use the other event, and is configured to send to the ring_buffer.  Since the second event has a “count” field defined as a part of the event, it will work fine to just send it to the ring buffer for later evaluation.

Once I have the traps (I mean, event sessions) defined, I can now resume the test harness from the delete step, as was done in the previous articles.  The following Delete is what I will use.
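(The table name below is a stand-in for the one created by the series setup scripts; the predicate is illustrative and removes the 666 rows mentioned shortly.)

DELETE FROM dbo.AllHallowsEve        -- stand-in for the table from the setup scripts
WHERE CelebrationDate = '20141030';  -- illustrative predicate (666 rows here)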

Prior to running that delete though, I checked the Event Session data to confirm a starting baseline.  Prior to the delete, I had the following in my histogram target.

[Image: predelete_count]

After running the delete, and checking my histogram again, I see the following results.

[Image: post_count]

You can see from this that in addition to the 25 pre-existing ghosts, we had another 672 ghosts (666 of which were from the delete).
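For reference, the histogram check I ran before and after the delete is along these lines:

SELECT slot.x.value('(value)[1]', 'INT') AS database_id,
       slot.x.value('@count', 'BIGINT') AS ghost_cleanup_count
FROM
(
    SELECT CAST(targ.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions xs
    INNER JOIN sys.dm_xe_session_targets targ
        ON xs.address = targ.event_session_address
    WHERE xs.name = 'GhostHunt'
      AND targ.target_name = 'histogram'
) AS hist
CROSS APPLY hist.target_data.nodes('HistogramTarget/Slot') AS slot(x);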

This is how I was able to investigate the GhostHunt Extended Event Histogram.

But what about looking at the other event session?

Let’s look at how we can go and investigate that session first and then look at some of the output data.
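The query is much the same shred, pointed at the ring_buffer (the “count” field is part of the event; page_id is my guess at the other field name, so verify it):

SELECT ev.x.value('(@timestamp)[1]', 'DATETIME2') AS event_time,
       ev.x.value('(data[@name="count"]/value)[1]', 'INT') AS ghost_count,
       ev.x.value('(data[@name="page_id"]/value)[1]', 'BIGINT') AS page_id
FROM
(
    SELECT CAST(targ.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions xs
    INNER JOIN sys.dm_xe_session_targets targ
        ON xs.address = targ.event_session_address
    WHERE xs.name = 'SoulSearch'
      AND targ.target_name = 'ring_buffer'
) AS rb
CROSS APPLY rb.target_data.nodes('RingBufferTarget/event') AS ev(x);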

[Image: ghostclean]

Cool!  Querying the SoulSearch session has produced some information for various ghosts in the database.  Unlike the histogram session that shows how many ghosts have been cleaned, this session shows us some page ids that could contain some ghosts – in the present.  I can take page 1030111 for instance and examine the page with DBCC PAGE as follows.
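(The database name and file id below are illustrative; trace flag 3604 routes the DBCC output to the client.)

DBCC TRACEON (3604);
DBCC PAGE ('Sandbox', 1, 1030111, 3);  -- database name and file id are illustrative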


Look at that page and result!! We have found yet another poltergeist.

RIP

Once again we have been able to journey to the depths of the Database engine and explore the ghosts that might be there.  This just happens to illustrate a possible means to investigate those ghosts.  That said, I would not necessarily run these types of event sessions on a persistent basis.  I would only run these sessions if there seems to be an issue with the Ghost cleanup or if you have a strong penchant to learn (on a sandbox server).

Some good information can be learned.  It can also give a little insight into how much data is being deleted on a routine basis from your database.  As a stretch, you could even possibly use something like this to get a handle on knowing the data you support.  Just be cautious with the configuration of the XE and understand that there could be a negative impact on a very busy server.  And certainly proceed at your own risk.

Database Ghosts

Recently you may have seen my first article on this topic over at SQL Solutions Group.  If not, here is the link so you can read that article first before proceeding here.  This article is intended as the first of a two-part follow-up to that article.

Now that you have read that other article, we can recap it a little bit.

You will recall that in that article, I discussed the presence of ghosts in your database.  I also discussed that those ghosts are a good thing.  They have some benefits, one of which is a bit of a performance boost for some operations, like the rollback of a delete.

In that article I discussed one method with which you could see these ghost records.  In this article, I would like to share an alternate method to be able to see these ghosts.

In order to explore this alternate means, let’s go ahead and follow the first few steps from the other article to get the setup complete so our second ghost hunting foray can begin.

Now with the setup complete, we should once again confirm that we have appropriate data available for the hunt.  Once again a simple query can suffice to show the data.
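(dbo.AllHallowsEve below is a stand-in for the table built by the setup scripts.)

SELECT *
FROM dbo.AllHallowsEve;  -- stand-in table name from the setup scripts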

Upon execution of the check script, we should see something similar to the following data-set.

[Image: datasample]

Great, the data is confirmed and we should have a wonderful chance to find some ghosts once again.  In the previous attempt, we needed an additional plasma blaster in the form of a trace flag.  In this attempt we will hunt these ghosts without that tool and see what we can find.  Why attempt it in this way?  Well, it is simply because I’d rather not use a trace flag if it is not necessary.  If you recall, that trace flag had a couple of noted effects.  One of the effects was that it turned off the ghost cleanup process.  If I can avoid it, I’d rather leave the containment unit intact.

Now, due to that clerical error of putting Halloween on the wrong date, we need to follow the prescribed policy to delete records prior to creating the proper records.
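(Again, the table and column names are stand-ins for the setup scripts’ objects.)

BEGIN TRANSACTION;

DELETE FROM dbo.AllHallowsEve        -- stand-in table name
WHERE CelebrationDate = '20141023';  -- the mis-dated rows (illustrative predicate)

-- No COMMIT or ROLLBACK yet; the transaction stays open on purpose.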

Notice that we issued the delete in a transaction this time around.  We are going to leave that transaction open for a bit while we hunt those ghosts.

The first step is to verify that some ghosts might be present.  To do that we should run a query such as the following.
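A ghost check against the index stats DMV does nicely here (DETAILED mode is needed to populate the ghost counts; the table name is the stand-in again):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_id,
       ips.ghost_record_count,
       ips.version_ghost_record_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(),
         OBJECT_ID('dbo.AllHallowsEve'),  -- stand-in table name
         NULL, NULL, 'DETAILED') ips;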

From that query we should see something like the following.

[Image: idxstats_ghosts]

Once again we are on the right track.  We can further confirm the existence of these ghosts through a little more investigation and monitoring.  Let’s try the trick with the dblog function again.
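In sketch form, filtering the log reader down to the delete activity (LCX_MARK_AS_GHOST is the context that flags ghosted rows):

SELECT [Current LSN],
       Operation,
       Context,
       AllocUnitName,
       [Page ID],
       [Slot ID]
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_ROWS'
  AND Context = 'LCX_MARK_AS_GHOST';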

With that query, we should see something like the following result set.

[Image: dblog_ghosts]

Very cool.  We once again can see that these phantasms are in the database.  We have enough information that we can proceed on to the next stage.  We can pass the PageID into DBCC PAGE in order to investigate the ghosts on the page.  If we use the PageID that is circled in green in the preceding result set, we can get a better feel for these specters.
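(The page portion of fn_dblog’s Page ID is hexadecimal, so convert it to decimal before handing it to DBCC PAGE; all values below are illustrative.)

DBCC TRACEON (3604);
DBCC PAGE ('Sandbox', 1, 289, 3);  -- db name, file id, and page id are illustrative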

And we may see results such as the following.

[Image: page_ghosts2]

Recall that the log says this page has ghost records on it.  When we check the page with DBCC PAGE, we can definitely see that there are ghosts on the page.  This is very cool.  Note that had we tried to check for ghost records on the PFS page, we would not be able to see the ghost count as we could when the trace flag was enabled.

Once again we have been able to display the existence of ghosts in the database.  In order to get these ghosts to move on from the database to their afterlife, we merely need to commit or roll back the transaction.

Stay tuned for the next article in this mini-series about ghosts.  Who knows, we may even have a bonus Halloween article since this is Halloween month.

Part III of the series can now be found here.

Murder In Utah

I am about to set sail on a new venture with my next official whistle stop.  This year has been plenty full of whistle stops and I plan on continuing.  You can read (in full) about previous whistle stops and why they are called whistle stops here.

Suffice it to say at this point that it all started with a comment about a sailing train a few months back.

[Image: goldspike]

Time to sink or sail, so to speak.  SQL Saturday 349 in Utah will mark the next attempt at what I hope to be a repeat performance – many times.  I will be tag-teaming with Wayne Sheffield in this all day workshop event.  The session is one of two all day sessions for the event in Lehi, UT (just south of Salt Lake City).

If you are a DBA or a database developer, this session is for you.  If you are managing a database and are experiencing performance issues, this session is a must.  We will chat with attendees about a horde of performance killers and other critical issues we have seen in our years working with SQL Server.  In short, some of these issues are pure murder on your database, DBA, developer and team in general.  We will work through many of these things and show some methods to achieve a higher state of database Zen.

Description

Join Microsoft Certified Masters, Wayne Sheffield and Jason Brimhall, as they examine numerous crazy implementations they have seen over the years, and how these implementations can be murder on SQL Server.  No topic is off limits as they cover the effects of these crazy implementations from performance to security, and how the “Default Blame Acceptors” (DBAs) can use alternatives to keep the developers, DBAs, bosses and even the end-users happy.

Presented by:

[Image: Wayne Sheffield]
Wayne Sheffield, a Microsoft Certified Master in SQL Server, started working with xBase databases in the late 80’s. With over 20 years in IT, he has worked with SQL Server (since 6.5 in the late 90’s) in various dev/admin roles, with an emphasis on performance tuning. He is the author of several articles at www.sqlservercentral.com, a co-author of SQL Server 2012 T-SQL Recipes, and enjoys sharing his knowledge by presenting at SQL PASS events and blogging at http://blog.waynesheffield.com/wayne

[Image: Jason Brimhall]
Jason Brimhall has more than 10 years of experience with SQL Server and has worked with versions from 6.5 through SQL 2012. He has experience in performance tuning and in high-transaction and large environments.  Jason also has 18 years of experience in IT, working with hardware, the OS, the network and even the plunger (ask him sometime about that). He is currently a Consultant and a Microsoft Certified Master (MCM). Jason is the VP of the Las Vegas User Group (SSSOLV).

Course Objectives

  1. Recognize practices that are performance pitfalls
  2. Learn how to remedy the performance pitfalls
  3. Recognize practices that are security pitfalls
  4. Learn how to remedy the security pitfalls
  5. Demos Demos Demos – scripts to demonstrate pitfalls and their remedies will be provided
  6. Have fun and discuss
  7. We might blow up a database


There will be a nice mix of real world examples and some painfully contrived examples. All will have a good and useful point.

If you will be in the area, and you are looking for high quality content with a good mix of enjoyment, come and join us.  You can find registration information and event details at the Salt Lake City SQL Saturday site – here.  There are only 75 seats available for this murder mystery theater.  Reserve yours now.

The cost for the class is $150 (plus fees) up through the day of the event.  When you register, be sure to tell your coworkers and friends.

Wait, there’s more…

Not only will I be in Utah for this workshop, but I will also be presenting as part of the SQLSaturday event on October 25, 2014 (the day after the workshop, which is October 24, 2014).  You can view the available sessions here.

Shameless plug time

I present regularly at SQL Saturdays.  Wayne also presents regularly at SQL Saturdays.  If you are organizing an event and would like to fill some workshop sessions, please contact Wayne, me, or both of us about this session.
