September 2014 Las Vegas UG Meeting

Comments: No Comments
Published on: September 10, 2014

Who is up for a little free learning this week? Beyond the opulent feast that was the 24 Hours of PASS (Summit Preview), we have more training in store for you from the folks in Las Vegas. Let’s call this a preview for next week, which happens to be DevConnections (which also happens to be in Vegas)!

The Las Vegas User Group is happy to announce our monthly meeting.  The meeting is available both in person and as a webinar.  The start time is 6:30 PM Pacific, and the details are listed in this post.  We hope to see you there!


Abstract: PowerShell: The Way of the DBA Dragon

In this introduction to PowerShell, attendees will learn how to start from scratch with PowerShell 3.0 or newer, use the pipeline, run T-SQL against multiple instances, use transcripts, and see some martial arts usage of one of the SQLPSX cmdlets.  Scripts will be provided.

BIO

Lars Rasmussen was born in Illinois, but considers Utah home.  He does not play video games, is learning to camp and hike, and is happy to have shared the summit of Mt. Timpanogos with two of his sons.  Lars’ wife and four children help him smile and laugh, and the family dog is teaching him patience.  Playing board games is one of his favorite pastimes.  He considers SQL Server, PowerShell, and CMD.EXE some of his dearest frenemies.  Lars enjoys the company of SQL Server professionals and sysadmins – he used to be one of the latter, and is employed as a database administrator for HealthEquity.

LiveMeeting Info

Attendee URL: https://www.livemeeting.com/cc/UserGroups/join?id=MR7C92&role=attend

Meeting ID: MR7C92

Effects of sp_rename on Stored Procedures

There comes a time when mistakes are made.  Sometimes those mistakes are as annoying as a spelling mistake made during the creation of a stored procedure.  When such a mistake happens, we are given a few choices: rename the stored procedure, drop and recreate it, or simply leave the mistake alone.

When choosing to rename the stored procedure, one may quickly reach for the system procedure that can readily rename various objects.  That procedure was provided by Microsoft, after all, and is named sp_rename.  Reaching for that tool, however, might be a mistake.  Here is what the documentation, which can be read at this link on MSDN, says about using sp_rename to rename a stored procedure.

We recommend you do not use this statement to rename stored procedures, triggers, user-defined functions, or views; instead, drop the object and re-create it with the new name.

And later in the same documentation, one can read the following.

Renaming a stored procedure, function, view, or trigger will not change the name of the corresponding object name in the definition column of the sys.sql_modules catalog view. Therefore, we recommend that sp_rename not be used to rename these object types. Instead, drop and re-create the object with its new name.

Now, a chief complaint against dropping and recreating the stored procedure, as recommended, is that the process can cause permissions issues.  I am less concerned about the permissions issues; I see them as more of a nuisance, easily overcome with good documentation and a few quick script executions to restore the permissions.  That said, I think we might have a means to address both the rename and the permissions issue, which will be shared later in this article.

Using sp_rename

When using sp_rename, it is good to understand what happens and what one might expect to see.  Let’s use the following script to create a stored procedure, rename it, and evaluate the results.
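A minimal sketch of such a script might look like the following (the procedure name dbo.SpellingMistaek and the use of tempdb are hypothetical stand-ins):

[codesyntax lang="tsql"]
USE tempdb;  -- any sandbox database will do
GO
-- Create a procedure with a deliberate spelling mistake in its name
CREATE PROCEDURE dbo.SpellingMistaek
AS
BEGIN
	SELECT TOP 3 name, object_id
	FROM sys.all_objects
	WHERE type = 'P'
	ORDER BY name;
END
GO
-- Check the name and definition held in the metadata before the rename
SELECT o.name, sm.definition
FROM sys.objects o
INNER JOIN sys.sql_modules sm
	ON o.object_id = sm.object_id
WHERE o.name LIKE 'SpellingMist%';
GO
-- Rename the procedure (this raises the standard caution message)
EXECUTE sp_rename 'dbo.SpellingMistaek', 'SpellingMistake';
GO
-- Check the name and definition held in the metadata after the rename
SELECT o.name, sm.definition
FROM sys.objects o
INNER JOIN sys.sql_modules sm
	ON o.object_id = sm.object_id
WHERE o.name LIKE 'SpellingMist%';
[/codesyntax]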

When I execute that series of batches, I will get an output that matches the following.

[Image: query results showing the renamed procedure in sys.objects while the definition in sys.sql_modules still contains the original name]

Looking at the results, we can see that sp_rename does indeed change the name of the stored procedure as it is represented in sys.objects.  We can also see that the definition held in the metadata does not change.

If I choose to check the definition through OBJECT_DEFINITION() instead of sys.sql_modules, I will see the same stale definition – sys.sql_modules calls OBJECT_DEFINITION() to produce the definition that is seen in the catalog view.
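For example, a one-liner along these lines (again with the hypothetical procedure from the sketch above) returns the same text:

[codesyntax lang="tsql"]
-- Returns the same definition text that sys.sql_modules exposes
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.SpellingMistake')) AS proc_definition;
[/codesyntax]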

Well, that does pose a potential problem.  The object definition is unchanged and may report a name different from what the object name truly is.  What happens if I execute the stored procedure?  Better yet, if I execute the stored procedure and then capture the SQL text associated with that plan, what would I see?
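Sticking with the hypothetical procedure from the earlier sketch, the test might look like this:

[codesyntax lang="tsql"]
-- Execute the renamed procedure
EXECUTE dbo.SpellingMistake;
GO
-- Find the plan and its query_plan_hash for the statement in the procedure
SELECT qs.query_plan_hash, qs.plan_handle
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.objectid = OBJECT_ID('dbo.SpellingMistake');
[/codesyntax]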

Yes!  The renamed stored procedure does indeed execute properly.  I even get three results back for that execution.  Better yet, I get an execution plan from which I can pull a plan_hash in order to evaluate the SQL text associated with the plan.  In case you are wondering, the execution plan does contain the statement text of the procedure.  But in this case, I want to look at the entire definition associated with the plan rather than the text stored in the plan.  In this particular scenario, I only see the body of the procedure and not the CREATE statement that is held in the metadata.

[Image: execution plan for the procedure execution, showing a plan_hash of 0xE701AFB2D865FA71]

For this particular execution and plan, I can see a plan_hash of 0xE701AFB2D865FA71.  I can now take that hash and provide it to the following query to find the full SQL text associated with the plan.
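The query itself is nothing fancy; a sketch along these lines will do, using the hash captured above:

[codesyntax lang="tsql"]
DECLARE @PlanHash BINARY(8) = 0xE701AFB2D865FA71;

-- Pull the full SQL text tied to the plan bearing that hash
SELECT st.dbid, OBJECT_NAME(st.objectid, st.dbid) AS ProcName, st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE qs.query_plan_hash = @PlanHash;
[/codesyntax]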

And after executing that query, I can see results similar to the following.

[Image: query results showing the SQL text retrieved from the plan cache for that plan]

Now, is that because the query that was just run was somehow also running OBJECT_DEFINITION()?  Let’s look at the execution plans for both OBJECT_DEFINITION() and the query that was just run.

[Image: execution plan for the OBJECT_DEFINITION() query]

Looking at the XML for that particular plan, we see XML that supports what the graphical plan shows.  There is no further function callout, and the plan is extremely simple.

Now, looking at the plan for the query involving the query_plan_hash, we will see the following.

[Image: graphical execution plan showing the call to FNGETSQL]

Looking at this graphical plan, we can see that we are calling FNGETSQL.  Looking at the XML for this plan, we can verify that FNGETSQL is the only function called to retrieve the full SQL text associated with this plan execution.  FNGETSQL is an internal SQL Server function used to build the internal tables that back various DMOs.  You can read just a bit more about that here.

What now?

After all of that, things really look pessimistic for sp_rename.  The procedure renames the object but does not properly handle the metadata holding the stored procedure definition.  So does that mean we are stuck with drop and create, as the Microsoft documentation suggests?

If you have access to the full procedure definition, you could issue an ALTER statement.  In the little example I have been using, I could issue the following statement.
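Using the hypothetical procedure from the earlier sketch, it would be along these lines:

[codesyntax lang="tsql"]
-- Re-issue the definition under the new name so the metadata catches up
ALTER PROCEDURE dbo.SpellingMistake
AS
BEGIN
	SELECT TOP 3 name, object_id
	FROM sys.all_objects
	WHERE type = 'P'
	ORDER BY name;
END
GO
[/codesyntax]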

After executing that script, I could check sys.sql_modules once again and find a more desirable result.
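That check is just the same metadata query as before, e.g.:

[codesyntax lang="tsql"]
-- The name and the definition should now agree
SELECT o.name, sm.definition
FROM sys.objects o
INNER JOIN sys.sql_modules sm
	ON o.object_id = sm.object_id
WHERE o.name = 'SpellingMistake';
[/codesyntax]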

And my results…

[Image: query results showing the procedure name and definition finally matching]

If you don’t have the text to create the proc, you could use SSMS to script it out for you.  It is as simple as right-clicking the proc in question, selecting Modify, and then executing the script.  It should script out with the correct proc name (the beauty of SMO), and then you can get the metadata all up to snuff in your database.

Of course, if you prefer, you could just drop and recreate the procedure.  Then reapply all of the pertinent permissions.  That is pretty straightforward too.

SQL Server UG in Vegas August 2014 Meeting

Comments: No Comments
Published on: August 13, 2014


Another great meeting and topic is coming to the folks in Las Vegas.  This month we have had the good fortune of having Mike Fal (blog | twitter) step up and fill our speaker void.

Yes, it happens to be the second Thursday of the month already.  That being the case, the SQL Server UG of Las Vegas will be meeting at the Tahiti Village Resort and Spa to take in some great info on SQL Server and PowerShell.

You can read the information about the meeting on our Meetup page here.  Or you can continue reading here.

Improving Database Restores with PowerShell

Database restores are a key function of any database administrator’s job. However, it can be an extremely time-consuming task to sort through your backups, find the right files, and then get your database up and running. In an emergency, this will have a disastrous impact on your Recovery Time Objective (RTO) and lead to the dreaded angry-CTO-in-your-cube effect. By leveraging some easy-to-use PowerShell scripts, you can avoid the second disaster and the pain that comes with it. By attending this session, you will understand how you can use the PowerShell automation framework for database restores, see scripts that will let you restore faster, and learn techniques to extend these tools for migrating data and testing backups.

Michael Fal  

Mike Fal is a musician turned SQL Server DBA, with 10+ years of experience as a database administrator. He has worked in several different industries, including healthcare, software development, marketing, and manufacturing, and has experience supporting databases from 1 GB to 4 TB in size. Mike received his Bachelor’s degree from the University of Colorado at Boulder in 1996 and has been caught playing trombone in public on more than one occasion.

LiveMeeting Info

Attendee URL: https://www.livemeeting.com/cc/UserGroups/join?id=4RD8NP&role=attend

Meeting ID: 4RD8NP

Whether you are in Vegas or you are somewhere else, you are welcome to join us.  We hope to see you Thursday evening.

T-SQL Tuesday #57 – SQL Family and Community

Comments: 1 Comment
Published on: August 12, 2014

Look at that, it is once again that time of the month that has come to be known as TSQL Tuesday.  TSQL Tuesday is a recurring blog party that occurs (most generally) on the second Tuesday of the month.  This event was the brainchild of Adam Machanic (Blog | Twitter).

Anybody who desires to participate in this blog party is welcome to join.  Coincidentally, that open invitation is at the base of this month’s topic – Family and Community.  This month’s invitation, issued by Jeffrey Verheul (blog | twitter), said the following.

This month I would like to give everyone the opportunity to write about SQL Family. The first time I heard of SQL Family, was on Twitter where someone mentioned this. At first I didn’t know what to think about this. I wasn’t really active in the community, and I thought it was a little weird. They were just people you meet on the internet, and might meet in person at a conference some day. But I couldn’t be more wrong about that!

Once you start visiting events, forums, or any other involvement with the community, you’ll see I was totally wrong. I want to hear those stories. How do you feel about SQL Family? Did they help you, or did you help someone in the SQL Family? I would love to hear the stories of support, how it helped you grow and evolve, or how you would explain SQL Family to your friends and family (which I find hard). Just write about whatever topic you want, as long as it’s related to SQL Family or community.

What is it?

We have all likely seen SQL Family thrown about here and there.  But what exactly is this notion we hear about so often?

I think we have a good idea about what family might be.  I think we might even have a good idea of what a friend is.  Lastly, I might propose that we know what a community is.  When we talk of this thing called SQL Family, I like to think that it is a combination of family, friends and community.


These are people who can come together and talk about things that span far beyond SQL Server.  We may only see each other at events every now and then.  Those events can be anything from a User Group meeting to a large conference, or even a road race (5K, half marathon, marathon).

These are the people that are willing to help where help is needed or wanted.  That help can be anything ranging from well wishes and prayers, to teaching about SQL Server, to lending a vehicle, or anything along that spectrum.

I have seen this community go out of their way to help provide a lift to a hotel or to the airport.  These people will help with lodging in various circumstances when/if they can.  These are the people that have been known to make visits to hospitals to give well wishes for other people in the community.

Isn’t that what friends and family really boil down to?  People that can talk to each other on an array of topics?  People that go out of their way to help?  Think about it for a minute or three.

Virtual Box is TOO Small

Comments: No Comments
Published on: August 5, 2014

I have been working on upgrading my laptop.  Since my laptop housed a bunch of my virtual machines for various presentations and labs, that means also upgrading and/or converting my virtual machines.

The new laptop is Windows 8.1.  With the new laptop, I was interested in converting a bunch of my Virtual Box machines to Hyper-V machines.  That desire was put on hold after discovering that I needed to first convert the virtual disks and then import them.  That wasn’t what put me off of it for a while, though.

The fact that I had to install Virtual Box on the new laptop to do the conversion made it less desirable. You see, in order to make Virtual Box work, I had to uninstall Hyper-V from Windows 8.1, because Hyper-V disables settings that Virtual Box requires in order to work.  Jumping through those hoops just makes me less inclined to hurry and try the conversion to Hyper-V.  Maybe someday down the road.

That said, with the new laptop supporting a resolution of 3200 x 1800, I found that my virtual machines started displaying extremely tiny resolutions.  No matter how I scaled the machine, the internal machine resolution remained tiny.

Come to find out, the virtual machine was using the 3200 x 1800 resolution of the host despite setting the virtual machine (internally) to resolution settings as low as 640 x 480 (which just produced a small window on the desktop).  In the settings for Virtual Box, I could not find a means to override that behavior.

What I did find, though, was a setting in the Windows 8.1 host control panel that affected the virtual machine size.  In the Display control panel there is a setting that says “Let me choose one scaling level for all my displays”.  That seems to help with this scaling issue.

It appears that many applications automatically adjust their scaling on such a high-resolution device.  Virtual Box does not automatically scale; it just adopts the host machine resolution.  In my case, it was always adopting the host machine’s maximum resolution.

I changed that setting, adjusted my resolution to 1920 x 1080, and then also adjusted the scaling factor to 150%.  This fixed the issue with the excessively small screen on the virtual machines.  Sure, I could have continued to use the Windows “+” key combination to zoom in and out constantly, but that was more of a hassle than a convenience.

We’ll just have to see how it goes for the time being.  Maybe in a few weeks when I have time to try the Hyper-V conversion again, I will be able to jump back to the higher resolution.  We’ll find out then if that scaling issue is an issue for Hyper-V just as it is for Virtual Box.

Presenting at PSSUG

Comments: No Comments
Published on: August 4, 2014


Coming up this week, I have been given the opportunity to do something I enjoy doing.  I have been invited to present to the folks in Philadelphia.

Sebastian Meine (blog | twitter) approached me during SQL Saturday in Philadelphia and I was happy to help where I could.

The topic for this presentation will be Extended Events.  We are going to try a slightly different approach, but here is what was posted in the meeting invite.

Jason Brimhall – SQL 2012 Extended Events
Extended Events were introduced in SQL Server 2008. With each edition since, we have seen a significant upgrade to this feature. Join me for a little adventure into defining this thing called Extended Events. We will discuss how to use Extended Events to aid in performance tuning and in day to day administration. We will also explore some background and the architecture of Extended Events.

Jason Brimhall

As a Microsoft Certified Master/Microsoft Certified Solutions Master, I have 19 years’ experience in the technology industry, including more than 10 with SQL Server. I also earned a Bachelor’s Degree in Business Information Systems from Utah State University. One of the highlights of my career was co-authoring SQL Server 2012 T-SQL Recipes: A Problem-Solution Approach.  I am a frequent presenter at SQL Server events worldwide, including SQLSaturdays and User Groups. I am also currently helping lead the Las Vegas SQL Users Group.

I am looking forward to this opportunity and to mingling with the group for a bit.  I hope to see you there.

Oh, and here is the link to the invite for the meeting.

Murder in Raleigh

I am about to set sail on a new venture with my next official whistle stop.  This year has been plenty full of whistle stops, and I plan on continuing.  You can read (in full) about previous whistle stops and why they are called whistle stops here.

Suffice it to say at this point that it all started with a comment about a sailing train a few months back.


Time to sink or sail, so to speak.  SQL Saturday 320 in Raleigh will mark the next attempt at what I hope to be a repeat performance – many times.  I will be tag-teaming with Wayne Sheffield in this all-day workshop event.  The session is one of two all-day sessions for the event in Raleigh, NC.

If you are a DBA or a database developer, this session is for you.  If you are managing a database and are experiencing performance issues, this session is a must.  We will chat with attendees about a horde of performance killers and other critical issues we have seen in our years working with SQL Server.  In short, some of these issues are pure murder on your database, DBA, developer and team in general.  We will work through many of these things and show some methods to achieve a higher state of database Zen.

Description

Join Microsoft Certified Masters, Wayne Sheffield and Jason Brimhall, as they examine numerous crazy implementations they have seen over the years, and how these implementations can be murder on SQL Server.  No topic is off limits as they cover the effects of these crazy implementations from performance to security, and how the “Default Blame Acceptors” (DBAs) can use alternatives to keep the developers, DBAs, bosses and even the end-users happy.

Presented by:

Wayne Sheffield, a Microsoft Certified Master in SQL Server, started working with xBase databases in the late 80’s. With over 20 years in IT, he has worked with SQL Server (since 6.5 in the late 90’s) in various dev/admin roles, with an emphasis on performance tuning. He is the author of several articles at www.sqlservercentral.com, a co-author of SQL Server 2012 T-SQL Recipes, and enjoys sharing his knowledge by presenting at SQL PASS events and blogging at http://blog.waynesheffield.com/wayne


Jason Brimhall has 10+ years’ experience and has worked with SQL Server from 6.5 through SQL 2012. He has experience in performance tuning and in high-transaction as well as large environments.  Jason also has 18 years of experience in IT, working with the hardware, OS, network, and even the plunger (ask him sometime about that). He is currently a Consultant and a Microsoft Certified Master (MCM). Jason is the VP of the Las Vegas User Group (SSSOLV).

Course Objectives

  1. Recognize practices that are performance pitfalls
  2. Learn how to remedy the performance pitfalls
  3. Recognize practices that are security pitfalls
  4. Learn how to remedy the security pitfalls
  5. Demos Demos Demos – scripts to demonstrate pitfalls and their remedies will be provided
  6. Have fun and discuss
  7. We might blow up a database


There will be a nice mix of real-world examples and some painfully contrived examples. All will have a good and useful point.

If you will be in the area, and you are looking for high quality content with a good mix of enjoyment, come and join us.  You can find registration information and event details at the Raleigh SQL Saturday site – here.  There are only 25 seats available for this murder mystery theater.  Reserve yours now.

The cost for the class is $110 (plus fees) up through the day of the event.  When you register, be sure to tell your coworkers and friends.

Wait, there’s more…

Not only will I be in Raleigh for this workshop, I hope to also be presenting as a part of the SQLSaturday event on Sep 6, 2014 (the day after the workshop, which is Sep 5, 2014).  I hope to update with the selected session(s) when that information becomes available.

You can see more details about the topics lined up for this event – here.

Shameless plug time

I present regularly at SQL Saturdays.  Wayne also presents regularly at SQL Saturdays.  If you are organizing an event and would like to fill some workshop sessions, please contact Wayne, me, or both of us about this session.

Murder In Denver

Comments: 1 Comment
Published on: July 14, 2014

I am about to set sail on a new venture with my next official whistle stop.  This year has been plenty full of whistle stops, and I plan on continuing.  You can read (in full) about previous whistle stops and why they are called whistle stops here.

Suffice it to say at this point that it all started with a comment about a sailing train a few months back.


Time to sink or sail, so to speak.  SQL Saturday 331 in Denver will mark the next attempt at what I hope to be a repeat performance – many times.  I will be tag-teaming with Wayne Sheffield in this all-day pre-con/workshop event.  The session is one of three all-day sessions for the event in Denver, CO.

If you are a DBA or a database developer, this session is for you.  If you are managing a database and are experiencing performance issues, this session is a must.  We will chat with attendees about a horde of performance killers and other critical issues we have seen in our years working with SQL Server.  In short, some of these issues are pure murder on your database, DBA, developer and team in general.  We will work through many of these things and show some methods to achieve a higher state of database Zen.

Description

Join Microsoft Certified Masters, Wayne Sheffield and Jason Brimhall, as they examine numerous crazy implementations they have seen over the years, and how these implementations can be murder on SQL Server.  No topic is off limits as they cover the effects of these crazy implementations from performance to security, and how the “Default Blame Acceptors” (DBAs) can use alternatives to keep the developers, DBAs, bosses and even the end-users happy.

Presented by:

Wayne Sheffield, a Microsoft Certified Master in SQL Server, started working with xBase databases in the late 80’s. With over 20 years in IT, he has worked with SQL Server (since 6.5 in the late 90’s) in various dev/admin roles, with an emphasis on performance tuning. He is the author of several articles at www.sqlservercentral.com, a co-author of SQL Server 2012 T-SQL Recipes, and enjoys sharing his knowledge by presenting at SQL PASS events and blogging at http://blog.waynesheffield.com/wayne


Jason Brimhall has 10+ years’ experience and has worked with SQL Server from 6.5 through SQL 2012. He has experience in performance tuning and in high-transaction as well as large environments.  Jason also has 18 years of experience in IT, working with the hardware, OS, network, and even the plunger (ask him sometime about that). He is currently a Consultant and a Microsoft Certified Master (MCM). Jason is the VP of the Las Vegas User Group (SSSOLV).

Course Objectives

  1. Recognize practices that are performance pitfalls
  2. Learn how to remedy the performance pitfalls
  3. Recognize practices that are security pitfalls
  4. Learn how to remedy the security pitfalls
  5. Demos Demos Demos – scripts to demonstrate pitfalls and their remedies will be provided
  6. Have fun and discuss
  7. We might blow up a database


There will be a nice mix of real-world examples and some painfully contrived examples. All will have a good and useful point.

If you will be in the area, and you are looking for high quality content with a good mix of enjoyment, come and join us.  You can find registration information and event details at the Denver SQL site – here.  There are only 30 seats available for this murder mystery theater.  Reserve yours now.

The cost for the class is $125 up through the day of the event.  When you register, be sure to choose Wayne’s class.

Wait, there’s more…

Not only will I be in Denver for the precon, I hope to also be presenting as a part of the SQLSaturday event on Sep 20, 2014 (the day after the precon, which is Sep 19, 2014).  I hope to update with the selected session(s) when that information becomes available.

You can see more details about the topics lined up for this event – here.

Shameless plug time

I present regularly at SQL Saturdays.  Wayne also presents regularly at SQL Saturdays.  If you are organizing an event and would like to fill some pre-con sessions, please contact Wayne, me, or both of us about this session.

Is your Team Willing to Take Control?


The calendar tells us that once again we have reached the second Tuesday of the month.  In the SQL community, this means a little party, as many of you may already know.  This is the TSQL Tuesday Party.

This month represents the 56th installment of this party.  This institution was implemented by Adam Machanic (b|t) and is hosted by Dev Nambi (b|t) this month.

The topic chosen for the month is all about the art of being able to assume.

In many circles, to assume something carries a negative connotation.  It is less drastic when you have a bit of evidence to support the assumption; in that case, it would be closer to a presumption.  I will not be discussing either of those connotations.

What is this Art?

Before getting into this art that was mentioned, I want to share a little background story.

Let’s try to paint a picture of a common theme I have seen in environment after environment.  There are eight or nine different teams.  Among these teams you will find multiple teams to support different data environments.  These data environments could include a warehouse team, an Oracle team, and a SQL team.

As a member of the SQL team, you have the back-end databases that support the most critical application for your employer/client.  As a member of the SQL team, one of your responsibilities is to ingest data from the warehouse or from the Oracle environment.

Since this is a well-oiled machine, you have standards defined for the ingestion, the source data, and the destination.  Right here we could throw out a presumption (it is well founded) that the standards will be followed.

Another element to consider is the directive from management that the data being ingested is not to be altered by the SQL team to make the data conform to standards.  That responsibility lies squarely on the shoulder of the team providing the data.  Should bad data be provided, it should be sent back to the team providing it.

Following this mandate, you find that bad data is sent to the SQL team on a regular basis and you report it back to have the data, process, or both fixed.  The next time the data comes it appears clean.  Problem solved, right?  Then it happens again, and again, and yet again.

Now it is up to you.  Do you continue to just report that the data could not be imported yet again due to bad data?  Or do you now assume the responsibility and change your ingestion process to handle the most common data mistakes that you have seen?

I am in favor of assuming the responsibility.  Take the opportunity to make the ingestion process more robust.  Take the opportunity to add better error handling.  Take the opportunity to continue to report back that there was bad data.  All of these things can be done in most cases to make the process more seamless and to have it perform better.

By assuming the responsibility to make the process more robust and to add better reporting/logging to your process, you can only help the other teams to make their processes better too.
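As a tiny illustration of what that robustness might look like (all of the table and column names here – StageOrders, Client, IngestionLog, ClientCode – are hypothetical), an ingestion step could quarantine and log the bad rows rather than failing the entire load:

[codesyntax lang="tsql"]
BEGIN TRY
	-- Quarantine rows with an unknown ClientCode so they can be reported back
	INSERT INTO dbo.StageOrders_Review (OrderID, ClientCode, OrderDate, Amount)
	SELECT s.OrderID, s.ClientCode, s.OrderDate, s.Amount
	FROM dbo.StageOrders s
	WHERE NOT EXISTS (SELECT 1 FROM dbo.Client c
	                  WHERE c.ClientCode = s.ClientCode);

	-- Load only the clean rows
	INSERT INTO dbo.Orders (OrderID, ClientCode, OrderDate, Amount)
	SELECT s.OrderID, s.ClientCode, s.OrderDate, s.Amount
	FROM dbo.StageOrders s
	WHERE EXISTS (SELECT 1 FROM dbo.Client c
	              WHERE c.ClientCode = s.ClientCode);
END TRY
BEGIN CATCH
	-- Log the failure so it can be reported back to the providing team
	INSERT INTO dbo.IngestionLog (LogDate, ErrorMessage)
	VALUES (SYSDATETIME(), ERROR_MESSAGE());
	THROW;
END CATCH;
[/codesyntax]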

While many may condemn assumptions, I say proceed with your assumptions.  Assume more responsibility.  Assume better processes by making them better yourself.  If it means rocking the boat, go ahead – these are good assumptions.

If you don’t, you are applying the wrong form of assumption.  By not assuming the responsibility, you are assuming that somebody else will or that the process is good enough.  That is bad in the long run.  That would be the only burning “elephant in the room”.


From here, it is up to you.  How are you going to assume in your environment?

Using Synonyms to Extend SSIS

Comments: No Comments
Published on: July 3, 2014

There are a million and one uses for synonyms.  There are at least that many uses for SSIS.  The reality is, not all of those uses are good for you, your data, or your database.

Recently I wrote an article about some good and some bad with synonyms.  You can read that article from my work blog site here.  Today, I just want to expand on that article a little bit.  I glossed over some things pretty quickly in that article that I thought might be fun to explain in a little more detail.

The Bad – a little closer look

First, let’s take a closer look at the bad.  More specifically, in the example I used, there was a vendor that created a synonym for everything.  I really do mean everything.  This is one case where using the word “literally” would be accurate.  On the client database, I could run a quick query such as the following and see over 7300 synonyms.

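A minimal sketch of such a query against sys.synonyms, including the derived count column mentioned below, might be:

[codesyntax lang="tsql"]
-- List every synonym along with a derived total count
SELECT s.name, s.base_object_name,
       COUNT(*) OVER () AS SynonymCount
FROM sys.synonyms s
ORDER BY s.name;
[/codesyntax]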

In the case of this client and this vendor, 7300+ synonyms is far too many.  It just led to mass confusion.  If you were to run that query, you might see something like the following image.

[Image: query results listing the synonyms, with a derived column showing the 7300+ total]

I added a “derived” column to show the total count of synonyms and the record name as it relates to that total.  That is a crazy amount of synonyms.  That just makes me think that somebody got into the krazy kool-aid one day, was bored and gave life to a synonym beast.

The Good – With more detail

On the flip side, in the aforementioned article, I talked about synonyms as a means to tweak performance in SSIS.  Normally I would not tout a synonym as a performance tweak.  So I want to caution that the performance gains are specific to SSIS and a handful of those millions of uses for SSIS.

Let’s just begin with a little bit of background.  For that background, some pictures will be really handy.  So here goes.

[Image: a simple data flow segment with a single data source]

In the preceding image we see a very simple segment of a data flow.

The data source uses a SQL command to fetch the data required for the data flow.  In the beginning, it is very straightforward.  You probably have some package lying around with something similar to this.

In the following image, we see the SQL command for the data source that was circled in red in the previous image.

[Image: the original SQL command for the data source]

In the next image we see a slight tweak to the query, this time including a reference to a table that is defined/obfuscated by a synonym.

[Image: the tweaked SQL command, now referencing the stage table through a synonym]
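In text form, the tweak might look something like the following sketch; the synonym, database, table, and column names are all hypothetical:

[codesyntax lang="tsql"]
-- Synonym pointing at the stage database, whose name differs per environment
CREATE SYNONYM dbo.syn_StageOrders FOR StageDB_Dev.dbo.Orders;
GO
-- Data source query referencing the stage table through the synonym and
-- loading only the rows whose ClientCode exists in the destination table
SELECT o.OrderID, o.ClientCode, o.OrderDate, o.Amount
FROM dbo.syn_StageOrders o
WHERE EXISTS (SELECT 1 FROM dbo.Client c
              WHERE c.ClientCode = o.ClientCode);
[/codesyntax]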

At this point I can hear some of you saying, “Ok, I see what he is doing,” while many others are wondering why I just made the query more complex than the previous example.

Well, as luck would have it, this change serves a few purposes.  1) The data has been staged in a separate database.  That database has a different name in every environment (recall the aforementioned article), so the synonym minimizes code changes when deploying the package.  2) The synonym allows us to confirm that there is data in the stage table and that the data matches a ClientCode in the destination table.  3) Lastly, the synonym reduces my dataset, which reduces memory requirements and also gets the data loaded faster (because it is smaller).

In addition to this minor tweak, I can also do something like the following.

[Image: a data flow with two data sources, two Sort transformations, and a Join transformation]

In the preceding image, we see two data sources.  Each data source is subsequently sorted and then eventually joined.  Much like the previous example, due to naming standards and an effort to minimize code changes during deployments, at least one data source is pulling in too much data.  The data is filtered down by the Join transformation, but this is not very efficient.

[Image: the simplified data flow with a single data source]

Through the use of a synonym, the data sources can be reduced to a single data source.  This eliminates the need for the two Sort transformations and the Join transformation.  Removing those three transformations reduces memory requirements.  And like the previous example, since we can trim down the number of records, the data flow will run a little bit faster too.

[Image: the combined source query referencing the synonym]
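Using the same hypothetical names as the earlier sketch, the combined source query is essentially just a join through the synonym:

[codesyntax lang="tsql"]
-- One source query replaces the second data source, both Sorts, and the Join
SELECT o.OrderID, o.ClientCode, o.OrderDate, o.Amount, c.ClientName
FROM dbo.syn_StageOrders o
INNER JOIN dbo.Client c
	ON c.ClientCode = o.ClientCode;
[/codesyntax]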

As you can see, the code is simple.  Adding a synonym into an existing query is not some super T-SQL trick; it just gets referenced like any other table.  Once again, in this case, the synonym is pointing to a table in a staging database.  That table has been loaded as part of an ETL process and now needs to be manipulated a little through some other transformations and then eventually inserted into a “target” database.

Conclusion

As with tuning stored procedures or any T-SQL, a similar technique was used here: reducing the datasets to contain just the data that is needed for the flow.  To facilitate that reduction, I employed synonyms.

The reasons for using a synonym in this case were to: 1) restrict data to precisely what was needed, 2) ensure data being loaded was “constrained” by data in the destination table (e.g., only load for a specific client that actually exists), and 3) minimize code changes during deployments.

When dealing with databases that serve the same purpose but follow some absurd naming standard that changes the name between environments, it can become cumbersome to maintain code during deployments.  This is particularly true when dealing with cross database joins or lookups.
