Last Execution of a Proc

SQL Server is full of good stuff.  There are plenty of features to use, plenty of applications built to support it, and even plenty of metadata within SQL Server to help you better understand your data and the queries being run.

It just so happens that a couple of clients requested some information on this additional metadata.  Each of the clients wanted something just a little different from the other.  After a little bit of thought, it became pretty clear that what they wanted was readily available within SQL Server.  The first client simply wanted to know the last time a procedure had been executed.  That is relatively easy, provided the procedure has actually been executed and the execution statistics have not been evicted from the cache.

The second client also wanted to know some information about the execution of a stored procedure.  This time, though, they wanted the execution plan.  There are a few ways to trap an execution plan: you could run a trace, set up an XE session, or execute the query yourself and grab the plan.  But what if you don't already have an XE session or trace running, and the client does not want you to execute the query just to trap that plan?

Well, that is not a problem because SQL Server stores this stuff.  As long as the query and plan have not been evicted from the plan cache then you can get the metadata you desire.

Metadata to the Rescue

The metadata that we seek happens to be accessible through several dynamic management views.  These are sometimes called dynamic management objects (DMOs) and are a great way to get to know your data and environment.  This time around, I am mostly interested in the following dynamic management objects: sys.dm_exec_query_stats, sys.dm_exec_cached_plans, sys.dm_exec_sql_text, and sys.dm_exec_query_plan.  I am using these objects because I want to trap execution stats along with the SQL statement as well as the cached plan for that query.

So this is what I came up with to aid each of the clients in getting the data they sought.
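For reference, here is a sketch of the sort of query I am describing.  It joins the four objects listed above; the filters (current database, procedures only) are assumptions you would adjust to your own needs.

[codesyntax lang="tsql"]
/* Sketch: last execution time, execution stats, statement text, and cached plan
   for procedures in the current database.  Adjust the WHERE clause as needed. */
SELECT  OBJECT_NAME(st.objectid, st.dbid) AS ProcName
        , qs.last_execution_time
        , qs.execution_count
        , st.text AS ProcText
        , qp.query_plan
FROM    sys.dm_exec_query_stats qs
        INNER JOIN sys.dm_exec_cached_plans cp
            ON cp.plan_handle = qs.plan_handle
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE   cp.objtype = 'Proc'
        AND st.dbid = DB_ID()
ORDER BY qs.last_execution_time DESC;
[/codesyntax]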

Pretty simple and straightforward.  Now I have a quick query to find the last time a proc was run, as well as a means to trap the execution plan for that query.  If I run that query, I see something like the following.

[Image: query results showing the last execution time and the cached plan]

I hope this is useful to you and hope you enjoy.

Trapping Online Index Operations

Categories: News, Professional, SSC
Comments: No Comments
Published on: January 29, 2015

Recently I wrote an article about Capturing Online Index Operations.  In that article, I discussed a problem I had encountered.  Well, there were multiple problems.  One was a vendor app with a hidden module that performed online index defrags and was corrupting a couple of indexes every night.

The second issue was not vendor related, but had more to do with the pain I was experiencing in getting the XEvent session I was creating to work properly.  The problem, I had concluded, was that the session would not work with an asynchronous file target.  As it turns out, I finally got it working.

Today, I just want to share that solution.  Being able to store the results to a file target offers a few benefits (less memory consumed and the data is persisted), which makes it far better suited to this problem than the ring buffer.
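The session I ended up with looked something along these lines.  Treat it as a sketch: the session name is a placeholder, the event payload and actions are assumptions, and it is written against the SQL 2008 asynchronous_file_target (use package0.event_file on 2012 and later).

[codesyntax lang="tsql"]
/* Sketch: trap online index operations to a file target (plus the ring_buffer). */
CREATE EVENT SESSION TrapOnlineIndexOps
ON SERVER
ADD EVENT sqlserver.progress_report_online_index_operation
(
    ACTION (sqlserver.session_id, sqlserver.database_id, sqlserver.sql_text)
)
ADD TARGET package0.asynchronous_file_target
(
    SET filename = N'C:\Database\XE\TrapOnlineIndexOps.xel'
        , metadatafile = N'C:\Database\XE\TrapOnlineIndexOps.xem'
),
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);
GO

ALTER EVENT SESSION TrapOnlineIndexOps ON SERVER STATE = START;
GO
[/codesyntax]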

For more background on the full issue and a demonstration on the output of the data from this session, please visit the original post on SQL Solutions Group.

And for the query to parse the data that is stored, you can use something like this next query.
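Something along these lines should do it.  The file and metadata file names follow the sketch above, and the payload field names are assumptions you should verify against sys.dm_xe_object_columns.

[codesyntax lang="tsql"]
/* Sketch: read the file target and shred the event XML. */
WITH xmldata AS
(
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file(
            N'C:\Database\XE\TrapOnlineIndexOps*.xel'
            , N'C:\Database\XE\TrapOnlineIndexOps*.xem'   -- required on 2008, can be NULL on 2012+
            , NULL, NULL)
)
SELECT  event_data.value('(event/@name)[1]', 'varchar(128)') AS event_name
        , event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
        , event_data.value('(event/data[@name="object_id"]/value)[1]', 'int') AS object_id
        , event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
FROM xmldata;
[/codesyntax]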

I hope you find this useful.  But before you proceed, I do have a caution to throw out there.  This session depends on the directory C:\Database\XE existing on the file system.  If it does not, you will get a nasty error message.  Despite the error message, the session will still be created.  And because the session is created, when you try to run the next query you will get an additional error, since the file was never written to the nonexistent path.

Why does the session still get created despite the error, you ask?  Well, that is due to the ring_buffer target that is also specified.  I left that in there despite not needing it; you can safely remove the ring_buffer target from this XEvent session.  If you remove that ring_buffer target and do not have the directory previously noted, then the session will fail to create once the error is reached.

Minor ailments and Healthy SQL

Comments: 1 Comment
Published on: January 13, 2015

[Image: T-SQL Tuesday logo]

One of the things that DBAs love to do is keep their servers running and healthy.  A healthy server, after all, is your ticket to a stress-free day and a full night's sleep.  Granted, this is not a guarantee, but it sure helps make life easier.

We are always looking for the big ticket items to keep the servers tuned and purring.  But from time to time, like with humans, it’s not Ebola that takes us down but the little sniffles and minor aches and pains that keep us from doing our best.

This month as a part of the TSQL Tuesday party, Robert Pearl (blog | twitter) has asked us to write about Healthy SQL and the things that make SQL go yum.

Nagging Cough

Like that tickle in the back of the throat that makes you cough now and again, there is an occasional tickle that can bug SQL Server and cause some pain here and there.  I have written about this nagging cough in the past.

The problem with this ailment is that since it is not always a cold, or always present, it is oftentimes missed and frequently hard to diagnose.  I talked about this previously as one of those things that should be checked from time to time.  This is that nagging synonym* cough.  You can read about it here.

That’s great, we can take a little cough syrup, fix some synonyms and feel a lot better about SQL Server in the morning.

Aching Joints

Ever have one of those trick knees that decides to give out on you out of the blue?  You might be walking up the stairs (or down them) and suddenly the knee is gone and you end up flat on your face.  It could be fine most of the time; it's just that once in a while the knee decides to give a little twinge of pain, fold up, and drop you to the floor.

SQL Server has a similar problem with this next one.  Within SQL Server, I frequently come across a bad knee in the form of a linked server.  Sure, I can hear you saying that linked servers are always bad.  Fair enough!  I have seen linked servers work wonderfully, but most of the time they cause pain.

The type of pain with a linked server I would like to share today is around more of an edge case (like standing on the edge of the stairs with your knee about to give way).

Like any good DBA, you may want to restore your databases to a test environment on a routine basis to ensure the backups worked and that you have a functioning recovery point for your databases in the event of a disaster.  I was working on just that sort of thing for a client recently when I ran into this beautifully pain-filled knee.

This customer had not one, not two, not even three instances of this problem.  They had a glorious 492 instances of this problem.  The vendor for this client decided the best thing to do would be to replicate the production database to a separate database to help offload performance.  This other database happens to be on the same server (so no performance offload).  While that is not the most intelligent thing to do, it is not the end of this ailment.

In addition to the replication, there were 492 views that utilized linked servers to UNION ALL data between the two databases.  The data in each table between the two databases (in this case) is the same.  So we have a linked server to UNION ALL this data between two databases on the same server that is replicated.  Wowza!

Now, due to this proliferation of linked servers in views that get to the data the long way around, restoring this database to a test environment leaves a lot of cleanup work to be done.  After all, the restore of the database is only a piece of the healthy backup puzzle.  You would want to test the data and the application against it.

Gratefully, this kind of cleanup can be made easy by doing a simple search and replace while querying sys.sql_modules to find any views or stored procedures that need to be updated to work in the test environment.  Here is an example of such a script to help fix that problem.
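Here is a minimal sketch of that idea.  The linked server names (ProdServer and TestServer) are placeholders for your own, and the CREATE-to-ALTER swap assumes views; procedures would get the same treatment with CREATE PROCEDURE.

[codesyntax lang="tsql"]
/* Sketch: build ALTER statements for any module that references the linked server. */
SELECT  OBJECT_SCHEMA_NAME(sm.object_id) AS SchemaName
        , OBJECT_NAME(sm.object_id) AS ObjectName
        , REPLACE(REPLACE(sm.definition, 'CREATE VIEW', 'ALTER VIEW')
                  , '[ProdServer].', '[TestServer].')
          + CHAR(13) + CHAR(10) + 'GO' AS ModCode
FROM    sys.sql_modules sm
WHERE   sm.definition LIKE '%ProdServer%';
[/codesyntax]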

Take the results from a query such as that and I can either change the views en masse or copy all of the results from the ModCode column and paste them into a new window.  From there, using a regex (to replace all GO statements with a GO followed by a newline, as shown in the pic) or something like SQL Prompt to prettify the code makes it more useful.

[Image: results of the sys.sql_modules query with the ModCode column]

Of course, this only addresses the issue with the linked servers in the views.  The same problem would exist with stored procedures.  It is up to the DBA to know which ones need to be changed and which should not be changed.  Just understand that linked servers present yet another nagging symptom that keeps your server from being healthy and, worse, they help keep you from being healthy (remember the lack of sleep they cause).

If you are interested, I also have this article to help you find those pesky linked servers before you start diving down the restore rabbit hole.  The article will help evaluate the scope of linked server use and has a query to help identify linked servers.

*A funny afterthought: both of these niggles that can decrease the health of your SQL Server have ties back to linked servers.  If you read the links, you will see what I mean.

Effects of Max Mem on Plan Cache

While reading a Microsoft article (which can be found here) to make sure I understood what can cause a query plan to be recompiled or removed from the plan cache, I got to thinking a bit.

In the article, it lists the following as things that can cause a plan to be removed from cache or to be recompiled.

  • Execution plans remain in the procedure cache as long as there is enough memory to store them. When memory pressure exists, the Database Engine uses a cost-based approach to determine which execution plans to remove from the procedure cache. To make a cost-based decision, the Database Engine increases and decreases a current cost variable for each execution plan.
  • Changes made to a table or view referenced by the query (ALTER TABLE and ALTER VIEW).
  • Changes made to a single procedure, which would drop all plans for that procedure from the cache (ALTER PROCEDURE).
  • Changes to any indexes used by the execution plan.
  • Updates on statistics used by the execution plan, generated either explicitly from a statement, such as UPDATE STATISTICS, or generated automatically.
  • Dropping an index used by the execution plan.
  • An explicit call to sp_recompile.
  • Large numbers of changes to keys (generated by INSERT or DELETE statements from other users that modify a table referenced by the query).
  • For tables with triggers, if the number of rows in the inserted or deleted tables grows significantly.
  • Executing a stored procedure using the WITH RECOMPILE option.

The first item removes a plan from cache, while the rest mark the plan as invalid so that a recompile will be forced on the next execution of the SQL statement.  But then the question comes: why make the distinction between the two?  Doesn't removing a plan from cache in essence equate to a recompile further down the road when the query is executed again?  While semantically it is not a recompile, it sure sounds a lot like one.  The query was compiled, invalidated and removed from cache, and then compiled again when executed the next time around.

When thinking about all of this, how can we see that memory pressure can “invalidate” query plans?  That is actually quite simple.  We can see the same net effects with the following experiment.

WARNING: DO NOT DO THIS ON ANY ENVIRONMENT THAT IS NOT A SANDBOX ENVIRONMENT.

In this setup, I have a Sandbox server with a simulated load that keeps plenty of plans in cache and SQL Server is happy with about eight gigabytes of memory allocated to it.  To show that memory can cause a bunch of plans to be invalidated (and removed from the plan cache), I am going to take memory away from SQL Server and in essence simulate a memory leak or memory pressure.
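The experiment looks something like the following sketch.  The 512 MB value is an assumption chosen to be painfully low; whatever number you pick, keep this on a sandbox.

[codesyntax lang="tsql"]
/* Count plans, slash max server memory, count plans again.  SANDBOX ONLY. */
SELECT COUNT(*) AS PlanCountBefore
FROM sys.dm_exec_cached_plans;
GO

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 512;   -- deliberately far too low
RECONFIGURE;
GO

SELECT COUNT(*) AS PlanCountAfter
FROM sys.dm_exec_cached_plans;
GO
[/codesyntax]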

This shows that the settings have taken effect on SQL Server, and all of this without a service restart (I keep hearing that myth that a restart is required).

[Image: Max Memory setting in SQL Server]

Seeing that the memory settings are taking immediate effect, we can turn our attention to that query that was run.  In the query to experiment with this, I took a count of the number of plans in the plan cache first, then changed the max memory, and then took a count of the number of plans in cache again.  This next image shows the results of those two plan count queries.  It is obvious here that the number of plans in cache was severely impacted by this change in Max memory when taking TOO much memory away from SQL Server (this is why you should only do this on a sandbox server).

[Image: effect of Max Memory on the plan cache]

 

If you happen to take too much memory away from SQL Server, you could end up in a severe memory pressure situation where connections are limited.  Even trying to increase max memory could be severely hampered due to insufficient resources in the default resource pool.  Here is a blog post that demonstrates the effects on connections due to this error (even as bad as SQL Server not starting up).

Since the problem is in the default resource pool, you could try a connection to the DAC and modify the max memory there.  If the pressure is severe enough, you may be limited in your ability to connect to the DAC as well.  In that case, you may need to start up in single user mode as referenced in that prior blog post.  Sometimes, it is enough to reduce the number of connections or resource requirements to regain access to the server and change the max memory appropriately.  Suffice it to say, do not run this on a production type of box unless you are ready to cause a bit of an outage.  The intent here is solely to show that max memory can have an immediate impact on compiles and plans in cache.

Audit Database File Size Changes

[Image: database file size change]

Recently I shared an article on how to track the growths and shrinks that occur within database files.  I shared that article here; you should read it before proceeding with today's article.

Near the end of that article I declared that it was a really good method to track that information within SQL Server 2008.  What if you happen to be on SQL 2012 or SQL 2014 (as of this writing)?  Will it work there too?

Does it Work Past 2008?

It is a good question.  Does that extended event session I shared work in SQL Server 2012 or 2014?  Let’s take a quick look.  I am skipping the setup scripts for that XE session in this article, so you will need to get them from the previous article in order to follow along in this segment.

Before we can investigate if this extended event session will work, let’s take a quick look to confirm that the session is running on the server.  We can do that with a query similar to the following.
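Here is a sketch of such a check.  The session name (DBFileSizeChange) is an assumption standing in for whatever you named the session from the previous article.

[codesyntax lang="tsql"]
/* Does the session exist, and is it currently running? */
SELECT  ses.name AS SessionName
        , CASE WHEN xs.name IS NULL THEN 'Stopped' ELSE 'Running' END AS RunState
FROM    sys.server_event_sessions ses
        LEFT OUTER JOIN sys.dm_xe_sessions xs
            ON xs.name = ses.name
WHERE   ses.name = N'DBFileSizeChange';
[/codesyntax]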

Running that query will produce results very similar to the following.

[Image: SessionCheck query results]

This is good news.  If you noticed, I am querying a couple of views to get this information.  In the server_event_sessions catalog view I can determine if the session exists.  With the dm_xe_sessions DMV, I can see whether the session is running based on whether it appears in the view: when the session is enabled and running, the DMV will return a record for it; otherwise, it does not hold a record for the session.

With a session running, we can now validate if it is running properly by running any script that will cause the files to grow or shrink.  Recall that in the previous article, it was shown that any growth or shrink operation will cause an event to fire with this session.  Here is a sample of the query I am running.
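A sketch of that script follows; the logical file names for Sandbox2 are assumptions, so adjust them to match your database.

[codesyntax lang="tsql"]
/* Segment 1: check file sizes.  Segment 2: shrink.  Segment 3: check again. */
USE Sandbox2;
GO
SELECT name, size / 128 AS SizeMB
FROM sys.database_files;

DBCC SHRINKFILE (Sandbox2, 2);        -- logical data file name assumed
DBCC SHRINKFILE (Sandbox2_log, 2);    -- logical log file name assumed

SELECT name, size / 128 AS SizeMB
FROM sys.database_files;
[/codesyntax]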

That query has three distinct segments.  The first is just to check my file sizes.  The second segment performs my file shrink operations.  And the final segment checks the file sizes again.  Here is what the first and third segments would look like on the Sandbox2 database that I used in the previous article.

[Image: file sizes before and after the shrink operations]

With the evidence that we have the session running and that there was indeed a file size change, let’s now check the event session data and confirm whether or not the session is working on this SQL 2014 server.
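The check looks something like this sketch, which assumes the previous article's session was named DBFileSizeChange and wrote to a ring_buffer target; the payload field name is also an assumption.

[codesyntax lang="tsql"]
/* Shred the ring_buffer of the original (2008-style) session. */
SELECT  xed.event_data.value('@name', 'varchar(128)') AS event_name
        , xed.event_data.value('@timestamp', 'datetime2') AS event_time
        , xed.event_data.value('(data[@name="size"]/value)[1]', 'bigint') AS size_pages
FROM
(
    SELECT CAST(xst.target_data AS XML) AS target_data
    FROM sys.dm_xe_session_targets xst
         INNER JOIN sys.dm_xe_sessions xs
             ON xs.address = xst.event_session_address
    WHERE xs.name = N'DBFileSizeChange'
          AND xst.target_name = N'ring_buffer'
) AS tab
CROSS APPLY tab.target_data.nodes('RingBufferTarget/event') AS xed (event_data);
[/codesyntax]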

The preceding is the query I am using to query the session data.  Running that query will produce the following results.

[Image: empty result set — no session data]

That’s right!  There is no session data despite the event having occurred and despite the session running currently.  We also know that this session works (we demonstrated it on SQL 2008).  So there must be a bug, something is broken.  Right?

What Now?

Since the extended event obviously no longer works, we are stuck with few options.  We could always try resorting to the default trace.  After all, I demonstrated that the default trace is already trapping information about file shrinks.  That was discussed in the prior article and here as well.

So, what if we tried to go and capture all of the same information from the default trace?  We could certainly try that.  Assuming that the default trace is still running on the server, this query could get us pretty close.
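Here is a sketch of that default trace query.  Events 92 through 95 are the data/log file auto grow and auto shrink events, and 116 is the DBCC event; note that 116 will also pick up DBCC commands unrelated to file sizes.

[codesyntax lang="tsql"]
/* Pull file size change activity from the default trace. */
DECLARE @path NVARCHAR(260);

SELECT @path = REVERSE(SUBSTRING(REVERSE(path)
                    , CHARINDEX(CHAR(92), REVERSE(path)), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;

SELECT  te.name AS EventName
        , tg.DatabaseName
        , tg.FileName
        , tg.StartTime
        , tg.IntegerData * 8 / 1024 AS SizeChangeMB
        , tg.TextData        -- only populated for the DBCC event (116)
FROM    sys.fn_trace_gettable(@path, DEFAULT) tg
        INNER JOIN sys.trace_events te
            ON te.trace_event_id = tg.EventClass
WHERE   tg.EventClass IN (92, 93, 94, 95, 116)
ORDER BY tg.StartTime DESC;
[/codesyntax]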

And this does a fairly decent job of getting the info we seek.  Sadly, though, it does not trap all of the necessary information.  Only the DBCC event (event 116) traps the SQL statement that triggered the event to be recorded in the default trace.  But for the most part it is a decent swing at getting the information.  Without the SQL statements tied to the events, I'd rather not use it, because it really just shows me how many times the size changed, what time each event occurred, and the size of the change.

There has got to be some other way of getting this to work in extended events.  A good question to ask is “Why does the extended event no longer work?”

A little digging, and one might eventually find a document that can shed some light on the problem.  Reading this document, we can see why the event no longer works.  It has been deprecated.  What?  After one release, they decide to take away a critical piece of information?  How can that be?

Time to back up those findings with something a little more authoritative such as this.  Looking at this article, we see that indeed the event was deprecated.  But wait a minute, the event was not just deprecated, it was also replaced with a new event.  We are in business so let’s do some querying within event sessions.

Back in Business

We could have probably spared some time by checking the available events in SQL Server with this next query.  Note, however, that the events used in the previous event session still exist; if they did not, the session creation would have failed.  This can be a bit misleading, so it is good to have the confirmation from Microsoft that the events have been deprecated and merged into a single event.
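A sketch of that check (the name filter is a guess at a useful pattern):

[codesyntax lang="tsql"]
/* What file-size-related events does this instance expose? */
SELECT  p.name AS package_name
        , o.name AS event_name
        , o.description
FROM    sys.dm_xe_objects o
        INNER JOIN sys.dm_xe_packages p
            ON p.guid = o.package_guid
WHERE   o.object_type = 'event'
        AND o.name LIKE '%file_size%';
[/codesyntax]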

This produces the desired results with the new event name specified in that Microsoft article.

[Image: the new file size change event available in SQL Server 2012]

Based on this information, a rewrite of the extended event session is possible and necessary.  We can update the extended event session that audits when a database file changes in size.  This will look something like the following session.
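A sketch of the rewritten session is below; the session name, file path, and actions are placeholders consistent with the rest of this article.

[codesyntax lang="tsql"]
/* 2012+ rewrite using the merged database_file_size_change event. */
CREATE EVENT SESSION AuditDBFileSizeChange
ON SERVER
ADD EVENT sqlserver.database_file_size_change
(
    ACTION (sqlserver.session_id, sqlserver.database_name, sqlserver.sql_text)
)
ADD TARGET package0.event_file
(
    SET filename = N'C:\Database\XE\AuditDBFileSizeChange.xel'
),
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);
GO

ALTER EVENT SESSION AuditDBFileSizeChange ON SERVER STATE = START;
GO
[/codesyntax]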

And since I happened to have that session also running at the same time as the shrinkfiles that were run previously in this article, I can go ahead and check to see if anything was captured.  To check the session data, I will use the following query.
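Something like this sketch; the payload field names (size_change_kb, is_automatic, and so on) are assumptions to verify against sys.dm_xe_object_columns for the event.

[codesyntax lang="tsql"]
/* Read the new session's file target and shred the size change details. */
SELECT  src.xd.value('(event/@timestamp)[1]', 'datetime2') AS event_time
        , src.xd.value('(event/data[@name="database_id"]/value)[1]', 'int') AS database_id
        , src.xd.value('(event/data[@name="file_id"]/value)[1]', 'int') AS file_id
        , src.xd.value('(event/data[@name="size_change_kb"]/value)[1]', 'bigint') AS size_change_kb
        , src.xd.value('(event/data[@name="is_automatic"]/value)[1]', 'varchar(5)') AS is_automatic
        , src.xd.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
FROM
(
    SELECT CAST(event_data AS XML) AS xd
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\AuditDBFileSizeChange*.xel', NULL, NULL, NULL)
) AS src;
[/codesyntax]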

This new event for 2012 and beyond captures different data.  This means that I have access to better information about what is happening to my database files with regard to size changes (growths and shrinks).

Conclusion

If you happen to be running on SQL Server 2012 or later, you will need to change any event sessions that were tracking file size changes.  It is a bit of an exercise to make the change and can be frustrating, but it is well worth it.  The improved data that can be captured will help you better control and oversee the environment.

Ghosts – an eXtrasensory Experience

This is the last article in a mini-series diving into the existence of ghosts and how to find them within your database.

So far this has been a fun and rewarding dive into Elysium to see and chat with these entities.  In the previous articles we have unearthed some means of seeing these things manifest themselves.  You can take a look at the previous articles here.

For this article, I had planned to discuss another undocumented method to look into ghost records and their existence, based on what was said on an MSDN blog.  But after a lot of research, testing, and finally reaching out to Paul Randal, I determined that it won't work.  So that idea was flushed all the way to Tartarus.

Let it be made very clear that DBTABLE does not offer a means to see the ghosts.  Paul and I agree that the other article that mentioned DBTABLE really should have been referring to DBCC Page instead.

Despite flushing the idea to Tartarus, it was not a fruitless dive.  It just was meaningless for the purpose of showing ghosts via that DBCC command.  I still gained value from the dive!!

All of that said, the remainder of the plan still applies and it should be fun.

Really, at this point what is there that hasn’t been done about the ghosts?  Well, if you are well tuned to these apparitions, you may have received the urge to explore them with Extended Events – sometimes called XE for short.

As has been done in the past, before we board Charon’s boat to cross the River Styx to Hades to find these ghosts in Elysium, one really needs to run the setup outlined here.

With the framework in place, you are now ready to explore with XE.
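A quick sweep of the available events, sketched below, is how I went looking.

[codesyntax lang="tsql"]
/* Any Extended Events with "ghost" in the name? */
SELECT  p.name AS package_name
        , o.name AS event_name
        , o.description
FROM    sys.dm_xe_objects o
        INNER JOIN sys.dm_xe_packages p
            ON p.guid = o.package_guid
WHERE   o.object_type = 'event'
        AND o.name LIKE '%ghost%';
[/codesyntax]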

Look at that! There are several possible events that could help us track these ghosts.  Or, at the very least, we could get to know how these ghosts are handled deep down in the confines of Hades, er, I mean the database engine.

[Image: ghost-related Extended Events available in the engine]

 

From these possible events, I opted to work with ghost_cleanup and ghost_cleanup_task_process_pages_for_db_packet.  The sessions I defined to trap our ghost tracks are as follows.
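The definitions looked something like the following sketch.  The histogram is bucketed on database_id via an action, which is an assumption on my part here; verify the options against your instance before relying on it.

[codesyntax lang="tsql"]
/* GhostHunt: count ghost_cleanup firings per database in a histogram target. */
CREATE EVENT SESSION GhostHunt
ON SERVER
ADD EVENT sqlserver.ghost_cleanup
(
    ACTION (sqlserver.database_id)
)
ADD TARGET package0.histogram
(
    SET filtering_event_name = N'sqlserver.ghost_cleanup'
        , source = N'sqlserver.database_id'
        , source_type = 1        -- 1 = bucket on an action
)
WITH (STARTUP_STATE = ON);
GO

/* SoulSearch: send the packet-level event (it carries a count field) to the ring_buffer. */
CREATE EVENT SESSION SoulSearch
ON SERVER
ADD EVENT sqlserver.ghost_cleanup_task_process_pages_for_db_packet
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION GhostHunt ON SERVER STATE = START;
ALTER EVENT SESSION SoulSearch ON SERVER STATE = START;
GO
[/codesyntax]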

You can see there are two sessions defined for this trip down the Styx.  Each session aptly named for our journey.  The first (GhostHunt) is defined to trap ghost_cleanup and sends that information to a histogram target.  The second (SoulSearch) is defined to use the other event, and is configured to send to the ring_buffer.  Since the second event has a “count” field defined as a part of the event, it will work fine to just send it to the ring buffer for later evaluation.

Once I have the traps, I mean event sessions, defined, I can now resume the test harness from the delete step as was done in the previous articles.  The following delete is what I will use.
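For illustration only, it is along these lines; the table, column, and date are hypothetical stand-ins for the objects created by the setup script from the earlier articles.

[codesyntax lang="tsql"]
/* Hypothetical stand-in for the test harness delete (666 rows in my run). */
DELETE FROM dbo.Halloween
WHERE HolidayDate = '20141030';   -- hypothetical mis-entered date
[/codesyntax]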

Prior to running that delete though, I checked the Event Session data to confirm a starting baseline.  Prior to the delete, I had the following in my histogram target.

 

[Image: histogram count prior to the delete]

 

After running the delete, and checking my histogram again, I see the following results.

[Image: histogram count after the delete]

 

You can see from this that in addition to the 25 pre-existing ghosts, we had another 672 ghosts (666 of which were from the delete).
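The histogram check itself is a short query along these lines (the XML paths follow the histogram target's Slot layout):

[codesyntax lang="tsql"]
/* Read the GhostHunt histogram target. */
SELECT  hist.slot.value('(value)[1]', 'int') AS database_id
        , hist.slot.value('@count', 'bigint') AS ghost_cleanup_count
FROM
(
    SELECT CAST(xst.target_data AS XML) AS target_data
    FROM sys.dm_xe_session_targets xst
         INNER JOIN sys.dm_xe_sessions xs
             ON xs.address = xst.event_session_address
    WHERE xs.name = N'GhostHunt'
          AND xst.target_name = N'histogram'
) AS tab
CROSS APPLY tab.target_data.nodes('HistogramTarget/Slot') AS hist (slot);
[/codesyntax]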

This is how I was able to investigate the GhostHunt Extended Event Histogram.

But what about looking at the other event session?

Let’s look at how we can go and investigate that session first and then look at some of the output data.
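Here is a sketch of that query against the SoulSearch ring_buffer; the payload field names (database_id, page_id, count) are assumptions to check against sys.dm_xe_object_columns.

[codesyntax lang="tsql"]
/* Shred the SoulSearch ring_buffer target. */
SELECT  tgt.ev.value('@timestamp', 'datetime2') AS event_time
        , tgt.ev.value('(data[@name="database_id"]/value)[1]', 'int') AS database_id
        , tgt.ev.value('(data[@name="page_id"]/value)[1]', 'bigint') AS page_id
        , tgt.ev.value('(data[@name="count"]/value)[1]', 'int') AS ghost_count
FROM
(
    SELECT CAST(xst.target_data AS XML) AS target_data
    FROM sys.dm_xe_session_targets xst
         INNER JOIN sys.dm_xe_sessions xs
             ON xs.address = xst.event_session_address
    WHERE xs.name = N'SoulSearch'
          AND xst.target_name = N'ring_buffer'
) AS tab
CROSS APPLY tab.target_data.nodes('RingBufferTarget/event') AS tgt (ev);
[/codesyntax]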

[Image: results from the SoulSearch session]

 

Cool!  Querying the SoulSearch session has produced some information for various ghosts in the database.  Unlike the histogram session that shows how many ghosts have been cleaned, this session shows us some page ids that could contain some ghosts – in the present.  I can take page 1030111 for instance and examine the page with DBCC PAGE as follows.
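The check is simply DBCC PAGE; the database name below is a placeholder for the sandbox database from the setup, and file_id 1 is assumed.

[codesyntax lang="tsql"]
DBCC TRACEON (3604);                    -- send DBCC PAGE output to the client
DBCC PAGE ('SandboxDB', 1, 1030111, 3); -- database name and file_id are placeholders
[/codesyntax]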

 

 

[Image: DBCC PAGE output showing the ghost record]

Look at that page and result!! We have found yet another poltergeist.

RIP

Once again we have been able to journey to the depths of the Database engine and explore the ghosts that might be there.  This just happens to illustrate a possible means to investigate those ghosts.  That said, I would not necessarily run these types of event sessions on a persistent basis.  I would only run these sessions if there seems to be an issue with the Ghost cleanup or if you have a strong penchant to learn (on a sandbox server).

Some good information can be learned.  It can also give a little insight into how much data is being deleted on a routine basis from your database.  As a stretch, you could even possibly use something like this to get a handle on knowing the data you support.  Just be cautious with the configuration of the XE and understand that there could be a negative impact on a very busy server.  And certainly proceed at your own risk.

Database Ghosts

Recently you may have seen my first article on this topic over at SQL Solutions Group.  If not, here is the link so you can read that article first before proceeding here.  This article is intended as the first of a two-part follow-up to that article.

Now that you have read that other article, we can recap it a little bit.

You will recall that in that article, I discussed the presence of ghosts in your database.  I also discussed that those ghosts are a good thing.  They have some benefits, one of which is a bit of a performance boost for some operations, like the rollback of a delete.

In that article I discussed one method with which you could see these ghost records.  In this article, I would like to share an alternate method to be able to see these ghosts.

In order to explore this alternate means, let’s go ahead and follow the first few steps from the other article to get the setup complete so our second ghost hunting foray can begin.

Now with the setup complete, we should once again confirm that we have appropriate data available for the hunt.  Once again a simple query can suffice to show the data.
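The check script is nothing more than a quick select; the table and column names are hypothetical stand-ins for the ones created by the setup.

[codesyntax lang="tsql"]
/* Confirm the data is present (hypothetical table name). */
SELECT TOP (100) *
FROM dbo.Halloween
ORDER BY HolidayDate;
[/codesyntax]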

Upon execution of the check script, we should see something similar to the following data-set.

[Image: sample data from the setup table]

 

Great, the data is confirmed and we should have a wonderful chance to find some ghosts once again.  In the previous attempt, we needed an additional plasma blaster in the form of a trace flag.  In this attempt we will hunt these ghosts without that tool and see what we can find.  Why attempt it in this way?  Simply because I'd rather not use a trace flag if it is not necessary.  If you recall, that trace flag had a couple of noted effects.  One of those effects was that it turned off the ghost cleanup process.  If I can avoid it, I'd rather leave the containment unit intact.

Now, due to that clerical error of putting Halloween on the wrong date, we need to follow the prescribed policy to delete records prior to creating the proper records.
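The delete, sketched with the same hypothetical table and date stand-ins used above, looks like this; the key detail is that the transaction is left open.

[codesyntax lang="tsql"]
BEGIN TRANSACTION;

DELETE FROM dbo.Halloween
WHERE HolidayDate = '20141030';   -- the hypothetical clerical-error date

-- no COMMIT or ROLLBACK yet; we go ghost hunting first
[/codesyntax]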

Notice that we issued the delete in a transaction this time around.  We are going to leave that transaction open for a bit while we hunt those ghosts.

The first step is to verify that some ghosts might be present.  To do that we should run a query such as the following.
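A sketch of that check follows; ghost counts only get populated with the SAMPLED or DETAILED modes, and the table name remains our hypothetical stand-in.

[codesyntax lang="tsql"]
/* Ghost record counts for the table's indexes. */
SELECT  OBJECT_NAME(ips.object_id) AS TableName
        , ips.index_id
        , ips.ghost_record_count
        , ips.version_ghost_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Halloween'), NULL, NULL, 'DETAILED') ips;
[/codesyntax]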

From that query we should see something like the following.

[Image: ghost record counts from sys.dm_db_index_physical_stats]

Once again we are on the right track.  We can further confirm the existence of these ghosts through a little more investigation and monitoring.  Let’s try the trick with the dblog function again.
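Something along these lines; fn_dblog is undocumented, so treat the column picks and filters as a sketch.

[codesyntax lang="tsql"]
/* Log records for rows that were deleted and marked as ghosts. */
SELECT  [Current LSN]
        , Operation
        , Context
        , AllocUnitName
        , [Page ID]          -- file:page in hex, e.g. 0001:00000153
        , [Slot ID]
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_ROWS'
      AND Context = 'LCX_MARK_AS_GHOST';
[/codesyntax]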

With that query, we should see something like the following result set.

[Image: fn_dblog output showing ghost records]

 

Very cool.  We once again can see that these phantasms are in the database.  We have enough information that we can proceed on to the next stage.  We can pass the PageID into DBCC PAGE in order to investigate the ghosts on the page.  If we use the PageID that is circled with the green in the preceding result set, we can get a better feel for these specters.
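The page portion of that Page ID is in hex, so convert it to decimal before handing it to DBCC PAGE; the database name and numbers below are placeholders.

[codesyntax lang="tsql"]
DBCC TRACEON (3604);                  -- send DBCC PAGE output to the client
DBCC PAGE ('SandboxDB', 1, 339, 3);   -- e.g. Page ID 0001:00000153 -> file 1, page 0x153 = 339
[/codesyntax]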

And we may see results such as the following.

[Image: DBCC PAGE output showing the ghost records]

Recall that the log says this page has ghost records on it.  When we check the page with DBCC PAGE we can definitely see that there are ghosts on the page.  This is very cool.  Now, had we tried to check for ghost records on the PFS page, we would not have been able to see the ghost count the way we could when the trace flag was enabled.

Once again we have been able to display the existence of ghosts in the database.  In order to get these ghosts to move on from the database to their afterlife, we merely need to commit the transaction or roll back the transaction.

Stay tuned for the next article in this mini-series about ghosts.  Who knows, we may even have a bonus Halloween article since this is Halloween month.

Part III of the series can now be found here.

Effects of sp_rename on Stored Procedures

Comments: 1 Comment
Published on: August 14, 2014

There comes a time when mistakes are made.  Sometimes those mistakes can be as annoying as a spelling mistake during the creation of a stored procedure.  When a mistake such as that happens, we are given a few choices: rename the stored procedure, drop and recreate it, or simply leave the mistake alone.

When choosing to rename the stored procedure, one may quickly reach for the system stored procedure that is readily used for renaming various objects.  That procedure was provided by Microsoft after all and is named sp_rename.  Reaching for that tool, however, might be a mistake.  Here is what is documented about the use of sp_rename to rename a stored procedure.  That documentation can be read at this link on MSDN.

We recommend you do not use this statement to rename stored procedures, triggers, user-defined functions, or views; instead, drop the object and re-create it with the new name.

And later in the same documentation, one can read the following.

Renaming a stored procedure, function, view, or trigger will not change the name of the corresponding object name in the definition column of the sys.sql_modules catalog view. Therefore, we recommend that sp_rename not be used to rename these object types. Instead, drop and re-create the object with its new name.

Now, a chief complaint against dropping and recreating the stored procedure, as recommended, is that the process can cause permissions issues.  I am less concerned about the permissions issues and see that as more of a nuisance that is easily overcome thanks to good documentation and a few quick script executions to restore the permissions.  Despite that, I think we have a means to address the rename and permissions issue, which will be shared later in this article.

Using sp_rename

When using sp_rename, it would be good to understand what happens and what one might expect to see.  Let’s use the following script to create a stored procedure to step through an exercise to rename a stored procedure and evaluate the results.
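Here is a sketch of such a script; the procedure names are placeholders rather than the ones from my original demo.

[codesyntax lang="tsql"]
/* Create a proc with a typo in the name, rename it, then compare
   the object name to the definition held in sys.sql_modules. */
CREATE PROCEDURE dbo.SpellingMistaek
AS
    SELECT TOP (3) name, object_id
    FROM sys.objects;
GO

EXEC sp_rename 'dbo.SpellingMistaek', 'SpellingMistake';
GO

SELECT  o.name AS ObjectName
        , sm.definition
FROM    sys.objects o
        INNER JOIN sys.sql_modules sm
            ON sm.object_id = o.object_id
WHERE   o.name = N'SpellingMistake';
GO
[/codesyntax]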

When I execute that series of batches, I will get an output that matches the following.

[Image: results of the rename exercise]

 

When looking at the results we can see that the use of sp_rename does indeed change the name of the stored procedure as it is represented via sys.objects and metadata.  We can also see that the definition of the stored procedure does not change as it is held within the metadata.

If I choose to check the definition through OBJECT_DEFINITION() instead of sys.sql_modules, you will be pleased to know the result is no different: sys.sql_modules calls OBJECT_DEFINITION() to produce the definition that is seen in the catalog view.

Well, that does pose a potential problem.  We see that the object definition is unchanged and may report the name as being different from what the object name truly is.  What happens if I execute the stored procedure?  Better yet, if I execute the stored procedure and then capture the SQL text associated with that plan, what would I see?
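Let's execute it (using the placeholder name from the sketch above) and find out.

[codesyntax lang="tsql"]
EXEC dbo.SpellingMistake;
GO
[/codesyntax]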

Yes!  The renamed stored procedure does indeed execute properly.  I even get three results back for that execution.  Better yet, I get an execution plan from which I can pull a plan hash in order to evaluate the SQL text associated with the plan.  In case you are wondering, the execution plan does contain the statement text of the procedure.  But for this case, I want to look at the entire definition associated with the plan rather than the text stored in the plan.  In this particular scenario, I only see the body of the procedure and not the create statement that is held in metadata.

[Image: execution plan for the renamed procedure]

For this particular execution and plan, I can see a plan_hash of 0xE701AFB2D865FA71.  I can now take this and provide it to the following query to find the full proc definition from metadata.
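The lookup is a sketch along these lines:

[codesyntax lang="tsql"]
/* Find the sql text tied to the cached plan with that query_plan_hash. */
SELECT  qs.query_plan_hash
        , OBJECT_NAME(st.objectid, st.dbid) AS ObjectName
        , st.text AS ModuleText
FROM    sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE   qs.query_plan_hash = 0xE701AFB2D865FA71;
[/codesyntax]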

And after executing that query, I can see results similar to the following.

[Image: the procedure definition returned for that plan hash]

 

Now is that because in some way the query that was just run was also running OBJECT_DEFINITION()?  Let’s look at the execution plan for both OBJECT_DEFINITION() and the query that was just run.

[Image: execution plan for OBJECT_DEFINITION()]

 

Looking at the XML for that particular plan, we see the XML supporting that plan: there is no further function callout and the plan is extremely simple.

Now looking at the plan for the query involving the query_plan_hash we will see the following.

[Image: execution plan showing the call to FNGETSQL]

 

Looking at this graphical plan, we can see that we are calling FNGETSQL.  Looking at the XML for this plan, we can verify that FNGETSQL is the only function call used to retrieve the full SQL text associated with this plan execution.  FNGETSQL is an internal SQL Server function used to build internal tables that might be used by various DMOs.  You can read just a bit more about that here.

What now?

After all of that, it really looks pessimistic for sp_rename.  The procedure renames the object but does not properly handle the metadata and stored procedure definition.  So does that mean we are stuck with drop and create as the Microsoft documentation suggests?

If you have access to the full procedure definition you could issue an alter statement.  In the little example that I have been using, I could issue the following statement.
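Using the placeholder names from the earlier sketch, it would be along these lines.

[codesyntax lang="tsql"]
/* Re-issue the definition as an ALTER with the correct name
   (placeholder names from the earlier sketch). */
ALTER PROCEDURE dbo.SpellingMistake
AS
    SELECT TOP (3) name, object_id
    FROM sys.objects;
GO
[/codesyntax]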

After executing that script, I could check sys.sql_modules once again and find a more desirable result.

And my results…

[Image: sys.sql_modules definition finally matching the object name]

 

If you don’t have the text to create the proc, you could use SSMS to script it out for you.  It is as simple as right-clicking the proc in question, selecting Modify, and then executing the script.  It should script out with the correct proc name (the beauty of SMO), and then you can get the metadata all up to snuff in your database.

Of course, if you prefer, you could just drop and recreate the procedure and then reapply all of the pertinent permissions.  That is pretty straightforward too.

SSIS: Value does not Fall Within the Expected Range

Categories: News, Professional, SSC
Comments: No Comments
Published on: July 17, 2014

Every now and again I find myself working with SSIS for one reason or another.  Every now and again I find myself fighting with SSIS because it provides less than useful error messages.  This is one of those error messages that can be a pain until you understand what can be causing it.  Albeit, knowing the cause doesn't make the error message any more useful or spare you the pain it can (and will) cause you.

[Image: Execute SQL Task in the control flow]

The error message "Value does not fall within the expected range" is tied to the Execute SQL Task that can be placed in the control flow.

Let’s assume we have such a task in our control flow as demonstrated by the attached image.  We’ll call the Execute SQL Task “Parse Client Code.”

Inside of this task, we will probably have something like what is seen in the next image.

[Image: parameter mapping for the Execute SQL Task]

Here we have attached parameters to the task and even assigned those parameters to “variable” names that we might use within the query.  In this case, we are just trying to parse a code from a filename that can be used in downstream processing.  The code might look like the following.

[codesyntax lang=”tsql”]
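-- Hypothetical sketch of the problem query: the parameter is referenced by the
-- mapped "variable" name, which the Execute SQL Task does not understand.
-- The filename format (ClientCode_YYYYMMDD.csv) is an assumption for illustration.
SELECT SUBSTRING(@FileName, 1, CHARINDEX('_', @FileName) - 1) AS ClientCode;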

[/codesyntax]

If I run that task at this point, I will get an error.  The error could be for any number of reasons based on the setup that I just showed.  The most common is that the Parameter Name should not really be a name at all, but rather the ordinal position at which the parameter is used in the script.  The ordinal position is 0-based.

The second issue is the data type that has been selected in error.  The value being passed in is a name, not a GUID.  This means I should change the data type to the varchar type in the drop-down that is available on the parameter screen under Data Type.

The next issue is the use of the variable name in the script itself.  One should use a ? instead of variable names.  So, this script should be fixed to look like the following.

[codesyntax lang=”tsql”]
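-- Same hypothetical parse, now referencing the parameter with ? (ordinal 0)
-- and assigning it to a local variable for readability.
DECLARE @FileName VARCHAR(255);
SET @FileName = ?;

SELECT SUBSTRING(@FileName, 1, CHARINDEX('_', @FileName) - 1) AS ClientCode;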

[/codesyntax]

And the parameter screen should be made to look like the following.

[Image: corrected parameter mapping]

These quick fixes can eliminate or even prevent what might possibly be a headache when dealing with SSIS.

Now, what if you need to have more than one parameter for your query?  No problem.  The same principles apply.  Just map your parameters according to proper data type and to the ordinal position that the parameter needs to be used within the query and you should be all set.

Is your Team Willing to Take Control?

[Image: T-SQL Tuesday logo]

The calendar tells us that once again we have reached the second Tuesday of the month.  In the SQL community, this means a little party, as many of you may already know.  This is the T-SQL Tuesday party.

This month represents the 56th installment of this party.  This institution was implemented by Adam Machanic (b|t) and is hosted by Dev Nambi (b|t) this month.

The topic chosen for the month is all about the art of being able to assume.

In many circles, to assume something carries a negative connotation.  From time to time, it is less drastic, such as when you have a bit of evidence to support the assumption.  In that case, it would be closer to a presumption.  I will not be discussing either of those connotations.

What is this Art?

Before getting into this art that was mentioned, I want to share a little background story.

Let’s try to paint a picture of a common theme I have seen in environment after environment.  There are eight or nine different teams.  Among these teams you will find multiple teams to support different data environments.  These data environments could include a warehouse team, an Oracle team, and a SQL team.

As a member of the SQL team, you have the back-end databases that support the most critical application for your employer/client.  As a member of the SQL team, one of your responsibilities is to ingest data from the warehouse or from the Oracle environment.

Since this is a well oiled machine, you have standards defined for the ingestion, source data, and the destination.  Right here we could throw out a presumption (it is well founded) that the standards will be followed.

Another element to consider is the directive from management that the data being ingested is not to be altered by the SQL team to make the data conform to standards.  That responsibility lies squarely on the shoulder of the team providing the data.  Should bad data be provided, it should be sent back to the team providing it.

Following this mandate, you find that bad data is sent to the SQL team on a regular basis and you report it back to have the data, process, or both fixed.  The next time the data comes it appears clean.  Problem solved, right?  Then it happens again, and again, and yet again.

Now it is up to you.  Do you continue to just report that the data could not be imported yet again due to bad data?  Or do you now assume the responsibility and change your ingestion process to handle the most common data mistakes that you have seen?

I am in favor of assuming the responsibility.  Take the opportunity to make the ingestion process more robust.  Take the opportunity to add better error handling.  Take the opportunity to continue to report back that there was bad data.  All of these things can be done in most cases to make the process more seamless and to have it perform better.

By assuming the responsibility to make the process more robust and to add better reporting/ logging to your process, you can only help the other teams to make their process better too.

While many may condemn assumptions, I say proceed with your assumptions.  Assume more responsibility.  Assume better processes by making them better yourself.  If it means rocking the boat, go ahead – these are good assumptions.

If you don’t, you are applying the wrong form of assumption.  By not assuming the responsibility, you are assuming that somebody else will or that the process is good enough.  That is bad in the long run.  That would be the only burning “elephant in the room”.

[Image: elephants]

From here, it is up to you.  How are you going to assume in your environment?
