Table Space updated again

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: March 28, 2014

Today we have another update for an age-old script on this site.  You can find the last update to the script here.

This time it is a bit of a bug fix.  The details of that bug are in the header notes of the script.

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
 
/*
Purpose:
To Evaluate table sizes combined with index space consumption to determine higher cost tables in 
terms of storage, resources and maintenance needs.
 
ModifiedDate	ModifiedBy	Description
2013-11-21		JB			Tables without Indexes had a Null Value in the Output
							Fixed the output to sum properly for the NULLS in absence of an index
 
*/
BEGIN TRAN
 
DECLARE @dbsize DECIMAL(19,2)
        ,@logsize DECIMAL(19,2)
 
SET NOCOUNT ON
 
/*
**  Summary data.
*/
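/* database file sizes are reported in 8KB pages; dividing by 128.0 converts pages to MB */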
BEGIN
        SELECT @dbsize = SUM(CONVERT(DECIMAL(19,2),CASE WHEN type = 0 THEN SIZE ELSE 0 END)) / 128.0
                , @logsize = SUM(CONVERT(DECIMAL(19,2),CASE WHEN type = 1 THEN SIZE ELSE 0 END)) / 128.0
                FROM sys.database_files
 
END;
        WITH FirstPass AS (
                SELECT OBJECT_ID,
                        ReservedPage = CONVERT(DECIMAL(19,2),SUM(reserved_page_count)) / 128.0,
                        UsedPage = CONVERT(DECIMAL(19,2),SUM(used_page_count)) / 128.0,
                        PageCnt = SUM(
                        CONVERT(DECIMAL(19,2),CASE
                                WHEN (index_id < 2)
                                        THEN (used_page_count)
                                ELSE lob_used_page_count + row_overflow_used_page_count
                                END
                        )) * 8/1024,
                        iPageCnt = SUM(
                        CONVERT(DECIMAL(19,2),CASE
                                WHEN (index_id > 1)
                                        THEN (used_page_count)
                                ELSE 0
                                END
                        )) * 8/1024,
                        RowCnt = SUM(
                        CASE
                                WHEN (index_id < 2)
                                        THEN row_count
                                ELSE 0
                        END
                        )
                FROM sys.dm_db_partition_stats
                WHERE 1 = 1
					--AND OBJECTPROPERTY(object_id,'IsMSShipped') = 0
					AND index_id < 2
                GROUP BY OBJECT_ID
        ),IndexPass AS (
                SELECT OBJECT_ID,
                        iReservedPage = CONVERT(DECIMAL(19,2),SUM(reserved_page_count)) / 128.0
                        ,iUsedPage = CONVERT(DECIMAL(19,2),SUM(used_page_count)) / 128.0
                        ,iPageCnt = SUM(
                        CONVERT(DECIMAL(19,2),CASE
                                WHEN (index_id > 1)
                                        THEN (used_page_count)
                                ELSE 0
                                END
                        )) / 128.0
                        ,RowCnt = SUM(
                        CASE
                                WHEN (index_id < 2)
                                        THEN row_count
                                ELSE 0
                        END
                        )
                FROM sys.dm_db_partition_stats
                WHERE 1 = 1 
					--AND OBJECTPROPERTY(object_id,'IsMSShipped') = 0
					AND index_id > 1
                GROUP BY OBJECT_ID
        ),InternalTables AS (
                SELECT ps.OBJECT_ID,
                        ReservedPage = CONVERT(DECIMAL(19,2),SUM(reserved_page_count)) / 128.0
                        ,UsedPage = CONVERT(DECIMAL(19,2),SUM(used_page_count)) / 128.0
                FROM sys.dm_db_partition_stats  ps
                        INNER Join sys.internal_tables it
                                ON it.OBJECT_ID = ps.OBJECT_ID
                                And it.internal_type IN (202,204,211,212,213,214,215,216)
                WHERE it.parent_id = ps.OBJECT_ID
					--And OBJECTPROPERTY(ps.object_id,'IsMSShipped') = 0
                GROUP BY ps.OBJECT_ID
        ),Summary AS (
                SELECT
                        OBJECT_NAME (F.OBJECT_ID) AS ObjName 
                        ,SCHEMA_NAME(o.schema_id) AS SchemaName
                        ,CASE WHEN OBJECTPROPERTY(F.OBJECT_ID,'IsMSShipped') = 1 
							THEN 'YES' 
							ELSE 'NO' 
							END AS IsMsShipped
                        ,NumRows = MAX(F.RowCnt)
                        ,ReservedPageMB = SUM(ISNULL(F.ReservedPage,0) + ISNULL(i.ReservedPage,0))
                        ,DataSizeMB = SUM(F.PageCnt)
                        --,IndexSizeMB = SUM(CASE WHEN (F.UsedPage + ISNULL(i.UsedPage,0)) > F.PageCnt
                        --                                THEN ((F.UsedPage + ISNULL(i.UsedPage,0)) - F.PageCnt) ELSE 0 END) ,-- Equivalent of max_record_size from sys.dm_db_index_physical_stats
						,IndexSizeMB = SUM(ISNULL(ip.iPageCnt,0))
                        ,UnusedSpace = SUM(CASE WHEN (F.ReservedPage + ISNULL(i.ReservedPage,0)) > (F.UsedPage + ISNULL(i.UsedPage,0))
                                THEN ((F.ReservedPage + ISNULL(i.ReservedPage,0)) - (F.UsedPage + ISNULL(i.UsedPage,0))) ELSE 0 END)
                                + (SUM(ISNULL(ip.iReservedPage,0)) - SUM(ISNULL(ip.iUsedPage,0)))
                        ,IndexReservedMB = SUM(ISNULL(ip.iReservedPage,0))
                        ,dbsizeMB = @dbsize
                        ,LogSizeMB = @logsize
                FROM FirstPass F
					INNER JOIN sys.objects o
						ON F.OBJECT_ID = o.OBJECT_ID
					LEFT Outer Join InternalTables i
                        ON i.OBJECT_ID = F.OBJECT_ID
                    LEFT OUTER JOIN IndexPass ip
						ON F.OBJECT_ID = ip.OBJECT_ID
                GROUP BY F.OBJECT_ID,o.schema_id
        ),TotalUnused AS (
				SELECT SUM(ISNULL(UnusedSpace,0)) AS UnusedSpace
						,SUM(ISNULL(reservedpageMB,0))+SUM(ISNULL(IndexReservedMB,0)) AS Usedr
					FROM Summary
		)
        SELECT ObjName,SchemaName,IsMsShipped,NumRows, ReservedPageMB, ISNULL(DataSizeMB,0) AS DataSizeMB, ISNULL(IndexSizeMB,0) AS IndexSizeMB
						,ISNULL(S.UnusedSpace,0) AS UnusedSpace, dbsizeMB, LogSizeMB
						,TU.UnusedSpace AS TotalTableFreeSpace 
                        ,dbsizeMB - TU.Usedr AS DataFileFreeSpace  
                        /* within 1.5 GB on a 1.76 TB database: roughly 0.085% variance, or about 99.915% accuracy */
                        ,PercentofDBPhysFile = ((ISNULL(IndexSizeMB,0) + ISNULL(DataSizeMB,0)) / @dbsize) * 100
                        ,PercentofDBUsedSpace = ((ISNULL(IndexSizeMB,0) + ISNULL(DataSizeMB,0)) / (@dbsize - TU.UnusedSpace)) * 100
 
        FROM Summary S
			CROSS APPLY TotalUnused TU
        ORDER BY PercentofDBUsedSpace DESC
 
ROLLBACK

Table Hierarchy updated

Recently a need resurfaced to explore the foreign key tree (hierarchy/genealogy) as it related to a specific table within a database.  As I pulled out the script from the repository, I realized there was some unfinished work to be done.  But there was also some polish that needed to be added.  This is an update to the most recent posting of that script.  You can see several revisions in the series at this link or the group here.

Some of the changes involve formatting and labeling.  I added a new column called “Direction” to help the user understand the relationship of the key to the table in question.  I also changed the FKGenealogy column (formerly called SortCol) to reflect the source table more accurately when the key comes from an ancestor rather than a descendant.  The Level of the FK was also modified to help show how far away the ancestor is from the origin table.

A final adjustment also comes from the Genealogy attribute.  Ancestors were all starting at the wrong point in the lineage.  I adjusted that so the lineage is shown from the point in the tree where the ancestor is actually related, rather than as a root hanging directly off the origin table.

All in all, this script should make more sense to the user than the previous versions.

DECLARE    @StartParentTable    VARCHAR(256);
 
SELECT @StartParentTable = 'tblCampaignMain';
 
WITH Posterity (ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, FKGenealogy) AS (
  SELECT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
          ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
          ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
          ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
          ,0 AS FKLevel
          ,CAST('.'+CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX))+'.' AS VARCHAR(MAX))
      FROM sys.foreign_key_columns sfkc
          INNER Join sys.foreign_keys sfk
              ON sfkc.constraint_object_id = sfk.OBJECT_ID
      WHERE OBJECT_NAME(sfk.referenced_object_id) = @StartParentTable
  UNION All
  SELECT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
          ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
          ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
          ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
          ,FKLevel + 1
          ,STUFF(('.' + F.FKGenealogy + CAST(CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX)) + '.' AS VARCHAR(MAX))
                      ),1,1,'')
      FROM sys.foreign_keys sfk
          INNER Join Posterity F
              ON OBJECT_NAME(sfk.referenced_object_id) = F.ChildTable
              And F.FKGenealogy NOT LIKE '%'+CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX))+'%'
          INNER Join sys.foreign_key_columns sfkc
              ON sfkc.constraint_object_id = sfk.OBJECT_ID
      WHERE OBJECT_NAME(sfk.referenced_object_id) <> @StartParentTable
          And sfk.referenced_object_id <> sfk.parent_object_id
), Ancestry (ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, FKGenealogy) AS (
  SELECT DISTINCT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
      ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
      ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
      ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
      ,(FKLevel * -1)-1 AS FKLevel
      ,F.FKGenealogy + CAST(CAST(OBJECT_NAME(sfk.Parent_object_id) AS VARCHAR(MAX))+'.' AS VARCHAR(MAX))
  FROM Posterity F
      INNER Join sys.foreign_keys sfk
          ON F.ChildTable = OBJECT_NAME(sfk.parent_object_id)
          And F.ParentTable <> OBJECT_NAME(sfk.referenced_object_id)
      INNER Join sys.foreign_key_columns sfkc
          ON sfkc.constraint_object_id = sfk.OBJECT_ID
  UNION All
  SELECT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
      ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
      ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
      ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
      ,F.FKLevel -1
      ,STUFF(('.' + FKGenealogy + CAST(CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX)) + '.' AS VARCHAR(MAX))
                  ),1,1,'')
  FROM Ancestry F
      INNER Join sys.foreign_keys sfk
          ON F.ParentTable = OBJECT_NAME(sfk.parent_object_id)
      INNER Join sys.foreign_key_columns sfkc
          ON sfkc.constraint_object_id = sfk.OBJECT_ID
  WHERE F.ParentTable not in (SELECT ParentTable FROM Posterity)
      And sfk.referenced_object_id <> sfk.parent_object_id
      And F.ChildTable not IN (SELECT ChildTable FROM Posterity)
)
 
SELECT ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, FKGenealogy
	,CASE 
		WHEN FKLevel > 0 THEN 'Descendant' 
		WHEN FKLevel = 0 THEN 'Root'
		END AS Direction
FROM Posterity
UNION All
SELECT ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, FKGenealogy
	,'Ancestor' AS Direction
FROM Ancestry
  ORDER BY FKGenealogy ASC, Direction DESC
  OPTION (maxrecursion 500);

T-SQL Tuesday #45 Follow the Yellow Brick Road


We have the distinct pleasure once again to have a party full of SQL enthusiasts.  The best part is that few if any of us are even in the same room or building for this party.

The party is known as TSQL Tuesday and this month is hosted by Mickey Stuewe (Blog | Twitter).  This month Mickey has come up with the notion that for some reason or another you may want to have something audited, tracked, traced or otherwise followed.  To quote Mickey from the invitation (which you can read here): “So, it is time to follow Dorothy and Toto down the yellow brick road and to share your experience with auditing data.”

Mickey did provide some viable options for auditing.  And I didn’t see any restrictions on what the subject of the auditing could be.  So I have a slightly different spin on the topic than the direction Mickey appeared to be headed with her examples and description in the invite.

That said, I will be discussing a topic along the vein of “finding out who did something stupid last week.”  And the TSQL Tuesday topic once again works out well for me this month because I have a backlog of blog topics on this subject that I have been meaning to write.  So let’s get at least one of them knocked out of the way with this entry into the blog party of the month.

Behold the Great and Powerful Audit

Have you ever come into work and been blasted first thing with “everything is slow!”?

Then you start to dig, see your email reports, and begin to wonder why 50% of your clustered indexes are jacked up to 99% fragmentation.  You know that sort of thing never happens suddenly in your environment because you have your finger on the pulse of all things DB related – or so you thought.

You check your index defrag log and see that it ran perfectly fine last night and you even see the report on fragmentation from after the defrag process was completed.  You begin to really scratch your head.  You know that you have no process that does any bulk loads or ETL batches after the index defrag process.  Nothing is really making any sense.  Then a lightbulb goes off and you check your report on database sizes.  You see that the database is now several hundred GB smaller and there is no free space left in the data files.  Free space that you had left in the data file intentionally.

Now you are steamed and about to blow your top like a geyser.  Who has done something so stupid as to mess around with your data files?  We senior DBAs are somewhat territorial about the databases we support, and we don’t like people making changes they haven’t discussed with us.  So it is perfectly understandable if you are a bit steamed by an event such as this.

The question now is: What are you going to do about it?  How are you going to find who jacked up your data files?

The answer is in the default trace.

SELECT
	T.DatabaseName,
	T.TextData,
	T.HostName,
	T.ApplicationName,
	T.LoginName,
	T.StartTime,
    CASE T.IsSystem
		WHEN 1 THEN 'YES'
		ELSE 'NO'
		END AS IsSystem
    /* Check IsSystem to help ensure Autoshrink didn't cause the shrink 
       MSDN indicates 1=system and 0=user
       My testing shows Null = user and 0 did not occur */
FROM sys.FN_TRACE_GETTABLE(CONVERT(VARCHAR(150), 
	( SELECT TOP 1 f.[VALUE]                                                 
		  FROM    sys.FN_TRACE_GETINFO(NULL) f
		  WHERE   f.property = 2
		)), DEFAULT) T
WHERE T.TextData LIKE '%SHRINKFILE%'
	AND T.EventClass = 116
	/* Event Class 116 occurs when a DBCC command is issued */
	;

The optimal solution may not be to have to rely on the default trace to capture an event that occurred in the past, but rather to proactively monitor it through other means (perhaps an extended event for example).  But in the event you need to capture the information because you were caught off-guard, you may be able to trap the information from the default trace.
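The proactive route might look something like the following minimal extended events sketch, which captures file size changes (shrinks included) as they happen.  It assumes SQL Server 2012 or later, where the database_file_size_change event is available; the session name and target file name are placeholders.

CREATE EVENT SESSION [TrackFileSizeChanges] ON SERVER
ADD EVENT sqlserver.database_file_size_change
	(ACTION (sqlserver.client_hostname
			,sqlserver.server_principal_name
			,sqlserver.sql_text))
ADD TARGET package0.event_file (SET filename = N'TrackFileSizeChanges.xel')
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [TrackFileSizeChanges] ON SERVER STATE = START;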

When I run the default trace query above, I get results showing each SHRINKFILE event along with who ran it, when, and from which host (yes, I intentionally shrank my msdb database for this example).


Now when you have a sysadmin, a junior DBA, or that manager with too many permissions doing things they shouldn’t be doing, you have some evidence to show them.  You can also go to the person that “did something really stupid” and offer them a chance to learn why it might have been a bad thing – but do it nicely.  You don’t want to be the one that looks like a jerk.

Lost that Single-User Connection?

You have changed your database to single_user mode to do a task.  As you go about your business, you lose track of which connection was the single_user connection.  You start closing connections and mistakenly close the session that was your single_user connection.  Now you are unable to start a new single_user session.  It would seem that somebody or something has taken your connection.

Today, I am going to discuss some things you may do to get around this problem.

The first thing that may come to mind when you encounter this is “Oh crap!”  Well, no need to get too terribly worried (unless you really hosed something up and you are trying to fix it real quick before the boss notices).

The next thing you may think of trying is how to circumvent the single_user mode.  And during that thought process you may be thinking that single_user does not really mean single_user, so you might try something like starting a DAC session.  Well, let’s go through that and see what would happen in a DAC session if your single_user session is stolen.

I am going to skip the part about setting a database into single_user mode because we are presuming that the condition already exists.  To start a DAC session, I am going to point you to a previous article I did on the topic – here.
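As a quick refresher, a DAC session can also be opened from the command line via the -A switch of sqlcmd (a minimal sketch; the server name is a placeholder):

sqlcmd -S YourServerName -d master -A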

To confirm I am using a DAC session, I am going to issue the following query.  This will verify that I am in the right session and that the DAC is in use.

SELECT s.group_id,e.name, CAST(g.name AS NVARCHAR(20)) AS ResourceGroup, s.session_id 
		, s.login_time, CAST(s.HOST_NAME AS NVARCHAR(20)) AS HostName 
		, CAST(s.program_name AS NVARCHAR(20)) AS ProgramName 
		,s.original_login_name 
		,s.is_user_process 
		,s.STATUS 
	FROM sys.dm_exec_sessions s 
		INNER JOIN sys.dm_resource_governor_workload_groups g 
			ON g.group_id = s.group_id 
		INNER JOIN sys.dm_exec_connections ec 
			ON s.session_id = ec.session_id 
		LEFT OUTER JOIN sys.endpoints e  
			ON ec.[endpoint_id]=e.[endpoint_id] 
WHERE s.session_id = @@SPID 
ORDER BY g.NAME; 
GO

In my case, this results in an endpoint with the name “Dedicated Admin Connection” and a spid of 84.  Good, I am in the correct session for the rest of this test.  Next, I will issue a USE database statement.  I have created a test database called ClinicDB, so I will issue the following.

USE ClinicDB;
GO

I get the following result.

Msg 924, Level 14, State 1, Line 1
Database 'ClinicDB' is already open and can only have one user at a time.

So, that blows that idea right out of the water.  It shouldn’t really have been a consideration in the first place because single_user really means just that – single_user.

Now What?

Well, what do you think we could do now to circumvent this little problem and get that single_user session back?

That requires a little investigative work.  It is time to find out who has taken the single_user session and politely ask them to give it up.  To make that task a little easier, we can modify the previous query to find out who holds that single_user session (thus limiting how many people we have to ask).  I have modified the query to use sys.sysprocesses so I could limit the results to the ClinicDB.  This is a limitation of SQL 2008 R2 and older versions: getting the database reliably means using sysprocesses.  Despite the database_id being available in other related DMVs, it’s just not that easy.  One would think you could use sys.dm_exec_requests, but if a request is not active, no entry will exist for that session.  This problem is fixed in SQL 2012, since the sys.dm_exec_sessions DMV now has the database_id field.  Enough of that birdwalk and on to the query.

SELECT s.group_id,e.name, CAST(g.name AS NVARCHAR(20)) AS ResourceGroup, s.session_id ,DB_NAME(r.dbid) AS DBName
		, s.login_time, CAST(s.HOST_NAME AS NVARCHAR(20)) AS HostName
		, CAST(s.program_name AS NVARCHAR(20)) AS ProgramName 
		,s.original_login_name 
		,s.is_user_process 
		,s.STATUS 
	FROM sys.dm_exec_sessions s 
		INNER JOIN sys.dm_resource_governor_workload_groups g 
			ON g.group_id = s.group_id 
		INNER JOIN sys.dm_exec_connections ec 
			ON s.session_id = ec.session_id 
		INNER JOIN sys.sysprocesses r
			ON r.spid = s.session_id
		LEFT OUTER JOIN sys.endpoints e  
			ON ec.[endpoint_id]=e.[endpoint_id]
	WHERE DB_NAME(r.dbid) = 'ClinicDB'
ORDER BY g.NAME; 
GO

I chose not to write an entirely new query, simply to demonstrate that it was possible with a very small tweak to what was already used.
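For completeness, on SQL 2012 and later the sysprocesses join could be dropped entirely.  A minimal sketch using the database_id column added to sys.dm_exec_sessions:

SELECT s.session_id, s.login_time
		,CAST(s.host_name AS NVARCHAR(20)) AS HostName
		,CAST(s.program_name AS NVARCHAR(20)) AS ProgramName
		,s.original_login_name
		,s.status
	FROM sys.dm_exec_sessions s
	WHERE DB_NAME(s.database_id) = 'ClinicDB';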

Now that I know (in my case I can see that I have a session with ID = 80 connected to that single_user database), I can walk over to the person (knowing his/her login id and computer name) and politely ask them to disconnect.

In the end, this is really an easy thing to resolve.  Sure it may take some people skills – but that doesn’t make the task too terribly difficult.  Next time this happens to you, just remember you can run a quick query to find who has sniped that single_user session.

Audit Configuration Changes

Do you know the last time a Server Property was changed on your instances of SQL Server?

Are you wondering when the setting for max degree of parallelism was changed?

Do you know who changed the setting?

In some environments there are a lot of hands trying to help mix the pot.  Sometimes many hands make light work.  That is not always the case, though.  More hands in the mix can be a troublesome thing, especially when things start changing, the finger-pointing starts, and nobody really knows for sure who made the change or when it was made.

I know, that is why there is this concept called change management.  If you make a change to a setting, it should be documented, approved, and communicated.  Unfortunately, the process does not always dictate the workflow, and it may be circumvented.  This is why auditing is a tool that should be in place – like it or not.

Auditing can be a very good tool.  Like many things, though, too much of a good thing is not a good thing.  Too much auditing can be more of a hindrance than a help.  You don’t want to cause interference by auditing too many things.  You also don’t want so much data that the facts get blurred.  I hope that this script strikes a balance, with just the right amount of data being captured to be of use.

The basic principle of auditing server configs is to find what values changed, when they were changed, and by whom.  There are ancillary details that can be helpful in tracking the source of the change, such as the hostname of the computer from which the change was made.  These are all things that we should capture.  But if a setting hasn’t changed, then we need not report that it was unchanged (it should go without saying).

So for this, I created a couple of tables and then a script that I can add to a job to run on a regular basis.  I can put the script in a stored procedure should I desire.  I’ll leave that to you as an exercise to perform.

USE AdminDB;
GO
SET NOCOUNT ON;
 
DECLARE @ConfigLastUpdateDate	DATETIME
		,@PreviousPollDate		DATETIME
		,@MaxPollDate			DATETIME
		,@PATH					NVARCHAR(260);
 
SELECT @PATH = REVERSE(SUBSTRING(REVERSE([PATH]), 
						CHARINDEX('\', REVERSE([path])), 260)) + N'LOG.trc'
	FROM sys.traces 
	WHERE is_default = 1;
 
IF OBJECT_ID('tempdb..#DBCCConfig') IS NOT NULL DROP TABLE #DBCCConfig;
	CREATE TABLE #DBCCConfig(
		[ParentObject] VARCHAR (100),
		[Object]       VARCHAR (100),
		[Field]        VARCHAR (100),
		[Value]        VARCHAR (100)); 
 
INSERT INTO #DBCCConfig (ParentObject, Object, Field, Value)
	EXECUTE ('DBCC CONFIG WITH TABLERESULTS');
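/* cfgupddate is stored as days since 1900-01-01 and cfgupdtime as the time of
   day in 1/300-second ticks (1 tick is roughly 3.3 ms), hence the
   DATEADD(ms, ticks * 3.3, date) conversion below */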
 
WITH cte AS
(
SELECT cfgupddate = MAX(CASE WHEN t1.Field = 'cfgupddate' THEN t1.Value ELSE NULL END),
        cfgupdtime = MAX(CASE WHEN t1.Field = 'cfgupdtime' THEN t1.Value ELSE NULL END)
FROM #DBCCConfig t1
WHERE   Field IN ('cfgupddate', 'cfgupdtime')
)
SELECT  @ConfigLastUpdateDate = CONVERT(DATETIME,t3.configure_upd_dt)
FROM    cte t1
        CROSS APPLY (SELECT cfgupddate = DATEADD(DAY, CONVERT(INT, t1.cfgupddate), '1900-01-01')) t2
        CROSS APPLY (SELECT configure_upd_dt = DATEADD(ms, CONVERT(INT, t1.cfgupdtime)*3.3, t2.cfgupddate)) t3;
 
IF NOT EXISTS (SELECT Name 
					FROM sys.objects 
					WHERE name = 'SysConfigAudit')
	CREATE TABLE SysConfigAudit (
		configuration_id	int
		,name	NVARCHAR(256)
		,value	sql_variant
		,minimum	sql_variant
		,maximum	sql_variant
		,value_in_use	sql_variant
		,description	NVARCHAR(MAX)
		,is_dynamic	bit
		,is_advanced	BIT
		,PollDate		DATE
		,LastConfigUpdtDate	DATETIME)
 
IF NOT EXISTS (SELECT Name 
					FROM sys.objects 
					WHERE name = 'SysConfigChangeLog')
	CREATE TABLE SysConfigChangeLog (
		configuration_id	int
		,name	NVARCHAR(256)
		,CurrValue	SQL_VARIANT
		,PrevValue	SQL_VARIANT
		,description	NVARCHAR(MAX)
		,PollDate		DATE
		,LastConfigUpdtDate	DATETIME
		,PrevConfigUpdtDate	DATETIME
		,ChangeDate	DATETIME
		,ChangeBy	NVARCHAR(256)
		,HostName	NVARCHAR(256)
		,ChangeSPID	SQL_VARIANT
		,Changedata	NVARCHAR(2000)
		,ApplicationName NVARCHAR(256)
		,Severity INT
		,ERROR SQL_VARIANT
		,ChangeBySessionLogin	NVARCHAR(256))
IF NOT EXISTS (SELECT Name 
					FROM sys.key_constraints 
					WHERE name ='PK_SysConfigChangeLog' 
						AND OBJECT_NAME(parent_object_id) = 'SysConfigChangeLog')
BEGIN
ALTER TABLE dbo.SysConfigChangeLog ADD CONSTRAINT
	PK_SysConfigChangeLog PRIMARY KEY CLUSTERED 
	(
	configuration_id,
	ChangeDate DESC
	) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) 
		ON Admin_Data
 
END
 
SET @MaxPollDate = ISNULL((SELECT TOP 1 PollDate
								FROM dbo.SysConfigAudit 
								ORDER BY PollDate DESC),'1/1/1900')
 
/* Write the configurations out to audit table 
*/
IF CONVERT(DATE,GETDATE()) <> @MaxPollDate
BEGIN
	INSERT INTO SysConfigAudit
			( configuration_id ,
			  name ,
			  value ,
			  minimum ,
			  maximum ,
			  value_in_use ,
			  description ,
			  is_dynamic ,
			  is_advanced,
			  PollDate,
			  LastConfigUpdtDate
			)
	SELECT configuration_id,name
			,value
			,minimum,maximum
			,value_in_use
			,description
			,is_dynamic,is_advanced
			,GETDATE(),@ConfigLastUpdateDate
		FROM master.sys.configurations;
END
 
/* Recast MaxPollDate */
SET @MaxPollDate = ISNULL((SELECT TOP 1 PollDate
								FROM dbo.SysConfigAudit 
								ORDER BY PollDate DESC),'1/1/1900')
 
SET @PreviousPollDate = ISNULL((SELECT TOP 1 PollDate 
									FROM dbo.SysConfigAudit 
									WHERE Polldate <> @MaxPollDate 
									ORDER BY PollDate DESC),'1/1/1900');
 
/*	A configuration has changed and a reboot has occurred,
	causing the updtdate to be written to the config block of page 10.
	The configuration may not be written to the page, but we will write it to
	the table anyway and then compare to the default trace file in all cases.
	If any values have changed, write those to the change log with
	the login of the person who changed the value.
*/
 
BEGIN
 
WITH presel AS (
SELECT df.LoginName,df.TextData,df.StartTime,df.HostName,df.Severity,df.DatabaseName,df.SPID,df.ERROR
		,df.SessionLoginName,df.ApplicationName
		,SUBSTRING(df.TextData
					,CHARINDEX('''',df.TextData)+1
					,CHARINDEX('''',df.TextData
								,CHARINDEX('''',df.TextData)+1)-CHARINDEX('''',df.TextData)-1)
				AS ConfigName
		,ROW_NUMBER() OVER (PARTITION BY SUBSTRING(df.TextData
													,CHARINDEX('''',df.TextData)+1
													,CHARINDEX('''',df.TextData
															,CHARINDEX('''',df.TextData)+1)-CHARINDEX('''',df.TextData)-1) 
								ORDER BY df.StartTime DESC) AS RowNum
	FROM ::fn_trace_gettable( @path, DEFAULT )  df
	WHERE 1=1
		AND df.TextData LIKE '%Configuration option%'
		AND df.ApplicationName NOT IN ('SQLAgent - Initial Boot Probe','SQLAgent - Enabling/disabling Agent XPs')
		AND df.Severity IS NOT NULL
)
 
INSERT INTO SysConfigChangeLog (configuration_id,name,CurrValue,PrevValue,description,PollDate	
							,LastConfigUpdtDate	,PrevConfigUpdtDate	,ChangeDate,ChangeBy
							,HostName,ChangeSPID,Changedata,ApplicationName,Severity
							,ERROR,ChangeBySessionLogin)
SELECT CUR.configuration_id,cur.NAME,cur.VALUE AS CurrValue,Prev.VALUE AS PrevValue,cur.description
		,cur.polldate, cur.LastConfigUpdtDate,prev.LastConfigUpdtDate AS PrevConfigUpdtDate
		,df.StartTime AS ChangeDate,df.LoginName AS ChangeBy,df.HostName,df.SPID AS ChangeSPID
		,df.TextData AS ChangeData,df.ApplicationName,df.Severity,df.ERROR
		,df.SessionLoginName AS ChangeBySessionLogin
	FROM AdminDB.dbo.SysConfigAudit CUR
		INNER JOIN AdminDB.dbo.SysConfigAudit Prev
			ON CUR.configuration_id = Prev.configuration_id
			AND CUR.PollDate = @MaxPollDate
			AND Prev.PollDate = @PreviousPollDate
		CROSS APPLY presel  df
	WHERE df.RowNum = 1
		AND df.ConfigName = CUR.NAME
		AND CUR.VALUE <> Prev.VALUE	
		AND NOT EXISTS (SELECT configuration_id
							FROM SysConfigChangeLog
							WHERE Name = Cur.NAME
								AND CurrValue = CUR.VALUE
								AND PrevValue = Prev.VALUE
								AND ChangeDate = df.StartTime
								AND ChangeBy = df.LoginName)
		;
END
 
SELECT *
	FROM dbo.SysConfigChangeLog
	ORDER BY ChangeDate DESC;

Here I am trapping the config settings on a daily basis (as the script is written for now).  I then cross-reference the current settings against the previous settings.  Then I check the default trace to see if any of those settings exist in the trace file.

The default trace captures the data related to configuration changes.  On busy systems, it is still possible for these events to roll out of the trace files.  For those systems, we may need to make some alterations to the script (such as running it more frequently and changing the comparisons to account for intervals smaller than a day).
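To gauge that exposure, here is a quick sketch (reusing the same path logic as the script above) to check how far back the default trace currently reaches:

DECLARE @TracePath NVARCHAR(260);
SELECT @TracePath = REVERSE(SUBSTRING(REVERSE([path]),
						CHARINDEX('\', REVERSE([path])), 260)) + N'LOG.trc'
	FROM sys.traces
	WHERE is_default = 1;
 
SELECT OldestEvent = MIN(df.StartTime)
		,NewestEvent = MAX(df.StartTime)
	FROM sys.fn_trace_gettable(@TracePath, DEFAULT) df;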

To ensure a proper comparison between the audit tables and the trace file, note the SUBSTRING function employed.  It captures the configuration name from the trace text so we can join to the audit tables on that name.

This has proven useful to me so far in tracking who did what to which setting and when they did it.

I hope you will find good use for it as well.

SSRS Subscriptions Report

As a part of my series leading up to Christmas 2012, I shared a script to report on SSRS subscriptions.  It was soon found to have a bug with SQL Server 2008 R2 SP2.  In the comments on that post, I promised to post an updated script.  Here is that update – without the bug.

DECLARE @ReportName VARCHAR(100)
SET @ReportName = NULL;
 
CREATE TABLE #morepower (MonthDate BIGINT,N BIGINT,PowerN BIGINT PRIMARY KEY CLUSTERED
							,NameofMonth VARCHAR(25),WkDay VARCHAR(25))
;
 
WITH powers(powerN, n) AS (
	SELECT POWER(2,number), number 
		FROM master.dbo.spt_values 
		WHERE type = 'P' AND number < 31)
 
INSERT INTO #morepower ( MonthDate ,N,PowerN ,NameofMonth ,WkDay)
	SELECT ROW_NUMBER() OVER (ORDER BY N) AS MonthDate,N,PowerN
			,CASE WHEN N BETWEEN 0 AND 11 
				THEN DATENAME(MONTH,DATEADD(MONTH,N+1,0)-1)
				ELSE NULL
				END AS NameofMonth
			,CASE WHEN N BETWEEN 0 AND 6
				THEN DATENAME(weekday,DATEADD(DAY,n+1,0)-2)
				ELSE NULL
				END AS WkDay
		FROM powers
 
SELECT DISTINCT s.ScheduleID,Ca.PATH AS ReportManagerPath,Ca.Name AS ReportName
		, U.UserName AS SubscriptionCreator
		,Su.Description AS SubscriptionDescription,S.StartDate,Su.LastRunTime
		,CASE 
				WHEN s.RecurrenceType = 1 THEN 'One Off'
				WHEN s.RecurrenceType = 2 THEN 'Hour'
				WHEN s.RecurrenceType = 4 THEN 'Daily'
				WHEN s.RecurrenceType = 5 THEN 'Monthly' 
				WHEN s.RecurrenceType = 6 THEN 'Week of Month' 
			END AS RecurrenceType
		,s.EventType
		,ISNULL(REPLACE(REPLACE(STUFF(
					(SELECT ', ['+CONVERT(VARCHAR(20),MonthDate)+']' AS [text()] 
						FROM #morepower m1 
						WHERE m1.powerN < s.DaysofMonth+1 
							AND s.DaysofMonth & m1.powerN <>0 
						ORDER BY N FOR XML PATH(''), TYPE).value('.','VARCHAR(MAX)')
			   , 1, 2, ''),'[',''),']','')
			,'N/A') AS DaysofMonth
		,ISNULL(c1.NameOfMonth,'N/A') AS MonthString
		,ISNULL(c2.WkDays,'N/A') AS DaysofWeek
		,CASE MonthlyWeek
				WHEN 1 THEN 'First'
				WHEN 2 THEN 'Second'
				WHEN 3 THEN 'Third'
				WHEN 4 THEN 'Fourth'
				WHEN 5 THEN 'Last'
				ELSE 'N/A'
			END AS MonthlyWeek
		,ISNULL(CONVERT(VARCHAR(10),s.DaysInterval),'N/A') AS DaysInterval
		,ISNULL(CONVERT(VARCHAR(10),s.MinutesInterval),'N/A') AS MinutesInterval
		,ISNULL(CONVERT(VARCHAR(10),s.WeeksInterval),'N/A') AS WeeksInterval
	FROM #morepower mp, dbo.Schedule s
		INNER JOIN ReportSchedule RS
			ON S.ScheduleID = RS.ScheduleID
		INNER JOIN CATALOG Ca
			ON Ca.ItemID = RS.ReportID
		INNER JOIN Subscriptions Su
			ON Su.SubscriptionID = RS.SubscriptionID
		INNER JOIN Users U
			ON U.UserID = S.CreatedById
			OR U.UserID = Su.OwnerID
	CROSS APPLY (SELECT s.ScheduleID,REPLACE(REPLACE(STUFF(
							(SELECT ', ['+ NameofMonth + ']' AS [text()] 
								FROM #morepower m1 ,dbo.Schedule s1
								WHERE m1.NameofMonth IS NOT NULL 
									AND m1.powerN & s1.MONTH <>0 
									AND s1.ScheduleID = s.ScheduleID
								ORDER BY N FOR XML PATH(''), TYPE).value('.','VARCHAR(MAX)')
							, 1, 2, ''),'[',''),']','') AS NameOfMonth)c1
	CROSS APPLY (SELECT s.ScheduleID,REPLACE(REPLACE(STUFF(
							(SELECT ', [' + WkDay + ']' AS [text()] 
								FROM #morepower m1 ,dbo.Schedule s2
								WHERE m1.WkDay IS NOT NULL 
									AND DaysOfWeek & m1.powerN <>0
									AND  s2.ScheduleID = s.ScheduleID
								ORDER BY N FOR XML PATH(''), TYPE).value('.','VARCHAR(MAX)')
							, 1, 2, ''),'[',''),']','') AS WkDays) c2
	WHERE Ca.Name = ISNULL(@ReportName,Ca.Name);
 
DROP TABLE #morepower;

The inline code plugin seen above likes to reformat scripts and will throw an error due to the capitalization of the value() function and the text() node test – both must be lowercase.  Download the script here: SSRS_SubscriptionsV1_5

On the Ninth Day…

It’s the end of the world as we know it.  And as the song goes…I feel fine!  But hey, we didn’t start the fire.

Those are a couple of songs that pop into my head every time somebody starts talking doomsday and doomsday prophecies.

If you are reading this, I dare say that the world is still turning.  And that is a good thing because that gives us a chance to talk about the 12 days of pre-Christmas.

Today we will be talking about a tool that can be at the DBA’s disposal to help in tracking performance as well as problems.

First, there are a couple of items of housekeeping.  The first item is that I only realized with this post that the post on the first day was wrong.  I had miscalculated my twelve days to end on Christmas Day.  That is wrong!  Counting down from that first post on the 13th means the 12th day will land on December 24th.  Maybe I will throw in a bonus 13th-day post, or delay a post by a day, or just go with it.  It will be a mystery!

The second item is, naturally, the daily recap of the 12 days to date.

On the Ninth Day of pre-Christmas…

My DBA gave to me a Data Collection!

If only my DBA had told me that I would need to have the Management Data Warehouse (MDW) preconfigured.  Well, that is not a problem.  We can handle that too.  For a tutorial on how to set up MDW, I am going to provide a link to one that has been very well written by Kalen Delaney.

The article written by Kalen covers the topic of MDW very well all the way from setting up the MDW, to setting up the Data Collectors, and all the way down to the default reports you can run for MDW.  The MDW and canned Data Collectors can provide some really useful information for your environment.

What I want to share though is a means to add custom data collections to your MDW.  To create a custom collection, we need to rely on two stored procedures provided to us by Microsoft.  Those stored procedures are: sp_syscollector_create_collection_item and sp_syscollector_create_collection_set.  Both of these stored procedures are found in the, you guessed it, msdb database.

Each of these stored procedures has a number of parameters to help in the creation of an appropriate data collection.  When creating a data collection, you will first create the collection set, and then you add collection items to that set.

There are a few notable parameters for each stored procedure that I will cover.  Otherwise, you can refer back to the links for each of the stored procedures to see more information about the parameters.

Starting with the sp_syscollector_create_collection_set stored procedure, I want to point out the @schedule_name, @name, and @collection_mode parameters.  The name is pretty obvious – make sure you have a distinguishable name that is descriptive (my opinion) or at least have good documentation.  The collection mode has two values.  As noted in the documentation, you will want to choose one value over the other depending on the intended use of this data collector.  If running continuously, just remember to run in cached mode.  And lastly is the schedule name.  This will help determine how frequently the job runs.

Unfortunately, the schedule names are not found in the documentation; rather, you are directed to query the sysschedules table in msdb.  To help you find those schedules, here is a quick query.

SELECT schedule_id,name
	FROM msdb.dbo.sysschedules
	WHERE name LIKE '%collector%';

Now on to the sp_syscollector_create_collection_item stored procedure.  There are three parameters that I want to touch on lightly.  For the rest, you can refer back to the documentation.  The parameters of interest here are @parameters, @frequency and @collector_type_uid.  Starting with the frequency parameter: this tells the collector how often to upload the data to the MDW when running in cached mode.  Be careful to select an appropriate interval.  Next is the parameters parameter, which is really the workhorse of the collection item.  In the case of the custom data collector that I will show in a bit, this is where the T-SQL query goes.

The last parameter to discuss is the collector type uid.  Like the schedule for the previous proc, the documentation for this one essentially refers you to a system view – syscollector_collector_types.  Here is a quick query to see the different collector types.

SELECT name
	FROM msdb.dbo.syscollector_collector_types;

The collector type that I will be using for this example is the Generic T-SQL Query Collector Type.  A discussion of the four types of collectors can be reserved for another time.

Let’s move on to the example now.  This custom data collector is designed to help troubleshoot deadlock problems.  The means by which I want to accomplish this is querying the system_health extended event session.

I can query for deadlock information directly from the system_health session using a query like the following (the nested REPLACE calls patch up the malformed deadlock XML that SQL Server 2008 writes to the ring_buffer).

SELECT CAST(
                  REPLACE(
                        REPLACE(XEventData.XEvent.value('(data/value)[1]', 'varchar(max)'), 
                        '<victim-list>', '<deadlock><victim-list>'),
                  '<process-list>', '</victim-list><process-list>')
            AS VARCHAR(4000)) AS DeadlockGraph
FROM
(SELECT CAST(target_data AS XML) AS TargetData
FROM sys.dm_xe_session_targets st
INNER JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
WHERE s.name = 'system_health') AS DATA
CROSS APPLY TargetData.nodes ('//RingBufferTarget/event') AS XEventData (XEvent)
WHERE XEventData.XEvent.value('@name', 'varchar(4000)') = 'xml_deadlock_report';

You may notice that I have converted to varchar(4000) from XML.  This is in large part to make sure the results will play nicely with the data collector.  Now, converting that to a query that can be used in the @parameters parameter, we get the following.

<ns:TSQLQueryCollector xmlns:ns="DataCollectorType"><Query><Value>
 
SELECT CAST(
                  REPLACE(
                        REPLACE(XEventData.XEvent.value(''(data/value)[1]'', ''varchar(max)''), 
                        ''&lt;victim-list&gt;'', ''&lt;deadlock&gt;&lt;victim-list&gt;''),
                  ''&lt;process-list&gt;'', ''&lt;/victim-list&gt;&lt;process-list&gt;'')
            AS VARCHAR(4000)) AS DeadlockGraph
FROM
(SELECT CAST(target_data AS XML) AS TargetData
FROM sys.dm_xe_session_targets st
INNER JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
WHERE s.name = ''system_health'') AS DATA
CROSS APPLY TargetData.nodes (''//RingBufferTarget/event'') AS XEventData (XEvent)
WHERE XEventData.XEvent.value(''@name'', ''varchar(4000)'') = ''xml_deadlock_report''
 
</Value><OutputTable>systemhealthdeadlock</OutputTable></Query></ns:TSQLQueryCollector>

With this query, we are loading the necessary schema nodes that correlate to the Data Collector Type that we chose.  Since this parameter is XML, the schema must match or you will get an error.  Note also that the quotes are doubled (the query will live inside a string literal) and that the angle brackets in the REPLACE strings have to be XML-escaped as &lt; and &gt;.  We are now ready to generate a script that can create a deadlock data collector.

BEGIN TRANSACTION
BEGIN Try
DECLARE @collection_set_id_1 INT
DECLARE @collection_set_uid_2 UNIQUEIDENTIFIER
EXEC [msdb].[dbo].[sp_syscollector_create_collection_set]
	  @name=N'systemhealthdeadlock'
	, @collection_mode=1
	, @description=N'systemhealthdeadlock'
	, @logging_level=1
	, @days_until_expiration=14
	, @schedule_name=N'CollectorSchedule_Every_15min'
	, @collection_set_id=@collection_set_id_1 OUTPUT
	, @collection_set_uid=@collection_set_uid_2 OUTPUT
 
SELECT @collection_set_id_1, @collection_set_uid_2
 
DECLARE @collector_type_uid_3 UNIQUEIDENTIFIER
SELECT @collector_type_uid_3 = collector_type_uid 
	FROM [msdb].[dbo].[syscollector_collector_types] 
	WHERE name = N'Generic T-SQL Query Collector Type';
 
DECLARE @collection_item_id_4 INT
EXEC [msdb].[dbo].[sp_syscollector_create_collection_item]
	  @name=N'systemhealthdeadlock'
	, @PARAMETERS=N'<ns:TSQLQueryCollector xmlns:ns="DataCollectorType"><Query><Value>
 
SELECT CAST(
                  REPLACE(
                        REPLACE(XEventData.XEvent.value(''(data/value)[1]'', ''varchar(max)''), 
                        ''&lt;victim-list&gt;'', ''&lt;deadlock&gt;&lt;victim-list&gt;''),
                  ''&lt;process-list&gt;'', ''&lt;/victim-list&gt;&lt;process-list&gt;'')
            AS varchar(4000)) AS DeadlockGraph
FROM
(SELECT CAST(target_data AS XML) AS TargetData
FROM sys.dm_xe_session_targets st
INNER JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
WHERE s.name = ''system_health'') AS Data
CROSS APPLY TargetData.nodes (''//RingBufferTarget/event'') AS XEventData (XEvent)
WHERE XEventData.XEvent.value(''@name'', ''varchar(4000)'') = ''xml_deadlock_report'' 
 
</Value><OutputTable>systemhealthdeadlock</OutputTable></Query></ns:TSQLQueryCollector>'
	, @collection_item_id=@collection_item_id_4 OUTPUT
	, @frequency=30
	, @collection_set_id=@collection_set_id_1
	, @collector_type_uid=@collector_type_uid_3
 
SELECT @collection_item_id_4
 
COMMIT TRANSACTION;
END Try
BEGIN Catch
ROLLBACK TRANSACTION;
DECLARE @ErrorMessage NVARCHAR(4000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
DECLARE @ErrorNumber INT;
DECLARE @ErrorLine INT;
DECLARE @ErrorProcedure NVARCHAR(200);
SELECT @ErrorLine = ERROR_LINE(),
       @ErrorSeverity = ERROR_SEVERITY(),
       @ErrorState = ERROR_STATE(),
       @ErrorNumber = ERROR_NUMBER(),
       @ErrorMessage = ERROR_MESSAGE(),
       @ErrorProcedure = ISNULL(ERROR_PROCEDURE(), '-');
RAISERROR (14684, @ErrorSeverity, 1 , @ErrorNumber, @ErrorSeverity, @ErrorState, @ErrorProcedure, @ErrorLine, @ErrorMessage);
 
END Catch;
 
GO

Running this will create a SQL Agent job with the defined schedule.  Since this is a non-cached data collector set, the Agent job will adhere to the schedule specified and upload data on that interval.
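During testing, if you would rather not wait for the schedule, the collection set can be started and stopped manually.  A quick sketch (the id value of 5 is a placeholder for the @collection_set_id returned during creation):

EXEC msdb.dbo.sp_syscollector_start_collection_set @collection_set_id = 5;
-- and, when finished:
-- EXEC msdb.dbo.sp_syscollector_stop_collection_set @collection_set_id = 5;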

Now all we need to do is generate a deadlock to see if it is working.  It is also a good idea to introduce the table that will be created for this data collector.  Once we create the collector set, a new table is created in the MDW database.  In the case of this collector set, we get a table with the schema and name of custom_snapshots.systemhealthdeadlock.

This new table will have three columns.  One column represents the DeadlockGraph as we retrieved from the query we provided to the @parameters parameter.  The remaining columns are data collector columns for the collection date and the snapshot id.

Now that we have covered all of that, your favorite deadlock query has had enough time to finally fall victim to a deadlock.  We should also have some information recorded in the custom_snapshots.systemhealthdeadlock table relevant to the deadlock information (if not, it will be there once the agent job runs, or you can run a snapshot from SSMS of the data collector).  With a quick query, we can start looking into the deadlock problem.

SELECT collection_time,CAST(DeadlockGraph AS XML) AS DeadlockGraph, snapshot_id
	FROM mdw.custom_snapshots.systemhealthdeadlock;

This query will give me a few entries (since I went overkill and created a bunch of deadlocks).  If I click the DeadlockGraph cell in the result set, I can then view the XML of the deadlock graph, as in the following.

<deadlock>
  <victim-list>
    <victimProcess id="process5a4ebc8" />
  </victim-list>
  <process-list>
    <process id="process5a4ebc8" taskpriority="0" logused="0" waitresource="KEY: 26:72057594048020480 (b4903b2250cc)" waittime="609" ownerId="2803145" transactionname="user_transaction" lasttranstarted="2012-12-20T22:32:09.987" XDES="0x8008d950" lockMode="X" schedulerid="8" kpid="13656" status="suspended" spid="87" sbid="0" ecid="0" priority="0" trancount="4" lastbatchstarted="2012-12-20T22:38:50.020" lastbatchcompleted="2012-12-20T22:38:50.020" lastattention="2012-12-20T22:38:24.217" clientapp="Microsoft SQL Server Management Studio - Query" hostname="SomeServer" hostpid="7604" loginname="SomeLogin" isolationlevel="serializable (4)" xactid="2803145" currentdb="26" lockTimeout="4294967295" clientoption1="673187936" clientoption2="390200">
      <executionStack>
        <frame procname="" line="10" stmtstart="40" sqlhandle="0x02000000eac1af36f412db4e21d9dcc86feb261fa6bcd230" />
        <frame procname="" line="10" stmtstart="356" stmtend="518" sqlhandle="0x0200000095b4ee32a25e9724dd73fd6894c60748af6c136b" />
      </executionStack>
      <inputbuf>
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
WHILE 1=1 
BEGIN 
BEGIN TRAN
UPDATE Purchasing.Vendor
SET CreditRating = 2
WHERE BusinessEntityID = 1492;
 
UPDATE Purchasing.Vendor
SET CreditRating = 1
WHERE BusinessEntityID = 1494;
 
COMMIT TRAN 
END
   </inputbuf>
    </process>
    <process id="process5a44bc8" taskpriority="0" logused="0" waitresource="KEY: 26:72057594048020480 (ade87e3a717c)" waittime="609" ownerId="2878446" transactionname="user_transaction" lasttranstarted="2012-12-20T22:38:50.020" XDES="0xa9abd950" lockMode="X" schedulerid="7" kpid="15008" status="suspended" spid="86" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2012-12-20T22:38:47.887" lastbatchcompleted="2012-12-20T22:38:47.887" lastattention="2012-12-20T22:36:21.247" clientapp="Microsoft SQL Server Management Studio - Query" hostname="SomeServer" hostpid="7604" loginname="SomeLogin" isolationlevel="serializable (4)" xactid="2878446" currentdb="26" lockTimeout="4294967295" clientoption1="673187936" clientoption2="390200">
      <executionStack>
        <frame procname="" line="9" stmtstart="40" sqlhandle="0x02000000eac1af36f412db4e21d9dcc86feb261fa6bcd230" />
        <frame procname="" line="9" stmtstart="352" stmtend="510" sqlhandle="0x020000000c4b9412577ec884cbd51882e5310dd340216739" />
      </executionStack>
      <inputbuf>
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
WHILE 1=1 
BEGIN 
BEGIN TRAN
UPDATE Purchasing.Vendor
SET CreditRating = 1
WHERE BusinessEntityID = 1494;
UPDATE Purchasing.Vendor
SET CreditRating = 2
WHERE BusinessEntityID = 1492;
COMMIT TRAN 
END
   </inputbuf>
    </process>
  </process-list>
  <resource-list>
    <keylock hobtid="72057594048020480" dbid="26" objectname="" indexname="" id="lock5aeec80" mode="X" associatedObjectId="72057594048020480">
      <owner-list>
        <owner id="process5a44bc8" mode="X" />
      </owner-list>
      <waiter-list>
        <waiter id="process5a4ebc8" mode="X" requestType="wait" />
      </waiter-list>
    </keylock>
    <keylock hobtid="72057594048020480" dbid="26" objectname="" indexname="" id="lock6164d80" mode="X" associatedObjectId="72057594048020480">
      <owner-list>
        <owner id="process5a4ebc8" mode="X" />
      </owner-list>
      <waiter-list>
        <waiter id="process5a44bc8" mode="X" requestType="wait" />
      </waiter-list>
    </keylock>
  </resource-list>
</deadlock>

Code to generate deadlock courtesy of “SQL Server 2012 T-SQL Recipes: A Problem-Solution Approach” by Jason Brimhall, Wayne Sheffield et al (Chapter 12, pages 267-268).  If you examine the deadlock graph you will see the code that generated the deadlock.

Since this is being pulled from the ring_buffer target of the system_health session, it can prove useful to store that data in a table such as I have done.  The reason is that the ring buffer can be overwritten; with good timing on the data collector, we can preserve this information for later retrieval and troubleshooting.  Deadlocks don’t always happen at the most opportune time, and they are even less likely to occur while we are staring at the screen waiting for them to happen.

As you read more about the stored procedures used to create a data collector, you will see that there is a retention parameter (@days_until_expiration in the script above).  This helps prevent the table from getting too large on us and ensures that an appropriate retention is kept for these custom collectors.

Conclusion

Creating a custom data collector can be very handy for a DBA.  Especially in times of troubleshooting.  These collectors are also quite useful for trending and analysis.  Consider this a tool in the chest for the poor man. ;)

Enjoy and stay tuned!

All scripts and references were for SQL 2008 R2.  The Deadlock script was pulled from the 2012 book, but the script runs the same for the 2008 version of the AdventureWorks database.

On the Eighth Day…

Today’s post is merely an illusion.  The illusion is that we have finally stopped talking about the msdb database.  I’ll explain that later in this post.

This should be a good addition to the script toolbox for those Mere Mortal DBAs out there supporting their corporate SSRS environment.  Everybody could use a script now and then that helps them better support their environment and perform their DBA duties, right?

No reading ahead now.  We’ll get to the script soon enough.  First, we have a bit of business to cover just as we normally do.

We need to quickly recap the first seven days thus far (after all, the song does a recap with each day).

 

  1. Runaway Jobs – 7th Day
  2. Maintenance Plan Gravage - 6th Day
  3. Table Compression - 5th Day
  4. Exercise for msdb - 4th Day
  5. Backup, Job and Mail History Cleanup - 3rd Day
  6. Service Broker Out of Control - 2nd Day
  7. Maint Plan Logs - 1st Day

On the Eighth Day of pre-Christmas…

My DBA gave to me a means to see Report Subscriptions and their schedules.

One of the intriguing points that we find with having a reporting environment is that we also need to report on that reporting environment.  And one of the nuisances of dealing with a Reporting Services environment is that data such as report subscription schedules is not very human-friendly.

Part of the issue is that you need to be fluent with math.  Another part is that you need to be a little familiar with bitwise operations in SQL Server.  That said, it is possible to get by without understanding either very well.  And as a last resort, there is always the manual method of using Report Manager to check the subscriptions for each of the reports that have been deployed to that server.  Though, I think you will find this to be a bit tedious if you have a large number of reports.

I have seen more than one script that provides the schedule information for the subscriptions without using math, relying on the bitwise operations alone.  This tends to produce a lot of repetitive code.  The method works; I’m just not that interested in the repetitive nature employed.

Within SQL Server you should notice that several tables, views, and processes employ powers of 2 – the base-2, or binary, number system.  This is natural since binary is so integral to computer science in general.  Powers of 2 translate to binary fairly easily and then integrate very well with bitwise operations.

The following table demonstrates the powers of 2 and conversion to binary.

power of 2    value    binary
0             1        1
1             2        10
2             4        100
3             8        1000
4             16       10000
5             32       100000
6             64       1000000
7             128      10000000
8             256      100000000

To get numbers and values between the binary results or the value results listed above is a matter of addition.  We add the value from one power of 2 to the value from another power of 2.  So if I need a value of 7, then I need 2^0 + 2^1 + 2^2.  This results in a binary value of 0111.  Now this is where the need for bit comparisons comes into play, so we use some bitwise operations (read more here) to quickly figure out which values were summed to reach an end value of 7 (so I don’t really need to know a lot of math there ;) ).
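As a quick illustration of that bitwise test in T-SQL, here is a tiny sketch that decomposes 7 into its component powers of 2:

DECLARE @val INT = 7;
SELECT PowerOf2 = POWER(2,N)
	FROM (VALUES (0),(1),(2),(3)) AS Tally(N)
	WHERE @val & POWER(2,N) <> 0;	-- returns 1, 2, and 4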

How does this Apply to Schedules?

This background has everything to do with scheduling in SSRS.  Within the ReportServer database, there is a table called Schedule in the dbo schema.  This table has multiple columns that store pieces of the Subscription Schedule.  The three key columns are DaysofWeek, DaysofMonth and Month.  The values stored in these columns are all sums of the powers of 2 necessary to represent multiple days or months.

For instance, you may see the following values

DaysOfWeek    DaysOfMonth    Month
62            135283073      2575

These values are not friendly to normal, everyday, mere mortal DBAs.  The values from the preceding table are shown below with the human-friendly data they represent.

DaysOfWeek:   62        = Monday, Tuesday, Wednesday, Thursday, Friday
DaysOfMonth:  135283073 = 1, 8, 9, 15, 21, 28
Month:        2575      = January, February, March, April, October, December

That is better for us to understand, but not necessarily better to store in the database.  So, I hope you can see the value of storing it in a numeric representation that is easily deciphered through math and T-SQL.

Without further ado, we have a script to report on these schedules without too much repetition.

DECLARE @ReportName VARCHAR(100)
SET @ReportName = NULL;
 
CREATE TABLE #morepower (MonthDate BIGINT,N BIGINT,PowerN BIGINT PRIMARY KEY CLUSTERED
							,NameofMonth VARCHAR(25),WkDay VARCHAR(25))
;
WITH E1(N) AS ( --=== Create Ten 1's
                 SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 --10
               ),
      E2(N) AS (SELECT 1 FROM E1 a, E1 b),   --100
cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT N)) FROM E2  UNION ALL SELECT 0
),powers(powerN,n) AS (SELECT POWER(2,N),N FROM cteTally WHERE N < 31)
 
INSERT INTO #morepower ( MonthDate ,N,PowerN ,NameofMonth ,WkDay)
	SELECT ROW_NUMBER() OVER (ORDER BY N) AS MonthDate,N,PowerN
			,CASE WHEN N BETWEEN 0 AND 11 
				THEN DATENAME(MONTH,DATEADD(MONTH,N+1,0)-1)
				ELSE NULL
				END AS NameofMonth
			,CASE WHEN N BETWEEN 0 AND 6
				THEN DATENAME(weekday,DATEADD(DAY,n+1,0)-2)
				ELSE NULL
				END AS WkDay
		FROM powers
 
SELECT DISTINCT s.ScheduleID,Ca.PATH AS ReportManagerPath,Ca.Name AS ReportName
		, U.UserName AS SubscriptionCreator
		,Su.Description AS SubscriptionDescription,S.StartDate,Su.LastRunTime
		,CASE 
			WHEN s.RecurrenceType = 1 THEN 'One Off'
			WHEN s.RecurrenceType = 2 THEN 'Hour'
			WHEN s.RecurrenceType = 4 THEN 'Daily'
			WHEN s.RecurrenceType = 5 THEN 'Monthly' 
			WHEN s.RecurrenceType = 6 THEN 'Week of Month' 
		END AS RecurrenceType
		,s.EventType
		,ISNULL(STUFF(
				(SELECT ','+CONVERT(VARCHAR(50),MonthDate) AS [text()] 
					FROM #morepower m1 
					WHERE m1.powerN < s.DaysofMonth+1 
						AND s.DaysofMonth & m1.powerN <>0 
					ORDER BY N FOR XML PATH('')),1,1,''),'N/A') AS DaysofMonth
		,ISNULL(c1.NameOfMonth,'N/A') AS MonthString
		,ISNULL(c2.WkDays,'N/A') AS DaysofWeek
		,CASE MonthlyWeek
				WHEN 1 THEN 'First'
				WHEN 2 THEN 'Second'
				WHEN 3 THEN 'Third'
				WHEN 4 THEN 'Fourth'
				WHEN 5 THEN 'Last'
				ELSE 'N/A'
			END AS MonthlyWeek
		,ISNULL(CONVERT(VARCHAR(10),s.DaysInterval),'N/A') AS DaysInterval
		,ISNULL(CONVERT(VARCHAR(10),s.MinutesInterval),'N/A') AS MinutesInterval
		,ISNULL(CONVERT(VARCHAR(10),s.WeeksInterval),'N/A') AS WeeksInterval
	FROM #morepower mp, dbo.Schedule s
		INNER JOIN ReportSchedule RS
			ON S.ScheduleID = RS.ScheduleID
		INNER JOIN CATALOG Ca
			ON Ca.ItemID = RS.ReportID
		INNER JOIN Subscriptions Su
			ON Su.SubscriptionID = RS.SubscriptionID
		INNER JOIN Users U
			ON U.UserID = S.CreatedById
			OR U.UserID = Su.OwnerID
	CROSS APPLY (SELECT s.ScheduleID,STUFF(
							(SELECT ','+NameofMonth AS [TEXT()] 
								FROM #morepower m1 ,dbo.Schedule s1
								WHERE m1.NameofMonth IS NOT NULL 
									AND m1.powerN & s1.MONTH <>0 
									AND s1.ScheduleID = s.ScheduleID
								ORDER BY N FOR XML PATH('')),1,1,'') AS NameOfMonth)c1
	CROSS APPLY (SELECT s.ScheduleID,STUFF(
							(SELECT ','+WkDay AS [TEXT()] 
								FROM #morepower m1 ,dbo.Schedule s2
								WHERE m1.WkDay IS NOT NULL 
									AND DaysOfWeek & m1.powerN <>0
									AND  s2.ScheduleID = s.ScheduleID
								ORDER BY N FOR XML PATH('')),1,1,'') AS WkDays) c2
	WHERE Ca.Name = ISNULL(@ReportName,Ca.Name);
 
DROP TABLE #morepower;

Consider this V1 of the script, with changes expected in future updates.

I have set this up so that a specific report name can be provided or not.  If no name is provided, the query will return the scheduling information for all reports.
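For example, to narrow the output to a single report, just change the SET near the top of the script (the report name here is a made-up placeholder):

SET @ReportName = 'Quarterly Sales Summary';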

Through the use of a numbers table (built in the CTE), I was able to create a map table for each of the values that need to be parsed from the schedule later in the script.  In the creation of that map table, note the use of the POWER function.  That map table was, in my opinion, the critical piece in creating a script that could quickly decipher the values in the schedule and provide something readable to the DBA.
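For illustration, the first few rows that the POWER and DATENAME expressions produce in #morepower look like this (bit N maps to day-of-month N+1; the month names run out after twelve rows and the weekday names after seven, assuming a default us_english instance):

MonthDate  N  PowerN  NameofMonth  WkDay
1          0  1       January      Sunday
2          1  2       February     Monday
3          2  4       March        Tuesday
4          3  8       April        Wednesday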

Conclusion

I wrote this script because I feel it is important to know which reports are running and when.  Add to that the fact that management also likes to know that information, so there is value in it.  The scripts I found on the web used the same bitwise operations, but relied on a lot of repetitive code to determine each weekday and month.

An alternative would be to query the msdb database directly, since scheduled reports are executed via SQL Agent jobs; a rough sketch of that approach appears below.  I hope you find this report useful and that you can put it to use.
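This sketch assumes the default setup, where SSRS creates its subscription jobs under the 'Report Server' job category and names each job with the ScheduleID GUID:

-- List SSRS subscription jobs from msdb (assumes default category name)
SELECT j.name AS ScheduleID
		,j.enabled
		,j.date_created
	FROM msdb.dbo.sysjobs j
		INNER JOIN msdb.dbo.syscategories c
			ON j.category_id = c.category_id
	WHERE c.name = 'Report Server';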

On the Seventh Day…

So far this series has been a bit of fun.  We have discussed a fair amount of useful things to date.  And here we are only on the seventh day.

Today we will be turning our attention back to the msdb database.  This time not as a means of maintaining the undercarriage, but rather to help us try to become better DBAs.

Sometimes, to be better DBAs, we have to be a bit more proactive and less reactive.

We’ll get to that shortly.  As we have done just about every day so far though, we need to recap real quick what we have to date in the 12 Days of pre-Christmas.


  1. Maintenance Plan Gravage – Day 6
  2. Table Compression - Day 5
  3. Exercise for msdb - Day 4
  4. Backup, Job and Mail History Cleanup - 3rd Day
  5. Service Broker Out of Control - 2nd Day
  6. Maint Plan Logs - 1st Day

On the Seventh Day of pre-Christmas…

My DBA gave to me an early Nuclear Fallout detection and warning system.  Ok, maybe not quite that extensive – but it could sure feel like it if something were to slip through the crevasses and the business management started breathing down your neck.

Have you ever had a SQL Job run longer than it should have?  Has the job run long enough that it ran past the next scheduled start time?  These are both symptoms that you might want to monitor for and track in your environment.  If you are watching for it – you should be able to preemptively strike with warning flares sent out to the interested parties.  Then you would be more of the hero than the scapegoat.  And we all want to be on the winning side of that conversation without the confrontation from time to time ;) .

Today, I am going to share a quick script that can be used to help monitor for jobs that have waltzed well beyond their next scheduled start time.  First though, a script already exists for jobs that have run beyond their normal run times: check out this article by Thomas LaRock for a good script to check for and log long-running jobs.

Though the criteria are similar, we have two different needs that must be reported on and monitored.  That is why we have this second script.  If you have a job that should run every 15 minutes but you find it has been running non-stop for 45 minutes, that is quite a problem.  And many times we should alert on something like that.

So here is the quick script.

WITH preagg AS (   
SELECT sj.name,MAX(sa.last_executed_step_date) AS last_executed_step_date
		,MAX(sa.start_execution_date) AS start_execution_date
		,MAX(sa.stop_execution_date) AS stop_execution_date
		,MAX(sa.next_scheduled_run_date) AS next_scheduled_run_date
		,MAX(sh.run_status) AS run_status
		,MAX(ca.next_run_date_time) AS Next_Scheduled_RunDate
		,MAX(CAST(STUFF(STUFF(REPLACE(STR(sh.run_duration, 6), ' ', '0')
							, 3, 0, ':')
						, 6, 0, ':')
				AS VARCHAR)) AS Run_Duration
	FROM msdb.dbo.sysjobs sj
		INNER JOIN msdb.dbo.sysjobactivity sa
			ON sj.job_id = sa.job_id
		LEFT OUTER JOIN msdb.dbo.sysjobhistory sh
			ON sa.job_history_id = sh.instance_id
		INNER JOIN msdb.dbo.sysjobschedules sjs 
			ON sj.job_id = sjs.job_id
        INNER JOIN msdb.dbo.sysschedules ss 
			ON sjs.schedule_id = ss.schedule_id
        CROSS APPLY (SELECT CONVERT(DATETIME, 
                    CASE WHEN sjs.next_run_date = 0 THEN NULL
                    ELSE RIGHT('00000000' + CONVERT(VARCHAR(8), sjs.next_run_date),8) 
						+ ' ' + 
						STUFF(STUFF(RIGHT('000000' + CONVERT(VARCHAR(6), sjs.next_run_time),6)
									, 3, 0, ':')
							, 6, 0, ':')
                    END)
            ) ca(next_run_date_time)
	WHERE (sa.start_execution_date > 0
		AND sa.stop_execution_date IS NULL)
		And (ISNULL(sa.next_scheduled_run_date,GETDATE()) <> ca.next_run_date_time
			OR ISNULL(sh.run_status,4) in (1,4))
	/*
		Status of the job execution:
		0 = Failed
		1 = Succeeded
		2 = Retry
		3 = Canceled
		4 = In progress
	*/
	GROUP BY sj.name
)
SELECT *
	FROM preagg
	WHERE next_scheduled_run_date IS NULL
	ORDER BY Next_Scheduled_RunDate

From here, it is a matter of creating a SQL Agent job to run this script.  The schedule depends on how frequently your jobs run and how closely you need to monitor them; it can very easily be set to run every five minutes.
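If it helps, here is a minimal sketch of creating such a job from TSQL; the job name is arbitrary, and the @command value is a stand-in for the full monitoring query above:

USE msdb;
 
EXEC dbo.sp_add_job @job_name = N'Monitor - Overdue Jobs';
 
EXEC dbo.sp_add_jobstep @job_name = N'Monitor - Overdue Jobs'
		,@step_name = N'Check for overdue jobs'
		,@subsystem = N'TSQL'
		,@command = N'/* paste the monitoring query here */';
 
-- freq_type 4 = daily; freq_subday_type 4 = minutes
EXEC dbo.sp_add_jobschedule @job_name = N'Monitor - Overdue Jobs'
		,@name = N'Every 5 Minutes'
		,@freq_type = 4
		,@freq_interval = 1
		,@freq_subday_type = 4
		,@freq_subday_interval = 5;
 
EXEC dbo.sp_add_jobserver @job_name = N'Monitor - Overdue Jobs';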

Once the job is created, it is a matter of setting up a mechanism to alert.  There are any number of ways to do that as well; I leave it to those who use the script to decide how best to implement it for their own environment.
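As one possible sketch, assuming Database Mail is already configured (the profile name, the recipient, and the idea of wrapping the monitoring query in a view called dbo.OverdueJobs are all placeholders for illustration):

DECLARE @body NVARCHAR(MAX);
 
-- Collapse the overdue job names into a message body
SELECT @body = COALESCE(@body + CHAR(13) + CHAR(10), '')
		+ name + ' running since ' + CONVERT(VARCHAR(20), start_execution_date, 120)
	FROM dbo.OverdueJobs;	-- hypothetical view wrapping the query above
 
-- Only send mail when something was actually found
IF @body IS NOT NULL
	EXEC msdb.dbo.sp_send_dbmail
		@profile_name = 'DBA'					-- placeholder mail profile
		,@recipients = 'dba-team@example.com'	-- placeholder recipient
		,@subject = 'SQL Agent jobs running past next scheduled start'
		,@body = @body;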

Conclusion

This script will catch jobs that are still running and have exceeded their next scheduled start time.  This is accomplished through the filters in the WHERE clause: with a couple of date comparisons as well as a filter on run_status, we get the desired results.


On the Sixth Day…

What better way to kick off the sixth day of pre-Christmas than with six slices of foie gras?  No animals have been hurt in the making of this post!

This could be a pretty festive dish thanks in part to those geese.  I enjoy a little foie gras every now and again.

No, this was not just a clever ploy to bring fat into another post.  Rather, I feel that foie gras is more of a delicacy than eggs.

Today, we will be talking of a delicacy in the database world.  It is a feature that should help every DBA sleep better at night.  Sadly, this delicacy can be tainted by another feature that may just give you heartburn instead.

First though, let’s recap.

  1. Table Compression – Day 5
  2. Exercise for msdb - Day 4
  3. Backup, Job and Mail History Cleanup - 3rd Day
  4. Service Broker Out of Control - 2nd Day
  5. Maint Plan Logs - 1st Day

On the Sixth Day of pre-Christmas…

Picture Courtesy of AllCreatures.org

My DBA gave to me a way to ensure my databases are not corrupt.

Sadly, the method chosen is to force feed (gravage) the checkdb through an SSIS-style maintenance plan.  Have you ever tried to run checkdb through a maintenance plan?  It doesn’t always work so well.

Many times, when running checkdb, you may run into this pretty little error message.

Msg 5808, Level 16, State 1, Line 2
Ad hoc update to system catalogs is not supported.

Well, that is just downright wrong.  If I am provided a tool to create maintenance plans, then the tool should not generate an error such as that when I want to run consistency checks against the database.  This is akin to force feeding us something that isn’t all that good for our health.  There was even a connect item filed for this behavior, here.

So, what is it that causes this behavior?

Well, there are a couple of things that contribute to this behavior, and it can be reproduced from TSQL as well.  To recreate the failure, I created a small test database and then created a maintenance plan to run consistency checks against that database.  To find out what the plan actually executes, I turned to a more reliable tool: Profiler.

Next up is to run Profiler with a filter on the test database and then start the maintenance plan.  It shouldn’t take too long for the maintenance plan to complete.  When it completes, it is time to investigate the TSQL captured by Profiler, and it should become apparent pretty quickly which statement run by the maintenance plan causes checkdb to fail with the above-mentioned error.

Are you ready?  This code is pretty complex and perplexing.  You better sit down so you don’t fall from the surprise and shock.

EXEC sys.SP_CONFIGURE N'user options', 0 
RECONFIGURE

Why would a maintenance plan need to run that snippet every time that a checkdb is performed?

Now, there is another piece to this puzzle.  This error is only thrown when another configuration setting is present.  If we change that setting, the error no longer happens.  Soooo, a quick fix would be to change that setting.  The setting in question is “Allow Updates”.  It has a value of 1 and must be changed to a value of 0.  Since SQL 2005 the setting no longer has any effect, so we want it to be 0 anyway.
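If you want to confirm the current value before changing it, running sp_configure with just the option name will display it:

EXEC SP_CONFIGURE 'allow updates';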

EXEC SP_CONFIGURE 'allow updates' ,0
RECONFIGURE WITH override

Now, another option for this issue would be for the maintenance plan not to run sp_configure at all.  Or, if it is deemed necessary, for the snippet to be changed to the following.

EXEC sys.SP_CONFIGURE N'user options', 0 
RECONFIGURE WITH override

The best option, in my opinion, is not to use a maintenance plan for consistency checks at all.  I’d prefer to see a custom-built routine, a routine using sp_msforeachdb, or the maintenance routines developed by Ola Hallengren that can be found here.  All of these methods require that a little more thought be put into the consistency checking of your databases, but that means you will get better sleep through the night.
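For the sp_msforeachdb flavor, a minimal sketch looks something like this; it checks every online database and is a starting point, not a hardened routine:

-- The ? placeholder is replaced with each database name in turn
EXEC sys.sp_MSforeachdb N'
	IF DATABASEPROPERTYEX(N''?'', ''Status'') = ''ONLINE''
	BEGIN
		RAISERROR(''Checking ?...'', 0, 1) WITH NOWAIT;
		DBCC CHECKDB([?]) WITH NO_INFOMSGS;
	END';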
