Table Space updated again

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: March 28, 2014

Today we have another update for an age-old script on this site.  You can find the last update to the script here.

This time it is a bit of a bug fix.  The details of that bug are in the notes for the script.

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
 
/*
Purpose:
To Evaluate table sizes combined with index space consumption to determine higher cost tables in 
terms of storage, resources and maintenance needs.
 
ModifiedDate	ModifiedBy	Description
2013-11-21		JB			Tables without Indexes had a Null Value in the Output
							Fixed the output to sum properly for the NULLS in absence of an index
 
*/
BEGIN TRAN
 
DECLARE @dbsize DECIMAL(19,2)
        ,@logsize DECIMAL(19,2)
 
SET NOCOUNT ON
 
/*
**  Summary data.
*/
BEGIN
        SELECT @dbsize = SUM(CONVERT(DECIMAL(19,2),CASE WHEN type = 0 THEN SIZE ELSE 0 END)) / 128.0
                , @logsize = SUM(CONVERT(DECIMAL(19,2),CASE WHEN type = 1 THEN SIZE ELSE 0 END)) / 128.0
                FROM sys.database_files
 
END;
        WITH FirstPass AS (
                SELECT OBJECT_ID,
                        ReservedPage = CONVERT(DECIMAL(19,2),SUM(reserved_page_count)) / 128.0,
                        UsedPage = CONVERT(DECIMAL(19,2),SUM(used_page_count)) / 128.0,
                        PageCnt = SUM(
                        CONVERT(DECIMAL(19,2),CASE
                                WHEN (index_id < 2)
                                        THEN (used_page_count)
                                ELSE lob_used_page_count + row_overflow_used_page_count
                                END
                        )) * 8/1024,
                        iPageCnt = SUM(
                        CONVERT(DECIMAL(19,2),CASE
                                WHEN (index_id > 1)
                                        THEN (used_page_count)
                                ELSE 0
                                END
                        )) * 8/1024,
                        RowCnt = SUM(
                        CASE
                                WHEN (index_id < 2)
                                        THEN row_count
                                ELSE 0
                        END
                        )
                FROM sys.dm_db_partition_stats
                WHERE 1 = 1
					--AND OBJECTPROPERTY(object_id,'IsMSShipped') = 0
					AND index_id < 2
                GROUP BY OBJECT_ID
        ),IndexPass AS (
                SELECT OBJECT_ID,
                        iReservedPage = CONVERT(DECIMAL(19,2),SUM(reserved_page_count)) / 128.0
                        ,iUsedPage = CONVERT(DECIMAL(19,2),SUM(used_page_count)) / 128.0
                        ,iPageCnt = SUM(
                        CONVERT(DECIMAL(19,2),CASE
                                WHEN (index_id > 1)
                                        THEN (used_page_count)
                                ELSE 0
                                END
                        )) / 128.0
                        ,RowCnt = SUM(
                        CASE
                                WHEN (index_id < 2)
                                        THEN row_count
                                ELSE 0
                        END
                        )
                FROM sys.dm_db_partition_stats
                WHERE 1 = 1 
					--AND OBJECTPROPERTY(object_id,'IsMSShipped') = 0
					AND index_id > 1
                GROUP BY OBJECT_ID
        ),InternalTables AS (
                SELECT ps.OBJECT_ID,
                        ReservedPage = CONVERT(DECIMAL(19,2),SUM(reserved_page_count)) / 128.0
                        ,UsedPage = CONVERT(DECIMAL(19,2),SUM(used_page_count)) / 128.0
                FROM sys.dm_db_partition_stats  ps
                        INNER Join sys.internal_tables it
                                ON it.OBJECT_ID = ps.OBJECT_ID
                                And it.internal_type IN (202,204,211,212,213,214,215,216)
                WHERE it.parent_id = ps.OBJECT_ID
					--And OBJECTPROPERTY(ps.object_id,'IsMSShipped') = 0
                GROUP BY ps.OBJECT_ID
        ),Summary AS (
                SELECT
                        OBJECT_NAME (F.OBJECT_ID) AS ObjName 
                        ,SCHEMA_NAME(o.schema_id) AS SchemaName
                        ,CASE WHEN OBJECTPROPERTY(F.OBJECT_ID,'IsMSShipped') = 1 
							THEN 'YES' 
							ELSE 'NO' 
							END AS IsMsShipped
                        ,NumRows = MAX(F.RowCnt)
                        ,ReservedPageMB = SUM(ISNULL(F.ReservedPage,0) + ISNULL(i.ReservedPage,0))
                        ,DataSizeMB = SUM(F.PageCnt)
                        --,IndexSizeMB = SUM(CASE WHEN (F.UsedPage + ISNULL(i.UsedPage,0)) > F.PageCnt
                        --                                THEN ((F.UsedPage + ISNULL(i.UsedPage,0)) - F.PageCnt) ELSE 0 END) ,-- Equivalent of max_record_size from sys.dm_db_index_physical_stats
						,IndexSizeMB = SUM(ISNULL(ip.iPageCnt,0))
                        ,UnusedSpace = SUM(CASE WHEN (F.ReservedPage + ISNULL(i.ReservedPage,0)) > (F.UsedPage + ISNULL(i.UsedPage,0))
                                THEN ((F.ReservedPage + ISNULL(i.ReservedPage,0)) - (F.UsedPage + ISNULL(i.UsedPage,0))) ELSE 0 END)
                                + (SUM(ISNULL(ip.iReservedPage,0)) - SUM(ISNULL(ip.iUsedPage,0)))
                        ,IndexReservedMB = SUM(ISNULL(ip.iReservedPage,0))
                        ,dbsizeMB = @dbsize
                        ,LogSizeMB = @logsize
                FROM FirstPass F
					INNER JOIN sys.objects o
						ON F.OBJECT_ID = o.OBJECT_ID
					LEFT Outer Join InternalTables i
                        ON i.OBJECT_ID = F.OBJECT_ID
                    LEFT OUTER JOIN IndexPass ip
						ON F.OBJECT_ID = ip.OBJECT_ID
                GROUP BY F.OBJECT_ID,o.schema_id
        ),TotalUnused AS (
				SELECT SUM(ISNULL(UnusedSpace,0)) AS UnusedSpace
						,SUM(ISNULL(reservedpageMB,0))+SUM(ISNULL(IndexReservedMB,0)) AS Usedr
					FROM Summary
		)
        SELECT ObjName,SchemaName,IsMsShipped,NumRows, ReservedPageMB, ISNULL(DataSizeMB,0) AS DataSizeMB, ISNULL(IndexSizeMB,0) AS IndexSizeMB
						,ISNULL(S.UnusedSpace,0) AS UnusedSpace, dbsizeMB, LogSizeMB
						,TU.UnusedSpace AS TotalTableFreeSpace 
                        ,dbsizeMB - TU.Usedr AS DataFileFreeSpace  
                        /*within 1.5gb on a 1.76tb database or .000085% variance or 99.999915% accuracy */
                        ,PercentofDBPhysFile = ((ISNULL(IndexSizeMB,0) + ISNULL(DataSizeMB,0)) / @dbsize) * 100
                        ,PercentofDBUsedSpace = ((ISNULL(IndexSizeMB,0) + ISNULL(DataSizeMB,0)) / (@dbsize - TU.UnusedSpace)) * 100
 
        FROM Summary S
			CROSS APPLY TotalUnused TU
        ORDER BY PercentofDBUsedSpace DESC
 
ROLLBACK

Table Hierarchy updated

Recently a need resurfaced to explore the foreign key tree (hierarchy/genealogy) as it related to a specific table within a database.  As I pulled out the script from the repository, I realized there was some unfinished work to be done.  But there was also some polish that needed to be added.  This is an update to the most recent posting of that script.  You can see several revisions in the series at this link or the group here.

Some of the changes involve formatting and labeling.  I added a new column called “Direction” to help understand the relationship of the key to the table in question.  I also changed the FKGenealogy column (formerly called SortCol) to reflect the source table more accurately in the case when the key comes from an ancestor rather than a descendant.  The Level of the FK was also modified to show a little more clearly how far away the ancestor is in relation to the origin table.

A final adjustment comes in the Genealogy attribute.  Ancestors were all starting at the wrong point in the lineage.  I adjusted that so the lineage is shown from the point in the tree where the ancestor is related, rather than as a root coming directly from the origin table.

All in all, this script should make more sense to the user than the previous versions.

DECLARE    @StartParentTable    VARCHAR(256);
 
SELECT @StartParentTable = 'tblCampaignMain';
 
WITH Posterity (ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, FKGenealogy) AS (
  SELECT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
          ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
          ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
          ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
          ,0 AS FKLevel
          ,CAST('.'+CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX))+'.' AS VARCHAR(MAX))
      FROM sys.foreign_key_columns sfkc
          INNER Join sys.foreign_keys sfk
              ON sfkc.constraint_object_id = sfk.OBJECT_ID
      WHERE OBJECT_NAME(sfk.referenced_object_id) = @StartParentTable
  UNION All
  SELECT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
          ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
          ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
          ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
          ,FKLevel + 1
          ,STUFF(('.' + F.FKGenealogy + CAST(CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX)) + '.' AS VARCHAR(MAX))
                      ),1,1,'')
      FROM sys.foreign_keys sfk
          INNER Join Posterity F
              ON OBJECT_NAME(sfk.referenced_object_id) = F.ChildTable
              And F.FKGenealogy NOT LIKE '%'+CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX))+'%'
          INNER Join sys.foreign_key_columns sfkc
              ON sfkc.constraint_object_id = sfk.OBJECT_ID
      WHERE OBJECT_NAME(sfk.referenced_object_id) <> @StartParentTable
          And sfk.referenced_object_id <> sfk.parent_object_id
), Ancestry (ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, FKGenealogy) AS (
  SELECT DISTINCT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
      ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
      ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
      ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
      ,(FKLevel * -1)-1 AS FKLevel
      ,F.FKGenealogy + CAST(CAST(OBJECT_NAME(sfk.Parent_object_id) AS VARCHAR(MAX))+'.' AS VARCHAR(MAX))
  FROM Posterity F
      INNER Join sys.foreign_keys sfk
          ON F.ChildTable = OBJECT_NAME(sfk.parent_object_id)
          And F.ParentTable <> OBJECT_NAME(sfk.referenced_object_id)
      INNER Join sys.foreign_key_columns sfkc
          ON sfkc.constraint_object_id = sfk.OBJECT_ID
  UNION All
  SELECT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
      ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
      ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
      ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
      ,F.FKLevel -1
      ,STUFF(('.' + FKGenealogy + CAST(CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX)) + '.' AS VARCHAR(MAX))
                  ),1,1,'')
  FROM Ancestry F
      INNER Join sys.foreign_keys sfk
          ON F.ParentTable = OBJECT_NAME(sfk.parent_object_id)
      INNER Join sys.foreign_key_columns sfkc
          ON sfkc.constraint_object_id = sfk.OBJECT_ID
  WHERE F.ParentTable not in (SELECT ParentTable FROM Posterity)
      And sfk.referenced_object_id <> sfk.parent_object_id
      And F.ChildTable not IN (SELECT ChildTable FROM Posterity)
)
 
SELECT ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, FKGenealogy
	,CASE 
		WHEN FKLevel > 0 THEN 'Descendant' 
		WHEN FKLevel = 0 THEN 'Root'
		END AS Direction
FROM Posterity
UNION All
SELECT ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, FKGenealogy
	,'Ancestor' AS Direction
FROM Ancestry
  ORDER BY FKGenealogy ASC, Direction DESC
  OPTION (maxrecursion 500);

Last Time CheckDB was Run

Categories: Corner, News, Professional, SSC
Comments: 8 Comments
Published on: January 17, 2013

Back in November 2012, I shared a story about checking the last known good checkdb in the boot page of a database.  You can read that story here.

This is an important enough topic that it is worth repeating frequently, if for no other reason than to continue to hammer home how important it is to both run checkdb and know the last time that checkdb ran successfully.

Alas, I am writing to fix a few things with the script that I shared in that last post.

I run this script on every server I touch to get a report of the last known good checkdb for every single database.  I had been running the script across many servers without error.  Then it happened.  The script failed with a nasty error.

After a bit of looking, the flaw in my script became apparent.  I had not written the script with CS (case sensitivity) in mind.  I touch so few CS servers that I sometimes forget to check for that.  Slap my hands and I will work on that going forward.

So here is the update to the script.

CREATE TABLE #temp (
       Id INT IDENTITY(1,1), 
       ParentObject VARCHAR(255),
       [OBJECT] VARCHAR(255),
       Field VARCHAR(255),
       [VALUE] VARCHAR(255)
)
 
CREATE TABLE #DBCCRes (
       Id INT IDENTITY(1,1)PRIMARY KEY CLUSTERED, 
       DBName sysname ,
       dbccLastKnownGood DATETIME,
       RowNum	INT
)
 
DECLARE
	@DBName SYSNAME,
	@SQL    VARCHAR(512);
 
DECLARE dbccpage CURSOR
	LOCAL STATIC FORWARD_ONLY READ_ONLY
	FOR SELECT name
		FROM sys.databases
	WHERE 1 = 1
		AND state = 0
		--And name NOT IN ('tempdb')
	;
 
OPEN dbccpage;
FETCH NEXT FROM dbccpage INTO @DBName;
WHILE @@FETCH_STATUS = 0
BEGIN
	SET @SQL = 'Use [' + @DBName +'];' + CHAR(10)+ CHAR(13)
	SET @SQL = @SQL + 'DBCC Page ( ['+ @DBName +'],1,9,3) WITH TABLERESULTS;' + CHAR(10)+ CHAR(13)
 
	INSERT INTO #temp
		EXECUTE (@SQL);
	SET @SQL = ''
 
	INSERT INTO #DBCCRes
			( DBName, dbccLastKnownGood,RowNum )
		SELECT @DBName, VALUE
				, ROW_NUMBER() OVER (PARTITION BY Field ORDER BY VALUE) AS Rownum
			FROM #temp
			WHERE Field = 'dbi_dbccLastKnownGood';
 
	TRUNCATE TABLE #temp;
 
	FETCH NEXT FROM dbccpage INTO @DBName;
END
CLOSE dbccpage;
DEALLOCATE dbccpage;
 
SELECT DBName,dbccLastKnownGood
	FROM #DBCCRes
	WHERE RowNum = 1;
 
DROP TABLE #temp
DROP TABLE #DBCCRes
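
As an aside for readers on newer releases: SQL Server 2016 SP2 and later expose the same boot-page value directly, so the DBCC PAGE loop can be skipped entirely there.  A minimal sketch, assuming a 2016 SP2+ instance (not applicable to the 2008 R2-era servers discussed above):

-- LastGoodCheckDbTime reads the boot page value for us (SQL 2016 SP2+)
SELECT name AS DBName
		,DATABASEPROPERTYEX(name,'LastGoodCheckDbTime') AS dbccLastKnownGood
	FROM sys.databases
	WHERE state = 0;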

SSRS Subscriptions Report

As a part of my series leading up to Christmas 2012, I shared a script to report on SSRS Subscriptions.  It was soon found to have a bug with SQL Server 2008 R2 SP2.  In the comments on that post, I promised to post an updated script.  Here is that update – without the bug.

DECLARE @ReportName VARCHAR(100)
SET @ReportName = NULL;
 
CREATE TABLE #morepower (MonthDate BIGINT,N BIGINT,PowerN BIGINT PRIMARY KEY CLUSTERED
							,NameofMonth VARCHAR(25),WkDay VARCHAR(25))
;
 
WITH powers(powerN, n) AS (
	SELECT POWER(2,number), number 
		FROM master.dbo.spt_values 
		WHERE type = 'P' AND number < 31)
 
INSERT INTO #morepower ( MonthDate ,N,PowerN ,NameofMonth ,WkDay)
	SELECT ROW_NUMBER() OVER (ORDER BY N) AS MonthDate,N,PowerN
			,CASE WHEN N BETWEEN 0 AND 11 
				THEN DATENAME(MONTH,DATEADD(MONTH,N+1,0)-1)
				ELSE NULL
				END AS NameofMonth
			,CASE WHEN N BETWEEN 0 AND 6
				THEN DATENAME(weekday,DATEADD(DAY,n+1,0)-2)
				ELSE NULL
				END AS WkDay
		FROM powers
 
SELECT DISTINCT s.ScheduleID,Ca.PATH AS ReportManagerPath,Ca.Name AS ReportName
		, U.UserName AS SubscriptionCreator
		,Su.Description AS SubscriptionDescription,S.StartDate,Su.LastRunTime
		,CASE 
				WHEN s.RecurrenceType = 1 THEN 'One Off'
				WHEN s.RecurrenceType = 2 THEN 'Hour'
				WHEN s.RecurrenceType = 4 THEN 'Daily'
				WHEN s.RecurrenceType = 5 THEN 'Monthly' 
				WHEN s.RecurrenceType = 6 THEN 'Week of Month' 
			END AS RecurrenceType
		,s.EventType
		,ISNULL(REPLACE(REPLACE(STUFF(
					(SELECT ', ['+CONVERT(VARCHAR(20),MonthDate)+']' AS [text()] 
						FROM #morepower m1 
						WHERE m1.powerN < s.DaysofMonth+1 
							AND s.DaysofMonth & m1.powerN <>0 
						ORDER BY N FOR XML PATH(''), TYPE).value('.','VARCHAR(MAX)')
			   , 1, 2, ''),'[',''),']','')
			,'N/A') AS DaysofMonth
		,ISNULL(c1.NameOfMonth,'N/A') AS MonthString
		,ISNULL(c2.WkDays,'N/A') AS DaysofWeek
		,CASE MonthlyWeek
				WHEN 1 THEN 'First'
				WHEN 2 THEN 'Second'
				WHEN 3 THEN 'Third'
				WHEN 4 THEN 'Fourth'
				WHEN 5 THEN 'Last'
				ELSE 'N/A'
			END AS MonthlyWeek
		,ISNULL(CONVERT(VARCHAR(10),s.DaysInterval),'N/A') AS DaysInterval
		,ISNULL(CONVERT(VARCHAR(10),s.MinutesInterval),'N/A') AS MinutesInterval
		,ISNULL(CONVERT(VARCHAR(10),s.WeeksInterval),'N/A') AS WeeksInterval
	FROM #morepower mp, dbo.Schedule s
		INNER JOIN ReportSchedule RS
			ON S.ScheduleID = RS.ScheduleID
		INNER JOIN CATALOG Ca
			ON Ca.ItemID = RS.ReportID
		INNER JOIN Subscriptions Su
			ON Su.SubscriptionID = RS.SubscriptionID
		INNER JOIN Users U
			ON U.UserID = S.CreatedById
			OR U.UserID = Su.OwnerID
	CROSS APPLY (SELECT s.ScheduleID,REPLACE(REPLACE(STUFF(
							(SELECT ', ['+ NameofMonth + ']' AS [text()] 
								FROM #morepower m1 ,dbo.Schedule s1
								WHERE m1.NameofMonth IS NOT NULL 
									AND m1.powerN & s1.MONTH <>0 
									AND s1.ScheduleID = s.ScheduleID
								ORDER BY N FOR XML PATH(''), TYPE).value('.','VARCHAR(MAX)')
							, 1, 2, ''),'[',''),']','') AS NameOfMonth)c1
	CROSS APPLY (SELECT s.ScheduleID,REPLACE(REPLACE(STUFF(
							(SELECT ', [' + WkDay + ']' AS [text()] 
								FROM #morepower m1 ,dbo.Schedule s2
								WHERE m1.WkDay IS NOT NULL 
									AND DaysOfWeek & m1.powerN <>0
									AND  s2.ScheduleID = s.ScheduleID
								ORDER BY N FOR XML PATH(''), TYPE).value('.','VARCHAR(MAX)')
							, 1, 2, ''),'[',''),']','') AS WkDays) c2
	WHERE Ca.Name = ISNULL(@ReportName,Ca.Name);
 
DROP TABLE #morepower;

The inline code viewer above likes to reformat things, and the script will throw an error if the value function or the text() column alias gets capitalized.  Download the script here: SSRS_SubscriptionsV1_5

On the Ninth Day…

It’s the end of the world as we know it.  And as the song goes…I feel fine!  But hey, we didn’t start the fire.

Those are a couple of songs that pop into my head every time somebody starts talking doomsday and doomsday prophecies.

If you are reading this, I dare say that the world is still turning.  And that is a good thing because that gives us a chance to talk about the 12 days of pre-Christmas.

Today we will be talking about a tool that can be at the DBA’s disposal to help in tracking performance as well as problems.

First, there are a couple of items of housekeeping.  The first item is that I only realized with this post that the post on the first day was miscalculated: I had planned my twelve days to end on Christmas day.  That is wrong!  Counting down from that first post on the 13th means the 12th day will actually land on December 24th.  Maybe I will throw in a bonus 13th day post, or delay a post for a day, or just go with it.  It will be a mystery!

The second item is naturally the daily recap of the 12 days to date.

On the Ninth Day of pre-Christmas…

My DBA gave to me a Data Collection!

If only my DBA had told me that I would need to have Management Data Warehouse (MDW) preconfigured.  Well, that is not a problem.  We can handle that too.  For a tutorial on how to set up MDW, I am going to provide a link to one that has been very well written by Kalen Delaney.

The article written by Kalen covers the topic of MDW very well, from setting up the MDW, to setting up the Data Collectors, all the way down to the default reports you can run for MDW.  The MDW and canned Data Collectors can provide some really useful information for your environment.

What I want to share though is a means to add custom data collections to your MDW.  To create a custom collection, we need to rely on two stored procedures provided to us by Microsoft.  Those stored procedures are: sp_syscollector_create_collection_item and sp_syscollector_create_collection_set.  Both of these stored procedures are found in the, you guessed it, msdb database.

Each of these stored procedures has a number of parameters to help in the creation of an appropriate data collection.  When creating a data collection, you will first create the collection set, and then you add collection items to that set.

There are a few notable parameters for each stored procedure that I will cover.  Otherwise, you can refer back to the links for each of the stored procedures to see more information about the parameters.

Starting with the sp_syscollector_create_collection_set stored procedure, I want to point out the @schedule_name, @name, and @collection_mode parameters.  The name is pretty obvious – make sure you have a distinguishable name that is descriptive (my opinion), or at least have good documentation.  The collection mode has two values.  As noted in the documentation, you will want to choose one value over the other depending on the intended use of this data collector.  If running continuously, just remember to run in cached mode.  And last is the schedule name, which determines how frequently the job runs.

Unfortunately, the schedule names are not found in the documentation; rather, you are directed to query the sysschedules table.  To help you find those schedules, here is a quick query.

SELECT schedule_id,name
	FROM msdb.dbo.sysschedules
	WHERE name LIKE '%collector%';

Now on to the sp_syscollector_create_collection_item stored procedure.  There are three parameters that I want to lightly touch on.  For the rest, you can refer back to the documentation.  The parameters of interest here are @parameters, @frequency and @collector_type_uid.  Starting with the frequency parameter, this tells the collector how often to upload the data to the MDW if running in cached mode.  Be careful here to select an appropriate interval.  Next is the parameters parameter, which is really the workhorse of the collection item.  In the case of the custom data collector that I will show in a bit, this is where the TSQL query will go.

Last parameter to discuss is the collector type uid.  Like the schedule for the previous proc, the documentation for this one essentially refers you to a system view - syscollector_collector_types.  Here is a quick query to see the different collector types.

SELECT name
	FROM msdb.dbo.syscollector_collector_types;

The collector type that I will be using for this example is Generic T-SQL Query Collector Type.  A discussion on the four types of collectors can be reserved for another time.

Let’s move on to the example now.  This custom data collector is designed to help troubleshoot deadlock problems.  I want to accomplish this by querying the system_health extended event session.

I can query for deadlock information directly against the system_health session using a query like the following.

SELECT CAST(
                  REPLACE(
                        REPLACE(XEventData.XEvent.value('(data/value)[1]', 'varchar(max)'), 
                        '', ''),
                  '','')
            AS VARCHAR(4000)) AS DeadlockGraph
FROM
(SELECT CAST(target_data AS XML) AS TargetData
FROM sys.dm_xe_session_targets st
join sys.dm_xe_sessions s ON s.address = st.event_session_address
WHERE name = 'system_health') AS DATA
CROSS APPLY TargetData.nodes ('//RingBufferTarget/event') AS XEventData (XEvent)
WHERE XEventData.XEvent.value('@name', 'varchar(4000)') = 'xml_deadlock_report';

You may notice that I have converted to varchar(4000) from XML.  This is in large part to make sure the results will play nicely with the data collector.  Now to convert that to a query that can be used in the @parameters parameter, we get the following.

<ns:TSQLQueryCollector xmlns:ns="DataCollectorType"><Query><Value>
 
SELECT CAST(
                  REPLACE(
                        REPLACE(XEventData.XEvent.value(''(data/value)[1]'', ''varchar(max)''), 
                        '''', ''''),
                  '''','''')
            AS VARCHAR(4000)) AS DeadlockGraph
FROM
(SELECT CAST(target_data AS XML) AS TargetData
FROM sys.dm_xe_session_targets st
join sys.dm_xe_sessions s ON s.address = st.event_session_address
WHERE name = ''system_health'') AS DATA
CROSS APPLY TargetData.nodes (''//RingBufferTarget/event'') AS XEventData (XEvent)
WHERE XEventData.XEvent.value(''@name'', ''varchar(4000)'') = ''xml_deadlock_report'' 
 
</Value><OutputTable>systemhealthdeadlock</OutputTable></Query></ns:TSQLQueryCollector>

With this query, we are loading the necessary schema nodes that correlate to the Data Collector Type that we chose.  Since this parameter is XML, the schema must match or you will get an error.  We are now ready to generate a script that can create a deadlock data collector.

BEGIN TRANSACTION
BEGIN Try
DECLARE @collection_set_id_1 INT
DECLARE @collection_set_uid_2 UNIQUEIDENTIFIER
EXEC [msdb].[dbo].[sp_syscollector_create_collection_set]
	  @name=N'systemhealthdeadlock'
	, @collection_mode=1
	, @description=N'systemhealthdeadlock'
	, @logging_level=1
	, @days_until_expiration=14
	, @schedule_name=N'CollectorSchedule_Every_15min'
	, @collection_set_id=@collection_set_id_1 OUTPUT
	, @collection_set_uid=@collection_set_uid_2 OUTPUT
 
SELECT @collection_set_id_1, @collection_set_uid_2
 
DECLARE @collector_type_uid_3 UNIQUEIDENTIFIER
SELECT @collector_type_uid_3 = collector_type_uid 
	FROM [msdb].[dbo].[syscollector_collector_types] 
	WHERE name = N'Generic T-SQL Query Collector Type';
 
DECLARE @collection_item_id_4 INT
EXEC [msdb].[dbo].[sp_syscollector_create_collection_item]
	  @name=N'systemhealthdeadlock'
	, @PARAMETERS=N'<ns:TSQLQueryCollector xmlns:ns="DataCollectorType"><Query><Value>
 
SELECT CAST(
                  REPLACE(
                        REPLACE(XEventData.XEvent.value(''(data/value)[1]'', ''varchar(max)''), 
                        '''', ''''),
                  '''','''')
            AS varchar(4000)) AS DeadlockGraph
FROM
(SELECT CAST(target_data AS XML) AS TargetData
from sys.dm_xe_session_targets st
join sys.dm_xe_sessions s on s.address = st.event_session_address
where name = ''system_health'') AS Data
CROSS APPLY TargetData.nodes (''//RingBufferTarget/event'') AS XEventData (XEvent)
where XEventData.XEvent.value(''@name'', ''varchar(4000)'') = ''xml_deadlock_report'' 
 
</Value><OutputTable>systemhealthdeadlock</OutputTable></Query></ns:TSQLQueryCollector>'
	, @collection_item_id=@collection_item_id_4 OUTPUT
	, @frequency=30
	, @collection_set_id=@collection_set_id_1
	, @collector_type_uid=@collector_type_uid_3
 
SELECT @collection_item_id_4
 
COMMIT TRANSACTION;
END Try
BEGIN Catch
ROLLBACK TRANSACTION;
DECLARE @ErrorMessage NVARCHAR(4000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
DECLARE @ErrorNumber INT;
DECLARE @ErrorLine INT;
DECLARE @ErrorProcedure NVARCHAR(200);
SELECT @ErrorLine = ERROR_LINE(),
       @ErrorSeverity = ERROR_SEVERITY(),
       @ErrorState = ERROR_STATE(),
       @ErrorNumber = ERROR_NUMBER(),
       @ErrorMessage = ERROR_MESSAGE(),
       @ErrorProcedure = ISNULL(ERROR_PROCEDURE(), '-');
RAISERROR (14684, @ErrorSeverity, 1 , @ErrorNumber, @ErrorSeverity, @ErrorState, @ErrorProcedure, @ErrorLine, @ErrorMessage);
 
END Catch;
 
GO

Upon creation, this will create a SQL Agent job with the defined schedule.  Since this is a non-cached data collector set, the Agent job will adhere to the schedule specified and upload data on that interval.
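
One thing worth noting: the creation script above does not start the collection set.  A minimal sketch to start it manually, assuming the set name used above:

DECLARE @csid INT;

-- Look up the collection set we just created and start it
SELECT @csid = collection_set_id
	FROM msdb.dbo.syscollector_collection_sets
	WHERE name = N'systemhealthdeadlock';

EXEC msdb.dbo.sp_syscollector_start_collection_set @collection_set_id = @csid;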

Now all we need to do is generate a deadlock to see if it is working.  It is also a good idea to introduce you to the table that will be created due to this data collector.  Once we create this collector set, a new table will be created in the MDW database.  In the case of this collector set, we have a table with the schema and name of custom_snapshots.systemhealthdeadlock.

This new table will have three columns.  One column represents the DeadlockGraph as we retrieved from the query we provided to the @parameters parameter.  The remaining columns are data collector columns for the collection date and the snapshot id.

Now that we have covered all of that, your favorite deadlock query has had enough time to finally fall victim to a deadlock.  We should also have some information recorded in the custom_snapshots.systemhealthdeadlock table relevant to the deadlock information (if not, it will be there once the agent job runs, or you can run a snapshot from SSMS of the data collector).  With a quick query, we can start looking into the deadlock problem.

SELECT collection_time,CAST(DeadlockGraph AS XML) AS DeadlockGraph, snapshot_id
	FROM mdw.custom_snapshots.systemhealthdeadlock;

This query will give me a few entries (since I went overkill and created a bunch of deadlocks).  If I click the DeadlockGraph cell in the result sets, I can then view the XML of the DeadlockGraph, as in the following.

<deadlock>
  <victim-list>
    <victimProcess id="process5a4ebc8" />
  </victim-list>
  <process-list>
    <process id="process5a4ebc8" taskpriority="0" logused="0" waitresource="KEY: 26:72057594048020480 (b4903b2250cc)" waittime="609" ownerId="2803145" transactionname="user_transaction" lasttranstarted="2012-12-20T22:32:09.987" XDES="0x8008d950" lockMode="X" schedulerid="8" kpid="13656" status="suspended" spid="87" sbid="0" ecid="0" priority="0" trancount="4" lastbatchstarted="2012-12-20T22:38:50.020" lastbatchcompleted="2012-12-20T22:38:50.020" lastattention="2012-12-20T22:38:24.217" clientapp="Microsoft SQL Server Management Studio - Query" hostname="SomeServer" hostpid="7604" loginname="SomeLogin" isolationlevel="serializable (4)" xactid="2803145" currentdb="26" lockTimeout="4294967295" clientoption1="673187936" clientoption2="390200">
      <executionStack>
        <frame procname="" line="10" stmtstart="40" sqlhandle="0x02000000eac1af36f412db4e21d9dcc86feb261fa6bcd230" />
        <frame procname="" line="10" stmtstart="356" stmtend="518" sqlhandle="0x0200000095b4ee32a25e9724dd73fd6894c60748af6c136b" />
      </executionStack>
      <inputbuf>
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
WHILE 1=1 
BEGIN 
BEGIN TRAN
UPDATE Purchasing.Vendor
SET CreditRating = 2
WHERE BusinessEntityID = 1492;
 
UPDATE Purchasing.Vendor
SET CreditRating = 1
WHERE BusinessEntityID = 1494;
 
COMMIT TRAN 
END
   </inputbuf>
    </process>
    <process id="process5a44bc8" taskpriority="0" logused="0" waitresource="KEY: 26:72057594048020480 (ade87e3a717c)" waittime="609" ownerId="2878446" transactionname="user_transaction" lasttranstarted="2012-12-20T22:38:50.020" XDES="0xa9abd950" lockMode="X" schedulerid="7" kpid="15008" status="suspended" spid="86" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2012-12-20T22:38:47.887" lastbatchcompleted="2012-12-20T22:38:47.887" lastattention="2012-12-20T22:36:21.247" clientapp="Microsoft SQL Server Management Studio - Query" hostname="SomeServer" hostpid="7604" loginname="SomeLogin" isolationlevel="serializable (4)" xactid="2878446" currentdb="26" lockTimeout="4294967295" clientoption1="673187936" clientoption2="390200">
      <executionStack>
        <frame procname="" line="9" stmtstart="40" sqlhandle="0x02000000eac1af36f412db4e21d9dcc86feb261fa6bcd230" />
        <frame procname="" line="9" stmtstart="352" stmtend="510" sqlhandle="0x020000000c4b9412577ec884cbd51882e5310dd340216739" />
      </executionStack>
      <inputbuf>
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
WHILE 1=1 
BEGIN 
BEGIN TRAN
UPDATE Purchasing.Vendor
SET CreditRating = 1
WHERE BusinessEntityID = 1494;
UPDATE Purchasing.Vendor
SET CreditRating = 2
WHERE BusinessEntityID = 1492;
COMMIT TRAN 
END
   </inputbuf>
    </process>
  </process-list>
  <resource-list>
    <keylock hobtid="72057594048020480" dbid="26" objectname="" indexname="" id="lock5aeec80" mode="X" associatedObjectId="72057594048020480">
      <owner-list>
        <owner id="process5a44bc8" mode="X" />
      </owner-list>
      <waiter-list>
        <waiter id="process5a4ebc8" mode="X" requestType="wait" />
      </waiter-list>
    </keylock>
    <keylock hobtid="72057594048020480" dbid="26" objectname="" indexname="" id="lock6164d80" mode="X" associatedObjectId="72057594048020480">
      <owner-list>
        <owner id="process5a4ebc8" mode="X" />
      </owner-list>
      <waiter-list>
        <waiter id="process5a44bc8" mode="X" requestType="wait" />
      </waiter-list>
    </keylock>
  </resource-list>
</deadlock>

Code to generate deadlock courtesy of “SQL Server 2012 T-SQL Recipes: A Problem-Solution Approach” by Jason Brimhall, Wayne Sheffield et al (Chapter 12, pages 267-268).  If you examine the deadlock graph you will see the code that generated the deadlock.

Since this is being pulled from the RingBuffer target of the system_health session, it can prove useful to store that data in a table such as I have done.  The reason is that the RingBuffer can be overwritten, and with good timing on the data collector, we can preserve this information for later retrieval and troubleshooting.  Deadlocks don’t always happen at the most opportune time, and they are even less likely to occur while we are staring at the screen waiting for them to happen.

As you read more about the stored procedures used to create a data collector, you will see that there is a retention parameter.  This helps prevent the table from getting too large on us.  We can use it to ensure that an appropriate retention is set for these custom collectors.
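
Should the retention need adjusting after the fact, there is an update procedure for collection sets.  A quick sketch, assuming the collection set created earlier (we specified 14 days at creation time):

DECLARE @csid INT;

SELECT @csid = collection_set_id
	FROM msdb.dbo.syscollector_collection_sets
	WHERE name = N'systemhealthdeadlock';

-- Bump the retention for this collection set from 14 days to 30 days
EXEC msdb.dbo.sp_syscollector_update_collection_set
	  @collection_set_id = @csid
	, @days_until_expiration = 30;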

Conclusion

Creating a custom data collector can be very handy for a DBA, especially in times of troubleshooting.  These collectors are also quite useful for trending and analysis.  Consider this a tool in the chest for the poor man. ;)

Enjoy and stay tuned!

All scripts and references were for SQL 2008 R2.  The Deadlock script was pulled from the 2012 book, but the script runs the same for the 2008 version of the AdventureWorks database.

On the Eighth Day…

Today’s post is merely an illusion – the illusion being that we have finally stopped talking about the msdb database.  I’ll explain that later in this post.

This should be a good addition to the script toolbox for those Mere Mortal DBAs out there supporting their corporate SSRS environment.  Everybody could use a script now and then that helps them better support their environment and perform their DBA duties, right?

No reading ahead now.  We’ll get to the script soon enough.  First, we have a bit of business to cover just as we normally do.

We need to quickly recap the first seven days thus far (after all, the song does a recap with each day).

 

  1. Runaway Jobs – 7th Day
  2. Maintenance Plan Gravage - 6th Day
  3. Table Compression - 5th Day
  4. Exercise for msdb - 4th Day
  5. Backup, Job and Mail History Cleanup - 3rd Day
  6. Service Broker Out of Control - 2nd Day
  7. Maint Plan Logs - 1st Day

On the Eighth Day of pre-Christmas…

My DBA gave to me a means to see Report Subscriptions and their schedules.

One of the intriguing points that we find with having a reporting environment is that we also need to report on that reporting environment.  And one of the nuisances of dealing with a Reporting Services environment is that data like report subscription schedules is not very human-friendly.

Part of the issue is that you need to be fluent with math.  Another part of the issue is that you need to be a little familiar with bitwise operations in SQL Server.  That said, it is possible to get by without understanding both very well.  And as a last resort, there is always the manual method of using Report Manager to check the subscriptions for each of the reports that have been deployed to that server.  Though, I think you will find this to be a bit tedious if you have a large number of reports.

I have seen more than one script that provides the schedule information for the subscriptions without using math and just relying on the bitwise operations.  This tends to produce a lot of repetitive code.  The method works, I’m just not that interested in the repetitive nature employed.

Within SQL Server you should notice that several tables, views, and processes employ powers of 2 – the base 2, or binary, number system.  This is natural since binary is so integral to computer science in general.  Powers of 2 translate to binary fairly easily and then integrate well with bitwise operations.

The following table demonstrates the powers of 2 and conversion to binary.

Power of 2    Value    Binary
0             1        1
1             2        10
2             4        100
3             8        1000
4             16       10000
5             32       100000
6             64       1000000
7             128      10000000
8             256      100000000

Getting values between the ones listed above is a matter of addition: we add one power of 2 to another.  So if I need a value of 7, then I need 2^0 + 2^1 + 2^2, which gives a binary value of 0111.  This is where the need for bit comparisons comes into play, so we will use some bitwise operations (read more here) to quickly figure out which values are used to reach an end value of 7 (so I don’t really need to know a lot of math there ;) ).
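
To illustrate, here is a quick throwaway query that decomposes the value 7 back into its component powers of 2 using the bitwise AND operator:

-- Decompose 7 into its component powers of 2 via bitwise AND
DECLARE @val INT = 7;

SELECT POWER(2,N) AS PowerOf2
	FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7)) Tally(N)
	WHERE @val & POWER(2,N) <> 0;
-- Returns 1, 2 and 4; i.e. 2^0 + 2^1 + 2^2 = 7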

How does this Apply to Schedules?

This background has everything to do with scheduling in SSRS.  Within the ReportServer database, there is a table called Schedule in the dbo schema.  This table has multiple columns that store pieces of the Subscription Schedule.  The three key columns are DaysofWeek, DaysofMonth and Month.  The values stored in these columns are all sums of the powers of 2 necessary to represent multiple days or months.

For instance, you may see the following values

DaysOfWeek    DaysOfMonth    Month
62            135283073      2575

These values are not friendly to normal, everyday, mere mortal DBAs.  The values from the preceding table are shown below with the human-friendly data they represent.

DaysOfWeek                                   DaysOfMonth       Month
62                                           135283073         2575
Monday,Tuesday,Wednesday,Thursday,Friday     1,8,9,15,21,28    January,February,March,April,October,December

That is better for us to understand, but not necessarily better to store in the database.  So, I hope you can see the value of storing it in a numeric representation that is easily deciphered through math and TSQL.

Without further ado, we have a script to report on these schedules without too much repetition.

DECLARE @ReportName VARCHAR(100)
SET @ReportName = NULL;
 
CREATE TABLE #morepower (MonthDate BIGINT,N BIGINT,PowerN BIGINT PRIMARY KEY CLUSTERED
							,NameofMonth VARCHAR(25),WkDay VARCHAR(25))
;
WITH E1(N) AS ( --=== Create Ten 1's
                 SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 --10
               ),
      E2(N) AS (SELECT 1 FROM E1 a, E1 b),   --100
cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT N)) FROM E2  UNION ALL SELECT 0
),powers(powerN,n) AS (SELECT POWER(2,N),N FROM cteTally WHERE N < 31)
 
INSERT INTO #morepower ( MonthDate ,N,PowerN ,NameofMonth ,WkDay)
	SELECT ROW_NUMBER() OVER (ORDER BY N) AS MonthDate,N,PowerN
			,CASE WHEN N BETWEEN 0 AND 11 
				THEN DATENAME(MONTH,DATEADD(MONTH,N+1,0)-1)
				ELSE NULL
				END AS NameofMonth
			,CASE WHEN N BETWEEN 0 AND 6
				THEN DATENAME(weekday,DATEADD(DAY,n+1,0)-2)
				ELSE NULL
				END AS WkDay
		FROM powers
 
SELECT DISTINCT s.ScheduleID,Ca.PATH AS ReportManagerPath,Ca.Name AS ReportName
		, U.UserName AS SubscriptionCreator
		,Su.Description AS SubscriptionDescription,S.StartDate,Su.LastRunTime
		,CASE 
			WHEN s.RecurrenceType = 1 THEN 'One Off'
			WHEN s.RecurrenceType = 2 THEN 'Hour'
			WHEN s.RecurrenceType = 4 THEN 'Daily'
			WHEN s.RecurrenceType = 5 THEN 'Monthly' 
			WHEN s.RecurrenceType = 6 THEN 'Week of Month' 
		END AS RecurrenceType
		,s.EventType
		,ISNULL(STUFF(
				(SELECT ','+CONVERT(VARCHAR(50),MonthDate) AS [text()] 
					FROM #morepower m1 
					WHERE m1.powerN < s.DaysofMonth+1 
						AND s.DaysofMonth & m1.powerN <>0 
					ORDER BY N FOR XML PATH('')),1,1,''),'N/A') AS DaysofMonth
		,ISNULL(c1.NameOfMonth,'N/A') AS MonthString
		,ISNULL(c2.WkDays,'N/A') AS DaysofWeek
		,CASE MonthlyWeek
				WHEN 1 THEN 'First'
				WHEN 2 THEN 'Second'
				WHEN 3 THEN 'Third'
				WHEN 4 THEN 'Fourth'
				WHEN 5 THEN 'Last'
				ELSE 'N/A'
			END AS MonthlyWeek
		,ISNULL(CONVERT(VARCHAR(10),s.DaysInterval),'N/A') AS DaysInterval
		,ISNULL(CONVERT(VARCHAR(10),s.MinutesInterval),'N/A') AS MinutesInterval
		,ISNULL(CONVERT(VARCHAR(10),s.WeeksInterval),'N/A') AS WeeksInterval
	FROM #morepower mp, dbo.Schedule s
		INNER JOIN ReportSchedule RS
			ON S.ScheduleID = RS.ScheduleID
		INNER JOIN CATALOG Ca
			ON Ca.ItemID = RS.ReportID
		INNER JOIN Subscriptions Su
			ON Su.SubscriptionID = RS.SubscriptionID
		INNER JOIN Users U
			ON U.UserID = S.CreatedById
			OR U.UserID = Su.OwnerID
	CROSS APPLY (SELECT s.ScheduleID,STUFF(
							(SELECT ','+NameofMonth AS [text()] 
								FROM #morepower m1 ,dbo.Schedule s1
								WHERE m1.NameofMonth IS NOT NULL 
									AND m1.powerN & s1.MONTH <>0 
									AND s1.ScheduleID = s.ScheduleID
								ORDER BY N FOR XML PATH('')),1,1,'') AS NameOfMonth)c1
	CROSS APPLY (SELECT s.ScheduleID,STUFF(
							(SELECT ','+WkDay AS [text()] 
								FROM #morepower m1 ,dbo.Schedule s2
								WHERE m1.WkDay IS NOT NULL 
									AND DaysOfWeek & m1.powerN <>0
									AND  s2.ScheduleID = s.ScheduleID
								ORDER BY N FOR XML PATH('')),1,1,'') AS WkDays) c2
	WHERE Ca.Name = ISNULL(@ReportName,Ca.Name);
 
DROP TABLE #morepower;

Consider this as V1 of the script with expected changes coming forth.

I have set this up so a specific report name can be provided or not.  If not provided, the query will return all scheduling information for all reports.

Through the use of a numbers table (done through the CTE), I have been able to create a map table for each of the necessary values to be parsed from the schedule later in the script.  In the creation of that map table, note the use of the power function.  This map table was the critical piece in my opinion to create a script that could quickly decipher the values in the schedule and provide something readable to the DBA.

Conclusion

I wrote this script because I feel it is important to know what reports are running and when they are running.  Add to that that management also likes to know that information, so there is value in it.  I found scripts on the web that used the bitwise operation piece, but they relied on a lot of repetitive code to determine each weekday and month.

An alternative would be to query the msdb database, since scheduled reports run via SQL Agent jobs; a rough sketch of that approach follows.  I hope you find this report useful and that you can put it to use.
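
For the curious, subscription schedules show up as SQL Agent jobs (named by their ScheduleID) under the 'Report Server' job category, so something along these lines lists them:

-- SSRS subscription schedules run as Agent jobs in the 'Report Server' category
SELECT j.name AS ScheduleID
		,j.enabled
		,j.date_created
	FROM msdb.dbo.sysjobs j
		INNER JOIN msdb.dbo.syscategories c
			ON j.category_id = c.category_id
	WHERE c.name = N'Report Server';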

On the Seventh Day…

So far this series has been a bit of fun.  We have discussed a fair number of useful things to date.  And here we are only on the seventh day.

Today we will be reverting our attention back to the msdb database.  This time not as a means of maintaining the under-carriage but rather to help us try and become better DBAs.

Sometimes to be a better DBA, we have to be a bit proactive and less reactive.

We’ll get to that shortly.  As we have done just about every day so far though, we need to recap real quick what we have to date in the 12 Days of pre-Christmas.

 

  1. Maintenance Plan Gravage – Day 6
  2. Table Compression - Day 5
  3. Exercise for msdb - Day 4
  4. Backup, Job and Mail History Cleanup - 3rd Day
  5. Service Broker Out of Control - 2nd Day
  6. Maint Plan Logs - 1st Day

On the Seventh Day of pre-Christmas…

My DBA gave to me an early Nuclear Fallout detection and warning system.  Ok, maybe not quite that extensive – but it could sure feel like it if something were to slip through the crevasses and the business management started breathing down your neck.

Have you ever had a SQL Job run longer than it should have?  Has the job run long enough that it ran past the next scheduled start time?  These are both symptoms that you might want to monitor for and track in your environment.  If you are watching for it – you should be able to preemptively strike with warning flares sent out to the interested parties.  Then you would be more of the hero than the scapegoat.  And we all want to be on the winning side of that conversation without the confrontation from time to time ;) .

Today, I am going to share a quick script that can be used to help monitor for jobs that have waltzed well beyond the next scheduled start time.  First though, there is a script available out there for the jobs that have run beyond normal run times.  You can check out this article by Thomas LaRock to get a good script to check and log long running jobs.

Though the criteria are similar – we do have two different needs that need to be reported on and monitored.  This is why we have this second script.  If you have a job that should run every 15 minutes but you find that the job has been running non-stop for 45 minutes, that is quite a problem.  And many times we should alert on something like that.

So here is the quick script.

WITH preagg AS (   
SELECT sj.name,MAX(sa.last_executed_step_date) AS last_executed_step_date
		,MAX(sa.start_execution_date) AS start_execution_date
		,MAX(sa.stop_execution_date) AS stop_execution_date
		,MAX(sa.next_scheduled_run_date) AS next_scheduled_run_date
		,MAX(sh.run_status) AS run_status
		,MAX(ca.next_run_date_time) AS Next_Scheduled_RunDate
		,MAX(CAST(STUFF(STUFF(REPLACE(STR(sh.run_duration, 6), ' ', '0')
							, 3, 0, ':')
						, 6, 0, ':')
				AS VARCHAR)) AS Run_Duration
	FROM msdb.dbo.sysjobs sj
		INNER JOIN msdb.dbo.sysjobactivity sa
			ON sj.job_id = sa.job_id
		LEFT OUTER JOIN msdb.dbo.sysjobhistory sh
			ON sa.job_history_id = sh.instance_id
		INNER JOIN msdb.dbo.sysjobschedules sjs 
			ON sj.job_id = sjs.job_id
        INNER JOIN msdb.dbo.sysschedules ss 
			ON sjs.schedule_id = ss.schedule_id
        CROSS APPLY (SELECT CONVERT(DATETIME, 
                    CASE WHEN sjs.next_run_date = 0 THEN NULL
                    ELSE RIGHT('00000000' + CONVERT(VARCHAR(8), sjs.next_run_date),8) 
						+ ' ' + 
						STUFF(STUFF(RIGHT('000000' + CONVERT(VARCHAR(6), sjs.next_run_time),6)
									, 3, 0, ':')
							, 6, 0, ':')
                    END)
            ) ca(next_run_date_time)
	WHERE (sa.start_execution_date > 0
		AND sa.stop_execution_date IS NULL)
		And (ISNULL(sa.next_scheduled_run_date,GETDATE()) <> ca.next_run_date_time
			OR ISNULL(sh.run_status,4) in (1,4))
	/*
		Status of the job execution:
		0 = Failed
		1 = Succeeded
		2 = Retry
		3 = Canceled
		4 = In progress
	*/
	GROUP BY sj.name
)
SELECT *
	FROM preagg
	WHERE next_scheduled_run_date IS NULL
	ORDER BY Next_Scheduled_RunDate

From here, it is a matter of creating a SQL job to run this script.  The schedule is dependent on job frequency and your monitoring needs.  It can very easily be set to run every five minutes.

Once the job is created, then it is a matter of setting up a mechanism to alert.  There are any number of ways to do that as well; one sketch follows, though I leave it to any who use the script to determine how best to implement it for their own environment.
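
As one example of an alerting mechanism, the results could be pushed out with Database Mail whenever anything turns up.  This is only a sketch: it assumes Database Mail is configured, that the monitoring query has been modified to dump its results into a global temp table, and the profile name and recipients are hypothetical.

-- Alert sketch: assumes the monitoring query above was changed to write its
-- results into ##LongRunners (a global temp table, since @query runs in its
-- own session and cannot see local temp tables)
IF EXISTS (SELECT 1 FROM ##LongRunners)
BEGIN
	EXEC msdb.dbo.sp_send_dbmail
		  @profile_name = N'DBA Mail'				-- hypothetical mail profile
		, @recipients = N'dba-team@example.com'		-- hypothetical recipients
		, @subject = N'Jobs running past their next scheduled start time'
		, @query = N'SELECT name, start_execution_date, Next_Scheduled_RunDate
						FROM ##LongRunners';
END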

Conclusion

This script will catch jobs that are running and have exceeded the next scheduled start time.  This is accomplished through the filters in the where clause.  With a couple of date comparisons as well as filtering on run_status, we can get the desired results.

 

On the Sixth Day…

What better way to kick off the sixth day of pre-Christmas than with six slices of foie gras?  No animals have been hurt in the making of this post!

This could be a pretty festive dish thanks in part to those geese.  I enjoy a little foie gras every now and again.

No, this was not just a clever ploy to bring fat into another post.  Rather, I feel that foie gras is more of a delicacy than eggs.

Today, we will be talking of a delicacy in the database world.  It is a feature that should help every DBA sleep better at night.  Sadly, this delicacy can be tainted by another feature that may just give you heartburn instead.

First though, let’s recap.

  1. Table Compression – Day 5
  2. Exercise for msdb - Day 4
  3. Backup, Job and Mail History Cleanup - 3rd Day
  4. Service Broker Out of Control - 2nd Day
  5. Maint Plan Logs - 1st Day

On the Sixth Day of pre-Christmas…

Picture Courtesy of AllCreatures.org

My DBA gave to me a way to ensure my databases are not corrupt.

Sadly, the method chosen is to force-feed (gavage) the checkdb through an SSIS-style maintenance plan.  Have you ever tried to run checkdb through a maintenance plan?  It doesn’t always work so well.

Many times, when running checkdb, you may run into this pretty little error message.

Msg 5808, Level 16, State 1, Line 2
Ad hoc update to system catalogs is not supported.

Well, that is just downright wrong.  If I am provided a tool to create maintenance plans, then the tool should not generate an error such as that when I want to run consistency checks against the database.  This is akin to force feeding us something that isn’t all that good for our health.  There was even a connect item filed for this behavior, here.

So, what is it that causes this behavior?

Well, there are a couple of things that contribute to this behavior.  This can be reproduced from TSQL as well.  To find what is causing this behavior, I used a tool that is a bit more reliable.  To recreate the failure, I created a small test database and then created a maintenance plan to run consistency checks for that database.  The reliable tool I used is Profiler.

Next up is to run Profiler with a filter for the test database, then start the maintenance plan.  It shouldn’t take too long for the maintenance plan to complete.  When it completes, it is time to investigate the TSQL generated by Profiler, and it should become apparent pretty quickly what TSQL being run during the maintenance plan causes the checkdb to fail with the above-mentioned error.

Are you ready?  This code is pretty complex and perplexing.  You better sit down so you don’t fall from the surprise and shock.

EXEC sys.SP_CONFIGURE N'user options', 0 
RECONFIGURE

Why would a maintenance plan need to run that snippet every time that a checkdb is performed?

Now, there is another piece to this puzzle.  This error is thrown when another configuration setting is present.  If we change that setting, the error no longer happens.  Soooo, a quick fix would be to change that setting.  The setting in question is “Allow Updates”.  It has a value of 1 and must be changed to a value of 0.  Since SQL 2005, we want it to be a 0 anyway.

EXEC SP_CONFIGURE 'allow updates' ,0
RECONFIGURE WITH override

Now, another option for this issue would be that the maintenance plan not run the sp_configure at all.  Or, if it is deemed necessary, that it be changed to the following.

EXEC sys.SP_CONFIGURE N'user options', 0 
RECONFIGURE WITH override

The best option in my opinion is that a maintenance plan not be used for consistency checks.  I’d prefer to see a custom built routine, a routine using sp_msforeachdb, or the maintenance plan routines developed by Ola Hallengren that can be found here.  All of these methods require that a little more thought be put into the consistency checking of your database, but that means you will get better sleep through the night.
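
For reference, the sp_msforeachdb route can be as bare-bones as the following; it lacks the logging and error handling a production routine should have, but it sidesteps the maintenance plan problem entirely:

-- Bare-bones consistency check of every database via the undocumented
-- sp_MSforeachdb; add logging and error handling for production use
EXEC sp_MSforeachdb 'DBCC CHECKDB ([?]) WITH NO_INFOMSGS;';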

On the Fifth Day…

Today we will have a quick change of pace.  This will be less about maintenance and more about internals.

Today’s topic is one of particular interest to me as well.  The topic is a small section of a presentation I like to give at User Groups and at SQL Saturdays.

Before we take this diversion from the under-carriage to something more related to the engine, let’s have a brief recap of the previous four days of pre-Christmas.

So far the gifts of pre-Christmas have included the following articles.

  1. Exercise for msdb – Day 4
  2. Backup, Job and Mail History Cleanup - 3rd Day
  3. Service Broker Out of Control - 2nd Day
  4. Maint Plan Logs - 1st Day

On the Fifth Day of pre-Christmas…

My DBA gave to me a much more compact database.  Surprisingly, it can be as little as 20% of the original size.

I know, I know.  I’ve heard it before but this is not the compression used by doublespace and drivespace that we were given many years ago.  This really is much better.

And yes I have heard about the performance factor too.  That is a topic for another discussion.  As an aside, when it comes to performance, I always tell people that they must test for themselves because mileage will vary.

No, what I want to talk about is much different.  I want to talk about the CD Array (at the page level) and the new data types specific to compression that you may encounter within the CD Array.

CD Array Data Types

SQL Server introduces us to 13 data types that are used within the CD Array when Compression has been enabled.  Twelve of these data types can be seen when Row Compression has been enabled on an object.  The thirteenth data type is only applicable when page compression has been implemented.

There is no guarantee that any or all of these data types will be present on a page related to an object that has been compressed using either Row Compression or Page Compression.  If you want to find these data types on a compressed page, you may have to do a little hunting.

To demonstrate that these data types exist and that they can be found, I have a sample script.

USE CompressTest
GO
 
IF EXISTS(SELECT NAME FROM sys.tables WHERE name=N'CDTypes')
BEGIN
	DROP TABLE dbo.CDTypes
END
 
IF EXISTS(SELECT NAME FROM sys.tables WHERE name=N'CDTypes2')
BEGIN
	DROP TABLE dbo.CDTypes2
END
 
CREATE TABLE dbo.CDTypes( 
	SomeNull INT
	,SomeBit BIT
	,Some1Byte INT IDENTITY(1,1) PRIMARY KEY CLUSTERED
	,Some2Byte CHAR(2)
	,Some3Byte CHAR(3)
	,Some4Byte CHAR(4)
	,Some5Byte CHAR(5)
	,Some6Byte CHAR(6)
	,Some7Byte CHAR(7)
	,Some8Byte CHAR(8)
	,SomeLong CHAR(9)
	,SomeBit2 BIT
) WITH (DATA_COMPRESSION=ROW) ;	
 
 SELECT TOP 20000
        SomeID       = IDENTITY(INT,1,1),
        SomeInt      = ABS(CHECKSUM(NEWID()))%50000+1,
        SomeLetters2 = REPLICATE(CHAR(ABS(CHECKSUM(NEWID()))%26+65)
                     + CHAR(ABS(CHECKSUM(NEWID()))%26+65),10),
        SomeMoney    = CAST(ABS(CHECKSUM(NEWID()))%10000 /100.0 AS MONEY),
        SomeDate     = CAST(RAND(CHECKSUM(NEWID()))*3653.0+36524.0 AS DATETIME),
        SomeHex12    = RIGHT(NEWID(),12)
   INTO dbo.CDTypes2
   FROM Master.dbo.SysColumns t1,
        Master.dbo.SysColumns t2;
GO
 
/*
Let's create a Clustered Index
*/
CREATE CLUSTERED INDEX CompressSTable ON  dbo.CDTypes2(SomeID) WITH (DATA_COMPRESSION = PAGE);
GO
 
/* Insert Data */
INSERT INTO dbo.CDTypes
        ( SomeNull, SomeBit,  Some2Byte, Some3Byte,Some4byte, Some5byte
        , Some6byte, Some7byte, Some8byte, SomeLong, Somebit2)
VALUES  (NULL, 0,  '12', '123', '1234', '12345', '123456', '1234567', '12345678', '123456789', 1)
Go
 
DBCC IND(CompressTest, 'CDTypes', 1)
go
 
/* Make sure CDTypes2 is page compressed */
ALTER INDEX CompressSTable ON CDTypes2 REBUILD WITH (DATA_COMPRESSION = PAGE);
 
DBCC IND(CompressTest, 'CDTypes2', 1)
go
 
DBCC TRACEON(3604)
go
/*
Set Trace 3604 To Get Output to SSMS
Then take a look at the Page
 
--or--
Use the tableresults method
Both methods shown here
 
*/
 
DBCC PAGE('CompressTest', 1, 20392, 3) WITH tableresults
go 
DBCC PAGE('CompressTest', 1, 24889, 3) --in this exercise the second page of type 1 is usually page compressed
go

Now let’s take a look at the different data types, starting with the 12 available with Row Compression.

0x00    NULL                SomeNull
0x01    EMPTY               SomeBit
0x02    ONE_BYTE_SHORT      Some1Byte
0x03    TWO_BYTE_SHORT      Some2Byte
0x04    THREE_BYTE_SHORT    Some3Byte
0x05    FOUR_BYTE_SHORT     Some4Byte
0x06    FIVE_BYTE_SHORT     Some5Byte
0x07    SIX_BYTE_SHORT      Some6Byte
0x08    SEVEN_BYTE_SHORT    Some7Byte
0x09    EIGHT_BYTE_SHORT    Some8Byte
0x0a    LONG                SomeLong
0x0b    BIT_COLUMN          SomeBit2

If we look at page 20392 (yours will likely be different), we will find all of these data types present.  We will also note that this page should show (COMPRESSED) PRIMARY_RECORD – which indicates that the page is Row Compressed.  When hunting for these data types, it is a good idea to make sure the page is compressed first.  In the supplied table, you can see what data type matches to which column in the table we created via the script.  The table also provides a short description of what that data type represents (as you would see in the CD Array).

If we now want to explore and find the 13th data type, we need to look at the second table we created in the attached script – CDTypes2.  Notice that this table has been page compressed.  I even did that twice.  I did this to make sure the data was page compressed (occasionally when testing I could not easily find a page compressed page unless I page compressed a second time).

Much the same as was done with Row Compression, we need to verify that a page is Page Compressed before searching up and down trying to find this 13th type.  To find a Page Compressed page, we need to look in the CompressionInfo section of the page for CI_HAS_DICTIONARY.  If this notation is present, the page is Page Compressed.

Here is a page snippet with the pertinent data type.

The CD array entry for Column 4 is what we are interested in here.  In this particular record, we see the thirteenth data type of 0x0c (ONE_BYTE_PAGE_SYMBOL).  This data type can appear for any of the columns in the CD array and can be different for each record on the page.  This is due to how Page Compression works (which is also a different discussion).

For this last example, I want to backtrack a bit and point out that I used page 24889 to query for the Symbol data type in the CD Array.  This page was different every time I re-ran the test script.  One thing that was consistent with how I setup the exercise is that the page compressed page that had this data type was always the second page of type 1 (data page) in the results from DBCC Ind.

Conclusion

I hope this helped demystify compression in SQL Server – a little.  This is only one piece of compression but an interesting one in my opinion.  Use the provided script and play with it a little bit to get some familiarity.

Stay tuned for the 6th day of pre-Christmas.

On the Fourth Day…

We have come to the fourth day in this mini-series.  So far I have laid down some under-carriage maintenance for one of our favorite databases: the msdb database.  The maintenance discussed so far has covered a few different tables and processes to try and keep things tidy (at a base level).

Today I want to discuss some under-carriage work for the database of id 4.  Yep, that’s right, we will talk some more about the msdb database today.  What we discuss today, though, may be applicable beyond just the msdb database.  First, let’s recap the previous three days.

  1. Backup, Job and Mail History Cleanup – 3rd Day
  2. Service Broker Out of Control – 2nd Day
  3. Maint Plan Logs – 1st Day

On the Fourth Day of pre-Christmas…

My DBA gave to me an exercise program to help me trim the fat!!

Somehow, that seems like it would be more appealing on bacon.  But the amount of fat shown here is just nasty.  And that is what I think of a table in a database that uses multiple gigabytes while holding 0 records.

Just deleting the records we have spoken of over the past three days is not the end of the road with the maintenance of the msdb database.  Oftentimes the pages allocated to these tables don’t deallocate right away.  Sometimes, the pages don’t deallocate for quite some time.

When these pages are cleaned up depends in large part on the ghost cleanup process. Of course, the ghost cleanup is not going to do anything on the tables where records were removed unless it knows that there are some ghost records (records that have been deleted) in the first place.  This doesn’t happen until a scan operation occurs.  You can read a much more detailed explanation on the ghost cleanup process from Paul Randal, here.
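
To see whether ghost records are actually lingering in a given table, the index physical stats DMV will report the counts.  A quick sketch against one of the usual msdb suspects – the table name is just an example:

USE msdb;
GO
-- Ghost record counts are only populated in SAMPLED/DETAILED modes (NULL in LIMITED)
SELECT OBJECT_NAME(object_id) AS TableName
		,index_id
		,ghost_record_count
		,version_ghost_record_count
	FROM sys.dm_db_index_physical_stats(DB_ID('msdb')
			,OBJECT_ID('msdb.dbo.sysmaintplan_logdetail')
			,NULL,NULL,'DETAILED');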

Because of this, you can try running a query against the table to help move the process along.  But what if you can’t get a scan operation to occur against the table?

Could you try updating statistics?  Maybe try running updateusage?  How about forcing a rebuild of the Clustered Index?  Or even running Index Rebuilds with LOB_Compaction on?  Or even creating a Clustered Index where one doesn’t exist?  Maybe you could try DBCC CLEANTABLE and see if that works.  Or you could try running DBCC CheckDb on the msdb database and try to get all of the pages read.
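
For the curious, a few of those options translate to statements along these lines.  The table name is purely illustrative, and none of these is guaranteed to do the trick (note that LOB_COMPACTION is actually a REORGANIZE option rather than a REBUILD option):

USE msdb;
GO
-- Each of these can prompt a scan or a space-accounting fix-up; mileage will vary
UPDATE STATISTICS dbo.sysmaintplan_logdetail WITH FULLSCAN;

DBCC UPDATEUSAGE ('msdb','dbo.sysmaintplan_logdetail');

ALTER INDEX ALL ON dbo.sysmaintplan_logdetail
	REORGANIZE WITH (LOB_COMPACTION = ON);

DBCC CLEANTABLE ('msdb','dbo.sysmaintplan_logdetail');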

While working to reduce the size of the tables in the following image, I tried several different things to force a scan of the pages so I could reduce the table sizes (0 records really should not consume several GB of space).

 

Even letting things simmer for a few days to see if backup operations could help, I still see the following.

This is better than it was, but still seems a bit over the top.  Something that both of these tables have in common is that they have BLOB columns.  To see that for the sysxmitqueue table, you need to connect to the server using a DAC connection, where you will find that the msgbody column is a varbinary(max).

What I found to work for this particular case was to use DBCC Shrinkfile.  As a one-time (hopefully) maintenance task, it should be OK.  I would never recommend this as a regular maintenance task.  The used space for these tables does not decrease until DBCC Shrinkfile reaches the DbccFilesCompact stage.  This is the second phase of Shrinkfile and may take some time to complete.  You can read more on Shrinkfile here.
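
A sketch of that one-time operation follows.  The logical file name shown is the msdb default and may differ on your instance, so verify it against sys.database_files first:

USE msdb;
GO
-- Verify the logical file name and current size before shrinking
SELECT name, type_desc, size/128 AS SizeMB
	FROM sys.database_files;

-- One-time shrink of the msdb data file to a 512 MB target (adjust to taste)
DBCC SHRINKFILE (MSDBData, 512);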

Conclusion

Is it necessary to go to these lengths to reclaim space in your msdb database from tables that are excessively large with very few records?  That is up to you and your environment.  In many cases, I would say it is a nice-to-have but not a necessity.

If you do go to these lengths to reduce space in msdb, then you need to understand that there are costs associated with it.  There will be additional IO as well as index fragmentation that you should be prepared to handle as a consequence of shrinking.

Stay tuned for the next installment.  We will be taking a look into some internals related to Compression.
