Table Hierarchy updated

Recently a need resurfaced to explore the foreign key tree (hierarchy/genealogy) related to a specific table within a database.  As I pulled the script out of the repository, I realized there was some unfinished work to be done, as well as some polish to be added.  This is an update to the most recent posting of that script.  You can see several revisions in the series at this link or the group here.

Some of the changes involve formatting and labeling.  I added a new column called “Direction” to help show the relationship of the key to the table in question.  I also changed the FKGenealogy column (formerly called SortCol) to reflect the source table more accurately in the case when the key comes from an ancestor rather than a descendant.  The Level of the FK was also modified to better show how far away the ancestor is from the origin table.

A final adjustment also comes in the Genealogy attribute.  Ancestors were all starting at the wrong point in the lineage.  I adjusted that so the lineage is shown from the point in the tree where the ancestor is actually related, rather than as a root hanging directly off the origin table.

All in all, this script should make more sense to the user than the previous versions.

DECLARE    @StartParentTable    VARCHAR(256);
 
SELECT @StartParentTable = 'tblCampaignMain';
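-- Posterity: starting from @StartParentTable, recursively walk sys.foreign_keys down through every descendant table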
 
WITH Posterity (ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, FKGenealogy) AS (
  SELECT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
          ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
          ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
          ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
          ,0 AS FKLevel
          ,CAST('.'+CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX))+'.' AS VARCHAR(MAX))
      FROM sys.foreign_key_columns sfkc
          INNER Join sys.foreign_keys sfk
              ON sfkc.constraint_object_id = sfk.OBJECT_ID
      WHERE OBJECT_NAME(sfk.referenced_object_id) = @StartParentTable
  UNION All
  SELECT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
          ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
          ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
          ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
          ,FKLevel + 1
          ,STUFF(('.' + F.FKGenealogy + CAST(CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX)) + '.' AS VARCHAR(MAX))
                      ),1,1,'')
      FROM sys.foreign_keys sfk
          INNER Join Posterity F
              ON OBJECT_NAME(sfk.referenced_object_id) = F.ChildTable
              And F.FKGenealogy NOT LIKE '%'+CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX))+'%'
          INNER Join sys.foreign_key_columns sfkc
              ON sfkc.constraint_object_id = sfk.OBJECT_ID
      WHERE OBJECT_NAME(sfk.referenced_object_id) <> @StartParentTable
          And sfk.referenced_object_id <> sfk.parent_object_id
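-- Ancestry: from the descendant tree, walk back up the foreign keys to find the ancestor tables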
), Ancestry (ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, FKGenealogy) AS (
  SELECT DISTINCT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
      ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
      ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
      ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
      ,(FKLevel * -1)-1 AS FKLevel
      ,F.FKGenealogy + CAST(CAST(OBJECT_NAME(sfk.Parent_object_id) AS VARCHAR(MAX))+'.' AS VARCHAR(MAX))
  FROM Posterity F
      INNER Join sys.foreign_keys sfk
          ON F.ChildTable = OBJECT_NAME(sfk.parent_object_id)
          And F.ParentTable <> OBJECT_NAME(sfk.referenced_object_id)
      INNER Join sys.foreign_key_columns sfkc
          ON sfkc.constraint_object_id = sfk.OBJECT_ID
  UNION All
  SELECT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
      ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
      ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
      ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
      ,F.FKLevel -1
      ,STUFF(('.' + FKGenealogy + CAST(CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX)) + '.' AS VARCHAR(MAX))
                  ),1,1,'')
  FROM Ancestry F
      INNER Join sys.foreign_keys sfk
          ON F.ParentTable = OBJECT_NAME(sfk.parent_object_id)
      INNER Join sys.foreign_key_columns sfkc
          ON sfkc.constraint_object_id = sfk.OBJECT_ID
  WHERE F.ParentTable not in (SELECT ParentTable FROM Posterity)
      And sfk.referenced_object_id <> sfk.parent_object_id
      And F.ChildTable not IN (SELECT ChildTable FROM Posterity)
)
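-- Union the two trees; Direction labels each row relative to the origin table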
 
SELECT ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, FKGenealogy
	,CASE 
		WHEN FKLevel > 0 THEN 'Descendant' 
		WHEN FKLevel = 0 THEN 'Root'
		END AS Direction
FROM Posterity
UNION All
SELECT ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, FKGenealogy
	,'Ancestor' AS Direction
FROM Ancestry
  ORDER BY FKGenealogy ASC, Direction DESC
  OPTION (maxrecursion 500);

T-SQL Tuesday #050: Automating Database Restores

Comments: 1 Comment
Published on: January 14, 2014

Here it is, time for the party of the month for the SQL Server acolytes, and I was running a bit behind.  Why?  Well, that was due in part to me rushing around trying to do some of what this month’s topic is about.  Some would call that pretty good timing.

Hemanth.D (blog|twitter) is hosting the T-SQL Tuesday blogging party this month, and he wants all of us to post about Automation.  As Hemanth.D put it on his blog:

You could write about, what options you would consider when automating something? Where do you draw the line? What are our preferred tools for automation? T-SQL, PowerShell, VBScript or Batch files(?) or maybe just share something that you automated in the last couple of years.

You can read the invite he posted here.

History

As Hemanth.D mentioned in his invitation, this is not the first time this topic has come up for TSQLTuesday.  As it would happen, I also participated in the first go around with my contribution about sizing databases on limited information.  You can read that here.

This time around, I have a slightly different topic to approach.  I hadn’t considered it until after reading what Wayne Sheffield wrote about his efforts to verify backup files via script (automation).  You can read what Wayne wrote at this link.

Having read that, it seemed painfully obvious to me that I should write about my efforts to automate backup restores.  After all, if you are verifying the backup files’ existence, you might also want to test the backups to ensure they are good.  Besides, we all need to test our backups anyway, right?

Present

I have used a few different methods over the years to automate restores.  In one iteration, the restore operations were hard-coded in a procedure that was scheduled to run on a nightly or weekly basis.  It was probably also hard-coded to a specific database.  That kind of setup is not terribly useful beyond that one database.

With that in mind, I worked on several iterations to help create a script for myself that would automate the restores of any database, with any number of backup files, to a different server, and not have file paths/names/anything hard-coded.  Well – there ended up being one thing hard-coded but that can be changed easily enough.

I decided on a script that reads the path of the most recent backup file from the msdb database of the server where the database was backed up.  I do this via a linked server that can be created/destroyed in the proc or that can reside permanently (I prefer to leave it in place).  The proc takes the file path of that backup file and restores it to the destination server – all of it via T-SQL.
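 
To make that a bit more concrete, here is a minimal sketch of the idea (the linked server name [SourceServer] and the database name are assumptions for illustration – the full script linked below also handles multiple backup files, file moves, and renames):
 
--Sketch only: fetch the most recent full backup path from the source server's msdb
--via a linked server (names here are placeholders)
DECLARE @BackupFile NVARCHAR(260);
 
SELECT TOP (1) @BackupFile = bmf.physical_device_name
	FROM [SourceServer].msdb.dbo.backupset bs
		INNER Join [SourceServer].msdb.dbo.backupmediafamily bmf
			ON bs.media_set_id = bmf.media_set_id
	WHERE bs.database_name = N'YourSourceDB'
		AND bs.type = 'D' --full backups only
	ORDER BY bs.backup_finish_date DESC;
 
RESTORE DATABASE [YourSourceDB]
	FROM DISK = @BackupFile
	WITH REPLACE, RECOVERY;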

Now, a caveat with this solution is that it works best if the backup path is a UNC path.  Even if you are backing up the database to the local server, backing up to a UNC path means the restore operation can simply grab that path and not encounter errors due to drive mappings (e.g., backed up to D: but the D: on the restore server is the CD-ROM drive).

What if you don’t want to restore the source database with the same name to the new server?  Well, that has also been considered and a parameter can be passed to the stored procedure to allow for a new database name.  What if the default file paths are different?  That consideration has been made too!  All of that said, more testing is always welcome.

The script can be evaluated from here.

Next steps:

With the script in hand, the next thing to do would be to create SQL Agent jobs to run it on a routine basis, then test the script and verify the restored databases.

The user of this script assumes all risk.

Where have all the Orphans gone?

Categories: Corner, News, Professional, SSC, SSSOLV
Comments: 2 Comments
Published on: October 3, 2012

Do your database users appear to be having an out of body experience?  Have they lost all ties to their login?

Does it feel like maybe what this baby rhino might be experiencing – just trying to find their place in your database?

Well, have we got a little snippet for you then.  This cure-all elixir … er script may be just what you need for those ghostly ghastly things called orphaned users.

Everybody has heard of them.  Everybody seems to want a quick fix for them.  But before you can rid your orphanage / database of these orphans – you have to know that they are there.

 

Cute Rhino

Well, not really – at this age it looks more like a hairless rat.  But that is ok – I like rhinos of all ages.  That is beside the point, though.  I also like quick little scripts that help me get a better picture of any database I may have inherited or must work on without having seen it before.

This script is naturally another one of those tools for the admin toolbox.  It is cursor-based – but that is ok.  I might update it later to use a set-based method and the new foreachdb procedure that I have blogged about previously.

The reason for the looping is to run this quickly against all databases on an instance and immediately know where the orphans might be hiding.  This script is merely to report – it does not fix the orphans.  That is for you to work through.  Also of interest is that this script is not designed to work on SQL 2000 (it’s getting kinda up there in age anyway).
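 
As an aside, once you do find an orphan and a matching login exists, the remap itself is a one-liner (the names here are hypothetical):
 
ALTER USER [OrphanUser] WITH LOGIN = [MatchingLogin];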

So here it is…another tool for the toolbox to help you become a better, more efficient DBA.

CREATE TABLE #Orphans 
 (
  RowID			INT NOT NULL PRIMARY KEY CLUSTERED IDENTITY(1,1) ,
  DBName		VARCHAR(100),
  OrphanUser	VARCHAR (100),
  OrphanSid		VARBINARY(85)
 )           
 
DECLARE
     @DBName SYSNAME,
     @SQL NVARCHAR(MAX) = N'';
 
DECLARE dbnams CURSOR
  LOCAL STATIC FORWARD_ONLY READ_ONLY
     FOR
     SELECT name
		FROM sys.databases
		WHERE state_desc NOT IN ('SUSPECT','OFFLINE')
			AND name NOT IN ('tempdb','model')
 
OPEN dbnams
FETCH NEXT FROM dbnams INTO @DBName;
 
WHILE @@FETCH_STATUS = 0
BEGIN		
SET @SQL = 'SELECT ''' + @DBName + ''' as DBName,dp.name AS OrphanUser, dp.sid AS OrphanSid
FROM [' + @DBName + '].sys.database_principals dp
LEFT OUTER JOIN sys.server_principals sp 
    ON dp.sid = sp.sid 
WHERE sp.sid IS NULL 
    AND dp.type_desc = ''SQL_USER''
    AND dp.principal_id > 4;'
 
INSERT INTO #Orphans
        ( DBName ,
          OrphanUser ,
          OrphanSid
        )
EXECUTE (@SQL)
 
FETCH NEXT FROM dbnams INTO @DBName;
 
END
CLOSE dbnams;
DEALLOCATE dbnams;
 
SELECT O.RowID,O.DBName,O.OrphanUser,O.OrphanSid
	FROM #Orphans O
	ORDER BY O.DBName,O.OrphanUser
 
DROP TABLE #Orphans;
GO

SQL Hide ‘n Seek

Categories: Corner, News, Professional, SSC
Comments: No Comments
Published on: July 11, 2012

When was the last time you had to find something within your database?  Did it feel more like a child’s game or an arduous task?

Child's Game or Arduous Task

What would you say if it could be more like child’s play (like the baby elephant is portraying) than a chore?

Child’s Play

The simplest solution may be to use a tool from a respectable vendor.  One such tool that comes to mind is SQL Search from RedGate.  You can find out more about that tool here.

SQL Search does require that an application be installed on the machine on which you will be using the search feature.  The tool is fine and I will leave it up to you to use or not use it.

Alternative

Instead of installing an application onto your machine, you could always write your own script.  The information is readily available within SQL Server for you to find the objects you seek.

For example, if I wanted to find any tables that had a particular column, I could use the following.

DECLARE @ColName VARCHAR(128)
 
SET @ColName = 'yourcolumntosearch'
SELECT t.name AS TableName, c.name AS ColumnName, ty.name AS DataType, c.max_length
		,'('+ CONVERT(VARCHAR,c.PRECISION) +','+ CONVERT(VARCHAR,c.scale)+')' AS Precision_Scale
		,d.definition AS DefaultConstraint
		,c.collation_name
	FROM sys.tables t
		INNER Join sys.columns c
			ON t.OBJECT_ID = c.OBJECT_ID
		INNER Join sys.types ty
			ON c.user_type_id = ty.user_type_id --join on user_type_id to avoid duplicate rows for aliased types (e.g. sysname)
		LEFT Outer Join sys.default_constraints d
			ON d.parent_object_id = c.OBJECT_ID
			and d.parent_column_id = c.column_id
	WHERE c.NAME = @ColName
	ORDER BY t.name,c.column_id;

Granted, this query returns a bit more information than you require.  Personally, I like to see the additional column information as I search through a database.  I always find it interesting to discover columns of the same name and intent that nonetheless have different definitions within the database.
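 
Along those lines, here is a quick sketch (an addition of my own) that surfaces those mismatches by grouping sys.columns by name and flagging any column name that carries more than one type or length definition:
 
SELECT c.name AS ColumnName
		,COUNT(DISTINCT c.user_type_id) AS TypeVariants
		,COUNT(DISTINCT c.max_length) AS LengthVariants
	FROM sys.columns c
		INNER Join sys.tables t
			ON c.OBJECT_ID = t.OBJECT_ID
	GROUP BY c.name
	HAVING COUNT(DISTINCT c.user_type_id) > 1
		OR COUNT(DISTINCT c.max_length) > 1;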

And if you desire to find code within the database that contains a particular column name, then something like the following could be helpful.

DECLARE @searchstring VARCHAR(128)
 
SET @searchstring = '%yourstringtosearch%' --pattern to look for within module definitions
 
SELECT SO.name AS CodeName, st.Query,SO.type_desc
FROM sys.objects AS SO
INNER JOIN sys.sql_modules AS SM 
	ON SM.OBJECT_ID = SO.OBJECT_ID
CROSS APPLY (
					SELECT 
						REPLACE
						(
							REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
							REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
							REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
								CONVERT
								(
									NVARCHAR(MAX),
									N'--' + NCHAR(13) + NCHAR(10) + sqm.definition + NCHAR(13) + NCHAR(10) + N'--' COLLATE Latin1_General_Bin2
								),
								NCHAR(31),N'?'),NCHAR(30),N'?'),NCHAR(29),N'?'),NCHAR(28),N'?'),NCHAR(27),N'?'),NCHAR(26),N'?'),NCHAR(25),N'?'),NCHAR(24),N'?'),NCHAR(23),N'?'),NCHAR(22),N'?'),
								NCHAR(21),N'?'),NCHAR(20),N'?'),NCHAR(19),N'?'),NCHAR(18),N'?'),NCHAR(17),N'?'),NCHAR(16),N'?'),NCHAR(15),N'?'),NCHAR(14),N'?'),NCHAR(12),N'?'),
								NCHAR(11),N'?'),NCHAR(8),N'?'),NCHAR(7),N'?'),NCHAR(6),N'?'),NCHAR(5),N'?'),NCHAR(4),N'?'),NCHAR(3),N'?'),NCHAR(2),N'?'),NCHAR(1),N'?'),
							NCHAR(0),
							N''
						) AS [processing-instruction(query)]
						FROM sys.sql_modules AS sqm 
							WHERE sqm.OBJECT_ID = SM.OBJECT_ID
					FOR XML
						PATH(''),
						TYPE
				) AS st(Query)
WHERE definition like @searchstring
ORDER BY name

Now, what if I want to search code and tables at the same time for a particular column name usage?  Well, I could take advantage of the following.

DECLARE @ColName VARCHAR(128)
 
SET @ColName = 'yourcolumnhere';
 
SELECT t.name AS TableName, c.name AS ColumnName, ty.name AS DataType, c.max_length
		,'('+ CONVERT(VARCHAR,c.PRECISION) +','+ CONVERT(VARCHAR,c.scale)+')' AS Precision_Scale
		,d.definition AS DefaultConstraint
		,c.collation_name
		,sq.CodeName,sq.Query AS ColNamePresentinCode,sq.type_desc AS CodeType
	FROM sys.tables t
		INNER Join sys.columns c
			ON t.OBJECT_ID = c.OBJECT_ID
		INNER Join sys.types ty
			ON c.user_type_id = ty.user_type_id --join on user_type_id to avoid duplicate rows for aliased types
		LEFT Outer Join sys.default_constraints d
			ON d.parent_object_id = c.OBJECT_ID
			and d.parent_column_id = c.column_id
		CROSS APPLY (SELECT SO.name AS CodeName, st.Query,SO.type_desc
			FROM sys.objects AS SO
			INNER JOIN sys.sql_modules AS SM 
				ON SM.OBJECT_ID = SO.OBJECT_ID
			CROSS APPLY (
					SELECT 
						REPLACE
						(
							REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
							REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
							REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
								CONVERT
								(
									NVARCHAR(MAX),
									N'--' + NCHAR(13) + NCHAR(10) + sqm.definition + NCHAR(13) + NCHAR(10) 
									+ N'--' COLLATE Latin1_General_Bin2
								)
								,NCHAR(31),N'?'),NCHAR(30),N'?'),NCHAR(29),N'?'),NCHAR(28),N'?'),NCHAR(27),N'?')
								,NCHAR(26),N'?'),NCHAR(25),N'?'),NCHAR(24),N'?'),NCHAR(23),N'?'),NCHAR(22),N'?')
								,NCHAR(21),N'?'),NCHAR(20),N'?'),NCHAR(19),N'?'),NCHAR(18),N'?'),NCHAR(17),N'?')
								,NCHAR(16),N'?'),NCHAR(15),N'?'),NCHAR(14),N'?'),NCHAR(12),N'?'),NCHAR(11),N'?')
								,NCHAR(8),N'?'),NCHAR(7),N'?'),NCHAR(6),N'?'),NCHAR(5),N'?'),NCHAR(4),N'?'),NCHAR(3),N'?')
								,NCHAR(2),N'?'),NCHAR(1),N'?'),NCHAR(0)
							,N''
						) AS [processing-instruction(query)]
						FROM sys.sql_modules AS sqm 
							WHERE sqm.OBJECT_ID = SM.OBJECT_ID
					FOR XML
						PATH(''),
						TYPE
				) AS st(Query)
			WHERE definition like '%' + @ColName + '%'
				AND SM.definition LIKE '%' + t.name + '%') AS sq
	WHERE c.NAME = @ColName
	ORDER BY t.name,c.column_id;

Conclusion

Now I have a script that returns a row for each module (proc, view, or function) in which the column appears.  I can correlate which table and column matches which module and get the results I need quickly.

Now, you can take this and have a little fun with it.

A Trio of Functions

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: January 17, 2012

I found myself perusing an execution plan the other day.  I know, big surprise there.  This execution plan showed me some interesting things I had never really paid much attention to in the past.  When I started paying attention to these things, I found myself jumping down a rabbit hole.

It all started with a bit of curiosity to see if I could make an “admin” script perform a bit better.  The execution plans started showing some table valued functions that I knew I hadn’t included in the query.  Subsequently, I found myself wondering – what is that?

The items that made me curious were all table valued functions.  There were three distinct ones in this particular plan.  I started looking hither and thither to find these functions.  It didn’t take long to figure out that they live in the mssqlsystemresource database.  So I made a copy of that database and attached the copy for further learning opportunities.

The three functions are:

SYSSESSIONS

FNGETSQL

SYSCONNECTIONS

Knowing the query, and based on these names, I began looking in the appropriate DMOs to see what I could find.  Here are the scripts for each of those DMOs.

USE [mssqlsystemresource_test]
GO
 
/****** Object:  View [sys].[dm_exec_connections]    Script Date: 01/16/2012 22:39:32 ******/
SET ANSI_NULLS ON
GO
 
SET QUOTED_IDENTIFIER ON
GO
 
CREATE VIEW [sys].[dm_exec_connections] AS
	SELECT *
	FROM OPENROWSET(TABLE SYSCONNECTIONS)
 
GO
 
/****** Object:  View [sys].[dm_exec_sessions]    Script Date: 01/16/2012 22:39:37 ******/
SET ANSI_NULLS ON
GO
 
SET QUOTED_IDENTIFIER ON
GO
 
CREATE VIEW [sys].[dm_exec_sessions] AS
	SELECT *
	FROM OPENROWSET(TABLE SYSSESSIONS)
 
GO
 
/****** Object:  UserDefinedFunction [sys].[dm_exec_sql_text]    Script Date: 01/16/2012 22:39:55 ******/
SET ANSI_NULLS ON
GO
 
SET QUOTED_IDENTIFIER ON
GO
 
CREATE FUNCTION [sys].[dm_exec_sql_text](@handle VARBINARY(64))
RETURNS TABLE
AS
	RETURN SELECT * FROM OPENROWSET(TABLE FNGETSQL, @handle)
 
GO

Cool.  I can now see the internals of each of the DMOs – sort of.  You see, there is an OPENROWSET call in each of these objects.  Each call uses an undocumented feature called TABLE.  This is an internal command used by the engine, and you won’t find much written about it (mostly people asking what it is and Microsoft saying they won’t tell).

Here is the fun part.  If you try to run that code outside of querying the DMO, you will receive error messages.  If you try to create a new view utilizing that OPENROWSET syntax, it will fail – it is reserved for internal use.  With that said, just continue to use the DMOs and you will be fine.  Personally, I was curious how it all worked, so I poked at it a bit.

So there you have it.  If you are curious about the internal makings of these DMOs, you can script them from the resource database.  Alternatively, you could run sp_helptext.  I like to check these things from the resource database – it feels more like an adventure.  Have fun with it and see what you learn.
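 
For example, without attaching a copy of the resource database at all, this should echo the same definition from any database:
 
EXEC sp_helptext 'sys.dm_exec_sessions';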

Missing Indexes

Comments: 9 Comments
Published on: January 12, 2012

SQL Server has means built into it to track possible missing indexes.  This used to be found through the use of the Index Tuning Wizard.  The process has improved over time (you can sort of see that from my April Fools post).

As luck would have it, I was recently asked to help fix a query that was related to this whole process.  You see, since SQL Server 2005, there are DMOs that track metadata related to column and index usage.  If a query that lacks a well-matched index is repeated often enough, the engine may decide that a new index is needed.  This potential index information is recorded and becomes visible via the DMOs.

The query that I was asked to help fix was a dynamic query within a cursor that read information from the DMOs in order to generate some missing index information.  That particular query was failing for a couple of reasons, but by the same token it gave me an idea to adapt the query into something more in line with what I might use.  After all, the queries I had been using were in need of updating, and this got me started in that direction.

First, a little on why the query was failing.  A common problem with dynamic queries is the placement of quotes and having enough quotes in all the required locations.  The more levels you nest into a dynamic query, the more confusing the quotes become.  When I run into something like this, I like to print the statement that I am trying to build dynamically.  If it doesn’t look right, adjust the quotes until it does.

The second reason it was failing was a simple oversight.  Whether building dynamic queries or just using variables in your code, make sure you use properly sized variables.  In this case, the dynamic query variable was substantially inadequate.  The use of a print statement also helps to catch these types of errors rather quickly.

There were also a few things that would cause me not to use the original query in any environment.  The first problem is that the script contains a column holding the CREATE statement for each proposed index – and every proposed index was given the same name.  That would be a bit of a problem.

The next issue is my concern with creating indexes without ensuring that each index will provide greater benefit than cost.  Put differently: creating these indexes just because the script spewed them out is no better than blindly creating all of the indexes proposed by the Database Engine Tuning Advisor.  For this, I added a cautionary statement next to every index CREATE statement.

So with these tweaks, as well as other less significant tweaks, here is the query.

DECLARE AllDatabases CURSOR FOR
SELECT [name] FROM master.dbo.sysdatabases WHERE dbid > 4
 
OPEN AllDatabases
 
DECLARE @DBNameVar NVARCHAR(128),@STATEMENT NVARCHAR(MAX)
 
FETCH NEXT FROM AllDatabases INTO @DBNameVar
WHILE (@@FETCH_STATUS = 0)
BEGIN
PRINT N'--CHECKING DATABASE ' + @DBNameVar
SET @STATEMENT = N'USE [' + @DBNameVar + ']'+ CHAR(13) +';' +CHAR(13)
+ N'
SELECT SO.name
		, ((CONVERT(Numeric(19,6), migs.user_seeks)+CONVERT(Numeric(19,6), migs.unique_compiles))
			*CONVERT(Numeric(19,6), migs.avg_total_user_cost)
			*CONVERT(Numeric(19,6), migs.avg_user_impact/100.0)) AS Impact
		,''DO NOT AUTO CREATE THESE INDEXES - Creating these indexes could be as bad as blindly using DTA'' as Note
		, ''CREATE NONCLUSTERED INDEX IDX_'' + SO.name +''_'' + STUFF (
		(SELECT ''_'' + column_name FROM sys.dm_db_missing_index_columns(mid.index_handle) WHERE column_usage IN (''Equality'',''InEquality'') FOR XML PATH (''''))
		, 1, 1, '''')  + '' ON ['+@DBNameVar+'].'' + schema_name(SO.schema_id) + ''.'' + SO.name COLLATE DATABASE_DEFAULT + '' ( '' + IsNull(mid.equality_columns, '''') + CASE WHEN mid.inequality_columns IS NULL
		THEN ''''
		ELSE CASE WHEN mid.equality_columns IS NULL
		THEN ''''
		ELSE '','' END + mid.inequality_columns END + '' ) '' + CASE WHEN mid.included_columns IS NULL
		THEN ''''
		ELSE ''INCLUDE ('' + mid.included_columns + '')'' END + '';'' AS CreateIndexStatement
		, mid.equality_columns
		, mid.inequality_columns
		, mid.included_columns
	FROM sys.dm_db_missing_index_group_stats AS migs
		INNER JOIN sys.dm_db_missing_index_groups AS mig
			ON migs.group_handle = mig.index_group_handle
		INNER JOIN sys.dm_db_missing_index_details AS mid
			ON mig.index_handle = mid.index_handle
			AND mid.database_id = DB_ID()
		INNER JOIN sys.objects SO WITH (nolock)
			ON mid.OBJECT_ID = SO.OBJECT_ID
	WHERE (migs.group_handle IN
			(
			SELECT TOP (500) mgs.group_handle --rank the subquery by its own impact score
			FROM sys.dm_db_missing_index_group_stats mgs WITH (nolock)
			ORDER BY ((CONVERT(Numeric(19,6), mgs.user_seeks)+CONVERT(Numeric(19,6), mgs.unique_compiles))
				*CONVERT(Numeric(19,6), mgs.avg_total_user_cost)
				*CONVERT(Numeric(19,6), mgs.avg_user_impact/100.0)) DESC))
		AND OBJECTPROPERTY(SO.OBJECT_ID, ''isusertable'')=1
	ORDER BY 2 DESC , 3 DESC' 
 
PRINT @STATEMENT
EXEC SP_EXECUTESQL @STATEMENT
PRINT CHAR(13) + CHAR(13)
FETCH NEXT FROM AllDatabases INTO @DBNameVar
END
 
CLOSE AllDatabases
DEALLOCATE AllDatabases

As I post this message, as I tend to do, I am looking for ways to improve the query and make it better.  This script should only be used with caution.  It provides insight into potential missing indexes in each database, and a score is assigned to each potential index.  It is with the highest-scoring indexes that I typically begin analysis to improve performance.  I usually start performance tuning from a query and its execution plan, but there are times when an alternative starting point is necessary, and this script is a tool for those times.  Please heed the warning that these indexes should be created with extreme caution.

Database Data and Log Size Info

Categories: News, Professional, Scripts, SSC
Tags: ,
Comments: 5 Comments
Published on: December 5, 2011

Have you ever wanted or needed to get the data and log file size for all of the databases in your server?

I have seen scripts that would get this information in the past.  All of the scripts seemed to use either a cursor or the proc from Microsoft called sp_MSforeachdb.  Having seen some of these scripts, I wondered if there might be a different way of doing it (that was also different than you see in my TableSpace Scripts).

After putting some thought into it, I decided on a plan of attack and went to work on building a query that would do the trick.  But before I continue, I will advise that running a script like this on large databases may take a while to execute.

Keeping these things in mind (potential slowness when run on large databases and wanting to try something different), I came up with the following method (tested on instances with small databases as well as instances with 2TB databases).

DECLARE @MaxDataSize	INT
SELECT @MaxDataSize = MAX(d.cntr_value)/1024/1024 --Convert to GB
    FROM sys.dm_os_performance_counters d 
    WHERE d.OBJECT_NAME like '%databases%' 
       and d.counter_name = 'Data File(s) Size (KB)'
    GROUP BY d.counter_name,d.OBJECT_NAME
 
DECLARE 
@DbSize TABLE (Database_Id INT, DataUsedMB FLOAT)
 
IF @MaxDataSize > 250 --If Largest DB is >250GB then use forEachdb, else use Physical Index Stats
BEGIN
	INSERT INTO @DbSize (Database_Id,DataUsedMB)
	EXEC sp_MSforeachdb 
	'select db_id(''?'') , (SUM(ps.reserved_page_count)*8)/1024 from [?].sys.dm_db_partition_stats ps'; --bracket the ? to survive odd database names
 
	SELECT 
        DB_ID(instance_name) Database_Id 
        , instance_name Database_Name 
        ,CAST(pvt.[DATA FILE(s) SIZE (KB)] AS FLOAT)/1024 AS DataFileSizeMB
        ,isnull(ca.DataUsedMB,.1) AS DataUsedMB
        ,(CAST(pvt.[DATA FILE(s) SIZE (KB)] AS FLOAT)/1024) - isnull(ca.DataUsedMB,.1) AS DataFreeMB
        , CASE WHEN pvt.[DATA FILE(s) SIZE (KB)] = 0
			THEN 0
			ELSE ROUND((((CAST(pvt.[DATA FILE(s) SIZE (KB)] AS FLOAT)/1024) - isnull(ca.DataUsedMB,.1)) / (CAST(pvt.[DATA FILE(s) SIZE (KB)] AS FLOAT)/1024))*100, 2) 
			END DataFreePercent
        ,CAST(pvt.[LOG FILE(s) SIZE (KB)] AS FLOAT)/1024 AS LogFileSizeMB
        ,CAST(pvt.[LOG FILE(s) Used SIZE (KB)] AS FLOAT)/1024 AS LogFileUsedMB
        ,(CAST(pvt.[LOG FILE(s) SIZE (KB)] AS FLOAT) - pvt.[LOG FILE(s) Used SIZE (KB)])/1024 LogFreeSizeMB
        ,pvt.[PERCENT LOG Used] AS LogUsedPercent
        ,100-pvt.[PERCENT LOG Used] AS LogFreePercent
    FROM sys.dm_os_performance_counters d 
    PIVOT (SUM(cntr_value) FOR counter_name IN ([DATA FILE(s) SIZE (KB)],[LOG FILE(s) SIZE (KB)],[LOG FILE(s) Used SIZE (KB)],[PERCENT LOG Used])) pvt
    LEFT Outer Join (SELECT Database_Id,Null AS page_count,DataUsedMB FROM @DbSize) CA
		ON (ca.database_id = DB_ID(instance_name))
    WHERE OBJECT_NAME like '%databases%'
		and pvt.[DATA FILE(s) SIZE (KB)] IS not null
		And pvt.[LOG FILE(s) SIZE (KB)] IS not null
		And pvt.[LOG FILE(s) Used SIZE (KB)] IS not null
		And pvt.[PERCENT LOG Used] IS not null
		and instance_name <> '_Total'
    ORDER BY Database_ID
END
ELSE
BEGIN
SELECT 
        DB_ID(instance_name) Database_Id 
        , instance_name Database_Name 
        ,CAST(pvt.[DATA FILE(s) SIZE (KB)] AS FLOAT)/1024 AS DataFileSizeMB
        ,isnull(ca.DataUsedMB,.1) AS DataUsedMB
        ,(CAST(pvt.[DATA FILE(s) SIZE (KB)] AS FLOAT)/1024) - isnull(ca.DataUsedMB,.1) AS DataFreeMB
        , CASE WHEN pvt.[DATA FILE(s) SIZE (KB)] = 0
			THEN 0
			ELSE ROUND((((CAST(pvt.[DATA FILE(s) SIZE (KB)] AS FLOAT)/1024) - isnull(ca.DataUsedMB,.1)) / (CAST(pvt.[DATA FILE(s) SIZE (KB)] AS FLOAT)/1024))*100, 2) 
			END DataFreePercent
        ,CAST(pvt.[LOG FILE(s) SIZE (KB)] AS FLOAT)/1024 AS LogFileSizeMB
        ,CAST(pvt.[LOG FILE(s) Used SIZE (KB)] AS FLOAT)/1024 AS LogFileUsedMB
        ,(CAST(pvt.[LOG FILE(s) SIZE (KB)] AS FLOAT) - pvt.[LOG FILE(s) Used SIZE (KB)])/1024 LogFreeSizeMB
        ,pvt.[PERCENT LOG Used] AS LogUsedPercent
        ,100-pvt.[PERCENT LOG Used] AS LogFreePercent
    FROM sys.dm_os_performance_counters d 
    PIVOT (SUM(cntr_value) FOR counter_name IN ([DATA FILE(s) SIZE (KB)],[LOG FILE(s) SIZE (KB)],[LOG FILE(s) Used SIZE (KB)],[PERCENT LOG Used])) pvt
    LEFT Outer Join (SELECT ips.database_id,SUM(page_count) AS page_count,CAST((SUM(page_count)*8) AS FLOAT)/1024 AS DataUsedMB
		FROM sys.dm_db_index_physical_stats(NULL,NULL,NULL,NULL,'sampled') ips
		GROUP BY ips.database_id
		) CA 
		ON (ca.database_id = DB_ID(instance_name))
    WHERE OBJECT_NAME like '%databases%'
		and pvt.[DATA FILE(s) SIZE (KB)] IS not null
		And pvt.[LOG FILE(s) SIZE (KB)] IS not null
		And pvt.[LOG FILE(s) Used SIZE (KB)] IS not null
		And pvt.[PERCENT LOG Used] IS not null
		and instance_name <> '_Total'
    ORDER BY Database_ID
END

You will see that I did not entirely eliminate the looping mechanism.  Truth be told, the looping approach is simply that much faster on servers with large databases.

Also take note of the DMV in use in this query.  I am taking advantage of the performance monitor stats exposed through the DMV sys.dm_os_performance_counters.  One caveat to this method is that this DMV shows the size of the resource database as well.  I think it is fine to report that information back – but it won’t change much over time.  It is for that purpose that I use the LEFT JOIN in the query.

The other trick I utilized is to PIVOT that performance monitor data.  I think this works better than writing a bunch of sub-queries to generate the same sort of data layout.

You have probably also noted that I have chosen 250GB as the tipping point in this query.  There is no particular reason for that size – just a large enough database size to make the point.  For some, the tipping point may be a much smaller database size.  Feel free to change this value to suit your needs.

Some other caveats: the perfmon cntr_value data is presented as an INT, so for more accurate math operations I chose to cast many of these values to FLOAT.

I also ran into a case where the perfmon data was reporting 0 as the data file size for one of my databases.  I had to throw a CASE statement into the data free percent calculation in order to avoid a divide-by-zero error.

I also wrapped the DataUsedMB in an ISNULL.  This was due to the model and mssqlsystemresource databases not having data in the sys.dm_db_index_physical_stats function.  I could have left those as NULL, but I wanted to show something for them both.

Check it out and Enjoy!

Table Hierarchy goes CS

Categories: News, Professional, Scripts, SSC
Comments: 1 Comment
Published on: November 16, 2011

At the urging of a friend, this script is being updated for those that are dealing with Case Sensitivity.  The first few rounds, I neglected Case Sensitivity and never tested for that.  It makes sense to have this script updated for that if anybody out there is using it.

The updates are simple enough; it is just frustrating to run into an error caused by case sensitivity and then waste time troubleshooting it.  Believe me, it has happened to me recently – and I don’t much like it.

Without further ado, here is the updated script:

DECLARE    @StartParentTable    VARCHAR(256)
 
SELECT @StartParentTable = 'Calendar'
 
;With Hierarchy (ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, SortCol) AS (
  SELECT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
          ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
          ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
          ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
          ,0 AS FKLevel
          ,CAST('.'+CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX))+'.' AS VARCHAR(MAX))
      FROM sys.foreign_key_columns sfkc
          INNER Join sys.foreign_keys sfk
              ON sfkc.constraint_object_id = sfk.OBJECT_ID
      WHERE OBJECT_NAME(sfk.referenced_object_id) = @StartParentTable
  UNION All
  SELECT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
          ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
          ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
          ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
          ,FKLevel + 1
          ,STUFF(('.' + F.SortCol + CAST(CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX)) + '.' AS VARCHAR(MAX))
                      ),1,1,'')
      FROM sys.foreign_keys sfk
          INNER Join Hierarchy F
              ON OBJECT_NAME(sfk.referenced_object_id) = F.ChildTable
              And F.SortCol NOT LIKE '%'+CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX))+'%'
          INNER Join sys.foreign_key_columns sfkc
              ON sfkc.constraint_object_id = sfk.OBJECT_ID
      WHERE OBJECT_NAME(sfk.referenced_object_id) <> @StartParentTable
          And sfk.referenced_object_id <> sfk.parent_object_id
), Ancestry (ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, SortCol) AS (
  SELECT DISTINCT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
      ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
      ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
      ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
      ,-1 AS FKLevel
      ,CAST('.'+CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX))+'.' AS VARCHAR(MAX))
  FROM Hierarchy F
      INNER Join sys.foreign_keys sfk
          ON F.ChildTable = OBJECT_NAME(sfk.parent_object_id)
          And F.ParentTable <> OBJECT_NAME(sfk.referenced_object_id)
      INNER Join sys.foreign_key_columns sfkc
          ON sfkc.constraint_object_id = sfk.OBJECT_ID
  UNION All
  SELECT OBJECT_NAME(sfk.referenced_object_id) AS ParentTable
      ,COL_NAME(sfkc.referenced_object_id,sfkc.referenced_column_id) AS ParentColumn
      ,OBJECT_NAME(sfk.Parent_object_id) AS ChildTable
      ,COL_NAME(sfkc.parent_object_id,sfkc.parent_column_id) AS ChildColumn
      ,F.FKLevel -1
      ,STUFF(('.' + SortCol + CAST(CAST(OBJECT_NAME(sfk.referenced_object_id) AS VARCHAR(MAX)) + '.' AS VARCHAR(MAX))
                  ),1,1,'')
  FROM Ancestry F
      INNER Join sys.foreign_keys sfk
          ON F.ParentTable = OBJECT_NAME(sfk.parent_object_id)
      INNER Join sys.foreign_key_columns sfkc
          ON sfkc.constraint_object_id = sfk.OBJECT_ID
  WHERE F.ParentTable not in (SELECT ParentTable FROM Hierarchy)
      And sfk.referenced_object_id <> sfk.parent_object_id
      And F.ChildTable not IN (SELECT ChildTable FROM Hierarchy)
)
 
SELECT ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, SortCol
FROM Hierarchy
UNION All
SELECT ParentTable,ParentColumn, ChildTable,ChildColumn, FKLevel, SortCol
FROM Ancestry
  ORDER BY SortCol ASC
  OPTION (maxrecursion 500)

Still on the todo list is to make this bad boy run faster in the event of circular references.  If you find something else with it that you think could use adjusting, let me know.

TSQL Challenge 63 – Update

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: November 14, 2011

If you recall, I like Sudoku.  I even posted a script for solving it via TSQL.  I went so far as to enter my script into a TSQL Challenge.  That all started way back in August.  Today, I have an update!!

I was notified this morning by BeyondRelational.com that I have earned a new badge.  Cool – what’s the badge?  I clicked the link and it took me to this badge.
Huh?  I’m a winner of the SQL Sudoku Challenge?  Awesome!

Looking it over, I am winner #3.  This means I could have done better with my solution.   And looking at the other solution stats, it appears I will need to find time to see what the others did to make their solutions go sooooo fast.  I have some learning to do – woohoo.

So, now that means I need to post my solution.

--
--These Variables are intended to be used as input parameters if made into a proc.
DECLARE @SudokuNo INT = 3 --my script is setup to allow the table to contain multiple puzzles.
		,@SudokuGivens VARCHAR(100) = '53  7    6  195    98    6 8   6   34  8 3  17   2   6 6    28    419  5    8  79'
									--'  15       6     7    9  4   5   1  9   4   8  3   6   2  7    8     7       35  '
									--'   9 1   6     5            7 2   1        29    3    4   6 7 55     8     1     '
		,@FromTableorString	TINYINT = 1 --1 = run from TC63, else run from Input Parm
 
DECLARE @SudTable	TABLE (RowCol INT PRIMARY KEY CLUSTERED, ConcatRow VARCHAR(10))
 
IF @FromTableorString = 1
BEGIN
--Populate Data for missing vectors (Col/Row) with A space
--Use an Isnull and Outer Apply in case there are no givens for a particular row.
	WITH dual(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT N)) FROM ( VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9)) x(N)
	),rowcols AS (SELECT DISTINCT ROW.N AS ROW,Col.N AS Col FROM Dual ROW Cross Apply Dual Col)
 
	--Concatenate into a string if the SudokuSource is a Table
	SELECT @SudokuGivens = (SELECT  Isnull(t.DATA,0)
		FROM TC63 T
		RIGHT Outer Join rowcols D
			ON D.ROW = T.ROW
			And D.Col = T.Col
			And SudokuNo = @SudokuNo
		FOR xml PATH(''))  
 
END
	SELECT @SudokuGivens = REPLACE(@SudokuGivens,'0',' ') --If from table, replace commas.  From a String can have spaces or commas
 
--Solve the Sudoku - into a string
;WITH x( s, ind ) AS
( SELECT @SudokuGivens, CHARINDEX(' ',@SudokuGivens ) AS ind
  UNION all
  SELECT CONVERT(VARCHAR(100),SUBSTRING( s, 1, ind - 1 ) + CONVERT(CHAR(1),z) + SUBSTRING( s, ind + 1 ,81))
       , CHARINDEX(' ', s, ind + 1 ) AS ind
  FROM x
    CROSS APPLY ( VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9)) z (z)
  WHERE ind > 0
  and not exists (SELECT null
					FROM ( VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9)) ww (lp)
					WHERE z = SUBSTRING( s, ( ind - 1)% 9  - 8 + lp * 9, 1 )
						or    z = SUBSTRING( s, ( ( ind - 1 ) / 9 ) * 9 + lp, 1 )
						or    z = SUBSTRING( s, (( ( ind - 1 ) / 3 )%3) * 3
                                      + ( ( ind - 1 ) / 27 ) * 27 + lp
                                      + ( ( lp - 1 ) / 3 ) * 6
                                   , 1 )
                 )
), Sud AS (
--Create a 9 record result set that has the string solution duplicated 9 times.  Then show only relevant 9 data for each row
SELECT TOP 9 SUBSTRING(s, ROW_NUMBER() OVER (ORDER BY s) * 9 - 8, ROW_NUMBER() OVER (ORDER BY s) * 9 - (ROW_NUMBER() OVER (ORDER BY s) * 9 - 9)) AS ConcatRow
FROM x
Cross Apply ( VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9)) y(N)
WHERE ind = 0
)
 
--Populate a Table Variable for further Row/Col manipulation
INSERT INTO @SudTable (RowCol,ConcatRow)
	SELECT Row_Number() OVER (ORDER BY (SELECT 1)),ConcatRow
		FROM Sud
 
--Pivot the data out to produce a 9x9 grid
SELECT @SudokuNo AS SudokuNo,c1.RowCol,[1],[2],[3],[4],[5],[6],[7],[8],[9]
	FROM @SudTable S
	Cross Apply (SELECT RowCol,[1],[2],[3],[4],[5],[6],[7],[8],[9]
					FROM (SELECT S.RowCol
							,ColNo = Row_Number() OVER (Partition BY RowCol ORDER BY ConcatRow)
							,DATA = SUBSTRING(ConcatRow, ROW_NUMBER() OVER (Partition BY S.RowCol ORDER BY ConcatRow), 1)
					FROM @SudTable S
						Cross Apply ( VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9)) x(N)
						) Intr
						Pivot (
						MAX(DATA)
						FOR ColNo IN ([1],[2],[3],[4],[5],[6],[7],[8],[9])
		) PVT) AS C1
	WHERE c1.RowCol = S.RowCol
	ORDER BY S.RowCol ASC

 

Sadly, that is not the most recent version of the script that I had.  I had intended to submit a later revision that ran slightly faster.

Still, I am certain that (without having looked at the other winning solutions) this is not on par with the best solutions.  And I have a lot to learn.

Column Level Permissions

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: September 19, 2011

Did you know that you can grant permissions down to the column level in SQL Server?   Well, if you didn’t know that – you do now.

It is actually rather simple to grant permissions at the column level.  This can be demonstrated by the following script.

GRANT SELECT ON ColorPlate (ColorID) TO testU

If you want to check out more on that syntax, read here.
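 
If you want to see it in action, here is a quick throwaway setup (the table, column, and user names are hypothetical, matching the example above):
 
CREATE TABLE dbo.ColorPlate (ColorID INT, ColorName VARCHAR(30), Price MONEY);
CREATE USER testU WITHOUT LOGIN;
GRANT SELECT ON dbo.ColorPlate (ColorID) TO testU;
 
EXECUTE AS USER = 'testU';
SELECT ColorID FROM dbo.ColorPlate; --succeeds
--SELECT Price FROM dbo.ColorPlate; --would fail with a permission error
REVERT;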

And then…

Why is it important to know that you can do this?  Well, it is quite possible you have some of these permissions already in place.  It is possible you may have inherited something like this.  Just maybe there is a business requirement requiring that certain users or groups only have access to certain data within certain columns.

That brings up a new problem then.  How do you find out which columns have specific permissions applied for certain users?  Well, that is actually pretty straightforward.  We can query the system views and determine column-level permissions.

SELECT dp.grantee_principal_id,p.name AS UName
		,dp.permission_name,c.name
		,OBJECT_NAME(o.OBJECT_ID) AS TabName
	FROM sys.database_permissions dp
	INNER JOIN Sys.objects O
		ON dp.major_id = o.OBJECT_ID
	INNER JOIN sys.columns C
		ON c.OBJECT_ID = O.OBJECT_ID
		AND c.column_id = dp.minor_id
	INNER JOIN sys.database_principals P
		ON p.principal_id = dp.grantee_principal_id

The previous query is a really simple version of how to find this information.  As you can see, I am simply returning the UserName, TableName and ColumnName along with the permission in effect on that column.

You should also be able to see that the mapping between these system views is pretty straightforward as well.  Major_id maps to object_id, and column_id maps to minor_id.

Conclusion

This query can be of good use in determining the column permissions in place within your database.  Furthermore, you can use it simply to satisfy your curiosity about what has been put into effect in the databases you manage.

There are more complex methods to determine these permissions.  With there being more complex methods, I am sure there are also some easier methods.  Let me know what you do to query these permissions.
