Database Data and Log Size Info

Categories: News, Professional, Scripts, SSC
Published on: December 5, 2011

Have you ever wanted or needed to get the data and log file sizes for all of the databases on your server?

I have seen scripts that gather this information in the past. All of those scripts seemed to use either a cursor or the Microsoft proc called sp_MSforeachdb. Having seen some of them, I wondered if there might be a different way of doing it (one that was also different from what you see in my TableSpace Scripts).

After putting some thought into it, I decided on a plan of attack and went to work on building a query that would do the trick.  But before I continue, I will advise that running a script like this on large databases may take a while to execute.

Keeping these things in mind (potential slowness when run against large databases, and wanting to try something different), I came up with this method (tested on instances with small databases as well as instances with 2TB databases).

[codesyntax lang="tsql"]
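/* ======================================================================
   Reconstruction sketch only - NOT the original script from this post.
   Assumptions: the standard "Databases" counters exposed through
   sys.dm_os_performance_counters (values reported in KB), pivoted into
   one row per database and LEFT JOINed to sys.databases so the resource
   database still shows up.
   ====================================================================== */
SELECT  RTRIM(p.instance_name)                    AS DBName
       ,d.database_id
       ,p.[Data File(s) Size (KB)] / 1024         AS DataFileSizeMB
       ,p.[Log File(s) Size (KB)] / 1024          AS LogFileSizeMB
       ,p.[Log File(s) Used Size (KB)] / 1024     AS LogFileUsedMB
FROM    (
            SELECT  instance_name
                   ,RTRIM(counter_name)       AS counter_name
                   ,CAST(cntr_value AS FLOAT) AS cntr_value  -- FLOAT to avoid integer math later
            FROM    sys.dm_os_performance_counters
            WHERE   object_name LIKE '%:Databases%'
              AND   counter_name IN ( 'Data File(s) Size (KB)'
                                     ,'Log File(s) Size (KB)'
                                     ,'Log File(s) Used Size (KB)' )
              AND   instance_name <> '_Total'
        ) AS perf
PIVOT   (   MAX(cntr_value)
            FOR counter_name IN ( [Data File(s) Size (KB)]
                                 ,[Log File(s) Size (KB)]
                                 ,[Log File(s) Used Size (KB)] )
        ) AS p
LEFT JOIN sys.databases d   -- LEFT JOIN keeps mssqlsystemresource, which has no row in sys.databases
        ON d.name = RTRIM(p.instance_name);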

[/codesyntax]

You will see that I did not entirely eliminate the looping mechanism. Truth be told, this approach is so much faster on servers with large databases.
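To illustrate the shape of that remaining loop, here is a rough sketch (hypothetical temp table and variable names; the used-space source is an assumption, so swap in whatever works for you) that walks each online database without a cursor or sp_MSforeachdb:

[codesyntax lang="tsql"]
-- Sketch only: loop through online databases and total allocated pages per database.
-- The DMF below is an assumption about where the used-space numbers come from.
DECLARE @DBID INT = 0;

IF OBJECT_ID('tempdb..#DataUsed') IS NOT NULL
    DROP TABLE #DataUsed;

CREATE TABLE #DataUsed
(
     DBName     SYSNAME
    ,DataUsedMB FLOAT
);

WHILE EXISTS ( SELECT 1 FROM sys.databases
                WHERE database_id > @DBID
                  AND state_desc = 'ONLINE' )
BEGIN
    SELECT  @DBID = MIN(database_id)
    FROM    sys.databases
    WHERE   database_id > @DBID
      AND   state_desc = 'ONLINE';

    INSERT INTO #DataUsed ( DBName, DataUsedMB )
    SELECT  DB_NAME(@DBID)
           ,SUM(CAST(ps.page_count AS FLOAT)) * 8 / 1024   -- 8KB pages converted to MB
    FROM    sys.dm_db_index_physical_stats(@DBID, NULL, NULL, NULL, 'LIMITED') AS ps;
END

SELECT  DBName, DataUsedMB
FROM    #DataUsed;
[/codesyntax]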

Also take note of the DMV used in this query. I am taking advantage of the performance monitor stats that are exposed through sys.dm_os_performance_counters. One caveat to this method is that this DMV also shows us the size of the resource database. I think it is fine to report that information back, though it won't change much over time. It is for that purpose that I use the Left Join in the query: the resource database appears in the perfmon data but has no matching row in sys.databases.
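A quick way to see that behavior, purely as an illustration, is to look for the resource database in the counter data while confirming it has no row in sys.databases:

[codesyntax lang="tsql"]
-- Illustration: the resource database is reported by the counters DMV
-- even though it has no row in sys.databases.
SELECT  RTRIM(pc.instance_name) AS DBName
       ,RTRIM(pc.counter_name)  AS CounterName
       ,pc.cntr_value
FROM    sys.dm_os_performance_counters pc
WHERE   pc.object_name LIKE '%:Databases%'
  AND   pc.instance_name = 'mssqlsystemresource'
  AND   NOT EXISTS ( SELECT 1
                     FROM   sys.databases d
                     WHERE  d.name = RTRIM(pc.instance_name) );
[/codesyntax]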

The other trick that I used is to Pivot that performance monitor data. I think this works better than writing a bunch of sub-queries to generate the same sort of data layout.

You have probably also noted that I chose 250GB as the tipping point in this query. There is no particular reason for that size; it is just a large enough database size to make the point. For some, the tipping point may be a much smaller database size. Feel free to change this value to suit your needs.
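If it helps, the threshold is easy to surface as a variable. Something along these lines (the variable name is just for illustration) buckets databases by their perfmon-reported data file size:

[codesyntax lang="tsql"]
-- Hypothetical knob: adjust the tipping point to suit your environment
DECLARE @TippingPointGB FLOAT = 250;

SELECT  RTRIM(instance_name)                    AS DBName
       ,CAST(cntr_value AS FLOAT) / 1024 / 1024 AS DataFileSizeGB
       ,CASE WHEN CAST(cntr_value AS FLOAT) / 1024 / 1024 >= @TippingPointGB
             THEN 'Large'
             ELSE 'Standard'
        END                                     AS SizeBucket
FROM    sys.dm_os_performance_counters
WHERE   object_name LIKE '%:Databases%'
  AND   counter_name = 'Data File(s) Size (KB)'
  AND   instance_name <> '_Total';
[/codesyntax]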

Some other caveats: the perfmon cntr_value column is an integer (a BIGINT, to be precise), so straight division against it gets truncated. For more accurate math operations, I chose to cast many of these values to a FLOAT.
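A toy comparison (made-up numbers) shows why the cast matters:

[codesyntax lang="tsql"]
-- Hypothetical numbers: 5,120 KB used out of a 10,240 KB file
SELECT  ( 5120 / 10240 ) * 100                  AS IntPercentUsed    -- 0 (integer division truncates)
       ,( CAST(5120 AS FLOAT) / 10240 ) * 100   AS FloatPercentUsed; -- 50
[/codesyntax]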

I also ran into a case where the perfmon data was reporting 0 as the Data File Size for one of my databases. I had to throw a CASE expression into the Data File Used Percent calculation in order to avoid a divide-by-zero error.
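The guard looks roughly like this (hypothetical variable names and values):

[codesyntax lang="tsql"]
-- Hypothetical values: perfmon occasionally reported 0 for the data file size
DECLARE @DataUsedMB     FLOAT = 0
       ,@DataFileSizeMB FLOAT = 0;

SELECT  CASE WHEN @DataFileSizeMB = 0 THEN 0
             ELSE @DataUsedMB / @DataFileSizeMB * 100
        END AS DataFileUsedPercent;   -- returns 0 instead of raising a divide-by-zero error
[/codesyntax]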

I also wrapped the DataUsedMB in an ISNULL. This was because the model and mssqlsystemresource databases return no data from the dm_db_physical_stats function. I could have left those as NULL, but I wanted to show something for both of them.
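The same idea in miniature (hypothetical variable):

[codesyntax lang="tsql"]
-- Hypothetical variable: model and mssqlsystemresource came back with no used-space data
DECLARE @DataUsedMB FLOAT = NULL;

SELECT  ISNULL(@DataUsedMB, 0) AS DataUsedMB;   -- report 0 rather than NULL
[/codesyntax]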

Check it out and Enjoy!

6 Comments - Leave a comment
  1. Great script Jason!
    Added to my briefcase.
    BTW, you could check out my sp_MsForEachDB replacement here: http://spaghettidba.com/2011/09/09/a-better-sp_msforeachdb/

    Thanks for sharing.

  2. Steve Jones says:

    Wow, that’s a lot of information. It looks like it’s most of what I’d want in a space report. This is the type of thing I’d store, re-run every day, and compare with what was stored to highlight any issues.

  3. Robert says:

    Impressive!

    I executed the query against one of my SQL servers and found an oversight: dashes. Those darn SharePoint databases! lol
