New Backup Behavior in SQL 2014

Comments: 1 Comment
Published on: April 1, 2014

As has been well publicized, today is the official Release To Manufacturing date for SQL Server 2014.  You can read more about all of that here.

Something that hasn’t received much attention is a new feature that is a game changer.  I’m not referring to the advancements with In-Memory OLTP (aka Hekaton).  The real game changer, in my opinion, is the way backups will be treated in 2014.

SQL Server 2014 brings the added functionality of encryption to database backups.  This is a significant improvement for securing data at rest.  The functionality applies to databases with and without TDE enabled, and to backups that are compressed as well as those that are not.

The beauty of this functionality is that all backups will be encrypted now by default.  What this means is that you need not configure anything on your end to make it happen.  Despite it being enabled by default, you can change the encryption method should you choose.

Another interesting note with this new default behavior is that all of your database backups will fail out of the box.  You might ask why.  Well, there are some pre-requisites that must be met in order for the encrypted backup to succeed.

Here are those pre-reqs.

  1. Create a Database Master Key for the master database.
  2. Create a certificate or asymmetric Key to use for backup encryption.

If you have not created your DMK, your backups will fail and you will be none the wiser until you try to restore that backup.  That is really the way you want to conduct your duties as a DBA, right?  You guessed it, the backup shows that it succeeds yet it does nothing.
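For reference, those prerequisites (and an encrypted backup that uses them) can be scripted roughly like this; the database, certificate, path, and password names are placeholders of my choosing, not a script from any particular environment:

[codesyntax lang="tsql"]
-- 1. Create a Database Master Key in the master database
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHere!1';

-- 2. Create a certificate to use for backup encryption
CREATE CERTIFICATE BackupEncryptCert
    WITH SUBJECT = 'Backup Encryption Certificate';

-- A backup can then be encrypted with that certificate
BACKUP DATABASE SomeDB
    TO DISK = N'C:\Backups\SomeDB.bak'
    WITH COMPRESSION,
         ENCRYPTION (ALGORITHM = AES_256,
                     SERVER CERTIFICATE = BackupEncryptCert);
[/codesyntax]

Remember to back up that certificate and its private key as well; without them, the encrypted backup cannot be restored elsewhere.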

As you move forward with your SQL 2014 implementation, ensure you create those DMKs and ensure your backups are safe.

Oh and in case you haven’t noticed, pay attention to today’s date.

Day 4 – Broken Broker

This is the fourth installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers

On a recent opportunity to restore a database for a client, I experienced something new.  It was intriguing and immediately prompted some questions.  First, let’s take a look at the message that popped up during the restore, and then at what was done to resolve the problem.

Query notification delivery could not send message on dialog ‘{someguid}’. Delivery failed for notification ‘anotherguid;andanotherguid’ because of the following error in service broker: ‘The conversation handle “someguid” is not found.’

My initial reaction was “Is Service Broker enabled?”  The task should have been a relatively straightforward database restore, followed by setting up replication.  The next question that popped up was “Is SB necessary?”

Well, the answers that came back were “Yes” and “YES!!!”  Apparently, without SB the application would break in epic fashion.  That is certainly not something I want to cause.  There are enough broken brokers and broken applications without me adding to the list.

Occasionally when this problem arises it means that the Service Broker needs a “reset.”  And in this case it makes a lot of sense.  I had just restored the database and there would be conversations that were no longer valid.  Those should be ended and the service broker “reset.”

The “reset” is rather simple.  First a word of warning – do not run this on your production instance or any instance without an understanding that you are resetting SB and it could be that conversations get hosed.

[codesyntax lang=”tsql”]

[/codesyntax]
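One common form of this “reset” (a sketch, not necessarily the exact script run in every case; the database name is a placeholder) is:

[codesyntax lang="tsql"]
-- Assigns a new Service Broker identifier to the database and,
-- as warned above, ends all existing conversations without cleanup
ALTER DATABASE SomeRestoredDB
    SET NEW_BROKER WITH ROLLBACK IMMEDIATE;
[/codesyntax]

The ROLLBACK IMMEDIATE clause kicks out any sessions holding locks in the database so the ALTER can proceed, which is another reason this belongs in a maintenance window.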

For me, this worked like a charm.  There was also substantial reason to proceed with it.  If you encounter this message, this is something you may want to research and determine if it is an appropriate thing to do.

Day 3 – Reviewing Peers

Comments: 1 Comment
Published on: December 27, 2013

This is the third installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time

Remember back in the day when Napster was really popular?  I’m sure it is still popular now – just not in the same league as in the early years.  Napster pretty much made some revolutionary changes to file-sharing across the internet.  Now the algorithms and methods have become more advanced and use a hive approach, but it still pretty much boils down to the setup that Napster used – peer to peer.


In the P2P file-share world, every node had a part to play.  If you downloaded a file, your machine could also upload that file or other files to the network for other users.  This approach required full files in order to work.


In the Hive approach, the files are broken up into chunks.  So you still participate on a P2P basis, but you no longer need to have the complete file to participate.  (I am probably over-generalizing, but that is ok – the point is coming soon.)  This helped transfers be quicker and the P2P network/hive to be larger (in case you were wondering).

Now, let’s take that idea and move it down to a smaller chunk of data.  What if we did that with a database and only sent a record at a time to a partner and that partner could send a few records back to the first partner/peer?  Now we have something that could be pretty usable in many scenarios.  One such scenario could be to sync data from the same database at ten different locations (or maybe 100 different locations) so all locations would have current information.

Well, SQL Server does have that technology available for use.  Coincidentally enough, it is called Peer-to-Peer replication.  Truth be told, it is really more like two-way transactional replication on steroids.  In SQL 2008, you had to set up transactional replication in order to create the P2P topology.  In SQL 2012, there is now a Peer-to-Peer option among the publication types.

Setting up P2P replication in SQL 2012 is pretty easy to do.  Here is a quick step-through on doing just that.  I will bypass the setup of the distributor and jump straight into setting up the publication through to the point of adding peers.  From that point, it will be left to you to determine what kind of subscription (push/pull) you use and to figure out how to configure those types.
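As a rough sketch of what the wizard does under the covers, a P2P publication can also be created in T-SQL.  The names here are placeholders and the parameter list is trimmed to the essentials; the key difference from an ordinary transactional publication is the @enabled_for_p2p flag:

[codesyntax lang="tsql"]
USE SomeDB;

-- Enable the database for publishing (distributor already configured)
EXEC sp_replicationdboption
    @dbname = N'SomeDB', @optname = N'publish', @value = N'true';

-- Create the publication with the peer-to-peer option enabled
EXEC sp_addpublication
    @publication = N'P2P_SomeDB',
    @enabled_for_p2p = N'true',
    @allow_initialize_from_backup = N'true',
    @repl_freq = N'continuous',
    @status = N'active';

-- Add an article (be picky about which objects you include)
EXEC sp_addarticle
    @publication = N'P2P_SomeDB',
    @article = N'SomeTable',
    @source_object = N'SomeTable';
[/codesyntax]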

Step-through

The first step is to expand the tree in SSMS until you see replication and then to expand that to see “Local Publications.”  From “Local Publications,” right click and select “New Publication.”

[image: menu]

Then it is almost as easy as following the prompts as I will show in the following images.  You need to select the database you wish to be included in the P2P publication.

[image: db_selection]

Then it is a matter of selecting the publication type.  Notice here that Peer to Peer has been highlighted.

[image: repl_selection]

Of course, no replication is complete without some articles to include in the replication.  In this case, I have chosen to just replicate a few of the articles and not every article in the database.  When replicating data, I recommend being very picky about what articles (objects) get included in the replication.  No sense in over-replicating and sending the entire farm across the wire to Beijing, London, Paris and Moscow.

[image: table_selection]

Once the articles are selected, it will be time to set up the agent security.  Again, this is pretty straightforward.  In my contrived setup, I am just going to rely on the SQL Server Agent service account.  The screen will inform you that this is not best practice.  I will leave that as an exercise for you to explore.

[images: agent_security, log_reader_security]

With that last piece of configuration, the publication is ready.  Just click your way through to finish.

Once the publication is complete, it is time to add a subscriber to the publication.  That is easily accomplished by right clicking the publication.  Since this is a P2P publication, we need to select “Configure Peer-To-Peer Topology…”

[image: p2p_topology_menu]

Selecting that menu option will bring up the wizard.  The first step in the new wizard is to pick the publisher, and the publication at that publisher, that needs to have the topology configured.

[image: publication_selection]

After selecting the publisher and publication then I can add nodes to the P2P topology by right-clicking the “map” (as I like to call it) area.  Select “Add a New Peer Node” from the menu and then enter the appropriate details for the new subscriber.

[image: add_node]

It is here that I will conclude this short tutorial.  Configuring the topology is an exercise best left to each individual circumstance.  Configuring where the pull subscribers will be and where the push subscribers will be is almost an art.  Have fun with it.

I have had the opportunity to use this kind of setup on a large multi-node setup across several sites.  It runs pretty smoothly.  Sometimes it can get to be a hair-raising event when a change gets introduced that borks the schema.  But those are the events that permit you to learn and grow and document what has happened and how to best handle the issues in your environment.

I have even taken a multi-site P2P setup and added a one-way subscriber (as if it were a transactional publication) so the subscriber could just receive the information and run reports without pushing changes back up into the rest of the topology.  That also works pretty well.  Document the design and be willing to change it if there appears to be latency and too much peer pressure.

12 Days of Christmas 2013 Day 2

Comments: 3 Comments
Published on: December 26, 2013

This is the second installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement


Recently I was able to observe an interesting exchange between a couple of key people at a client.  That exchange gave me a bit to ponder.  I wanted to recount a bit of that exchange here.  All names have been, well you know how that goes.

Accountant Joe came in early one wintry morning.  He was gung-ho and ready for the day ahead.  Joe had huge plans to finish counting all of the beans and get his task list done for the day.  You see, taskmaster Judy had been harping on him significantly over the past week to get his beans counted.

On this frosty morning, Joe was zipping along.  As more and more people filed into the office from the various departments, Joe was still contentedly counting his beans.  That only lasted for a few fleeting moments with everybody in the office though.

Suddenly Joe could no longer count the beans.  The beans Joe was counting were served up via the backend database.  And since the beans were running too slow, Joe called the helpdesk to have them fix the database.  A few moments later, Sally called the helpdesk too, complaining that things were horribly slow as she tried to open the company calendar (Sally is the executive secretary).

More and more calls were coming in to the helpdesk from various departments and every user base in the company.  The helpdesk was busy fighting this fire or that fire.  Finally, news of the slowness was escalated to the DBA, Dillon, so he could investigate why the beans were so slow on this frosty day.  As Dillon investigated, he noticed that IO stalls were off the charts.  He was seeing IO stalls in the hundred-second range instead of the millisecond range like normal.

Like a diligent DBA, Dillon immediately escalated the issue to the sysops team responsible for the SAN (yes, he notified his manager too).  Bill from sysops promptly responded.  Sadly, the response was “I am too busy at the moment.”

After much pestering, Bill finally became available and was ready to help – 4 hours later.

As it turns out, the SAN that housed all company shares, applications, databases and even Exchange was down to about 30GB free space.  Due to the lack of free space, the SAN degraded performance automatically to try and prevent it from filling up entirely.  Bill knew about this pending failure and had ordered extra storage – which sat on his desk for 2+ weeks.

The entire company was essentially down because Bill ended up being too busy (in a meeting).  Though the issue was eventually resolved – the sting has yet to fade.

When faced with an outage situation, let this story be your gift to remind you of how not to treat the outage.

On the Twelfth Day…

Bacon wrapped frog legs (twelve of them) for Christmas.  No more drumming for these guys!!

What could be better than bacon wrapped frog legs?  Oh yeah, more Virtual lab setup.

We will delve into setting up a SQL cluster today.  We will also cover some high-level tips for dealing with VirtualBox.  This will be good information, and the type of stuff I would have liked to have known heading into setting up a virtual lab.

Season Cleaning First.

On the Twelfth Day of pre-Christmas…

My DBA brought to me a Virtual SQL Cluster.  And with that cluster, we have a few tidbits for using VirtualBox.

The first tidbit is an administration aid.  Occasionally it is good to have similar machines grouped together.  At the same time, it is also necessary to start multiple virtual machines at the same time.  This is done through groups in VirtualBox.

Here you can see some of the groups that I have created.  If I right-click on a machine name, I will be presented a menu that has the Group option.

Once I have a group created, highlighting the group name presents a few different options, as shown in the following image.

The notable options here are “Ungroup”, “Rename Group”, and “Add Machine.”  Another option is “Start.”  Though this option is also present on the individual machine menu, the behavior here is different: it starts the entire group.  That can be a handy tool when dealing with a cluster, for instance.

The next handy tidbit is the snapshot.  A snapshot allows a point-in-time image of the VM to be taken so different configurations can be tested – and then quickly reverted if necessary.  Here is what I have for one of my VMs in the snapshot manager.

From this very same screen you can also see one of the many methods available to create a clone of a virtual machine.  The clone icon is the little button above the right hand pane that looks like a sheep.  Cloning a VM is a quick way to create several machines for various purposes.  As you will hear from many people – you should build a base image first, then run sysprep against it.  Sysprep is necessary in order to help prevent problems down the road.

The next tidbit for today is in regards to the file locations for virtual machines and virtual disks.  I recommend changing the default path for the VM files.  This can be done through the preferences option on the file menu.  Shown in the attachment is what it may look like if you have not changed it.  Notice that the default path goes to your user profile directory.

Ignore the red text on this page for now.  We will not be discussing the Proxy.

The last tip is in the network settings within the preferences that we already have open.  In the network settings, we can find an option to configure DHCP settings for the Host-Only Ethernet Adapter.  These are settings you may want to configure to ensure you have more control over the environment.  It is also helpful when looking to configure those IP settings for the FreeNAS that we have already discussed.

As I wrap up these tidbits, I have decided that this is a lot of information to soak in at this point.  So in the spirit of Christmas, I have decided to finish off the clustering information in a 13th day post.  This final post may or may not be available on Christmas day.  Worst case it will be available on the 26th.

Part of that reason is I want to rebuild my lab following the instructions I will be posting and I need time to test it.  I want the instructions to be of great use.

Please stay tuned as we conclude this series very soon.

On the Eleventh Day…

Yesterday we had an introduction into setting up a virtual lab to help the DBA learn and test new technologies while improving his/her own skill set.

Today we will continue building the virtual lab.  We will get a little closer to the SQL portion of things, as we will be installing a familiar operating system on which to run SQL Server.

The operating system will be Windows Server 2008, and the version of SQL Server will be 2008 R2.  I chose these specifically because, at the time I built out my lab, I was setting up the environment to help me study for the MCM exams.

As a sidebar, I was just informed by a friend of another blog series that is also currently discussing setting up Virtual Machines in Virtual Box.  Fortunately, his series is based on Windows 2012 and SQL 2012 – so there is a bit of a difference.  The author of that series is Matt Velic and you can read his articles on the topic here.

I’ll be honest, upon hearing that news I had to go check out his articles to make sure I wasn’t doing the exact same thing.  And while there may be a little overlap, it looks like we have different things that we are covering.

And now that brings us to recap time.

On the Eleventh Day of pre-Christmas…

The next pre-requisite for this lab is to install a Domain Controller and Active Directory.  For this Domain Controller, I have the following Virtual Box settings.

  • A single Dynamic Virtual Disk of 20GB
  • 2 Network Adapters (1 NAT and 1 Internal)
  • 1024 MB memory

To install the operating system, we will mount the ISO image the same as we did for the FreeNAS in yesterday’s post.  This is a standard Windows setup, and I will not cover it.

Once you have installed the operating system, the first thing to do is to install the guest additions for Virtual Box.

With guest additions installed, next we will turn to the network adapters.  I have two adapters installed for good reason.  One adapter is visible to the virtual network and will be used for the VMs to talk to each other.  The second adapter is installed so I can get windows validated and so patches can be downloaded and installed.

Speaking of patches, this is where we want to make sure the operating system is patched.  Run Windows Update, finish all of the requisite reboots, and then come back to the network control panel.  Prior to installing the domain, disable the external NIC.  We do this to limit the potential for errors when joining the subsequent machines to the domain.

For the Internal adapter, I will also configure a static IP address as shown here.

Let’s now setup the domain and domain controller on this machine.  From Server Manager, right click roles and select Add Roles.  From the new screen, select Active Directory Domain Services and DNS Server.

You are now ready to configure your domain.  I am going to allow you to use your favorite resource for the directions on configuring a domain in Windows 2008.  After the domain has been configured, then enable the external network adapter.

The final step is to configure DNS.  The main concern in DNS to configure is the reverse lookup zones.  I have three subnets (network address ranges) that I will configure.  The relevance of these three zones will become apparent in the final article of the lab setup mini-series.  The configurations will be along the lines as seen in this next screenshot.

This gets us to where we can start building our SQL Cluster.  We will cover that in the next installment.

On the Tenth Day…

Silver and Gold have a way of really bringing the look and feel of the Christmas season.

Silver and Gold also seem to represent something of greater value.

We are now into the final three articles of the 12 Days of pre-Christmas.  And with these three articles, I hope to bring something that is of more value than anything shared so far.

Of course, the value of these articles is subjective.  I have my opinion as to why these are more valuable.  I hope to convey that opinion as best as possible to help bring out as much value as can be garnered from these articles.

Let’s first recap what we have to date.

On the Tenth Day of pre-Christmas…

My DBA gave me an education.  Sure, every day so far in this series could possibly be an education.  This one is an education via a lab.  Every DBA should have a lab in which to test features and hone skills.  A lab is a better place to do some of the testing that needs to be done than the DEV, QA, or even production environments.

Think about it: do we really want to be testing the setup of clustering in the DEV environment and potentially impact the development cycle?  I’d dare say no.

Unfortunately, reality does not always allow for a lab environment to be accessible to the DBA.  So the DBA needs to make do with other means.  It is due to these types of constraints, that I am devoting the next three days to the setup of a lab.  This lab can even be created on a laptop.  I created this lab on my laptop with only 8GB of ram.  I was quite pleased to see that it performed well enough for my testing purposes.

We will begin with an introduction to the technology used – VirtualBox.  I will also discuss the creation of enough virtual machines to create a SQL Cluster (domain controller, two sql boxes, and a NAS) along with the configuration steps to ensure it will work.

For this lab, we will be using VirtualBox.  You can download VirtualBox here.  And yes, the tool is provided by Oracle.  Two of the reasons I want to use VirtualBox are the ability to install multiple operating systems and the fact that the tool is currently free.  Another benefit is that I can easily import virtual machines created in VMWare as well as Microsoft Virtual Server/Virtual PC (I have not tested any created in Hyper-V).

While you are downloading the Virtual Box app, download the Extension Pack as well.  Links are provided for the extension pack on the same page as the application download.  Be sure to download the Extension Pack for the version of Virtual Box you download.

The version of VirtualBox I will be using for this article is 4.2.2.  As of the writing of this article, a new version has been released – 4.2.6.  The differences between versions may cause the instructions in these articles to be inaccurate for 4.2.6.  You can use whichever version you deem appropriate; I just won’t be covering version 4.2.6 and don’t know whether the screens or settings are different.

You can check your version in the Help > About menu.

For this lab, we have a few things that will be required prior to setting up the SQL Cluster.  Two big components of this infrastructure are Storage and a Domain.  We are going to simulate shared storage through the use of FreeNAS.  We will be discussing FreeNAS today.

For starters, we can download FreeNAS from here.  You might be able to find a few configuration guides online for FreeNAS.  Most of them seemed to be for really old versions and were less than usable for the version that I had downloaded.  All settings to be discussed today are pertinent to FreeNAS-8.3.0-RELEASE-x64 (r12701M).

To setup FreeNAS, we will need to have a Virtual Machine configured with the following settings.

  • A BSD VM with FreeBSD as the version.
  • Ensure the motherboard settings only has the “Enable IO APIC” setting checked.
  • Three Virtual Disks (1 for NAS OS, 1 for SAN Storage, and another for a Quorum)
  • 512 MB memory
  • 2 Network Adapters (1 Internal and 1 connected to the Host-Only Adapter)

Despite FreeNAS’s actual disk requirements being rather small, any fixed disk size of less than 2GB causes mount errors.  Any amount of memory less than 512MB also causes a mount problem.  These settings are the minimum configurations to save the hair on your head.

The two network adapters are more of a strong suggestion.  I was able to get it to work with only one adapter, but it was more hassle than it was worth.  I found it easier to configure for use by the cluster later if I had two adapters.  The two-adapter configuration also allows easier administration from within the VM environment as well as from the host machine.

One other thing to do is to mount the FreeNAS ISO that has been downloaded to the CD drive that is created by default with the VM creation.  I mount the ISO before booting by opening the settings for the VM within Virtual Box.  On the storage screen, highlight the “Empty” CD Icon in the middle then click on the CD Menu Icon on the far right as shown below.

Navigate to the folder where the FreeNAS ISO is saved and then click ok until you are back at the Virtual Box manager screen.  You are now ready to start the machine and finish the install and then configure.

Once powered on, you should eventually come to the following screen.

Select to Install/Upgrade.  From here, you will see a few more prompts such as the next screen to select the installation location.

These should be pretty straightforward installation options for the IT professional, so I will not cover all of the installation prompts.  Once the install is finished, you will need to reboot the VM and un-mount the installation media.  The system will then come to the following screen.

Now that we are at the console screen, the next step is to configure the Network Interfaces.  You can see that I have already done this based on the IP addresses seen at the bottom of the screen.  I will leave the configuration of the IP addresses to you.  Both the internal network and the host-only network will need to be configured.  The host network should be the second adapter.  Keep track of the IP addresses that have been configured.  We will need to use them later.

In a browser window we will now start configuring the storage to be used by our Lab.  In the address bar, we will need to input the address we configured for the host network.  In my case, 192.168.56.103.  When that page loads, the first thing we need to do is change the Admin password.

The default password is empty.  Pick a password you will remember and that is appropriate.  With that done, we can configure the storage.

The next setting I want to configure is iSCSI.  In order to use the volumes that we create, we must enable the iSCSI service.  In the top section, click the Services button.  This will open a new tab in the web browser.  On the Services tab, we need to toggle the slider for iSCSI to the “ON” position as shown in the image.

Once toggled, we can configure the iSCSI settings for the volumes we will now create.  From here, we click on the storage tab.  Next, click on the Volume Manager Button.  In order for the disks to be imported, we have to use volume manager.  The Import Volume and Auto Import Volume must serve other purposes – but they don’t work for importing a new volume.  Here is a screenshot demonstrating what needs to be configured.

With the Volume created, a ZFS volume must next be created from within the storage management.  We do this by clicking the “Create ZFS Volume” icon next to the main volume we just created.  This icon is illustrated as the icon on the far right in the next image.

Once that icon is clicked, you will be presented with a new dialog.  The dialog is demonstrated in the above image.  Give the Volume a Name and then give it a size.  Note that you must specify a storage unit (m or g for example) or you will receive a pretty red error message.

Now go back to the Services tab where we enabled iSCSI.  There is a wrench icon next to the toggle to enable the service.  Click on this wrench and a new tab will be opened (again within the FreeNAS webgui) and the focus will be switched to this new tab.  On the “Target Global Configuration” ensure that Discovery Auth Method is set to “Auto.”  If it is not, make the change and click save at the bottom.

Next is the Portals.  The portals should be empty so we will need to add a portal.  By default, only one IP address is displayed for configuration on a new Portal entry.  We want to configure two IP addresses.  First, select 0.0.0.0 from the IP Address drop down on the new window that opened when clicking on “Add Portal.”  Then select “Add extra Portal IP”.

Next is to configure an Initiator.  For this lab, I created one Initiator, specifying ALL for both the Initiators and the Authorized Network, as shown here.

With an initiator and a portal in place, we now proceed to the configuration of the Targets.  I have configured three targets and the main difference is in the name.  They should be configured as shown here.

Almost done with the setup for the storage.  It will all be well worth it when we are done.  We need to configure the Device Extents and then associate the targets with those extents.

Like with the Targets, I have three device extents configured.  The configuration for each is the same process.  I want to give each a name that is meaningful and then associate the extent to a disk that we imported earlier.

Last for this setup is the Target-to-Extent association.  This is a pretty straightforward configuration.  I named my targets the same as my extents so there was no confusion as to which should go with which.

That wraps up the configurations needed to get the storage working so we can configure a cluster later on.  Just getting through this configuration is a pretty big step in getting the lab created for use in your studies and career enhancement.

Next up in this series is to show how to configure (in limited detail) a domain and DNS, and then to install and configure a cluster.  Stay tuned, and I will even throw in a few tidbits here and there about VirtualBox.

I didn’t include every screenshot possible throughout the setup of FreeNAS and the configuration of iSCSI.  Part of the fun and education of a lab is troubleshooting and learning as you go.  If you run into issues, I encourage you to troubleshoot and research.  It will definitely strengthen your skill-set.

Last Known Good CheckDB

Comments: 4 Comments
Published on: November 20, 2012

Diligent DBAs like to check their databases for a little thing called corruption.  Some of those DBAs are masterful at keeping records and logging that the database was checked and came up with a clean bill of health.

There are many different ways of logging this kind of activity.  Today I will share one such way to track when the last successful run of Checkdb happened.

First a little back story

A question came across twitter on the SQLhelp hashtag. The question was “Does restoring a database clear the dbi_dbccLastKnownGood value on the boot page?”

This question prompted me to do a quick test to see.  The test is simple.  Create a small database, backup the database, run checkdb, check to see what the dbi_dbccLastKnownGood value is, restore the previous backup and check the dbi_dbccLastKnownGood value again.

So here is a quick script

[codesyntax lang=”tsql”]

[/codesyntax]
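A sketch of that quick test looks roughly like this (the database name and backup path are placeholders; note that DBCC DBINFO is undocumented, so run it on a test instance):

[codesyntax lang="tsql"]
CREATE DATABASE CheckDbTest;
GO
BACKUP DATABASE CheckDbTest TO DISK = N'C:\Backups\CheckDbTest.bak';
GO
DBCC CHECKDB ('CheckDbTest') WITH NO_INFOMSGS;
GO
-- Look for the dbi_dbccLastKnownGood row in the output
DBCC DBINFO ('CheckDbTest') WITH TABLERESULTS;
GO
RESTORE DATABASE CheckDbTest
    FROM DISK = N'C:\Backups\CheckDbTest.bak' WITH REPLACE;
GO
-- The value now reflects whatever was on the restored boot page
DBCC DBINFO ('CheckDbTest') WITH TABLERESULTS;
[/codesyntax]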

If you run this little test, you should observe that the date value for dbi_dbccLastKnownGood changes with each test.  Now let’s discuss the question and the answer to that original question about whether or not this value gets cleared.

The value does not get cleared.  The value does not get reset.  The change in value that you observed is due simply to the boot page having been restored.  If CheckDB has never been run on the database, you will see the SQL default date of ‘1900-01-01 00:00:00.000’.

And then…

Now that the back story is told, that brings us to how to track this – at least one such method.  I had to cover the back story since it is what prompted the writing of a method to gather this information in a quick script so I could use it to monitor.  Yes, it is just another tool to throw into the toolbox.  And to reiterate, it is by no means the only way to track or gather this type of information.  Some are more elaborate than others.  It is up to you to choose.

Keeping in mind that we can get the last time that CheckDB completed without a report of corruption, here is a cursor-based method to retrieve the dbi_dbccLastKnownGood value for all databases on your SQL Server instance.
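A sketch of such a cursor-based approach is below.  It relies on the undocumented DBCC DBINFO command, assumes the VALUE column name of its TABLERESULTS output, and uses MAX because the dbi_dbccLastKnownGood field appears twice on some versions.

```sql
SET NOCOUNT ON;

-- Holding tables for the raw DBCC output and the final result
CREATE TABLE #BootPage (
    ParentObject VARCHAR(255),
    [Object]     VARCHAR(255),
    Field        VARCHAR(255),
    [VALUE]      VARCHAR(255)
);

CREATE TABLE #LastKnownGood (
    DatabaseName  SYSNAME,
    LastKnownGood DATETIME
);

DECLARE @DBName SYSNAME;

DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE state_desc = 'ONLINE';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @DBName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Capture the boot page fields for the current database
    INSERT INTO #BootPage
        EXEC ('DBCC DBINFO (''' + @DBName + ''') WITH TABLERESULTS');

    INSERT INTO #LastKnownGood (DatabaseName, LastKnownGood)
        SELECT @DBName, MAX(CONVERT(DATETIME, [VALUE]))
        FROM #BootPage
        WHERE Field = 'dbi_dbccLastKnownGood';

    TRUNCATE TABLE #BootPage;

    FETCH NEXT FROM db_cursor INTO @DBName;
END

CLOSE db_cursor;
DEALLOCATE db_cursor;

SELECT DatabaseName, LastKnownGood
FROM #LastKnownGood
ORDER BY DatabaseName;

DROP TABLE #BootPage;
DROP TABLE #LastKnownGood;
```

Databases never checked will report the 1900-01-01 default date, which makes them easy to spot in the output.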


You can use this or find something else if you desire.  The sole purpose of this script is to quickly gather the last date on which CheckDB reported a clean bill of health.  From there, I will leave it to you to add it to another process in order to track the value over the long term.
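For longer-term tracking, one simple option (a sketch only – the table name and the idea of a scheduled wrapper are hypothetical) is to persist each run’s results into a history table that a SQL Agent job appends to:

```sql
-- Hypothetical history table for tracking CheckDB dates over time
CREATE TABLE dbo.CheckDBHistory (
    CaptureDate   DATETIME NOT NULL DEFAULT GETDATE(),
    DatabaseName  SYSNAME  NOT NULL,
    LastKnownGood DATETIME NULL
);
-- A scheduled job would run the gathering script above and insert its
-- final SELECT into dbo.CheckDBHistory, building a trend over time.
```

Querying the history table then shows at a glance which databases are falling behind on their corruption checks.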

You can read the twitter conversation here.  I also have it on good authority that Bob Pusateri (twitter) will be blogging about the same topic.

A DBAs List of Little Things

Categories: Meme Monday, News, Professional, SSC
Comments: 2 Comments
Published on: March 5, 2012

Today is Meme Monday.  Today we get to talk about all of the little things a DBA does.  Thomas LaRock started things off with his list – here.

I want to just add to the list he started.

  1. SAN Admin
  2. Server Admin
  3. Technical Writer
  4. Project Manager
  5. Automate the process to Migrate Data from Production to Dev/Test/QA
  6. Automate the process to Migrate Data from Production to Reporting Servers
  7. Export and Email/FTP/Transfer Data to customers
  8. Attend Sales meetings
  9. Predict customer requests before the request is made
  10. Create standards pertaining to the database environment
  11. Document
  12. Document
  13. Document
  14. Domain Admin
  15. Administer Excel spreadsheets
  16. Administer Cognos, Crystal Reports, and Access (or any other tool that may provide a reporting interface for the database)
  17. Create Data warehouse
  18. Improve your skills

I really like the bullet item “therapist” from Tom’s list.  There is a lot of truth to that.

Slammer, Alive…Barely

Categories: News, Professional, SSC, SSSOLV
Tags: ,
Comments: 1 Comment
Published on: January 24, 2012

Slammer

By now you must have heard of the SQL Slammer worm.  It was quite an infectious little nuisance.  The harm it caused came largely from unpatched, unprotected SQL Servers.

We are now 9 years out from the initial discovery of this worm.  The worm has made its way onto the endangered species list – but it is not yet extinct.  I don’t know if I should be surprised by that.

My initial reaction is “No way that worm is still causing problems.  Everybody knows about it.”  And yet I just caught several infection attempts from remote hosts that were infected with Slammer.  When I take a step back, I recall that many people out there are still running unpatched servers.  I know of many places that are still running SQL 2000.  I know of a large pool of servers across different versions and editions that are not patched.  I even know of a few places that are still running SQL 6.5.

When I take all of that into account, finding that Slammer is still active does not surprise me – but it should.

So for fun, here is what I was able to trap from the recent attempts at my machine with SQL Slammer.

When I trace that IP back to its source, I get the host name of the machine.  If I search on the host name of the IP address, I find this page.  If I were a hacker, I would now have a lot of valuable information.  I can also assume that this particular host harbors many other viruses.

This entire little foray has made me wonder how many people out there are concerned about security.  Do you know the patch level of your servers?  Is your AV software up to date?  Are you running any form of HIPS (host-based intrusion prevention)?  If you are in IT and your focus is data, you may want to check those things.  After all, our focus is to protect the data.
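As a starting point on that first question, the patch level of an instance can be checked with a quick SERVERPROPERTY query:

```sql
-- Report the build number, patch level, and edition of this instance
SELECT  SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- e.g. 10.50.2500.0
        SERVERPROPERTY('ProductLevel')   AS ProductLevel,    -- RTM, SP1, SP2, ...
        SERVERPROPERTY('Edition')        AS Edition;
```

Compare the build number against the latest service pack and cumulative update for your version to see how far behind you are.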
