Implicit Conversion Fail


Every now and again, you may run into an error about implicit conversions in your queries. Implicit conversions are not uncommon in the computing world and can be viewed as a kind of fail-safe for when we don’t quite follow decent practices when designing the database, when writing queries, or both.

Despite this little fail-safe, there are times when a nasty little error will pop up and cause a bit of consternation.

Implicit conversion from data type %ls to %ls is not allowed. Use the CONVERT function to run this query.

What Went Wrong

Unlike many other errors in SQL Server, this error message makes some sense. The major components of what went wrong are present, and you are given a decent idea of which conversion attempt failed. When this particular error happens, you can bet there are issues with some TSQL code somewhere. In addition, you can bet there is likely a problem with the database design as well. Yay! More work for your back burner.

First, this error comes with an error id of 257, and we can see the message text via the following query. This id is important for when we want to monitor for this problem in the future.
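Something along these lines will return the message text (a reconstruction on my part; language_id 1033 is the English message):

[codesyntax lang="tsql"]
/* Look up the message text for error 257 */
SELECT msg.message_id
     , msg.severity
     , msg.[text]
FROM sys.messages AS msg
WHERE msg.message_id = 257
    AND msg.language_id = 1033; -- English
[/codesyntax]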

Let’s see how we can recreate this problem.
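Here is a minimal sketch (the table, column, and variable names are stand-ins of my choosing). The date value gets forced into an integer so it can be compared to ImplicitID, and that implicit conversion is one SQL Server refuses to perform.

[codesyntax lang="tsql"]
/* Sketch only: names are illustrative */
CREATE TABLE #ImplicitDemo
(
    ImplicitID INT      NOT NULL
  , SomeDate   DATETIME NOT NULL
);

/* Load some random data */
INSERT INTO #ImplicitDemo (ImplicitID, SomeDate)
SELECT TOP (100)
       ABS(CHECKSUM(NEWID())) % 1000
     , DATEADD(DAY, -(ABS(CHECKSUM(NEWID())) % 365), GETDATE())
FROM sys.all_objects;

/* Error 257: datetime cannot be implicitly converted to int */
DECLARE @SearchValue INT = GETDATE();

SELECT d.ImplicitID
     , d.SomeDate
FROM #ImplicitDemo AS d
WHERE d.ImplicitID = @SearchValue;
[/codesyntax]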

Which will produce an error along these lines:

Implicit conversion from data type datetime to int is not allowed. Use the CONVERT function to run this query.

This is a prime example of a bad query producing an error. Obviously, I am using the wrong data type to try to query the temp table. The ImplicitID column is an integer and I am trying to query it using a date. The quick fix would be to query the appropriate date column if I must use a date in my query, or I can use an integer to query the ImplicitID column.

After this minor tweak, the query will work and I start to see results. Given the random nature of the data in this demo, the results will vary from batch to batch.
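Either of these rewrites (still using the stand-in names from the sketch above) gets past the error:

[codesyntax lang="tsql"]
/* Use the appropriate date column if a date is required */
SELECT d.ImplicitID
     , d.SomeDate
FROM #ImplicitDemo AS d
WHERE d.SomeDate >= '20181221';

/* Or use an integer against the integer column */
SELECT d.ImplicitID
     , d.SomeDate
FROM #ImplicitDemo AS d
WHERE d.ImplicitID = 42;
[/codesyntax]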

Wrapping it Up

Implicit conversions are a fail-safe for when bad design or code (or both) crops up in your environment. Sometimes this fail-safe is inadequate and needs further assistance. Sometimes that means an explicit conversion, and sometimes that means an appropriate rewrite of the query to use the appropriate columns and data types. This was an introductory article into the world of implicit conversions. There will be a follow-up or two about implicit conversions and monitoring for them. In preparation for the upcoming articles, I recommend reading a bit on Extended Events to get up to speed. For other basics-related articles, feel free to read here.

 

File Maintenance – Cleaning Up Old Files

Published on: December 21, 2018

Using SSIS to Maintain the File System

We have all run into a need or a desire to clean up old, stale files from the file system, whether it be to remove old backup files or flat files that are created by one process or another.  And based on that need/desire, we have all come up with a method to help achieve that goal.

Some of the methods might be to flip a cleanup flag in a maintenance plan.  Other methods may be to use a SQL script employing xp_cmdshell and delete statements.  Yet another may utilize the sp_OA stored procs and DMO.  And still others may have ventured into PowerShell to accomplish the same task.  The point is, there are many methods.

I am adding yet another method to the mix.  Why?  I didn’t much like the option of using the sp_OA method or the xp_cmdshell route.  I am very much a novice with PowerShell, and it would take a lot more tinkering to get the script working properly.  Also, I felt pretty comfortable with SSIS and had approval to try to get this done using that method.  And just because I am a novice with PowerShell does not mean that I will not circle back around to accomplish this task via that means.

Note: This article was originally written in 2011 and got stuck in an unpublished state. Things have changed since then, so I will definitely be circling back around for a PowerShell version.

Requirements

The method employed needs to be able to do the following:

  1. Remove multiple file types.
  2. Be configurable.
  3. Clean out files from numerous directories.
  4. Remove files older than a specified number of days.

Setup

The solution I chose utilizes SSIS.  It also requires a table in a database that helps drive the package.

The table looks like the following.
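Here is a reasonable approximation of the DDL (the dbo.FilePaths name and the identity column are my assumptions; the remaining columns are described below):

[codesyntax lang="tsql"]
/* Approximation of the driver table */
CREATE TABLE dbo.FilePaths
(
    FilePathID INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_FilePaths PRIMARY KEY CLUSTERED
  , FilePath   VARCHAR(256)      NOT NULL
  , [Process]  VARCHAR(50)       NOT NULL
);
[/codesyntax]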

The FilePath column holds the file system path for each directory that needs to be cleaned.  Supported paths are local (e.g. C:\temp) and UNC (\\machine\c$\temp).  I set this attribute to a length of 256, but if you have a longer path, you will want to adjust the length.

The Process column will hold a value describing what that path relates to, such as MaintainDirectory.  In my example, I am using MaintainDirectory to control which directories hold files that potentially need to be deleted.

Here is an example of the contents of that table I am using currently.
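It is something akin to the following (the paths shown here are placeholders rather than my real directories):

[codesyntax lang="tsql"]
/* Placeholder rows to illustrate the contents */
INSERT INTO dbo.FilePaths (FilePath, [Process])
VALUES ('C:\Database\Backups'       , 'MaintainDirectory')
     , ('C:\Temp'                   , 'MaintainDirectory')
     , ('\\FileServer\d$\FlatFiles' , 'MaintainDirectory');
[/codesyntax]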

The last piece of the setup, before we start working on the SSIS package, is a string-splitting function.  Pick the string splitter of your liking.  I have one that I like, and I am sure you have one that you prefer.  The SSIS package relies on the return field from the splitter being named “Item.”  If it is named something else, please make the adjustments in the package as necessary.
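If you need a quick one and are on SQL Server 2016 or later, a thin wrapper over STRING_SPLIT will do the job; on older versions, substitute your favorite XML or numbers-table splitter.  The function name here is just a placeholder.

[codesyntax lang="tsql"]
/* Placeholder splitter; STRING_SPLIT requires SQL Server 2016+.
   The package only cares that the output column is named Item. */
CREATE FUNCTION dbo.fn_SplitList (@List VARCHAR(MAX), @Delimiter CHAR(1))
RETURNS TABLE
AS
RETURN
(
    SELECT Item = LTRIM(RTRIM(ss.[value]))
    FROM STRING_SPLIT(@List, @Delimiter) AS ss
);
[/codesyntax]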

The Package

The package I created was built in SSIS 2008.  To meet the requirements already set forth, I utilized the following objects: an ADO.NET data source, 2 Execute SQL Tasks, 2 ForEach Loop Containers, a Script Task, and 8 variables.  Let’s take a look at these, starting with the variables.

Variables

  • SQLServerName – The value held here is used in an Expression for the Data Source.  This will overwrite the ServerName value in the Data Source.
  • DatabaseName – Used alongside the SQLServerName variable in an Expression for the Data Source.  This value will overwrite the InitialCatalog value in the Data Source.  This should be the name of the database where the FilePaths table and String Split function exist.
  • DaysToKeep – This value is the cutoff point for which files to keep and which files will be deleted.  This variable is used as a ReadOnly variable in the Script Task.
  • obj_FileExtension – This object variable is used to store the result set from one of the Execute SQL tasks and the results of the string split function from the FileExtensionList variable.
  • FileExtensionList – This is a delimited list of file extensions that need to be evaluated for deletion.  It is important to note that the file extensions that are to be processed are case sensitive.  The extension must appear in this list as it appears in the file system.
  • FileExtension – To be used in one of the ForEach loops.  This variable receives a FileExtension value from the obj_FileExtension variable, one at a time.
  • obj_ListOfDirectories – This variable will receive the result set of an Execute SQL Task to be later consumed by one of the ForEach loops.
  • DirectoryToMaintain – Receives, one at a time, the directory to process for file deletion.  The ForEach loop stores a value from obj_ListOfDirectories in this variable for processing.

Execute SQL Tasks

The two Execute SQL Tasks are simple in function.  One is to get the list of directories to maintain from the FilePaths table.  The other is strictly to split the string for the FileExtensionList variable.

The first is named “Get Directory List” and should receive the Full Result Set from the following query.
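Assuming the dbo.FilePaths table sketched earlier, that query looks something like this:

[codesyntax lang="tsql"]
/* Return every directory flagged for maintenance */
SELECT fp.FilePath
FROM dbo.FilePaths AS fp
WHERE fp.[Process] = 'MaintainDirectory';
[/codesyntax]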

The Result Set tab of this task also needs to be modified.  It should look like this.

From this task, we flow to the next Execute SQL Task named “Split FileList.”  The setup of this task is very much like the previous task.  We want to receive the full result set.  We have a configuration to make on the result set tab.  We also need to map a parameter.  Let’s take a look at those real quick.

Parameter Mapping

Result Set

And this is the query that we will be executing.
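Assuming the placeholder splitter from earlier and an ADO.NET connection (where parameters are referenced by name), it is something along these lines:

[codesyntax lang="tsql"]
/* Split the delimited list; @FileExtensionList maps to the package variable */
SELECT s.Item
FROM dbo.fn_SplitList(@FileExtensionList, ',') AS s;
[/codesyntax]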

Notice that the parameter we named on the Parameter Mapping tab is being used in the function call.  I chose this method because it made it easier for me to see and understand how it works.

ForEach Loops

The next stop in the flow is the ForEach Loop – Directory object.  As the name implies, this ForEach Loop is designed to work with the obj_ListOfDirectories variable/array.

With this first Loop container, we have two tabs that need to be configured in the properties.  Both Loop containers are similar in that they need the same tabs to be configured.  First, let’s talk about the Collection tab.

On the Collection tab, we need to set the Enumerator option to “ForEach ADO Enumerator.”  Then we need to select the obj_ListOfDirectories from the drop down labeled “ADO Source Object Variable.”  Your screen should look like the following image.

With this tab configured, we can turn our attention to the quick changes that need to be made on the Variable Mappings tab.  On this tab, we are telling the enumerator how to handle the data from the object variable.  We are mapping columns from the result set to variables for further consumption.  When configured, it should look like the following.

Inside of this ForEach loop container, we have another ForEach loop container.  This second ForEach loop container handles the file extensions that we listed out in delimited fashion in the FileExtensionList variable.  I have called this container “ForEach Loop – FileExtension” (just keeping it simple).

The collection tab follows the same configuration setup.  The difference of course being that this container will use the obj_FileExtension object from the source variable dropdown menu.

The variable mapping tab is also slightly different.  We will be mapping column 0 of the object to the FileExtension variable.  The explanation for the different number between the two loop container variable mappings is simple.  In obj_ListOfDirectories, we have multiple columns being returned.  In obj_FileExtension, we have but one single column being returned.

This inner loop container will loop through each of the extensions for each of the directories that have been returned to the outer loop container.  The inner loop container holds the remainder of the workload via the Script Task.

Script Task

It is via the script task that we actually get to start deleting files.  This was the most difficult piece of the entire package – though the script is not very large.

For the script task, I chose to implement it via the Visual Basic option (instead of C#).  I have three ReadOnlyVariables employed by the script.  Those variables are: User::DaysToKeep, User::DirectoryToMaintain, and User::FileExtension.

Once you have set those on the script tab, the next step is to click the Edit Script… button where we need to place the following script.

An important note of interest is the need for the Try…Catch.  Without this block, you could run into an issue where a file (such as those pesky temp files) is in use by some process and causes the package to error.  The Try…Catch will move past that nasty error and delete the files that it can.

Inside this script, you will see that I am comparing the LastWriteTime to the PurgeDays and ensuring that the file extension matches one that is in the list.  Then we move into the Try…Catch and either delete the file that matches those criteria or throw an exception and move on to the next file.

When all is said and done, your package should look something like this.

You should also have a variable list that looks like this.

Each variable that is not an Object has a value assigned to it at this point.  These values will be overwritten where applicable.

Next Steps

Having this package is a good start.  But unless you are prepared to manually run this on a daily basis, it needs to be added to a job and scheduled.  There are two ways to go about scheduling this package.

The first option is to configure the FileExtensionList and DaysToKeep variables and save the package with those values.  Then run this package through SQL Agent with those values every time.  The drawback to this method is that if you need to add or remove a file extension (as an example) then you need to edit the package and re-save it.

The alternative option is to pass the values through the job, overwriting those variables as the job runs.  Should you need to remove or add a file extension, it would be done at the job-definition level.

Let’s take a look at this second option.  I will skip past how to create the job as an SSIS job in SQL Server and we will look directly how to modify those variables from the job properties.

To configure these variables directly from the SQL Agent job, open the job properties and click on the Set Values tab (assuming you have defined this job as an SSIS job type).  You should get a screen similar to this (minus the set values shown in the pic).  Just add the parameters (variables) we have discussed to this point, with values appropriate to your needs/environment.

I have chosen to only include the four variables shown above since the remaining variables are either objects or get overwritten in the ForEach loops during processing.  The only thing remaining now is to set the schedule for the job.  Once set, the job (and package) will take care of the rest.

Conclusion

I have now shown you how to maintain some of the directories on your system through the use of SSIS and SQL Server.  There are many methods to accomplish this goal; it is up to each of us to choose the best method for our environment and comfort level (by means of supporting the chosen solution).

If you would like to read more interesting stuff concerning SSIS, you might want to check any of these articles: lost password, expected range errors, and synonyms extending SSIS.

12 Days Of Christmas and SQL

Published on: December 26, 2017

One of my all-time favorite times of the year happens to be the Christmas Season. I enjoy the season because it is supposed to remind us to try and be better people. And for me, it does help. In all honesty, it should be a better effort year round, but this is a good time of year to try and get back on track and to try and focus more on other more important things.

For me, one of the more important things is to try and help others. Focusing on other people and their needs helps them but also helps one’s self. It is because of the focus on others that I enjoy, not just Christmas Day, but also the 12 Days of Christmas.

The 12 Days of Christmas is about giving for 12 Days. Though, in this day and age, most view it as a span of 12 Days in which they are entitled to receive gifts. If we are giving for a mere 12 Days and not focusing on receiving, then wouldn’t we all be just a little bit happier? I know that when I focus more on the giving I am certainly happier.

Giving

In the spirit of the 12 Days of Christmas and Giving, I have a 12 Day series that I generally try to do each Holiday Season. The series will generally begin on Christmas day to align with the actual 12 Days of Christmas (rather than the adopted tradition of ending on Christmas). This also means that the series will generally end on the celebration of “Twelfth Night” which is January 5th.

Each annual series will include several articles about SQL Server and have a higher goal of trying to learn something more about SQL Server. Some articles may be deep technical dives, while others may prove to be more utilitarian with a script or some functionality that can be quickly put to use and frequently used. Other articles may just be for fun. In all, there will be several articles which I hope will bring some level of use for those that read while they strive to become better at this thing called SQL Server.

This page will serve as a landing page for each of the annual series and will be updated as new articles are added.

2018

  1. How To: XEvents as Profiler – 25 December 2018
  2. Upgrading From SQL Server Profiler – 26 December 2018
  3. How To: File Target use in Extended Events – 27 December 2018
  4. SQL Servers Black Box Recorder – Def Trace – 28 December 2018
  5. SQL Servers Black Box Recorder – system_health – 29 December 2018
  6. SQLs Black Box Recorder – sp_server_diagnostics – 30 December 2018
  7. Finding Installed Event Sessions – 31 December 2018
  8. Finding Application Session Settings – 1 January 2019
  9. Checking Your Memory with XE – 2 January 2019
  10. Event Tracing for Windows Target – 3 January 2019
  11. Automatic Tuning Monitoring and Diagnostics – 4 January 2019
  12. Short Circuiting Your Session – 5 January 2019

2017

  1. XE Permissions – 25 December 2017
  2. Best New(ish) SSMS Feature – 26 December 2017
  3. XE System Messages – 27 December 2017
  4. Correlate Trace and XE Events – 28 December 2017
  5. Audit Domain Group and User Permissions – 29 December 2017
  6. An Introduction to Templates – 30 December 2017
  7. Failed to Create the Audit File – 31 December 2017
  8. Correlate SQL Trace and Actions – 1 January 2018
  9. Dynamics AX Event Session – 2 January 2018
  10. Sharepoint Diagnostics and XE – 3 January 2018
  11. Change Default Logs Directory – 4 January 2018
  12. Common Tempdb Trace Flags – Back to Basics (Day of Feast) – 5 January 2018

2015

  1. Failed – 25 December 2015
  2. Failed – 26 December 2015
  3. Failed – 27 December 2015
  4. Failed – 28 December 2015
  5. Failed – 29 December 2015
  6. Log Files from Different Source – 30 December 2015
  7. Customize XEvent Log Display – 31 December 2015
  8. Filtering Logged Data – 1 January 2016
  9. Hidden GUI Gems – 2 January 2016
  10. Failed – 3 January 2016
  11. Failed – 4 January 2016
  12. A Day in the Stream – 5 January 2016

2013

  1. Las Vegas Invite – 25 December 2013
  2. SAN Outage – 26 December 2013
  3. Peer to Peer Replication – 27 December 2013
  4. Broken Broker – 28 December 2013
  5. Peer Identity – 29 December 2013
  6. Lost in Space – 30 December 2013
  7. Command N Conquer – 31 December 2013
  8. Ring in the New Year – 1 January 2014
  9. Queries Going Boom – 2 January 2014
  10. Retention of XE Session Data in a Table – 3 January 2014
  11. Purging syspolicy – 4 January 2014
  12. High CPU and Bloat in Distribution – 5 January 2014

2012 (pre-Christmas)

  1. Maint Plan Logs – 13 December 2012
  2. Service Broker Out of Control – 14 December 2012
  3. Backup, Job and Mail History Cleanup – 15 December 2012
  4. Exercise for msdb – 16 December 2012
  5. Table Compression – 17 December 2012
  6. Maintenance Plan Gravage – 18 December 2012
  7. Runaway Jobs – 19 December 2012
  8. SSRS Schedules – 20 December 2012
  9. Death and Destruction, err Deadlocks – 21 December 2012
  10. Virtual Storage – 22 December 2012
  11. Domain Setup – 23 December 2012
  12. SQL Cluster on Virtual Box – 24 December 2012

New Backup Behavior in SQL 2014

Published on: April 1, 2014

As has been well publicized, today is the official Release To Manufacturing date for SQL Server 2014.  You can read more about all of that here.

Something that hasn’t received much word is a new feature that is a game changer.  I’m not referring to the advancements with the In-Memory OLTP (aka Hekaton).  The real game changer in my opinion is the way backups will be treated in 2014.

SQL Server 2014 brings the added functionality of encryption to database backups.  This is a significant improvement for securing data at rest.  This functionality applies to databases that are TDE enabled as well as those that are not.  It also applies to backups that are compressed and backups that are not.

The beauty of this functionality is that all backups will be encrypted now by default.  What this means is that you need not configure anything on your end to make it happen.  Despite it being enabled by default, you can change the encryption method should you choose.

Another interesting note with this new default behavior is that all of your database backups will fail out of the box.  You might ask why.  Well, there are some pre-requisites that must be met in order for the encrypted backup to succeed.

Here are those pre-reqs.

  1. Create a Database Master Key for the master database.
  2. Create a certificate or asymmetric key to use for backup encryption.
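Translated to TSQL, the pre-reqs (and a backup that uses them) look something like this – the names, password, and path are placeholders:

[codesyntax lang="tsql"]
USE master;
GO
/* 1. Database Master Key in master (use a real password) */
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Us3AStr0ngerP@ssw0rdTh@nThis!';
GO
/* 2. Certificate to protect the backups */
CREATE CERTIFICATE BackupEncryptCert
    WITH SUBJECT = 'Certificate for backup encryption';
GO
/* With the pre-reqs in place, an encrypted backup can be taken */
BACKUP DATABASE SomeDB -- placeholder database name
TO DISK = 'C:\Backups\SomeDB.bak'
WITH COMPRESSION
   , ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptCert);
[/codesyntax]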

If you have not created your DMK, your backups will fail and you will be none the wiser until you try to restore that backup.  That is really the way you want to conduct your duties as a DBA, right?  You guessed it: the backup shows that it succeeds, yet it does nothing.

As you move forward with your SQL 2014 implementation, ensure you create those DMKs and ensure your backups are safe.

Oh and in case you haven’t noticed, pay attention to today’s date.

 

Day 4 – Broken Broker

This is the fourth installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers


On a recent opportunity to restore a database for a client, I experienced something new.  I thought it was intriguing, and it immediately prompted some questions.  First, let’s take a look at the message that popped up during the restore, and then at what was done to resolve the problem.

 

Query notification delivery could not send message on dialog ‘{someguid}’. Delivery failed for notification ‘anotherguid;andanotherguid’ because of the following error in service broker: ‘The conversation handle “someguid” is not found.’

My initial reaction was “Is Service Broker enabled?”  The task should have been a relatively straightforward database restore, followed by setting up replication.  The next question that popped up was “Is SB necessary?”

Well, the answers that came back were “Yes” and “YES!!!”  Apparently, without SB, the application would break in epic fashion.  That is certainly not something that I want to do.  There are enough broken brokers and broken applications without me adding to the list.

Occasionally when this problem arises it means that the Service Broker needs a “reset.”  And in this case it makes a lot of sense.  I had just restored the database and there would be conversations that were no longer valid.  Those should be ended and the service broker “reset.”

The “reset” is rather simple.  First, a word of warning – do not run this on your production instance, or any instance, without an understanding that you are resetting SB and that conversations could get hosed.

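The statement itself boils down to the following (the database name is a placeholder):

[codesyntax lang="tsql"]
/* Assign a fresh Service Broker GUID and end existing conversations.
   ROLLBACK IMMEDIATE boots other connections so the ALTER can proceed. */
ALTER DATABASE RestoredDB
SET NEW_BROKER WITH ROLLBACK IMMEDIATE;
[/codesyntax]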

For me, this worked like a charm.  There was also substantial reason to proceed with it.  If you encounter this message, this is something you may want to research and determine if it is an appropriate thing to do.

Day 3 – Reviewing Peers

Published on: December 27, 2013

This is the third installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time

Remember back in the day when Napster was really popular?  I’m sure it is still popular now – just not in the same league as in the early years.  Napster pretty much made some revolutionary changes in file-sharing across the internet.  Now the algorithms and methods have become more advanced and use a hive approach, but it still pretty much boils down to the setup that Napster used – peer to peer.

[Image: peer-pressure]

In the P2P file-share world, every node had a part to play.  If you downloaded a file, your machine could also upload that file or other files to the network for other users.  This approach required full files in order to work.

[Image: p2p_net]

In the Hive approach, the files are broken up into chunks.  So you still participate on a P2P basis, but you no longer need to have the complete file to participate.  (I am probably over-generalizing, but that is ok – the point is coming soon.)  This helped transfers be quicker and the P2P network/hive to be larger (in case you were wondering).

Now, let’s take that idea and move it down to a smaller chunk of data.  What if we did that with a database and only sent a record at a time to a partner and that partner could send a few records back to the first partner/peer?  Now we have something that could be pretty usable in many scenarios.  One such scenario could be to sync data from the same database at ten different locations (or maybe 100 different locations) so all locations would have current information.

Well, SQL Server does have that technology available for use.  Coincidentally enough, it is called Peer-to-Peer replication.  Truth be told, it is really more like two-way transactional replication on steroids.  In SQL 2008, you had to set up transactional replication in order to create the P2P.  But in SQL 2012, there is now an option among the publication types for Peer-to-Peer.

Setting up P2P replication in SQL 2012 is pretty easy to do.  Here is a quick step-through on doing just that.  I will bypass the setup of the distributor and jump straight into setting up the publication, through to the point of adding peers.  From that point, it will be left to you to determine what kind of subscription (push/pull) you use and to figure out how to configure those types.
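For those who prefer a script to the wizard, the publication portion translates to roughly the following (a sketch only – the names are placeholders and several options are left at their defaults):

[codesyntax lang="tsql"]
/* Rough sketch of creating a P2P publication via TSQL */
EXEC sp_replicationdboption
      @dbname = N'SalesDB'
    , @optname = N'publish'
    , @value = N'true';

EXEC sp_addpublication
      @publication = N'P2P_Sales'
    , @enabled_for_p2p = N'true'
    , @allow_initialize_from_backup = N'true'
    , @immediate_sync = N'true'
    , @sync_method = N'native'
    , @repl_freq = N'continuous'
    , @status = N'active';

/* Articles are then added one at a time */
EXEC sp_addarticle
      @publication = N'P2P_Sales'
    , @article = N'Orders'
    , @source_object = N'Orders'
    , @source_owner = N'dbo';
[/codesyntax]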

Step-through

The first step is to expand the tree in SSMS until you see replication and then to expand that to see “Local Publications.”  From “Local Publications,” right click and select “New Publication.”

[Image: menu]

Then it is almost as easy as following the prompts as I will show in the following images.  You need to select the database you wish to be included in the P2P publication.

[Image: db_selection]

Then it is a matter of selecting the publication type.  Notice here that Peer to Peer has been highlighted.

[Image: repl_selection]

Of course, no replication is complete without some articles to include in the replication.  In this case, I have chosen to just replicate a few of the articles and not every article in the database.  When replicating data, I recommend being very picky about what articles (objects) get included in the replication.  No sense in over-replicating and sending the entire farm across the wire to Beijing, London, Paris and Moscow.

[Image: table_selection]

Once the articles are selected, it will be time to set up the agent security.  Again, this is pretty straightforward.  And in my contrived setup, I am just going to rely on the SQL Server Agent service account.  The screen will inform you that this is not best practice.  I will leave that as an exercise for you to explore.

[Images: agent_security, log_reader_security]

With that last piece of configuration, the publication is ready.  Just click your way through to finish.

Once the publication is complete, it is time to add a subscriber to the publication.  That is easily accomplished by right clicking the publication.  Since this is a P2P publication, we need to select “Configure Peer-To-Peer Topology…”

[Image: p2p_topology_menu]

Selecting that menu option will bring up the wizard.  The first step in the new wizard is to pick the publisher, and the publication at that publisher that needs to have the topology configured.

[Image: publication_selection]

After selecting the publisher and publication then I can add nodes to the P2P topology by right-clicking the “map” (as I like to call it) area.  Select “Add a New Peer Node” from the menu and then enter the appropriate details for the new subscriber.

[Image: add_node]

It is here that I will conclude this short tutorial.  Configuring the topology is an exercise best left to each individual circumstance.  Configuring where the pull subscribers will be and where the push subscribers will be is almost an art.  Have fun with it.

I have had the opportunity to use this kind of setup on a large multi-node setup across several sites.  It runs pretty smoothly.  Sometimes it can get to be a hair-raising event when a change gets introduced that borks the schema.  But those are the events that permit you to learn and grow and document what has happened and how to best handle the issues in your environment.

I have even taken a multi-site P2P setup and just added a one-direction subscriber (as if it were a transactional publication) so the subscriber could just get the information and run reports without pushing changes back up into the rest of the topology.  That also works pretty well.  Document the design and be willing to change it up in case there appears to be latency and too much peer pressure.

12 Days of Christmas 2013 Day 2

Published on: December 26, 2013

This is the second installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement

[Image: burningtime]

Recently I was able to observe an interesting exchange between a couple of key people at a client.  That exchange gave me a bit to ponder.  I wanted to recount a bit of that exchange here.  All names have been changed – well, you know how that goes.

Accountant Joe came in early one wintry morning.  He was gung-ho and ready for the day ahead.  Joe had huge plans to finish counting all of the beans and get his task list done for the day.  You see, taskmaster Judy had been harping on him significantly over the past week to get his beans counted.

On this frosty morning, Joe was zipping along.  As more and more people filed into the office from the various departments, Joe was still contentedly counting his beans.  That only lasted for a few fleeting moments with everybody in the office though.

Suddenly Joe could no longer count the beans.  The beans Joe was counting were served up via the backend database.  And since the beans were running too slow, Joe called the helpdesk to have them fix the database.  A few moments later, Sally called the helpdesk too.  Sally was complaining about things being horribly slow as well.  Sally was trying to open the company calendar (Sally is the executive secretary).

More and more calls were coming in to the helpdesk from various departments and every user base in the company.  The helpdesk was busy fighting this fire or that fire.  Finally, news of the slowness was escalated to the DBA, Dillon, so he could investigate why the beans were so slow on this frosty day.  As Dillon investigated, he noticed that IO stalls were off the charts.  He was seeing IO stalls in the hundred-second range instead of the millisecond range like normal.
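As an aside, the kind of check Dillon ran looks something like this (one of several ways to eyeball IO stalls):

[codesyntax lang="tsql"]
/* Cumulative IO stall numbers per database file since instance startup */
SELECT DB_NAME(vfs.database_id) AS DatabaseName
     , vfs.[file_id]
     , vfs.io_stall_read_ms
     , vfs.io_stall_write_ms
     , vfs.num_of_reads
     , vfs.num_of_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
[/codesyntax]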

Like a diligent DBA, Dillon immediately escalated the issue to the sysops team, who was responsible for the SAN (yeah, he notified his manager too).  Bill from sysops promptly responded.  Sadly, the response was “I am too busy at the moment.”

After much pestering, Bill finally became available and was ready to help – 4 hours later.

As it turns out, the SAN that housed all company shares, applications, databases and even Exchange was down to about 30GB free space.  Due to the lack of free space, the SAN degraded performance automatically to try and prevent it from filling up entirely.  Bill knew about this pending failure and had ordered extra storage – which sat on his desk for 2+ weeks.

The entire company was essentially down because Bill ended up being too busy (in a meeting).  Though the issue was eventually resolved – the sting has yet to fade.

When faced with an outage situation, let this story be your gift to remind you of how not to treat the outage.

On the Twelfth Day…

Bacon wrapped frog legs (twelve of them) for Christmas.  No more drumming for these guys!!

What could be better than bacon wrapped frog legs?  Oh yeah, more Virtual lab setup.

We will delve into setting up a SQL cluster today.  We will also cover some high-level tips for dealing with VirtualBox.  This will be good information and the type of stuff I would have liked to have known heading into setting up a virtual lab.

Season Cleaning First.

On the Twelfth Day of pre-Christmas…

My DBA brought to me a virtual SQL cluster.  And with that cluster, we have a few tidbits for using VirtualBox.

The first tidbit is an administration aid.  Occasionally it is good to have similar machines grouped together.  At the same time, it is also necessary to start multiple virtual machines at the same time.  This is done through groups in VirtualBox.

Here you can see some of the groups that I have created.  If I right-click on a machine name, I will be presented a menu that has the Group option.

Once I have a group created, highlighting the group name presents a few different options, as shown in the following image.

The notable options here are “Ungroup,” “Rename Group,” and “Add Machine.”  Another option is “Start.”  Though this option is also present on the machine menu, the behavior here is different: it allows you to start the entire group.  This can be a handy tool when dealing with a cluster, for instance.

The next handy tidbit is the snapshot.  A snapshot allows a point-in-time image of the VM to be taken so different configurations can be tested – and then quickly reverted if necessary.  Here is what I have for one of my VMs in the snapshot manager.

From this very same screen, you can also see one of the many methods available to create a clone of a virtual machine.  The clone icon is the little button above the right-hand pane that looks like a sheep.  Cloning a VM is a quick way to create several machines for various purposes.  As you will hear from many people, you should build a base image first, then run sysprep against it.  Sysprep is necessary in order to help prevent problems down the road.

The next tidbit for today is in regards to the file locations for virtual machines and virtual disks.  I recommend changing the default path for the VM files.  This can be done through the preferences option on the file menu.  Shown in the attachment is what it may look like if you have not changed it.  Notice that the default path goes to your user profile directory.

Ignore the red text on this page for now.  We will not be discussing the Proxy.

The last tip is in the network settings within the preferences that we already have open.  In the network settings, we can find an option to configure DHCP settings for the Host-Only Ethernet Adapter.  These are settings you may want to configure to ensure you have more control over the environment.  It is also helpful when looking to configure those IP settings for the FreeNAS that we have already discussed.

As I wrap up these tidbits, I have decided that this is a lot of information to soak in at this point.  So in the spirit of Christmas, I have decided to finish off the clustering information in a 13th day post.  This final post may or may not be available on Christmas day.  Worst case it will be available on the 26th.

Part of that reason is I want to rebuild my lab following the instructions I will be posting and I need time to test it.  I want the instructions to be of great use.

Please stay tuned as we conclude this series very soon.

On the Eleventh Day…

Yesterday we had an introduction into setting up a virtual lab to help the DBA learn and test new technologies while improving his/her own skill set.

Today we will continue to discuss the building of a virtual lab.  Today we will get a little closer to the SQL portion of things as we will be installing a familiar operating system to SQL Server.

The operating system will be Windows 2008, and the version of SQL Server will be 2008 R2.  I chose these specifically because, at the time I built out my lab, I was setting up the environment to help me study for the MCM exams.

As a sidebar, I was just informed by a friend about another blog series that is also currently discussing setting up virtual machines in VirtualBox.  Fortunately, his series is based on Windows 2012 and SQL 2012 – so there is a bit of a difference.  The author of that series is Matt Velic and you can read his articles on the topic here.

I’ll be honest, upon hearing that news I had to go check out his articles to make sure I wasn’t doing the exact same thing.  And while there may be a little overlap, it looks like we have different things that we are covering.

And now that brings us to recap time.

On the Eleventh Day of pre-Christmas…

The next pre-requisite for this lab is to install a Domain Controller and Active Directory.  For this Domain Controller, I have the following Virtual Box settings.

  • A single Dynamic Virtual Disk of 20GB
  • 2 Network Adapters (1 NAT and 1 Internal)
  • 1024 MB memory

To install the operating system, we will mount the ISO image the same as we did for the FreeNAS in yesterday’s post.  This is a Windows setup, and I will not cover that.

Once you have installed the operating system, the first thing to do is to install the guest additions for Virtual Box.

With guest additions installed, next we will turn to the network adapters.  I have two adapters installed for good reason.  One adapter is visible to the virtual network and will be used for the VMs to talk to each other.  The second adapter is installed so I can get windows validated and so patches can be downloaded and installed.

Speaking of patches, this is where we want to make sure the operating system is patched.  Run Windows Update, finish all of the requisite reboots, and then come back to the network control panel.  Prior to installing the domain, disable the external NIC.  We do this to limit the potential for errors when joining the subsequent machines to the domain.

For the Internal adapter, I will also configure a static IP address as shown here.

Let’s now setup the domain and domain controller on this machine.  From Server Manager, right click roles and select Add Roles.  From the new screen, select Active Directory Domain Services and DNS Server.

You are now ready to configure your domain.  I am going to allow you to use your favorite resource for the directions on configuring a domain in Windows 2008.  After the domain has been configured, then enable the external network adapter.

The final step is to configure DNS.  The main concern in DNS to configure is the reverse lookup zones.  I have three subnets (network address ranges) that I will configure.  The relevance of these three zones will become apparent in the final article of the lab setup mini-series.  The configurations will be along the lines as seen in this next screenshot.

This gets us to where we can start building our SQL Cluster.  We will cover that in the next installment.

On the Tenth Day…

Silver and Gold have a way of really bringing the look and feel of the Christmas season.

Silver and Gold also seem to represent something of greater value.

We are now into the final three articles of the 12 Days of pre-Christmas.  And with these three articles, I hope to bring something that is of more value than anything shared so far.

Of course, the value of these articles is subjective.  I have my opinion as to why these are more valuable.  I hope to convey that opinion as best as possible to help bring out as much value as can be garnered from these articles.

Let’s first recap what we have to date.

On the Tenth Day of pre-Christmas…

My DBA gave me an education.  Sure, every day so far in this series could possibly be an education.  This is an education via a lab.  Every DBA should have a lab to be able to test features and hone skills.  A lab is a better place to do some of the testing that needs to be done than the DEV, QA, or even production environments.

Think about it: do we really want to be testing the setup of clustering in the DEV environment and potentially impact the development cycle?  I’d dare say no.

Unfortunately, reality does not always allow for a lab environment to be accessible to the DBA.  So the DBA needs to make do with other means.  It is due to these types of constraints, that I am devoting the next three days to the setup of a lab.  This lab can even be created on a laptop.  I created this lab on my laptop with only 8GB of ram.  I was quite pleased to see that it performed well enough for my testing purposes.

We will begin with an introduction to the technology used – VirtualBox.  I will also discuss the creation of enough virtual machines to create a SQL Cluster (domain controller, two sql boxes, and a NAS) along with the configuration steps to ensure it will work.

For this lab, we will be using VirtualBox.  You can download VirtualBox here.  And yes, the tool is one that is provided by Oracle.  Two of the reasons I want to use VirtualBox are the ability to install multiple operating systems and the fact that the tool is currently free.  Another benefit is that I can easily import virtual machines created in VMware as well as Microsoft Virtual Server/Virtual PC (I have not tested any created in Hyper-V).

While you are downloading the Virtual Box app, download the Extension Pack as well.  Links are provided for the extension pack on the same page as the application download.  Be sure to download the Extension Pack for the version of Virtual Box you download.

The version of VirtualBox I will be using for this article is 4.2.2.  As of the writing of this article, a new version has been released – 4.2.6.  The differences in versions may cause the instructions in these articles to be inaccurate for 4.2.6.  You can use whichever version you deem appropriate.  I just won’t be covering version 4.2.6 and don’t know if the screens are different or if the settings are different.

You can check your version in the Help | About menu.

For this lab, we have a few things that will be required prior to setting up the SQL Cluster.  Two big components of this infrastructure are Storage and a Domain.  We are going to simulate shared storage through the use of FreeNAS.  We will be discussing FreeNAS today.

For starters, we can download FreeNAS from here.  You might be able to find a few configuration guides online for FreeNAS.  Most of them seemed to be for really old versions and were less than usable for the version that I had downloaded.  All settings to be discussed today are pertinent to FreeNAS-8.3.0-RELEASE-x64 (r12701M).

To setup FreeNAS, we will need to have a Virtual Machine configured with the following settings.

  • A BSD VM with FreeBSD as the version.
  • Ensure the motherboard settings only has the “Enable IO APIC” setting checked.
  • Three Virtual Disks (1 for NAS OS, 1 for SAN Storage, and another for a Quorum)
  • 512 MB memory
  • 2 Network Adapters (1 Internal and 1 connected to the Host-Only Adapter)

Despite the FreeNAS actual disk requirements being rather small, any fixed disk size less than 2GB causes mount errors.  Any amount of memory less than 512MB also causes a mount problem.  These settings are the minimum configurations to save the hair on your head.

The two network adapters are more of a strong suggestion.  I was able to get it to work with only one adapter, but it was more hassle than it was worth.  I found it easier to configure for use by the cluster later if I had two adapters.  The two-adapter configuration also allows easier administration from within the VM environment as well as from the host machine.

One other thing to do is to mount the FreeNAS ISO that has been downloaded to the CD drive that is created by default with the VM creation.  I mount the ISO before booting by opening the settings for the VM within Virtual Box.  On the storage screen, highlight the “Empty” CD Icon in the middle then click on the CD Menu Icon on the far right as shown below.

Navigate to the folder where the FreeNAS ISO is saved and then click ok until you are back at the Virtual Box manager screen.  You are now ready to start the machine and finish the install and then configure.

Once powered on, you should eventually come to the following screen.

Select to Install/Upgrade.  From here, you will see a few more prompts such as the next screen to select the installation location.

These should be pretty straightforward installation options for the IT professional.  I will not cover all of the installation prompts.  Once the install is finished, you will need to reboot the VM and un-mount the installation media.  The system will then come to the following screen.

Now that we are at the console screen, the next step is to configure the Network Interfaces.  You can see that I have already done this based on the IP addresses seen at the bottom of the screen.  I will leave the configuration of the IP addresses to you.  Both the internal network and the host-only network will need to be configured.  The host network should be the second adapter.  Keep track of the IP addresses that have been configured.  We will need to use them later.

In a browser window we will now start configuring the storage to be used by our Lab.  In the address bar, we will need to input the address we configured for the host network.  In my case, 192.168.56.103.  When that page loads, the first thing we need to do is change the Admin password.

The default password is empty.  Pick a password you will remember and that is appropriate.  With that done, we can configure the storage.

The next setting I want to configure is iSCSI.  In order to use the volumes that we create, we must enable the iSCSI service.  In the top section, click the Services button.  This will open a new tab in the web browser.  On the Services tab, we need to toggle the slider for iSCSI to the “ON” position as shown in the image.

Once toggled, we can configure the iSCSI settings for the volumes we will now create.  From here, we click on the storage tab.  Next, click on the Volume Manager Button.  In order for the disks to be imported, we have to use volume manager.  The Import Volume and Auto Import Volume must serve other purposes – but they don’t work for importing a new volume.  Here is a screenshot demonstrating what needs to be configured.

With the Volume created, a ZFS volume must next be created from within the storage management.  We do this by clicking the “Create ZFS Volume” icon next to the main volume we just created.  This icon is illustrated as the icon on the far right in the next image.

Once that icon is clicked, you will be presented with a new dialog.  The dialog is demonstrated in the above image.  Give the Volume a Name and then give it a size.  Note that you must specify a storage unit (m or g for example) or you will receive a pretty red error message.

Now go back to the Services tab where we enabled iSCSI.  There is a wrench icon next to the toggle to enable the service.  Click on this wrench and a new tab will be opened (again within the FreeNAS webgui) and the focus will be switched to this new tab.  On the “Target Global Configuration” ensure that Discovery Auth Method is set to “Auto.”  If it is not, make the change and click save at the bottom.

Next is the Portals.  The portals should be empty so we will need to add a portal.  By default, only one IP address is displayed for configuration on a new Portal entry.  We want to configure two IP addresses.  First, select 0.0.0.0 from the IP Address drop down on the new window that opened when clicking on “Add Portal.”  Then select “Add extra Portal IP”.

Next is to configure an Initiator.  For this lab, I created one Initiator, specifying ALL for the Initiators and Authorized Network as shown here.

With an initiator and a portal in place, we now proceed to the configuration of the Targets.  I have configured three targets and the main difference is in the name.  They should be configured as shown here.

Almost done with the setup for the storage.  It will all be well worth it when we are done.  We need to configure Device Extents and then Associate the targets, then we will be done.

Like with the Targets, I have three device extents configured.  The configuration for each is the same process.  I want to give each a name that is meaningful and then associate the extent to a disk that we imported earlier.

Last for this setup is the target-to-extent association.  This is a pretty straightforward configuration.  I named my targets the same as the extents so there was no confusion as to which should go with which.

That wraps up the configurations needed to get the storage working so we can configure a cluster later on.  Just getting through this configuration is a pretty big step in getting the lab created for use in your studies and career enhancement.

Next up in this series is to show how to configure (in limited detail) a domain and DNS, and then to install and configure a cluster.  Stay tuned, and I will even throw in a few tidbits here and there about VirtualBox.

I didn’t include every screenshot possible throughout the setup of FreeNAS and the configuration of iSCSI.  Part of the fun and education of a lab is troubleshooting and learning as you go.  If you run into issues, I encourage you to troubleshoot and research.  It will definitely strengthen your skill-set.
