Incorrect Syntax – What?

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: November 17, 2017

Intuitive errors are very helpful when trying to troubleshoot a problem. On the other hand, when we receive an error that is vague, we sometimes find ourselves wanting to throw the laptop out the window. What do we do though when the error message really does look to be pretty intuitive but we can’t figure out where the problem is?

Today, I am going to cover one of these falsely intuitive errors. As it turns out, this is a problem that I had actually forgotten about since I had not seen it in many years. The error is shown here.

This error looks pretty straightforward, right? It tells me there is an incorrect syntax somewhere in my query. Running into a syntax error somewhere along the line happens to all of us at one time or another. Because of that, I would not be terribly alarmed by the error and would just proceed to fixing the syntax problem. Here is the query that is throwing that error.

Now is the time to start pulling out your hair. There is no syntax error in the query. Go ahead and look it over 10-15 times. I know I did.

Ok, if there is no syntax error, then what could possibly be the real problem? Is the database corrupt? Maybe a system table is corrupt? Grasping at straws here, but could it possibly even be some sort of royally screwed-up permissions?

Everything seems to be checking out properly. There is no corruption whatsoever. The laptop is soon to be launched at this point, right? Ok, maybe not launched, because this is a simple query. But had this been an intense, complicated production query, something really might get launched as the frustration mounts.

Simple Solution

Well, since the error does not fit the crime, maybe this is one of those really odd errors that truly means something else. In this case, if we dissect the query we will probably notice that there is a DMO being called here. If I query sys.stats directly there is no problem but joining to sys.dm_db_stats_properties I get the error. If I run the query from the master database, the query runs without error. In fact, running the query from within a connection to most databases succeeds.

Now we have two clues to the problem. It only fails in some databases and only fails when the DMO is used in the query. Let’s take a look at database properties and see what we can find. In this scenario I am finding this little nugget in the affected databases.

This database happens to be in SQL Server 2000 compatibility mode. I wonder why that may be and proceed to investigating as well as fixing it. Let’s see what happens after changing the compatibility mode to something more recent (and yes this is an OLD version of SQL Server and the whole thing should be updated anyway but that is a discussion for another time).

If we step through this whole script, we will see that this is a script that will reproduce the entire scenario from break to fix. First thing is to set the compatibility mode to 80 (SQL Server 2000), then run the query (results in the error). Next I change the compatibility mode to 100 (SQL Server 2008) and rerun the query (no error this time) and I get all of the stats with properties that I was hoping to get. Then I conclude with confirming that I am in the correct compatibility mode.
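The original repro script did not survive in this copy. A minimal sketch of the same break-to-fix sequence might look like the following; the database name is illustrative, and the underlying cause is that the APPLY operator used to join to the DMO is not valid syntax under compatibility level 80.

```sql
-- Hedged sketch (not the author's original script).
-- Step 1: set compatibility to 80 (SQL 2000) to reproduce the break.
ALTER DATABASE SomeDB SET COMPATIBILITY_LEVEL = 80;
GO
USE SomeDB;
GO
-- Fails with "Incorrect syntax near '.'" under level 80:
SELECT s.name, s.stats_id, sp.last_updated, sp.rows, sp.modification_counter
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp;
GO
-- Step 2: change compatibility to 100 (SQL 2008) to fix it.
ALTER DATABASE SomeDB SET COMPATIBILITY_LEVEL = 100;
GO
-- Same query now succeeds and returns stats with properties:
SELECT s.name, s.stats_id, sp.last_updated, sp.rows, sp.modification_counter
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp;
GO
-- Step 3: confirm the compatibility mode.
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'SomeDB';
```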

Conclusion

Errors will happen as we work with TSQL – that can be expected. Sometimes we type too fast or just miss something here or there. When we get a syntax error, it really sets our mind to focusing on it being a mistake that we made. In this case however, the error is thrown for a syntax problem because the DMO is not accessible in compatibility mode 80.

Due to the straightforward error – we may spend a little too much time looking at the wrong thing. Keep your mind open to the error being somewhere other than your code should you see a syntax error near “.” in your endeavors.

What’s that SSIS Password

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: October 30, 2017

A recurring theme over the past several weeks (there are always recurring themes it seems) has been an issue that relates to SSIS. Clients will call for help with an SSIS package that has started failing and they are stuck trying to figure it out. They are all stuck at the same spot in the process – trying to get the package open. As it turns out somebody has decided to password protect the package in each case and the client is unable to open the package because they don’t know the password.

Inevitably they are all stuck at the following screen:

Granted this screenshot is for the password prompt for a project password – it really is the same issue if it is a package password. This seems like pretty good security right? If nobody knows the password then nobody can alter the package, right? At least that seems to be the prevailing course of thought.

Of course this introduces various other issues, one of which I alluded to already – a failing package. What if the requirements change? What if the package needs to be migrated? What if the package needs to be documented from stem to stern? Even better – what if this type of security is not all it is purported to be?

SSIS Security

Microsoft has provided various “protection” levels for securing sensitive information within a package. These levels can be set from within the GUI or from the dtutil utility. Here are some of those settings.

Setting                       Value  Description
ServerStorage                        Rely on SQL Server database roles for protection. Only valid if saved to msdb and not to the file system.
DontSaveSensitive             0      Suppresses the values of sensitive properties in the package when the package is saved.
EncryptSensitiveWithUserKey   1      Uses a key that is based on the current user profile to encrypt only the values of sensitive properties in the package.
EncryptSensitiveWithPassword  2      Uses DPAPI to encrypt sensitive values in the package based on a user-generated password.
EncryptAllWithPassword        3      Encrypts the entire package based on a user-generated password.
EncryptAllWithUserKey         4      Uses a key based on the current user profile to encrypt the package. Only the user that encrypted the package can open or run the package.

Based on these descriptions, a look at the package contents (via notepad) would look something like this:

The package will have that ProtectionLevel value added to the xml of the package with the selected value based on the descriptions from the previous table. In the case of this image, it appears that the package in question has been set to “EncryptSensitiveWithPassword” which correlates to a value of 2. This is also what is causing that password prompt to be displayed when opening the package.

So What’s that SSIS Password?

First, a bit of a birdwalk. Some will say you don’t need that password. They say you can go ahead and just change the “ProtectionLevel” value in the project/package XML and all will be rosy. That may or may not work. Try it at your own risk. That said, if your ProtectionLevel is at a value of 2, there is an alternative method by which you can retrieve that password. I am going to show you just how to do that.

This method is going to need to assume that the package is being executed via a SQL Agent job and that you also have access to view job properties on the server in question.

Let’s just jump straight to a script to help us with that.
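The script itself did not survive in this copy. A minimal sketch, assuming the package is run from a SQL Agent job step whose dtutil/dtexec command carries the /DECRYPT switch, might look like this:

```sql
-- Hedged sketch (not the author's original script): look through SQL Agent
-- job steps for SSIS executions that pass a package password via /DECRYPT.
SELECT j.name       AS job_name,
       s.step_name,
       s.command    -- the password appears after the DECRYPT switch
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobsteps s
      ON s.job_id = j.job_id
WHERE s.subsystem = 'SSIS'
  AND s.command LIKE '%DECRYPT%';
```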

Take note here that I am only querying the msdb database. There is nothing exceedingly top secret here – yet. Most DBAs should be extremely familiar with these tables and functions that I am using here.

What does this show me though? If I have a package that is being run via Agent Job in msdb, then the sensitive information needs to be decrypted somehow. So, in order to do that decryption the password needs to be passed to the package. As it turns out, the password will be stored in the msdb database following the “DECRYPT” switch for the dtutil utility. Since I happen to have a few of these packages already available, when I run this particular query, I will see something like the following in my results.

Now, let’s go ahead and take one of those packages and test this out. I will just take the password that was listed for one of the packages, find the path of that package and then try to open the package using the password I just uncovered. Let’s keep it simple and say I am trying to open the first package called “Sports Adobe Ticket Sales Data Upload”.

After finding the package in question, I enter the password – shown here.

After I click the “OK” button, I am greeted with the following.

Tada! I now have full access to the package fully decrypted. If I need to change the ProtectionLevel, I can easily do that now as well. From here, I am set to proceed with troubleshooting the failure or any of the other reasons I noted for needing to gain access to the package.

 

Conclusion

Occasionally it becomes necessary to access an SSIS package that may be encrypted with a password. This is easily accomplished (unfortunately) if the package happens to be run via a job in SQL Server. While it is possible to gain access to the package, it would be far better not to need to circumvent the security at all. Instead, it seems a much better idea to properly document the necessary passwords in a common vault so the appropriate people can access the package in the event of an emergency.

Drop That Schema

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: October 27, 2017

An often underutilized, or maybe even misutilized, feature of SQL Server is the database object schema. In the event of the latter, there is an occasional requirement for change. When this need arises, it can turn into a bit of a problem – that is, of course, if we are not well prepared. In this article, I am going to explore one possible change: the dropping of a schema.

What is a Schema?

A schema is a distinct namespace to facilitate the separation, management, and ownership of database objects. It removes the tight coupling of database objects and owners to improve the security administration of database objects (Source: Technet).

Much like a skeleton is a distinct system that helps support the structural integrity of certain biological entities, a schema helps to support a distinct functioning and structure within a database. This is, of course, a very simplistic description and analogy, but it works.

Another way to look at a schema is almost like a blueprint of what has been or will be created within the database. Like many blueprints, a database schema can map out all sorts of “sub-systems” at our disposal within the database. Some of these may include functions, procedures, tables, views and so forth.

What happens when you try to take out the entire blueprint without understanding the relationship to all of the subsystems? In the case of the schema it just so happens that you will be presented with an error message informing you that you are trying to do something that is either ill-advised or not supported or both.

Here is an example of such an error.

Msg 3729, Level 16, State 1, Line 2
Cannot drop schema ‘Maintenance’ because it is being referenced by object ‘AutoStatsHistory’.

This error message proves to be helpful. I obviously have some objects bound to the schema that need to be blown away before I can drop the schema. If I only have a few objects, this may not be too terribly annoying to do one by one. But what if I have 100 or more objects? Now that becomes an entirely different story.

Drop that Schema!

I have run into this very issue where there are far too many objects in the schema to be able to drop one by one. Add to the problem that I am looking to do this via script. Due to the need to drop the schema and the (albeit self imposed) requirement of doing it via script, I came up with the following that will cover most cases that I have encountered.
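The original script is not preserved in this copy. A hedged sketch of the approach – generate the DROP statements rather than execute them – might look like the following; the schema name is illustrative, and a production version would also need to script out foreign keys and other dependencies first.

```sql
-- Hedged sketch (not the author's original script): build DROP statements
-- for the objects bound to a schema, ordered so dependent object types
-- (views, procedures, functions) go before tables, then drop the schema.
DECLARE @SchemaName sysname = N'Maintenance';  -- illustrative schema

SELECT 'DROP ' +
       CASE o.type
            WHEN 'U'  THEN 'TABLE'
            WHEN 'V'  THEN 'VIEW'
            WHEN 'P'  THEN 'PROCEDURE'
            WHEN 'FN' THEN 'FUNCTION'
            WHEN 'IF' THEN 'FUNCTION'
            WHEN 'TF' THEN 'FUNCTION'
       END
       + ' ' + QUOTENAME(s.name) + '.' + QUOTENAME(o.name) + ';' AS DropStatement
FROM sys.objects o
JOIN sys.schemas s
      ON s.schema_id = o.schema_id
WHERE s.name = @SchemaName
  AND o.type IN ('V', 'P', 'FN', 'IF', 'TF', 'U')
ORDER BY CASE o.type WHEN 'V' THEN 1
                     WHEN 'P' THEN 2
                     WHEN 'FN' THEN 3
                     WHEN 'IF' THEN 3
                     WHEN 'TF' THEN 3
                     ELSE 4 END;

-- After reviewing and running the generated statements, the final step:
-- DROP SCHEMA [Maintenance];
```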

And a sample of the output:

As you can see here in the output, I have set the script to generate a series of drop statements for each of the dependent objects within the schema. In addition, the drop statements are ordered to remove objects that may have dependencies on other objects first. A big key here is that this script does not drop the objects for you. Rather it just creates the scripts to do the work. You must review the output and then execute the scripts separately. That means you will be entirely responsible for the results.

Once all of the objects are out of the way, the last statement in the sequence is to finally drop the schema.

You may notice that there are a few things not included in this script. The most notable may be that the Service Broker related objects are not accounted for in this script. I leave that for a later revision.

Conclusion

Occasionally it becomes necessary to remove a schema from the database for one reason or another (e.g. somebody decided to do both Dev and Prod in the same database separated only by schemas). Dropping a schema can become a little bit of a pain without the right tools. This script will help get you on your way to a scriptable solution (to help with documentation of what was changed) and a little less pain while doing it.

Day 12 – High CPU and Bloat in Distribution

This is the final installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New
  9. Queries Going Boom
  10. Retention of XE Session Data in a Table
  11. Purging syspolicy


Working with replication quite a bit for various clients, you might run across some particularly puzzling problems.  This story should shed some light on one such issue I have seen on more than one occasion.

In working with a multi-site, multi-package replication topology, the CPU was constantly running above 90% utilization and there seemed to be a general slowness, even in Windows operations.

Digging into the server took some time to find what might have been causing the slowness and high CPU.  Doing an overall server health check helped point in a general direction.

Some clues from the general health check were as follows.

  1. distribution database over 20GB.  This may not necessarily have been a bad thing, but the databases across all the publications weren’t that big.
  2. distribution cleanup job taking more than 5 minutes to complete.  Had the job been cleaning up records, this might not have been an indicator.  In this case, 0 records were cleaned up on each run.

The root cause seemed to be pointing to a replication mis-configuration.  The mis-configuration could have been anywhere from the distribution agent to an individual publication.  Generally, it seems that the real problem is more on a configuration of an individual publication more than any other setting.

When these conditions are met, it would be a good idea to check the publication properties for each publication.  Dive into the distribution database and try to find if any single publication is the root cause and potentially is retaining more replication commands than any other publication.  You can use sp_helppublication to check the publication settings for each publication.  You can check MSrepl_commands in the distribution database to find a correlation of commands retained to publication.
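As a hedged illustration of that correlation check (the joins are illustrative – verify the column relationships in your own distribution database before relying on the counts):

```sql
-- Hedged sketch: correlate retained replication commands to publications
-- inside the distribution database.
USE distribution;

SELECT p.publication,
       COUNT_BIG(*) AS retained_commands
FROM dbo.MSrepl_commands c
JOIN dbo.MSarticles a
      ON a.article_id = c.article_id
JOIN dbo.MSpublications p
      ON p.publication_id = a.publication_id
GROUP BY p.publication
ORDER BY retained_commands DESC;
```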

Once having checked all of this information, it’s time to put a fix in place.  It is also time to do a little research before actually applying this fix.  Why?  Well, because you will want to make sure this is an appropriate change for your environment.  For instance, you may not want to try this for a peer-to-peer topology.  In part because one of the settings can’t be changed in a peer-to-peer topology.  I leave that challenge to you to discover in a testing environment.

The settings that can help are as follows.

[codesyntax lang=”tsql”]

[/codesyntax]
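The original settings script is not preserved in this copy. In my experience the publication properties most often implicated are immediate_sync and allow_anonymous; a sketch of disabling them (the publication name is illustrative – test first, and recall that this is not an option for peer-to-peer) might look like this:

```sql
-- Hedged sketch: disable allow_anonymous first (it requires immediate_sync),
-- then immediate_sync, so the cleanup job can purge delivered commands.
EXEC sp_changepublication
     @publication = N'MyPublication',   -- illustrative name
     @property    = N'allow_anonymous',
     @value       = 'false';

EXEC sp_changepublication
     @publication = N'MyPublication',
     @property    = N'immediate_sync',
     @value       = 'false';
```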

These settings can have a profound effect on the distribution retention, the cleanup process and your overall CPU consumption.  Please test and research before implementing these changes.

Besides the potential benefits just described, there are other benefits to changing these settings.  For instance, changing replication articles can become less burdensome once these settings are disabled.  Disabling them can help reduce the snapshot load and allow a single article to be snapped to the subscribers instead of the entire publication.

Day 11 – Purging syspolicy

This is the eleventh installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New
  9. Queries Going Boom
  10. Retention of XE Session Data in a Table


Did you know there is a default job in SQL Server that is created with the purpose of removing system health phantom records?  This job also helps keep the system tables that are related to policy-based management nice and trim, if you have policy-based management enabled.  This job could fail for a couple of reasons, and when it fails it can be a little annoying.  This article discusses fixing one of the causes for this job to fail.

I want to discuss when the job will fail due to the job step related to the purging of the system health phantom records.  Having run into this on a few occasions, I found several proposed fixes, but only one really worked consistently.

The error that may be trapped is as follows:

A job step received an error at line 1 in a PowerShell script.
The corresponding line is '(Get-Item SQLSERVER:\SQLPolicy\SomeServer\DEFAULT).EraseSystemHealthPhantomRecords()'.
Correct the script and reschedule the job. The error information returned by PowerShell is:
'SQL Server PowerShell provider error: Could not connect to 'SomeServer\DEFAULT'.
[Failed to connect to server SomeServer. -->

The first proposed fix came from Microsoft at this link.  In the article it proposed the root cause of the problem being due to the servername not being correct.  Now that article is specifically for clusters, but I have seen this issue occur more frequently on non-clusters than on clusters.  Needless to say, the advice in that article has yet to work for me.

Another proposed solution I found was to try deleting the “\Default” in the agent job that read something like this.

(Get-Item SQLSERVER:\SQLPolicy\SomeServer\Default).EraseSystemHealthPhantomRecords()

Yet another wonderful proposal from the internet suggested using Set-ExecutionPolicy to change the execution policy to UNRESTRICTED.

Failed “fix” after failed “fix” is all I was finding.  Then it dawned on me.  I had several servers where this job did not fail.  I had plenty of examples of how the job should look.  Why not check those servers and see if something is different.  I found a difference and ever since I have been able to use the same fix on multiple occasions.

The server where the job was succeeding had this in the job step instead of the previously pasted code.

if ('$(ESCAPE_SQUOTE(INST))' -eq 'MSSQLSERVER') {$a = '\DEFAULT'} ELSE {$a = ''};
(Get-Item SQLSERVER:\SQLPolicy\$(ESCAPE_NONE(SRVR))$a).EraseSystemHealthPhantomRecords()

That, to my eyes, is a significant difference.  Changing the job step to use this version of the job step has been running successfully for me without error.

I probably should have referenced a different server instead of resorting to the internet in this case.  That lesson applies to many things: check a different server, see if there is a difference, and see if you can get it to work there.  I could have saved time and frustration by simply looking at local “resources” first.

If you have a failing syspolicy purge job, check to see if it is failing on the phantom record cleanup.  If it is, try this fix and help that job get back to dumping the garbage from your server.

Day 10 – Retention of XE Session Data in a Table

This is the tenth installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New
  9. Queries Going Boom


 

Gathering event information is a pretty good thing.  It can do wonders for helping to troubleshoot.  What do you do if you need or want to be able to review the captured information in 3 months or maybe 12 months from now?

Retaining the session data for later consumption is often a pretty essential piece of the puzzle.  There is more than one way to accomplish that goal.  I am going to share one method that may be more like a sledgehammer for some.  It does require that Management Data Warehouse be enabled prior to implementing.

When using MDW to gather and retain the session data, you create a data collector pertinent to the data being collected and retained.  In the following example, I have a data collector that is created to gather deadlock information from the system health session.  In this particular case, I query the XML in the ring buffer to get the data that I want.  Then I tell the collector to gather that info every 15 minutes.  The collection interval is one of those things that needs to be adjusted for each environment.  If you collect the info too often, you could end up with a duplication of data.  Too seldom and you could miss some event data.  It is important to understand the environment and adjust accordingly.

Here is that example.

[codesyntax lang=”tsql”]

[/codesyntax]
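The original collector script is not preserved in this copy. A hedged sketch of the shape of such a collector follows; the set name, output table, and the abbreviated query body are all illustrative stand-ins, and the schedule name is one of the schedules installed with MDW by default.

```sql
-- Hedged sketch (not the author's original script): an MDW custom collection
-- set that runs a T-SQL query every 15 minutes and retains results for a year.
USE msdb;

DECLARE @collection_set_id  int,
        @collection_set_uid uniqueidentifier,
        @collection_item_id int;

EXEC dbo.sp_syscollector_create_collection_set
     @name                  = N'Deadlock History',          -- illustrative name
     @collection_mode       = 1,                            -- non-cached
     @days_until_expiration = 365,                          -- keep 12 months
     @schedule_name         = N'CollectorSchedule_Every_15min',
     @collection_set_id     = @collection_set_id  OUTPUT,
     @collection_set_uid    = @collection_set_uid OUTPUT;

DECLARE @collector_type_uid uniqueidentifier =
    (SELECT collector_type_uid
     FROM dbo.syscollector_collector_types
     WHERE name = N'Generic T-SQL Query Collector Type');

-- @parameters carries the collection query; @frequency is in seconds (900 = 15 min).
EXEC dbo.sp_syscollector_create_collection_item
     @name               = N'System Health Deadlocks',
     @parameters         = N'<ns:TSQLQueryCollector xmlns:ns="DataCollectorType">
                               <Query>
                                 <Value>SELECT ... /* ring buffer XML shred, abbreviated */</Value>
                                 <OutputTable>deadlock_history</OutputTable>
                               </Query>
                             </ns:TSQLQueryCollector>',
     @frequency          = 900,
     @collection_set_id  = @collection_set_id,
     @collector_type_uid = @collector_type_uid,
     @collection_item_id = @collection_item_id OUTPUT;
```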

Looking this over, there is quite a bit going on.  The keys are the following parameters: @parameters and @interval.  The @parameters parameter stores the XML query to be used when querying the ring buffer (in this case).  It is important to note that the XML query here needs to ensure that the values node is capped to a max of varchar(4000), as shown in the following.

[codesyntax lang=”tsql”]

[/codesyntax]
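The capped shred did not survive in this copy either; a hedged illustration of the varchar(4000) cap on the values node, using the common system_health ring buffer pattern, would be:

```sql
-- Hedged illustration: note the 'varchar(4000)' cap on the value() call.
SELECT XEventData.XEvent.value('(data/value)[1]', 'varchar(4000)') AS DeadlockGraph
FROM (SELECT CAST(st.target_data AS xml) AS TargetData
      FROM sys.dm_xe_session_targets st
      JOIN sys.dm_xe_sessions s
            ON s.address = st.event_session_address
      WHERE s.name = 'system_health'
        AND st.target_name = 'ring_buffer') AS Data
CROSS APPLY TargetData.nodes(
     '//RingBufferTarget/event[@name="xml_deadlock_report"]')
     AS XEventData (XEvent);
```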

With this data collector, I have trapped information and stored it for several months so I can compare notes at a later date.

Day 9 – Queries Going Boom

This is the ninth installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New


Ever see an error like this??

The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.

Error 8623
Severity 16

That is a beautiful error.  The message is replete with information and gives you everything needed to fix the problem, right?  More than a handful of DBAs have been frustrated by this error.  It’s not just DBAs that this message seems to bother.  I have seen plenty of clients grumble about it too.

The obvious problem is that we have no real information as to what query caused the error to pop.  The frustrating part is that the error may not be a persistent or consistent issue.

Thanks to the super powers of XE (Extended Events), we can trap that information fairly easily now.  As a bonus, trapping that information is pretty lightweight as far as resource requirements go.

Without further ado, here is a quick XE session that could be setup to help trap this bugger.

[codesyntax lang=”tsql”]

[/codesyntax]
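The session definition is not preserved in this copy. A hedged sketch of such a session follows; the session name and file paths are placeholders I chose, and the predicate on error_number is what requires SQL Server 2012.

```sql
-- Hedged sketch (not the author's original session): capture error 8623
-- along with the text of the offending query. Change both file paths.
CREATE EVENT SESSION [QueriesGoingBoom] ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.sql_text,
            sqlserver.database_id,
            sqlserver.client_hostname,
            sqlserver.username)
    WHERE ([error_number] = 8623)            -- the "complex query" error
)
ADD TARGET package0.event_file (
    SET filename     = N'C:\XE\QueriesGoingBoom.xel',  -- change this path
        metadatafile = N'C:\XE\QueriesGoingBoom.xem'   -- change this path
)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);

ALTER EVENT SESSION [QueriesGoingBoom] ON SERVER STATE = START;
```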

And now for the caveats.  This session will only work on SQL 2012.  The second caveat is that there are two file paths defined in this session that must be changed to match your naming and directory structure for the output files etc.

Should you try to create this session on SQL Server 2008 (or 2008 R2) instead of SQL Server 2012, you will get the following error.

Msg 25706, Level 16, State 8, Line 1
The event attribute or predicate source, “error_number”, could not be found.

Now that you have the session, you have a tool to explore and troubleshoot the nuisance “complex query” error we have all grown to love.  From here, the next step would be to explore the output.

Tools note

Categories: News, Professional, SSC
Comments: No Comments
Published on: March 24, 2011

I was asked a few months ago to allow some advertising on my blog.  I hadn’t given the idea a whole lot of thought but was reminded again today about the request.  For now, I think I will just give the requester a mention and add the tool he wants advertised to the growing list of tools that I have been compiling.

The tool looks like it has some merit and seems like it would be worth the trial.  The tool is called DBDefence.  You can find more about it here.  The purpose of this tool is to encrypt your databases from the outside and prevent people from prying too far into what they shouldn’t.  I haven’t yet tested the tool – but it seems quite interesting.

And since I have a captive audience already, I will add one more tool to the mix.  Like DBDefence, this other tool is security focused.  In SQL Server there is a bug that allows the savvy to view the unencrypted password of the SQL accounts that have logged in since last service start.  One of the passwords that falls into this category is that of the sa account.  Sentrigo has a tool that will clear that password from memory so it can no longer be viewed in clear text.  The name of the tool is Passwordizer.  You can check it out from here.

SQL Server Startup

From time to time I see a question asking how to determine the last time a SQL Server was either restarted or when the last time was that the database was used.  For the first question there is a pretty common answer that works for SQL 2000 and up.  In answering both questions though, I am going to show you three quick similar scripts that can be used to find this information.  The catch is that these queries will not work on SQL 2000.  On the flipside, it should give you an idea of what can be done to find out the same info for SQL 2000.

TempDB

[codesyntax lang=”tsql”]

[/codesyntax]
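The query itself did not survive in this copy; a minimal sketch of the tempdb approach would be:

```sql
-- Hedged sketch: tempdb is recreated at every service start, so its
-- create_date approximates the last SQL Server startup (2005 and up).
SELECT create_date AS SqlServerStartTime
FROM sys.databases
WHERE name = 'tempdb';
```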

A common answer to the question of when was SQL Server last started is to check the Create_Date of the tempdb database.  Since tempdb is recreated every time SQL Server is restarted, it is a pretty accurate timestamp of the server start.  This query was written to work for SQL 2005 and above, but you can still find the create_date of the tempdb database in SQL 2000 by using a different query.

sys.dm_exec_requests

[codesyntax lang=”tsql”]

[/codesyntax]
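This query is also missing from this copy; a hedged sketch matching the description below (first process id for the start time, a CTE with COALESCE over the read columns of sys.dm_index_usage_stats for per-database activity) might be:

```sql
-- Hedged sketch: server start time from the first system session, plus the
-- most recent read/write activity per database.
WITH DbActivity AS (
    SELECT database_id,
           MAX(COALESCE(last_user_seek, last_user_scan, last_user_lookup)) AS LastRead,
           MAX(last_user_update) AS LastWrite
    FROM sys.dm_index_usage_stats
    GROUP BY database_id
)
SELECT r.start_time          AS SqlServerStartTime,
       DB_NAME(a.database_id) AS DatabaseName,
       a.LastRead,
       a.LastWrite
FROM sys.dm_exec_requests r
CROSS JOIN DbActivity a
WHERE r.session_id = 1;       -- first (system) process id
```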

In this query we access one of those Dynamic Management Views available in SQL 2005 and up.  In this case, we look at the first process id, similar to what one may have done with sysprocesses in SQL 2000.  Now is probably a good time to note something happening inside the CTE.  Notice the COALESCE that is being used?  I am taking the max from those fields associated with read-type activities in a database.  This information is pulled from a DMV called sys.dm_index_usage_stats.  This query will get me the most recent activity for reads and writes in each database (notice the group by).

Default Trace

[codesyntax lang=”tsql”]

[/codesyntax]
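The default trace query is likewise missing; a hedged sketch consistent with the description below would be:

```sql
-- Hedged sketch: the default trace records server startup; EventSequence = 1
-- and IsSystem = 1 mark that row (works on 2005, 2008, and 2008 R2).
DECLARE @path nvarchar(260);

SELECT @path = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT StartTime AS SqlServerStartTime
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventSequence = 1
  AND IsSystem = 1;
```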

The last one is to make use of the default trace in SQL 2005, 2008 and 2008 R2.  The default trace records the time of certain events that occur in the database and that includes when the database is starting up.  To find the server startup time in this case, I am checking the EventSequence and IsSystem fields for a value of 1.  As you can see, I also dumped the filepath for the default trace file into a variable and use that in the function to get the data.

When comparing performance of these three options, they pretty much came out even.  Each took its turn performing faster with regard to elapsed time.  However, the default trace method did return a more expensive execution plan every single time.  With how fast these queries run, I’m not really sure that is very conclusive, nor that it would be a heavy concern.  These queries are designed more for the occasional run by the DBA than to be executed millions of times a day.  You decide which will be best for you.  Given the use of the function to get the default trace info, I was mildly surprised that the query performed that well.

Another thing to note is that each of these methods will return a slightly different timestamp.  For instance, the tempdb timestamp gives me 3:21:28 and the sys.dm_exec_requests produces a timestamp of 3:21:36 and the tracefile shows a timestamp of 3:21:24.  I don’t have any hard foundation for why that is – just speculation that seems to make sense.  I will just leave it at that though.

In addition to these methods there is also the option of checking the system event logs to determine the SQL Service startup time.  This information should help you when investigating your databases and server and would be something handy to keep in your toolbox.

Database Tools Follow-up

Categories: News, Professional, SSC
Comments: No Comments
Published on: March 18, 2011

After posting a database tools list yesterday, I found that there were things that I had forgotten and a couple of things that I hadn’t even considered.  In thinking about this, I found myself trying to find a script that I knew I had seen and that I wanted to put on the list.

Trying to find the script got me a bit out of sorts.  I knew I could probably find it on my share at work with my scripts (which reminds me that I really need to get a dropbox or put them all on a thumbdrive to keep with me).  But, I wasn’t at work and wanted to locate this cool script that I knew was recommended by another prominent person (more about that later).  So I pulled out my googlefu.

I tried searching the web for this blog that had the script and tried several combinations of the key words I knew would help me find it.  In doing this search, I came across a few blogs that had created a list of tools that database professionals should check out.  Here is a short list of the blogs I found.

All of these lists are great compilations.  I also found one more list, but it was just a copy of Aaron’s list.  This was extremely disappointing.  Just reference the list or come up with a list of your own.  A SQL Server MVP should never have to copy content from another MVP.  It’s not even that the list is the same – it was a copy of Aaron’s entire post.  I personally hold MVPs to a higher standard and find it disappointing and aggravating when one copies content while being esteemed as a person who has contributed much to the community.

None of these lists had the item for which I was looking.  The item that I wanted was a script to which Brent Ozar (blog) had given high compliments a few months back.  He described it as “a script that’s like my Blitz server takeover script, but even more advanced.  Love it!”  Too bad I couldn’t remember the person, script, or wording he used when I was trying to find it.  Better yet, the page with the reference popped up several times in my searches.  Well, needless to say, I pinged the community and Brent on twitter, and Brent came through.  Here are a few more items to add to the list of tools you should have or try.

  1. 60 Minute Blitz Script (Brent Ozar) – check it out.  It will be worth your time.  If you looked at the other lists you will also see that this script is there and highly recommended.
  2. Diagnostic Information Queries (Glenn Berry) – this was the script I was trying to find.  This is the script that Brent has given high praise – give it a look.
  3. sp_whoisactive (Adam Machanic) – Newest version as of this post is 10.76 and is listed as a beta version.
  4. SSMS ToolsPack (Mladen Prajdic)

So there you have it.  Between my two lists and these other great lists – you should have a great toolbox.
