Extended Events, Birkenstocks and SQL Server

TSQL Tuesday

I bring you yet another installment in the monthly meme called T-SQL Tuesday.  This is the 67th edition, and this time we have been given the opportunity to talk about something I really enjoy – Extended Events.

Props to Jes Borland (blog | twitter) for picking such an awesome topic. There is so much to cover with extended events, it’s like a green canvas ready for anything you can imagine.

Birkenstocks?

birkentstocks

I will save the explanation here for later when hopefully it all ties together for you (well, at least buckles up).

treehugger

While that is all fun and playful, let's get down to the serious side now. One of my favorite quick fixes as a consultant is to come in and find that the server is set to "environmentally friendly" / "green" / "treehugger" mode. You can read more about power saving CPUs from my friend Wayne Sheffield here.

That poor old cpu thing has been beat up pretty good. But how can we tell if the server is running in that mode if the only thing we can do is look in SQL Server (can’t install cpu-z, or don’t have adequate permissions on the server to see windows settings – just play along)? Lucky for us there is this cool thing called Extended Events.

In SQL Server we have this cool event called perfobject_processor. This particular event captures some really cool metrics. One such metric is the frequency. The frequency is an indicator of whether the server has the CPU set to balanced, high performance, or power saver. With that in mind, let's create a session to trap this data and experiment a little with the CPU settings.
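
A minimal sketch of such a session might look like the following. Treat the perfobject_processor event and the cpu_id / numa_node_id actions as assumptions to verify against sys.dm_xe_objects on your build, and the target file path as just an example.

[codesyntax lang="tsql"]
-- Sketch of an event session to capture processor frequency data.
-- The perfobject_processor event and the sqlos.cpu_id / sqlos.numa_node_id
-- actions are assumed to exist on this build; verify in sys.dm_xe_objects.
CREATE EVENT SESSION [ProcessorFrequency]
ON SERVER
ADD EVENT sqlserver.perfobject_processor
(
    ACTION (sqlos.cpu_id, sqlos.numa_node_id)
)
ADD TARGET package0.event_file
(
    SET filename = N'C:\Database\XE\ProcessorFrequency.xel'  -- example path
)
WITH (MAX_MEMORY = 4096KB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
      MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = OFF);
GO

ALTER EVENT SESSION [ProcessorFrequency] ON SERVER STATE = START;
[/codesyntax]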

Well, that looks amazingly easy and straightforward. I am telling the session to trap the additional CPU information such as numa_node_id and cpu_id. You can eliminate those if you wish, but they may be beneficial when trying to identify whether there is an issue on a specific processor.

To experiment, I will break out the age old argument provoker – xp_cmdshell. I will use that to cycle through each of the power saving settings and look at the results. Here is the bulk of the script all together.
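
A rough sketch of that script might look like the following, assuming xp_cmdshell has been enabled and using the well-known Windows power scheme GUIDs (verify them with powercfg /list on your box). The delays just give the session time to capture events under each scheme.

[codesyntax lang="tsql"]
-- Assumes xp_cmdshell is enabled. The GUIDs are the standard Windows
-- power scheme GUIDs; confirm them with: powercfg /list
EXEC xp_cmdshell 'powercfg /setactive 381b4222-f694-41f0-9685-ff5b8b267cd3'; -- Balanced
WAITFOR DELAY '00:01:00';

EXEC xp_cmdshell 'powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c'; -- High performance
WAITFOR DELAY '00:01:00';

EXEC xp_cmdshell 'powercfg /setactive a1841308-3541-4fab-bc81-f71556f20b4a'; -- Power saver
WAITFOR DELAY '00:01:00';
[/codesyntax]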

And now for the XE Parser.
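
A sketch of the parser could look like this; the data element names (frequency, percent_of_max_frequency) are assumptions to verify against sys.dm_xe_object_columns for the event.

[codesyntax lang="tsql"]
-- Sketch of a parser for the event_file target. Field names are assumptions
-- to check against sys.dm_xe_object_columns for perfobject_processor.
WITH XEData AS
(
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\ProcessorFrequency*.xel',
                                         NULL, NULL, NULL)
)
SELECT  event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
      , event_data.value('(event/action[@name="cpu_id"]/value)[1]', 'int') AS cpu_id
      , event_data.value('(event/action[@name="numa_node_id"]/value)[1]', 'int') AS numa_node_id
      , event_data.value('(event/data[@name="frequency"]/value)[1]', 'bigint') AS frequency
      , event_data.value('(event/data[@name="percent_of_max_frequency"]/value)[1]', 'int') AS percent_of_max_frequency
FROM XEData
ORDER BY event_time;
[/codesyntax]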

If I parse the extended event data after each change of the power scheme, I can see the effect of each scheme change in the event session as well as in a tool such as Resource Monitor. Here is what I was able to see with each of the changes.

Balanced Saver

From Resource Monitor:

balanced_cpu

And the XE data:

balanced_results

This is my default power scheme. On my laptop, this is ok. For a production SQL server, this will cause problems.

High Performance

fullpower fullpower_results

Quickly, you should be able to spot that the blue line in the graph and the numeric values from the XE session correlate to the processor giving you everything it has. This is good for SQL Server.

Power Saver

powersaver_cpu

See how that blue line falls off sharply?

 

powersaver_results

 

Supporting that steep fall in the graph, we can see that the XE trapped the percent of max frequency as 36%. You might be lucky and attain 36%. Don’t be surprised if you see something even lower. Please don’t use this setting on a production box – unless you want to go bald.

We can see that we have great tools via Extended Events to help troubleshoot various problems. As I said, this is one of my favorites because it is a very common problem and a very easy fix.

SQL Server is not GREEN! Do not put Birkenstocks on the server and try to turn the server into a tree hugger. It just won't work out that well. Set your file servers or your print servers to be more power conscious, but this is something that will not work well on SQL Server.

Final thought: if you have not figured out the Birkenstocks reference, it plays on a common stereotype in some areas that environmentalists wear woolly socks and Birkenstocks.

No wool socks were harmed in the making of this blog post!

SSIS: Value does not Fall Within the Expected Range

Categories: News, Professional, SSC
Comments: No Comments
Published on: July 17, 2014

Every now and again I find myself working with SSIS for one reason or another. Every now and again I find myself fighting with SSIS because it provides less than useful error messages. This is one of those error messages that can be a pain until you understand what can be causing it. Understanding the cause does not make the message any more useful, but it does take away some of the pain it can and will cause you.

execute sql task

The error message "Value does not Fall Within the Expected Range" is tied to the Execute SQL Task that can be placed in the control flow.

Let’s assume we have such a task in our control flow as demonstrated by the attached image.  We’ll call the Execute SQL Task “Parse Client Code.”

Inside of this task, we will probably have something like what is seen in the next image.

parametermapping

Here we have attached parameters to the task and even assigned those parameters to “variable” names that we might use within the query.  In this case, we are just trying to parse a code from a filename that can be used in downstream processing.  The code might look like the following.

[codesyntax lang=”tsql”]

[/codesyntax]
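
As a purely hypothetical illustration of that kind of parsing logic (the @SourceFileName name and the SUBSTRING/CHARINDEX logic are made up for the example), the query might be written with the mapped parameter referenced by a "variable" name, like this:

[codesyntax lang="tsql"]
-- Hypothetical rendering of the broken version: the mapped parameter is
-- referenced by the "variable" name from the Parameter Mapping screen,
-- which the task cannot resolve.
SELECT SUBSTRING(@SourceFileName,
                 CHARINDEX('_', @SourceFileName) + 1,
                 CHARINDEX('.', @SourceFileName) - CHARINDEX('_', @SourceFileName) - 1) AS ClientCode;
[/codesyntax]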

If I run that task at this point, I will get an error. The error could be for any number of reasons based on the setup that I just showed. The most common is that the Parameter Name is not really a name at all; it should be the ordinal position at which the parameter is used in the script, and the ordinal position is 0 based.

The second issue is the data type, which has been selected in error. The value being passed is a name and not a GUID, which means I should change the data type to VARCHAR using the drop-down available on the parameter screen under Data Type.

The next issue is the use of the variable name in the script itself. One should use a ? instead of variable names. So, this script should be fixed to look like the following.

[codesyntax lang=”tsql”]

[/codesyntax]
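
Keeping the same hypothetical parsing logic, the corrected script references the parameter with a single ? placeholder and assigns it to a local variable:

[codesyntax lang="tsql"]
-- Same hypothetical parsing logic; the parameter is now referenced with a
-- single ? placeholder, which maps to parameter name 0.
DECLARE @FileName VARCHAR(256);
SET @FileName = ?;

SELECT SUBSTRING(@FileName,
                 CHARINDEX('_', @FileName) + 1,
                 CHARINDEX('.', @FileName) - CHARINDEX('_', @FileName) - 1) AS ClientCode;
[/codesyntax]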

And the parameter screen should be made to look like the following.

paramfixed

These quick fixes can eliminate or even prevent what might possibly be a headache when dealing with SSIS.

Now, what if you need to have more than one parameter for your query?  No problem.  The same principles apply.  Just map your parameters according to proper data type and to the ordinal position that the parameter needs to be used within the query and you should be all set.
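
As a hypothetical two-parameter sketch, the first ? would be mapped as parameter name 0 and the second as parameter name 1 on the Parameter Mapping screen:

[codesyntax lang="tsql"]
-- Hypothetical two-parameter version: first ? = parameter name 0,
-- second ? = parameter name 1, each with an appropriate data type.
DECLARE @FileName VARCHAR(256), @ClientCode VARCHAR(10);
SET @FileName = ?;
SET @ClientCode = ?;

SELECT @FileName AS SourceFile, @ClientCode AS ClientCode;
[/codesyntax]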

Using Synonyms to Extend SSIS

Comments: No Comments
Published on: July 3, 2014

There are a million and one uses for synonyms. There are at least that many uses for SSIS. The reality is, not all of those uses are good for you, your data, or your database.

Recently I wrote an article about some good and some bad with synonyms. You can read that article from my work blog site here. Today, I just want to expand on that article a little bit. I glossed over some things pretty quickly in that article that I thought might be fun to explain in a little more detail.

The Bad – a little closer look

First, let's take a little closer look at the bad. More specifically, in the example I used, there was a vendor that created a synonym for everything. I really do mean everything. This is one case where using the word "literally" would be accurate. On the client database, I could run a quick query such as the following and see over 7300 synonyms.

[codesyntax lang=”tsql”]

[/codesyntax]
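
A sketch of such a query might look like this (the derived total-count column is simply a COUNT(*) window aggregate over sys.synonyms):

[codesyntax lang="tsql"]
-- Sketch of a query against the synonyms catalog view, with a derived
-- column showing the total synonym count alongside each synonym.
SELECT  s.name
      , s.base_object_name
      , COUNT(*) OVER () AS SynonymCount
FROM sys.synonyms s
ORDER BY s.name;
[/codesyntax]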

In the case of this client and this vendor, 7300+ synonyms is far too many.  It just led to mass confusion.  If you were to run that query, you might see something like the following image.

massSynonyms

I added a “derived” column to show the total count of synonyms and the record name as it relates to that total.  That is a crazy amount of synonyms.  That just makes me think that somebody got into the krazy kool-aid one day, was bored and gave life to a synonym beast.

The Good – With more detail

On the flip side, in the aforementioned article, I talked about synonyms as a means to tweak performance in SSIS.  Normally I would not tout a synonym as a performance tweak.  So I want to caution that the performance gains are specific to SSIS and a handful of those millions of uses for SSIS.

Let’s just begin with a little bit of background.  For that background, some pictures will be really handy.  So here goes.

SpiceSource

In the preceding image we see a very simple segment of a data flow.

The data source uses a SQL command to fetch the data required for the data flow. In the beginning, it is very straightforward. You probably have some package lying around with something similar to this.

In the following image, we see the SQL command for the data source that was circled in red in the previous image.

SQLWithout

In the next image we see a slight tweak to the query, this time to include a reference to a table that is defined/obfuscated by a synonym.

SQLWith

At this point I can hear some of you saying, "Ok, I see what he is doing," while many others are wondering why I just made the query more complex than the previous example.

Well, as luck would have it, this change serves a few purposes. 1) The data has been staged in a separate database. That database has a different name in every environment (recall the aforementioned article), so the synonym minimizes code changes when deploying the package. 2) The synonym allows us to confirm that there is data in the stage table and that the data matches a ClientCode in the destination table. 3) Lastly, the synonym reduces my dataset, which reduces memory requirements and also gets the data loaded faster (because it is smaller).

In addition to this minor tweak, I can also do something like the following.

WithoutSynonym

In the preceding image, we see two datasources.  Each datasource is subsequently sorted and then eventually joined.  Much like the previous example, due to naming standards and an effort to try and minimize code changes during deployments, at least one datasource is pulling in too much data.  The data is filtered down due to the Join transformation, but this is not very efficient.

WithSynonym

Through the use of a synonym, the datasources can be reduced to a single datasource. This will eliminate the need for the Sort transformations and Join transformation. Removing those three transformations reduces memory requirements. And like the previous example, since we can trim down the number of records, the data flow will run a little bit faster too.

BigSQLWith

As you can see, the code is simple. Adding a synonym into an existing query is not some super TSQL trick or anything; it just gets referenced like any other table. Once again, in this case, the synonym is pointing to a table in a staging database. That table has been loaded as part of an ETL process and now needs to be manipulated a little bit through some other transformations and then eventually inserted into a "target" database.
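
To make that concrete, here is a minimal sketch with hypothetical names (StageDB, dbo.ClientStage, and dbo.Client are stand-ins for the real objects):

[codesyntax lang="tsql"]
-- Hypothetical names throughout: StageDB is the environment-specific staging
-- database and dbo.ClientStage is the staged table sitting behind the synonym.
CREATE SYNONYM dbo.syn_ClientStage
    FOR StageDB.dbo.ClientStage;
GO

-- The data flow source query then references the synonym like any other
-- table, constraining the load to clients that exist in the destination.
SELECT  cs.ClientCode
      , cs.SomeMeasure
FROM dbo.syn_ClientStage cs
    INNER JOIN dbo.Client c
        ON c.ClientCode = cs.ClientCode;
[/codesyntax]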

Conclusion

As with tuning stored procedures or any TSQL, a similar technique was used here: reduce the datasets to contain just the data that is needed for the flow. To facilitate that reduction down to the essential data, I employed synonyms.

The reasons for using a synonym in this case were to: 1) restrict data to precisely what was needed, 2) ensure data being loaded was “constrained” by data in the destination table (e.g. only load for a specific client that does exist), and 3) minimize code changes during deployments.

When dealing with databases that serve the same purpose but follow some absurd naming standard that changes the name between environments, it can become cumbersome to maintain code during deployments.  This is particularly true when dealing with cross database joins or lookups.

September Performance Contest

Categories: News, Professional, SSC
Comments: No Comments
Published on: September 21, 2011

This month SafePeak is sponsoring a contest centered around improving performance in SQL Server.

The host of the contest is my friend Robert Pearl.  You can read the announcement from him about the contest here.  In that link you can also read his submission.  After reading that link by Robert, you can then go to the SafePeak website and read all of the entries there.

When you go there, make sure you vote for my article.  Yeah yeah – shameless plug I know.  I have submitted an article for the contest.  You can read that article from the contest or you can read it from my blog directly.

I am a little biased, but I think my entry is top notch!!  There is not a lot of time remaining on the contest – so go vote please.

Performance Tuning – Over the Top Biz Logic

Comments: No Comments
Published on: September 19, 2011

This month, Robert Pearl is helping to host a Performance Story contest.  The contest is being sponsored by Safe Peak, and you can see more about it – here.

The timing on this contest actually fits well with something I have already been working on. The only delay has been related to getting everything scrubbed and obfuscated. There is a lot that could be included with this particular experience; I plan on touching on two of the key areas.

Backstory

I have been working a lot lately with building reports in SSRS. These reports had been working great from IE hitting the SSRS server directly in one environment. When we migrated the reports to the production environment, we started seeing some serious timeout issues.

There was one report in particular that was giving more grief than others.  I had already gone through everything I could find with regards to SSRS in trying to help speed these reports up to acceptable times.

Having gone through what could be done in SSRS, I next turned my attention to the main stored procedure creating the data-set for the report.

Stored Procedure

Although the report was not even making it to this particular stored procedure, I wanted to take a look at it and make any improvements there that I could. It made sense to eliminate it as a possible source of the problem.

I opened up the stored procedure and found a fair amount of business logic. I came to a section of the proc that contained code that, at first sight, made my head spin a bit. I thought for sure I was seeing double because of all of the logic.

Once I finally got my head wrapped around it, I noticed a lot of similarities and wanted to start from that particular section with my tuning.  Here is an obfuscated version of what I saw.

[codesyntax lang=”tsql”]

[/codesyntax]
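
Here is a hypothetical query following the same pattern (the Orders, OrderDetail, and Client objects are made-up stand-ins for the obfuscated schema):

[codesyntax lang="tsql"]
-- Hypothetical schema and logic, illustrating the pattern: both sides of the
-- OR are identical except for a single predicate (StatusID).
SELECT  o.OrderID
      , o.OrderDate
      , c.ClientCode
FROM dbo.Orders o
    INNER JOIN dbo.Client c
        ON c.ClientID = o.ClientID
WHERE ( o.StatusID = 1
        AND EXISTS ( SELECT 1
                     FROM dbo.OrderDetail od
                     WHERE od.OrderID = o.OrderID
                       AND od.ItemTypeID = 10 ) )
   OR ( o.StatusID = 2
        AND EXISTS ( SELECT 1
                     FROM dbo.OrderDetail od
                     WHERE od.OrderID = o.OrderID
                       AND od.ItemTypeID = 10 ) );
[/codesyntax]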

I want you to look closely at both sides of that OR condition in the where clause before proceeding to look at the image of the execution plan that this query generates.

Again, my head started spinning when I first looked at the execution plan.  It was the size of the overall plan that was causing it this time.  Looking through it though, I quickly saw what I thought was the main point of slowness with this query (yes it was slow).  If you open the image of that execution plan and compare the red section to the yellow section, you will see that it is duplicated.  These sections also correlate to each side of the OR condition that was already mentioned.

When browsing this execution plan in SSMS, I was also lured into these sections due to a bad estimate on the query cost (which you don't see in this image). All of the Index Spools and Index scans/seeks had a cost of 89% attached to them (within the red and yellow regions). Seeing that both sections were identical, I was certain that the OR condition could be optimized.

DeDuping

Since the queries were soooo similar, I knew there had to be a way to combine both sides and make it less costly.  After working on it a bit, I came up with the following query.

[codesyntax lang=”tsql”]

[/codesyntax]
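
Using the same hypothetical schema, the combined version collapses the OR into a single predicate:

[codesyntax lang="tsql"]
-- Same hypothetical schema: the duplicated branches collapse into one
-- predicate plus a single EXISTS check.
SELECT  o.OrderID
      , o.OrderDate
      , c.ClientCode
FROM dbo.Orders o
    INNER JOIN dbo.Client c
        ON c.ClientID = o.ClientID
WHERE o.StatusID IN (1, 2)
  AND EXISTS ( SELECT 1
               FROM dbo.OrderDetail od
               WHERE od.OrderID = o.OrderID
                 AND od.ItemTypeID = 10 );
[/codesyntax]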

Now take a close look at the first part of the where clause.

[codesyntax lang=”tsql”]

[/codesyntax]
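
In the hypothetical version, that first part is simply the combined predicate:

[codesyntax lang="tsql"]
WHERE o.StatusID IN (1, 2)
[/codesyntax]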

That little change represents the only difference between the two sides of the Or condition in the prior query.  The result set is exactly the same.  Now we get a new execution plan that looks like this.

In this execution plan, you can see that there is now only one section matching the red and yellow sections from the previous query.  This is a very good thing.  We can see that the plan looks simpler than the last one.  What about if we execute the two queries in a single batch for comparison?

 

The improvements we see are substantial.  Look at the difference in cost comparison between the two!!  The old query consumes 97% of the cost.  To further illustrate the improvements by making this little change in logic, let’s take a look at the time.

Timecompare_withExecplan

Timecompare_withoutExecplan

Looking at the time it takes to run these queries should also be a strong indicator of the performance gains made by simplifying that where clause. Let's also take a look at another couple of screenshots that help to show some of the different improvements made by simplifying that "OR" condition.

Notice the old plan has an index scan, sort operation and 61 parallel operations.  The new plan has a key lookup.  Now look at the glaring difference between the two in estimated rows.  That is significant.

That is not all that was done with the query itself to improve performance. Other things, like index tuning and more work on the rest of the query, were also done and brought some improvements in overall execution of the query. But none were as significant as this.

TCP/IP

The other place where significant improvement was made in the performance of this query came from something outside of SQL Server.  A key part of this exercise has not yet been mentioned.  The reports were all taking 30-60 seconds to even load without any parameters having been selected.  Then they would take another 30 seconds or so after that to bring up the next parameter.

We ran a trace between the two servers to see what we could find. After some more troubleshooting, we decided to try disabling the TCP Chimney. This change had a significant impact on loading the reports from SSRS. From within IE, the reports now loaded immediately and the prompt flow became very responsive.
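
On the Windows versions involved at the time, TCP Chimney Offload can typically be checked and disabled with netsh. A sketch follows, assuming sufficient permissions; the same commands can be run from an elevated command prompt instead of through xp_cmdshell.

[codesyntax lang="tsql"]
-- Assumes xp_cmdshell is enabled and the service account has rights to change
-- the setting; otherwise run the netsh commands from an elevated prompt.
EXEC xp_cmdshell 'netsh int tcp show global';
EXEC xp_cmdshell 'netsh int tcp set global chimney=disabled';
[/codesyntax]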

Conclusion

Taking a little bit of extra time when writing out the logic for your queries can have a profound impact on the responsiveness of your queries.  In this case, simplifying the code but retaining the logic had a tremendous effect on overall performance.

In addition to what can be done in SQL Server, sometimes one needs to look outside of SQL to the network or disk layer for further tuning.  Working with the appropriate teams to accomplish those goals can go miles to having a positive effect on co-workers and application experience by the business users.

Why All The Fuzz?

Categories: News, Professional
Comments: 4 Comments
Published on: February 22, 2010

I really must thank Steve for his editorial on FizzBuzz.  It seemed like a really good topic to do some testing and comparison.  Then the comments started rolling in on the topic.  This provided more information and ideas to use for this article.  As trivial as it may seem, the topic brings up some very pertinent information for us in the Data Bizz.  The primary objective of this article is to compare and contrast methods and hopefully provide evidence to use one method over another.  The underlying objective being that of performance tuning your code.

A FizzBuzz is a really trivial topic to dispute. The objective is to weed out those that can from those that can't at the most basic level. However, with the FizzBuzz, the interviewer has the opportunity to get more insight into the prospective employee. Before delving into the technical aspects, ponder the non-technical aspects first. This sort of test will help to determine whether the candidate can quickly assess requirements, is willing to gather more information, has organizational skills (though very minimally), shows mettle under pressure, and (at a very high level) reveals character. Keep in mind that all of these things are only a piece of the puzzle. There still remains the technical skill, and "team fit".

This FizzBuzz requires that one write some code to generate a list of numbers. The list will have multiples of 3 replaced by Fizz, multiples of 5 replaced by Buzz, and multiples of both 3 and 5 replaced by FizzBuzz. Those are the base requirements. The unstated requirements are the requirements that top tier candidates will accommodate instinctively. These requirements are performance, scalability, maintainability, and a set-based approach (since this is TSQL and SQL is optimized for set-based coding). Sometimes this test is implemented where the requirements state to print out the results. For my purposes, I will take that to simply mean display the results. The methods would be acutely different.

I will explore a few different methods that may be used in achieving the results.  Some answers are better than others.  The first is a nice long example by Gus (posted in the comments for the SSC editorial).

[codesyntax lang=”tsql”]

[/codesyntax]
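
A sketch in the same spirit (a table variable with a superfluous column, a WHILE loop to populate it, and a cursor to do the comparisons, all hard-coded to 100 rows) looks something like this:

[codesyntax lang="tsql"]
-- Sketch in the spirit of the cursor/loop solution being critiqued.
DECLARE @Numbers TABLE (N INT, Filler CHAR(1));  -- Filler is the unnecessary extra attribute

DECLARE @i INT;
SET @i = 1;

WHILE @i <= 100      -- hard-coded record amount
BEGIN
    INSERT INTO @Numbers (N, Filler) VALUES (@i, 'x');
    SET @i = @i + 1;
END;

DECLARE @N INT, @Output VARCHAR(8);

DECLARE cFizzBuzz CURSOR LOCAL FAST_FORWARD FOR
    SELECT N FROM @Numbers ORDER BY N;

OPEN cFizzBuzz;
FETCH NEXT FROM cFizzBuzz INTO @N;

WHILE @@FETCH_STATUS = 0
BEGIN
    IF @N % 15 = 0      SET @Output = 'FizzBuzz';
    ELSE IF @N % 3 = 0  SET @Output = 'Fizz';
    ELSE IF @N % 5 = 0  SET @Output = 'Buzz';
    ELSE                SET @Output = CAST(@N AS VARCHAR(8));

    PRINT @Output;

    FETCH NEXT FROM cFizzBuzz INTO @N;
END;

CLOSE cFizzBuzz;
DEALLOCATE cFizzBuzz;
[/codesyntax]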

This was created in jest. However, there are some important things to note in the code. First, the use of a table variable is not necessary. With that table variable there is an additional attribute that is unnecessary, even if one decided to use a temp table or table variable. That may be a minor thing, but it can be considered sloppy coding, inattention to detail, and poor use of resources. Second is the use of a while loop to populate that table. Third is the use of a cursor to loop through the numbers in the table to do the comparison for the FizzBuzz test. This script was also hard-coded with the record amounts to use. To make it scalable and maintainable, one should parameterize those values. These criticisms are nothing new to Gus. It is important to note that he coded this query in this fashion intentionally, and he pointed out some of the criticisms himself.

Will this code work?  Yes it will.  It produces the following IO stats and a looping execution plan (1 plan for each iteration).

For 100 records, this takes roughly five seconds to run on my server.  100 records is a very small test and thus larger tests are needed to see about the performance and scalability factor.  Scaling up to one million records, this query was still running after six hours.

Next up is the following query:

[codesyntax lang=”tsql”]

[/codesyntax]
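
A sketch of that looping approach, with the assign-then-overwrite behavior called out in a comment:

[codesyntax lang="tsql"]
-- Sketch of the loop-and-print approach: note how a multiple of 15 gets a
-- value assigned and then overwritten before it is printed.
DECLARE @i INT, @Output VARCHAR(8);
SET @i = 1;

WHILE @i <= 100
BEGIN
    SET @Output = CAST(@i AS VARCHAR(8));

    IF @i % 3 = 0
        SET @Output = 'Fizz';
    IF @i % 5 = 0
        SET @Output = 'Buzz';      -- overwrites 'Fizz' for multiples of 15
    IF @i % 15 = 0
        SET @Output = 'FizzBuzz';  -- overwrites again

    PRINT @Output;
    SET @i = @i + 1;
END;
[/codesyntax]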

This version loops through the numbers, assigns the value to be printed, and then prints it. See what happens when the counter hits 15? We end up assigning a value and then overwriting that value. This version also just prints the result rather than displaying the query results. For 100 records the execution is not bad. There are no IO stats and no execution plan. However, running this for one million records takes forty-five seconds. If I needed a larger result set, I would also need to be careful with the INT variable. This one would also need a change in order to use a variable for the number of records to build.

Next is a pretty nice looking CTE version.  There are many renditions of this one.

[codesyntax lang=”tsql”]

[/codesyntax]
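
One rendition of the recursive CTE approach, as a sketch:

[codesyntax lang="tsql"]
-- Sketch of a recursive CTE solution. The WHERE clause already limits the
-- recursion, which makes the MAXRECURSION hint redundant at 100 rows.
WITH Numbers (N) AS
(
    SELECT 1
    UNION ALL
    SELECT N + 1
    FROM Numbers
    WHERE N < 100
)
SELECT CASE
           WHEN N % 15 = 0 THEN 'FizzBuzz'
           WHEN N % 3 = 0  THEN 'Fizz'
           WHEN N % 5 = 0  THEN 'Buzz'
           ELSE CAST(N AS VARCHAR(8))
       END AS FizzBuzz
FROM Numbers
OPTION (MAXRECURSION 100);
[/codesyntax]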

This query is far more efficient than the previous queries. Though we use a CTE in this query, we are still using procedural programming; this is a recursive CTE. We are also lacking in the maintainability and scalability of this code. Of a lesser magnitude is the use of OPTION (MAXRECURSION 100), just a little trick that should be noted. The recursive definition has a limiting where clause, making the maxrecursion statement unnecessary for so few records. Since this performs so well with 100 records, let's proceed to the one million test. Here is where we run into another trick that must be used. MAXRECURSION needs to be specified with a 0. Doing this permits the query to reach one million rows. The query now takes 15 seconds to complete. Here are the IO Stats and Execution plan.

Now that I have shared some procedural based methods to accomplish this “simple” task, let’s explore some of the potential set-based methods.  The first uses a windowing function to create our number set.

[codesyntax lang=”tsql”]

[/codesyntax]
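
A sketch of that approach, with the modulo 15 note at the end of the line:

[codesyntax lang="tsql"]
-- Sketch of the set-based approach: ROW_NUMBER() over a cross join of
-- sys.columns generates the number list.
SELECT TOP (100)
       CASE
           WHEN x.N % 15 = 0 THEN 'FizzBuzz'  -- 15 covers both 3 and 5
           WHEN x.N % 3 = 0  THEN 'Fizz'
           WHEN x.N % 5 = 0  THEN 'Buzz'
           ELSE CAST(x.N AS VARCHAR(8))
       END AS FizzBuzz
FROM (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N
      FROM sys.columns c1
          CROSS JOIN sys.columns c2) x
ORDER BY x.N;
[/codesyntax]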

First thing to note is the reduction in code here. The solution is very simple and runs very rapidly. By using a cross join against sys.columns we were able to create a result set. This cross join has limitations. Not all sys.columns tables are created equally, and thus we would have to add another cross join in order to test one million records. Also, note that the modulo in the FizzBuzz test was changed to use modulo 15. That was also documented in the short note at the end of the line. Thus the alterations for the query are as follows (to test one million records).

[codesyntax lang=”tsql”]

[/codesyntax]
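
The altered sketch simply adds another cross join and raises the TOP:

[codesyntax lang="tsql"]
-- Same sketch, with an additional cross join so the row count can reach
-- one million.
SELECT TOP (1000000)
       CASE
           WHEN x.N % 15 = 0 THEN 'FizzBuzz'  -- 15 covers both 3 and 5
           WHEN x.N % 3 = 0  THEN 'Fizz'
           WHEN x.N % 5 = 0  THEN 'Buzz'
           ELSE CAST(x.N AS VARCHAR(8))
       END AS FizzBuzz
FROM (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N
      FROM sys.columns c1
          CROSS JOIN sys.columns c2
          CROSS JOIN sys.columns c3) x
ORDER BY x.N;
[/codesyntax]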

This query will return in ~14 seconds.  It is a set-based solution.  It is not quite yet to that upper echelon though.  Some modifications need to be made to make it scalable and maintainable.  Also note that I am still employing the use of sys.columns.  There are better methods for that.  One such is to use master.sys.all_columns.  Another is to use a Numbers table.  Before going into the use of the numbers table, here are the IO stats and exec plan.

Now for the numbers table.  I will jump straight to the one million records testing.  The query is not substantially different than the previous query.

[codesyntax lang=”tsql”]

[/codesyntax]
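
A sketch of the numbers table version (dbo.Numbers with an N column is an assumption; substitute your own tally table):

[codesyntax lang="tsql"]
-- Sketch using a pre-built numbers table. dbo.Numbers(N) is an assumed
-- object; point this at whatever numbers/tally table you keep around.
SELECT TOP (1000000)
       CASE
           WHEN n.N % 15 = 0 THEN 'FizzBuzz'
           WHEN n.N % 3 = 0  THEN 'Fizz'
           WHEN n.N % 5 = 0  THEN 'Buzz'
           ELSE CAST(n.N AS VARCHAR(8))
       END AS FizzBuzz
FROM dbo.Numbers n
ORDER BY n.N;
[/codesyntax]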

The runtime for this query is about the same as the previous query.  IO Stats and Exec Plan show a slightly different story.  Still, note that this query is not quite the scalable or maintainable query I would like to see.

As you can see, Logical reads increased, and the execution plan is slightly better.  The cost of this new query is lower and makes it a little more desirable – despite the higher logical reads.

We have now progressed to a point where we can start playing a bit. The idea now is to fine tune and tweak what I have to see if it can be made better. The remainder of the queries and testing will be in the next article. There is still plenty to cover. 🙂

I’m givin’ ‘er all she’s got!

Categories: News, Professional
Comments: No Comments
Published on: February 21, 2010

Or am I?

As I proceed down the path on a consolidation project, I have taken time to pause and ponder the approach. One driving goal is to shrink our footprint. Another goal is to improve performance and the use of hardware and software resources. I am looking to make sure the right consolidations are made while leaving servers as-is where that is the better fit.

From time to time we have the opportunity to choose between distributed processing and a more centralized approach.  In my experience, this decision has come about as a means to improve performance.  Distributed processing is a viable option when done correctly.  I have seen this work in the past and I have seen it fail miserably.  I have had the opportunity of reversing these types of solutions via server consolidation on two occasions.

What are some considerations when deciding to employ a distributed model or even a consolidated model? Denny Cherry has a good article about the considerations for moving to a consolidated model. You can read that article here. I think some of those very considerations should be used when electing not to consolidate. Both sides of the equation must be evaluated. If you can meet the criteria, then consolidate. If not, then have some very good reasons before you consolidate. Conversely, if looking to distribute, there must be some very sound reasons as to why one would elect that option.

To illustrate why not to distribute, I will share the two experiences I mentioned.

Experience 1

Performance for the primary database server had become horrendously unacceptable.  In a server that hosted one million users or more a day, something had to be done.  Complaints were common and frequent from the client and noticeable from in-house processes.  The decision was made to distribute processing to three servers.  Two of the servers were “report” servers and the third was the primary transaction server.  Transaction replication was used to copy the data to the report servers.  The application was configured to use the various servers through various means.  Each of the servers had linked servers created to help process requests from the application.  It was common for queries to join tables across the linked servers to produce the results necessary.  After these changes, the complaints were just as common and frequent about the application timing out and performance being horrendous.

When I started, the first thing to do was to start tuning queries. I had gotten the run-down on how bad things were and started implementing a plan to correct the problems. All of the servers involved in this process maintained >= 70% processor utilization at all times. If a process or query had gone awry, then we would see spikes to 100%. Each server was running SQL Server Enterprise Edition with databases in SQL 2000 compatibility mode.

Experience 2

As luck would have it, this was a very similar situation to the one just described. This scenario is different in that the number of users was far fewer. The number of servers involved, though, was considerably more. Each server was running Enterprise Edition. There was some mix of SQL 2000 and SQL 2005. One major difference was the level of user access. Some power users were known to have access to the databases with the ability to write their own queries. On a regular basis they would grind the server to its knees. In cases like this, new servers were frequently deployed to try and counteract the problem.

Another big difference between the two is the lack of replication in this experience, at least of a traditional replication method. Databases were backed up and restored to two different servers from the production databases on a daily basis. The benefit here is a test of the backups on a daily basis. That should be done anyway, but implementing it as a means to improve performance is not the wisest decision. This proved to be quite costly.

Solution

In both cases, the solution was very much the same. Performance tuning had been neglected, and it was the key to reducing costs. What performance tuning was necessary in these cases? In both cases, the lack of clustered indexes was rampant. Both cases employed the use of nolock as a means of performance tuning their queries. In each case where nolock was used, the query was analyzed and tuned to perform several times faster. These are some pretty basic techniques to use in order to minimize cost and improve performance. Throwing hardware at the situation is not always the best solution.
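
As a quick way to spot the first of those problems, a sketch like the following lists the user tables that are heaps:

[codesyntax lang="tsql"]
-- Sketch: list user tables without a clustered index (heaps).
SELECT  SCHEMA_NAME(t.schema_id) AS SchemaName
      , t.name AS TableName
FROM sys.tables t
    INNER JOIN sys.indexes i
        ON i.object_id = t.object_id
WHERE i.type = 0;  -- index type 0 = heap
[/codesyntax]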

In the first scenario, when employing these simple techniques, we were able to reduce processor utilization down to 35% with peaks to 55%.  That was prior to removing the replication and linked servers and consolidating the servers into one server (thereby reducing cost).  After merging all three servers into one, we saw one server use 15% processor on a persistent basis with spikes to 35%.  This change speaks to the cross-server joins and replication overhead.  That was a massive improvement over the usual 100% processor utilization.

In the second scenario, simply combining the servers back to one, tuning the queries, and implementing controls on end-user queries run directly against the server was all that was needed. This amounted to a reduction of about ten servers. Included in that reduction was the reduction in Enterprise licenses.

Conclusion

So when should I not distribute the processing across servers?  Do not employ a distributed processing approach when performance tuning has not been done.  Do not employ a distributed processing approach when it is political in nature (i.e. end-users who write their own queries).  Employing a distributed method under either of these two scenarios is only driving up cost.  Furthermore, it becomes a management headache for the database team.  The team must maintain the processes involved to keep the distributed approach running.  This can also render a database team less efficient due to more work.

If it is appropriate and due diligence has been done, then go ahead and employ a distributed processing methodology. As long as the work has been done to get the databases finely tuned, keep the customer happy, and abide by policy, there is nothing wrong with it.

It is the wrong approach when there is no forethought, no foresight, no planning, a lack of due diligence, or a general wasting of money. Just throwing hardware at a problem is not a good solution. These "wrong" methods are just a few ways to try and mask the real problem. Masking the problem never solves it. It takes time and troubleshooting patience to delve into performance problems. When it is solved, though, at no additional cost, it feels pretty good and looks pretty good for you. This is how a DBA can say "I'm givin' 'er all she's got!" without it just being a quote.
