Bitwise and Derived Table revisited

Categories: News, Professional, SSC
Comments: No Comments
Published on: August 31, 2011

Today I am going to revisit two posts from the past couple of weeks.  I want to revisit them just to make some minor updates and clarifications.  This is nothing earth-shattering but is good info to have.

The two posts to revisit are:

  1. Bitwise Operations
  2. Derived Table Column Alias

Bitwise Operations

In this particular post, I shared a simple example of how to perform bitwise operations.  The example involved the bit comparison of up to three values.  I made the query overly complicated.  Here is a less complicated method to get to the same results.

[codesyntax lang=”tsql”]

[/codesyntax]
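A less complicated approach can lean directly on the bitwise AND, along these lines (the ColorPlate table and column names are assumptions carried over from the original bitwise post):

```sql
-- Sketch only: assumes a ColorPlate table with primary colors
-- assigned power-of-two ColorType values (e.g. 1, 2, 4).
DECLARE @ColorType INT = 6;

SELECT ColorName
FROM dbo.ColorPlate
WHERE ColorType & @ColorType <> 0;
```

A single bitwise AND against the input replaces the multiple cross applies of the original query.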

Can you see the simplicity in that?  Both methods work, but this version is a little easier to follow and understand.

Derived Table Column Alias

In the post about subqueries and derived tables, there was an important piece of information that I neglected.  In the first example I posted there is a good example of what was neglected.  The first example was a derived table based on values rather than a query.  Here is that example again.

[codesyntax lang=”tsql”]

[/codesyntax]
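That values-based derived table can be sketched roughly like this (the values themselves are placeholders):

```sql
SELECT t.ColID, t.ColName
FROM (VALUES (1, 'Red'), (2, 'Blue'), (4, 'Yellow')) AS t (ColID, ColName);
```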

If you were to try to write that query without the external column alias naming convention, you would get an error along the lines of “No column name was specified for column 1” (Msg 8155).

Knowing this information could save you a bit of headache and time.  When using a value set rather than a query, the column alias is required after the table alias.

Like I said, nothing big or fancy today – just a quick revisit to clarify some previous posts.  Oh, and I have some more good stuff coming down the pipe (like another bitwise related post).

Precision and Scale

Comments: No Comments
Published on: August 24, 2011

As is the case with many of my topics of late, I came across this one by helping somebody else.  In SQL, we should be well aware of the Precision and Scale of certain datatypes.

The particular case I was working on was focused on the decimal datatype, and so we will work with that throughout this post explicitly.

What are these attributes?

According to MSDN, these attributes have the following definitions.

Precision – specifies the number of digits an object can hold

Scale – specifies the number of digits to the right of the decimal point that an object can hold.

Based on those definitions, it seems pretty straightforward, right?  Well, it is until you start doing a bit of math.  Microsoft has formulas for figuring out the resultant precision and scale for various math operations.  You can read about that here.

Throughout our example, we will be focusing on multiplication and division.  We will demonstrate a few different results and configurations as well.

First, let’s get some formulas out of the way.  The formulas for precision and scale, as they show in MSDN at the link above, are as follows:
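For two expressions e1 (with precision p1 and scale s1) and e2 (with precision p2 and scale s2), the documented results are:

```
e1 * e2:  precision = p1 + p2 + 1
          scale     = s1 + s2

e1 / e2:  precision = p1 - s1 + s2 + max(6, s1 + p2 + 1)
          scale     = max(6, s1 + p2 + 1)
```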

As is described in the MSDN article, p represents precision and s represents scale.  The number annotations with p and s represent the corresponding expressions in the mathematical operation.  The equation that we will be trying to solve is as follows:

[codesyntax lang=”tsql”]

[/codesyntax]

But for the majority of these exercises, we will be focusing on this part of the formula.

[codesyntax lang=”tsql”]

[/codesyntax]

This will provide plenty of examples of the math involved when calculating the resultant precision and scale of a SQL math operation.

Here is an example of the above query with values.  This query results in a value that is consistent with such calculators as MS Excel ( ;0) ).

[codesyntax lang=”tsql”]

[/codesyntax]

However, if we use variables in lieu of those values, we start to see different results.  And thank goodness for that, because there wouldn’t be much to talk about otherwise.  So, let’s dump those values into some variables and see what starts happening.

[codesyntax lang=”tsql”]

[/codesyntax]

And the formula(s).  I say formulas, because I will be demonstrating two results here.  Notice quickly that I have two similar multiplier variables – they differ only in name and precision.

[codesyntax lang=”tsql”]

[/codesyntax]

If you execute those two queries, you should get very similar results.  Both should return 0.090xxx, but the second has more scale, extending the decimal out 8 places rather than 6.  For the second query our result is 0.09049569.  A difference at this point could make for some accounting nightmares, especially since it occurs early in the equation.

Notice in my variables there is one called stage.  Let’s use that one now and see how using a staging variable plays into this.

[codesyntax lang=”tsql”]

[/codesyntax]

Do you see what just happened?  Both multipliers now produce the same result.  How could that be?  Let’s look at that.  This time, let’s post calculations for precision and scale alongside each of those queries.

[codesyntax lang=”tsql”]

[/codesyntax]

Looking this over, you should be able to quickly pick out some anomalies.  Let’s start with the anomalies present in the calculations for the second query.  First, you can see that the value for p1 is 18.  One might fairly think that it should be the resultant precision of the first query.  But the variable is declared as Decimal(18,2), and that precision and scale are used in calculations involving that variable.

The second thing one should notice is that the resultant precision is 43.  Then why did I change it to 38 at the end?  Max precision is 38.  If the resultant precision of a mathematical operation exceeds 38, then it must be reduced to 38.  This has an impact on scale – which is the next item of note.  In the aforementioned MSDN article, scale is simply reduced by the difference between resultant(p) and final(p).  That simple calculation holds true for these particular queries.  But, if we look at the following queries, we can clearly see that it is behaving differently.

[codesyntax lang=”tsql”]

[/codesyntax]

And the correlating notes regarding precision and scale calculations.

[codesyntax lang=”tsql”]

[/codesyntax]

Look at the final(s) for that first query.  Scale is actually 6, but that does not match the math.  Resultant(p) = 62, final(p) = 38, and that means the difference is 24.  Resultant(s) is 24, from which I subtract 24 and should get 0.  Well, there is a part of that formula that could use a better explanation in the documentation.  The final(s) is actually max(6, resultant(s) - (resultant(p) - final(p))).  The final(s) cannot be less than 6, and that is why we see 6 digits to the right of the decimal in the result of that first query.
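You can watch that cap and the minimum scale of 6 in action with SQL_VARIANT_PROPERTY.  The values here are arbitrary, chosen only to push the resultant precision past 38:

```sql
DECLARE @a DECIMAL(38,10) = 1.5,
        @b DECIMAL(38,10) = 2.5;

-- Resultant precision would be 38 + 38 + 1 = 77, capped to 38.
-- Resultant scale would be 10 + 10 = 20, reduced to max(6, 20 - (77 - 38)) = 6.
SELECT @a * @b AS Product,
       SQL_VARIANT_PROPERTY(@a * @b, 'Precision') AS FinalPrecision,
       SQL_VARIANT_PROPERTY(@a * @b, 'Scale')     AS FinalScale;
```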

Now let’s change that divisor scale up a bit.  The requirements dictate that the divisor be a Decimal(18,2) – I used 18,6 as one of my test sets.  In this case, the only thing that changes is the final(s).  And in this particular case (though, I don’t recommend shortcutting – it just works for this case), we can simply add 4 to the final(s) of the second query.  The first remains unchanged.

Let’s look at the resulting value now.  This difference alone is cause enough for significant differences in the results of the larger formula.

First query = 0.090495, Second query = 0.090495695046.  To this point, I have shown why this happens.  The calculations performed exceed the precision limit, which impacts scale, which in turn affects the accuracy of the formula.

I showed one method, without saying as much, on how to avoid this.  My use of an intermediary step to perform these calculations via a variable helped correct the precision/scale/accuracy problem.  Another viable option is to use appropriate precision and scale for the data being used.  Changing precision and scale to match expected data can have a significant impact on the resultant accuracy of the calculation.

I used two multipliers to demonstrate that last suggestion.  The more accurate result came from the second query which used a more appropriate precision and scale for the data (see the variable @multiplier).

TSQL Sudoku II

Categories: News, Professional, SSC
Comments: No Comments
Published on: August 23, 2011

As luck would have it, today I came across this TSQL Challenge.  It just so happens that I had already worked on a SQL Sudoku script and blogged about it less than one week ago – here.

Since posting that last article on my work on that script, I had already gone to work figuring out a few things to improve it.  One was to improve performance.  I felt it was a little slow and had an idea of what was causing it.  I’ll talk a little about that later.  The other issue was to present the output in a grid.  The first half of that was quickly solved too through a suggestion to do some string manipulation on the output.  I’ll also point that out in a bit.

If you read the Challenge from the first link, there is a requirement for the output to be a grid.  There also appears to be a requirement to solve these puzzles from data stored in a table.  The solution I posted last week doesn’t do either of those things.  Soooo, that meant I needed to do those things if I wanted to submit for this challenge.  One of the requirements was already on my to-do list – so not really much different from my intentions anyway.

For the purposes of this blog, I will only post the parts of my solution relevant to solving a puzzle from a string input.  The solution I submitted solves from both string input as well as from table data (woot woot).

Let’s first look at the performance issue I was experiencing.  I suspected that the performance lag was related to constantly hitting back to the CTE called dual.  What I found was that there was only one very specific place that was causing the slowness.  The anchor of the recursive CTE that solves the Sudoku was referencing the dual CTE and it was dogging the performance.  The original looked like this:

[codesyntax lang=”tsql”]

[/codesyntax]

I changed it to the following and still saw the slowness.

[codesyntax lang=”tsql”]

[/codesyntax]

Finally, I decided to go ahead and whack dual from that section of code.  I had wanted to leave it because it proved useful in creating a 9 record result set.  I found a way to solve that part too – without the severe performance impact.

[codesyntax lang=”tsql”]

[/codesyntax]

This change alone was responsible for reducing the query time for a Sudoku with 30 givens from three seconds down to < 300ms.  That was a marked improvement.  I saw similar results in performance gains when working with more difficult puzzles such as those with only 19 givens.

The next thing that needed to be done was to display only the relevant data for each of the 9 rows.  Initially the solution just provides a single 81 character data string.  The fix for that is as follows.

[codesyntax lang=”tsql”]

[/codesyntax]

The code to split out the substrings was suggested to me with a minor flaw.  Each row only produced 8 digits in the result set.  This was quickly fixed by adjusting the length parameter of the substring function.  Also note that I have a Cross Apply back to the dual cte here.  This wasn’t in the original solution either.  Remember that I removed it from the anchor portion of the recursive CTE?  Well, to get the 9 record result set that I wanted, I needed to put it back somewhere.  Admittedly, this could be faster by doing a Cross Apply to a value set rather than the dual cte.  It could save a bit of memory too – I still need to test that.  Maybe I will submit another solution to the challenge with that fix if it works better.
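The splitting step amounts to something like this (the column, table, and CTE names are assumptions):

```sql
-- Assumes: Solution holds the single 81-character answer string,
-- and dual is a 9-row numbers CTE (n = 1 through 9).
SELECT SUBSTRING(s.AnswerString, (d.n - 1) * 9 + 1, 9) AS GridRow
FROM Solution s
CROSS APPLY dual d;
```

Passing 9 as the length parameter is the fix for the eight-digit flaw mentioned above.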

That pretty much takes care of the performance problem as well as the first part of creating the grid.  To finish building the grid, I used a table variable and a Pivot (as I had planned).  The table variable is rather straightforward.

[codesyntax lang=”tsql”]

[/codesyntax]

I populated that table with the following after the CTEs.  Nothing real fancy here.

[codesyntax lang=”tsql”]

[/codesyntax]

This is where the code starts to get a bit fancy.  I am getting better at using the Pivot function, but sometimes it seems a bit tricky.  For instance, this time around, my numbers wouldn’t work out very well.  I figured out that the order of my columns in the Select has an impact on the Pivot as well.  Now I know.  Anyway, here is the Pivot functionality to move things into a grid.

[codesyntax lang=”tsql”]

[/codesyntax]

Not only is it important to have the columns just right, you also need to have the windowed functions done just right in order to produce a full result set.  Notice here that I do a couple Cross Apply calls.  The first is to get that Subquery result set with the pivots.  The second is using the Cross Apply against a value set.  Note that I am not using Dual in this case.  Simply put, I can’t.  I have an Insert statement between the CTEs and this Select/Pivot statement.

Another important element here is the final where clause.  This simple addition reduces my final result set from 81 records to just the 9 that I desired.  With all of that, I get a grid result, and the solution completes in ~40ms for the puzzle provided with the challenge.  Not too bad.
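Roughly, the Pivot portion takes this shape (the table variable and column names are assumptions):

```sql
-- Assumes @SudokuGrid holds one record per cell:
-- RowNum (1-9), ColNum (1-9), and the Digit for that cell.
SELECT p.[1], p.[2], p.[3], p.[4], p.[5], p.[6], p.[7], p.[8], p.[9]
FROM (SELECT RowNum, ColNum, Digit FROM @SudokuGrid) AS src
PIVOT (MAX(Digit) FOR ColNum IN ([1],[2],[3],[4],[5],[6],[7],[8],[9])) AS p;
```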

Here is the entire updated script to solve these puzzles from a string input.

[codesyntax lang=”tsql”]

[/codesyntax]

Once the challenge has been closed, I will consider revisiting this solution and posting the remainder of the script pertinent to solving a puzzle from table data.  I have adjusted my script to work for either method (from string or from a table like that in the Challenge).  I did that because I couldn’t really see somebody taking that much time to set up the data in a table in order to quickly solve the Sudoku puzzle in the Sunday paper.

 

I hope you enjoy these little improvements.

SQL Saturday 94

Categories: News, Professional, SSC
Comments: No Comments
Published on: August 22, 2011

SQL Saturday 94 in Salt Lake City is fast approaching.  Will you be there?  I submitted three sessions in hopes of maybe getting one selected.  Last year, I only submitted one at the inaugural SLC SQLSaturday.  I even blogged about my experiences with that one (here and here).

The next time I presented for SQLSaturday was for Johannesburg a few months ago (their inaugural event too).  And, true to form, I blogged about that (here and here).

At each of those events, I gave the same presentation.  This year, for SLC, I added two presentation submissions, one about Reporting Services and one about Table Compression.  Neither of those was selected.  The Documentation presentation was selected again.  I had hoped to present one of the other two, but that is all well and good.

Since the last time I presented, I have learned more.  I have refined some of the queries a bit.  I have also refined the topic a bit.  It should go much better this time.  Are you coming to SQL Saturday 94?  I hope to see you there!!  If you come, you will face a great dilemma: which of the outstanding sessions will you attend?

And if you don’t come for the learning, then come for the networking.  If not for the networking, then come to at least check out the sponsors (like SQL Solutions Group or myself – yup I am a sponsor this year).

Bitwise Operations

Comments: No Comments
Published on: August 19, 2011

Some time ago, I wrote an introductory post about bitwise operations in SQL Server.  I had fully intended on writing a follow-up to that.  Alas the opportunity has passed for the idea I was working on back then.

As luck would have it though, I encountered a new opportunity to share something on this topic.  This one came to me by once again helping out in the forums.  And, since I worked it out, I will be using the same problem posed in the forum and the solution I proposed.

First we need a little setup.  Let’s create a simple table and populate that table with some data.

[codesyntax lang=”tsql”]

[/codesyntax]
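The setup runs along these lines.  The exact names, and which primary color gets which value, are assumptions; the important part is that each primary color gets a distinct power of two:

```sql
CREATE TABLE dbo.ColorPlate (
    ColorID   INT IDENTITY(1,1) PRIMARY KEY,
    ColorName VARCHAR(20),
    ColorType INT   -- a distinct power of two per primary color
);

INSERT INTO dbo.ColorPlate (ColorName, ColorType)
VALUES ('Red', 1), ('Blue', 2), ('Yellow', 4);
```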

As I said, this setup is rather simple.  The solution is not much more complex.  However, before we get to the solution, we need to know what we need the solution to do.  From this table, I need to be able to determine the primary colors that make up a different color based on input of an ID relating to that color.  I know.  I know.  We don’t have all of the colors and their ColorTypes presented to us at this point – but let’s just go with it for a bit.  I would imagine that the other colors and the number assigned to their colortype would be populated at some other time.

For now, we are only working with seven color variations – so any number from 1-7 is a valid input.  How do we find all of the colors that are required for the number that we input?  Well, we use some smoke and mirrors.  Just kidding.  Seriously though, we use bitwise operations as well as a neat trick called “cross apply.”

[codesyntax lang=”tsql”]

[/codesyntax]
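One plausible shape for that query, matching the description that follows (the aliases and the ColorPlate names are assumptions):

```sql
DECLARE @ColorType INT = 6;  -- e.g. 6 = Blue (2) + Yellow (4)

SELECT DISTINCT c1.ColorName
FROM dbo.ColorPlate c1
CROSS APPLY dbo.ColorPlate c2
CROSS APPLY dbo.ColorPlate c3
WHERE c1.ColorType & @ColorType <> 0
  AND c2.ColorType & @ColorType <> 0
  AND c3.ColorType & @ColorType <> 0;
```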

Do you see what is being done there?  I have known values in this table of 1,2, and 4.  I know that 7 is the max number I am allowing for input at this time.  Because of that, I know that I need three values in order to arrive at a value of 7.  Due to this requirement, I know I must Cross Apply the ColorPlate table twice beyond the first select from it.  That will permit me to sum three values from the ColorPlate table.

Now that I have access to three possible values, I need to compare those values using the Bitwise And operator, denoted by an ampersand (&).  Note that the where clause checks each of the three table references as well as the variable.  Then, I want to make sure that the result of the bitwise AND is not 0.  Pretty slick, eh?

Let’s put it to action.  If I run the above query with a value of 6 for the @ColorType variable, I will get a two-record result set.  The results returned would be the primary colors for green (which are Blue and Yellow).  If I use 7 for that same variable, I will get a three-record result set which would include red, blue, and yellow.

This was a rather simple solution and scenario for a bitwise operation.  There are plenty of other examples out there of how to use these types of solutions.  Some more elaborate than others – but many good examples nonetheless.

I am interested in finding more solutions that involve these types of operations.  Who knows, maybe I will even be able to remember the neat stuff I learned while writing the last article on the topic and be able to put that up before too long.

Derived Table Column Alias

Comments: No Comments
Published on: August 18, 2011

By now, you have heard of subqueries.  You have also heard of Common Table Expressions.  I am sure you know what a derived table is and that you get a derived table through either a subquery or CTE.  How familiar are you with the subquery flavor of a derived table though?

I encountered something about derived tables recently that I had never seen, let alone heard of up to that point.  Let’s start with the Microsoft documentation on the topic.  If you browse to this page, you will find a description for column_alias immediately following the description of derived table.  What you don’t get is an example of how it is applicable.  Or do you?

If you look in the example of the derived table on that same page, you will see the following code (formatting added for readability).

[codesyntax lang=”php”]

[/codesyntax]
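The documented pattern boils down to this shape (the table and source column names are placeholders):

```sql
SELECT d.a, d.b
FROM (SELECT ColumnOne, ColumnTwo FROM dbo.SomeTable) AS d (a, b);
```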

Here, we can see that column_alias is optionally supplied after the table_alias for the derived table.  In this example, we have supplied two new column aliases called a and b.

If we want, we can take this a step further and see the same sort of example supplied by Sybase.

[codesyntax lang=”sql”]

[/codesyntax]

You can read the documentation about derived table syntax in Sybase, here, if you so desire.  The point of this is to show similar code and documentation between SQL Server and its resuscitated predecessor.

And for grins, you actually have the same sort of optional syntax available for the derived table known as a CTE.  You can see the documentation, from Microsoft, on that here.

So, how do we put this to use?  Well, I am glad you asked that.  I have an example ready to go.

[codesyntax lang=”tsql”]

[/codesyntax]
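A reconstruction of that query might look like the following; the table name and the inner column aliases are assumptions, but the shape matches what is described next:

```sql
SELECT r.StudentID, r.RequestNbr
FROM dbo.StudentRequests r
INNER JOIN (
        -- inner aliases intentionally differ from the outer column_alias list
        SELECT StudentID AS SID, MAX(RequestNbr) AS MaxReq
        FROM dbo.StudentRequests
        GROUP BY StudentID
     ) AS Latest (StudentId, RequestNbr)
    ON  r.StudentID  = Latest.StudentId
    AND r.RequestNbr = Latest.RequestNbr;
```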

In this example, I have a derived table implemented through a subquery.  The alias of this derived table is “Latest.”  Note that there is an additional set of parenthesis after that table alias.  Inside this set of parenthesis, you will see a couple of column names.  Those columns are called StudentId and RequestNbr.

Now, I want you to take a look inside that derived table and note the names of the column aliases I provided there.  See how those column_aliases are different from the column_aliases provided after the table_alias?  By looking at the query, can you tell which takes precedence?  Aliases supplied in the optional column_alias list outside of the derived table override those provided inside the derived table.  You can verify that by looking at the join conditions provided after those aliases were defined.

Running this script, you will see it execute without error.  Using this kind of syntax could be useful in certain cases.  I think that it could make finding those column names considerably easier.  It could also help with readability.

Let’s take a quick look at the same kind of setup, but using a CTE instead.

[codesyntax lang=”tsql”]

[/codesyntax]
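The CTE version might be reconstructed along these lines (again, table and inner alias names are assumptions):

```sql
WITH Request AS (
    SELECT StudentID, RequestNbr
    FROM dbo.StudentRequests
),
Latest (StudentId, RequestNbr) AS (
    -- column names declared up front; inner aliases differ
    SELECT StudentID AS SID, MAX(RequestNbr) AS MaxReq
    FROM dbo.StudentRequests
    GROUP BY StudentID
)
SELECT r.StudentID, r.RequestNbr
FROM Request r
INNER JOIN Latest
    ON  r.StudentID  = Latest.StudentId
    AND r.RequestNbr = Latest.RequestNbr;
```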

Note that I moved that entire derived table from a subquery to a new CTE defined immediately after Request.  Now take note of the difference in declaration between Request and Latest.  In Latest, I define the column names up front and have the columns aliased differently inside the CTE.  I do not define the column_alias list for the Request derived table.  You can also note that the column_alias list defined prior to the guts of the Latest derived table takes precedence over any column_alias defined inside that particular derived table.

I hope this was new information to somebody else.  If you learned something new, let me know.

TSQL Sudoku

Comments: 8 Comments
Published on: August 17, 2011

I am a big Sudoku fan.  Typically if I need a break, I will break out a Sudoku puzzle from any of a number of different sources (Websudoku, Android Apps, Puzzle Books).  Over time, I have come across a solution here or there to solve these puzzles via TSQL.

There are a few of these solutions out there already, such as one by Itzik Ben-Gan (which I can’t get to download without the file corrupting so I still haven’t seen it), or this one on SSC (which works most of the time but does provide inaccurate results from time to time).  I still wanted something to do this via CTE (much like the solution by Itzik is described to be at the link provided – if you have that code, I want to SEE it).

Just a couple of years ago, there was a post at SSC asking for some help converting a solution from Oracle to TSQL.  I checked out that code and worked on it for a day or two.  Then I got busy with other work that replaced the pet project.  I hadn’t given the idea much thought until just a few days ago as I was browsing my Topic list I had been building for articles.

This solution stuck with me this time around and I wanted to finish it up.  The Oracle solution for whatever reason made a lot more sense to me this time around, and I made great progress quickly.  It was actually this project that prompted another post.  While working through the solution, I learned a fair amount about both flavors of SQL.  So, before continuing to read here, you may want to check out the other article real quick since it pertains to some of the conversions done in this project.

Problems First

The OP supplied the Oracle solution asking for help in creating a TSQL Solution.  Here is that Oracle version.

[codesyntax lang=”sql”]

[/codesyntax]

If you read that other post I mentioned, you will quickly identify 5 functions/objects in use in this script that just don’t work in TSQL.  Those are:  dual, instr, substr, connect by, and trunc.  I did not mention mod in my other post, but mod is also done differently in TSQL than in Oracle.  I thought this one was a bit obvious and stuck with the top 5 ;).

Solution

After figuring out some of the subtle differences between commands and the best way to approach this, I was able to come up with a TSQL solution that works.  Take note first of that last where clause in the CTE of the Oracle solution.  That clause is very similar to what I refer to as the train-stop method to get unique paths in a hierarchy.  There are several methods to achieve similar functionality – I have concatenated strings with Stuff as well as casts to produce this functionality.

So here goes with the first rendition of this query.

[codesyntax lang=”tsql”]

[/codesyntax]

Notice that I have chosen to use an Itzik-style numbers table/CTE.  This functions as my “dual” table translation and is necessary in the remainder of the query.  The final where clause of the CTE is simplified in TSQL by simply removing the TRUNC commands.  The original TRUNC calls were merely removing the decimal portion, and in TSQL the conversion to INT is done implicitly in this case.  I need to test a few more cases, but so far it works without error.
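The Itzik-style numbers CTE standing in for dual looks something like this (only nine rows are actually needed here):

```sql
WITH E1(N) AS (
    SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1)) v(N)
),
dual AS (
    SELECT TOP (9) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM E1
)
SELECT n FROM dual;
```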

What this does not do…

This is the first rendition of the script.  Currently, it only returns the number sequence in one big long string.  I am working on modifying this script to produce a grid layout with the solution.  I envision this will require the use of PIVOT and possibly UNPIVOT to get me close.  In addition, I expect that further string manipulation will be needed – such as stuffing commas and then splitting it to make the PIVOT/UNPIVOT easier.  I’ll have to try some things and figure it out.  Also, I expect that some explicit conversions may be best in this query.  That could help improve performance a tad.

This has been a fun diversion so far.  It has helped me learn a bit about Oracle and hierarchies, and to do a little math, all in one.  Better yet, there is still work to be done on it and more learning ahead.  If you have ideas on how to make it better, I am very interested.

Top 5 Oracle Nuances I learned Today

Categories: News, Professional, SSC
Comments: 1 Comment
Published on: August 16, 2011

I don’t do much with Oracle – at all.  Once in a blue moon, I find a little project to do that might involve Oracle.  I have never put a lot of thought to the differences between SQL and Oracle.  On the pet project I am doing right now, I put a little more thought into those differences and finally decided to write a little something about five things I am working with in the Oracle world and how those translate (or at least how I translated them) to the SQL world.

Let’s start with some very similar commands.

  1. substr().  In SQL, this translates to substring.  Easy enough, right?  There is one more difference between the two than just the name.  In substring(), the length parameter is required, while substr() makes it optional and also accepts negative start positions.  Pay careful attention to your parameters when converting this function from Oracle to SQL Server.
  2. instr().  This one is less obvious.  I have used PatIndex() and CharIndex() for this one – it depends on the needed functionality.  If you understand that instr() searches for a value within a string, it makes it a little easier to understand.  Also, knowing that PatIndex() searches for “patterns” and CharIndex() searches for a character string is helpful.  Note that CharIndex() takes the search value first and the string to search second, the reverse of instr().  If you need the optional start-position parameter used by instr(), then you should use CharIndex().  Though not entirely the same, similar functionality is available in SQL for the instr() function.
  3. trunc().  This is a function used in Oracle to convert date and numbers to a shorter format (either different date format or fewer decimal places).  This is achieved through different means in SQL.  Two common methods are cast() and convert().
  4. dual.  This is not a function.  This is an internal table containing a single row.  There are many uses for this internal table.  One common use is equivalent to the Numbers/Tally table in SQL server.  Pick your favorite numbers/tally table method in these types of cases.
  5. connect by.  This is actually a pretty cool piece of functionality unique to Oracle.  I have seen this used in recursive CTEs to help control the hierarchy.  In these cases, it limits the result set to rows meeting the criteria of the connect by statement.  Similar functionality can be achieved through use of Joins and the Where clause.  This is a command that would be really cool in SQL.  It is true that you can build the hierarchy without this command in SQL.  I think it would help make that task easier and give it more flexibility.  It would also make it a little easier to read/understand.
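As a quick cheat sheet for the function items above, these T-SQL calls cover the common cases (literal values chosen just for illustration):

```sql
-- Oracle: substr('abcdef', 2, 3)
SELECT SUBSTRING('abcdef', 2, 3);   -- 'bcd'; length is required in T-SQL

-- Oracle: instr('abcdef', 'cd')    -- note the reversed argument order
SELECT CHARINDEX('cd', 'abcdef');   -- 3

-- Oracle: trunc(12.3456, 2)
SELECT ROUND(12.3456, 2, 1);        -- 12.3400; a nonzero third argument truncates
```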

This is all pretty cool and should be pretty straightforward stuff for most DBAs.  Some day, maybe we’ll explore a post dedicated to connect by and how some features of that command can be translated into SQL.  For now, just know that there is some commonality between the two RDBMSs – just a little translation may be necessary.

Send DBMail

Categories: News, Professional, SSC
Comments: No Comments
Published on: August 15, 2011

With SQL Server 2005, Microsoft improved the methods available for DBAs to send email from SQL Server.  The new method is called Database Mail.  If you want to send emails programmatically, you can now use sp_send_dbmail.  You can read all about that stored procedure here.

What I am really looking to share is more about one of the variables that has been introduced with sp_send_dbmail.  This parameter is @query.  As the online documentation states, you can put a query between single quotes and set the @query parameter equal to that query.  That is very useful.

Why am I bringing this up?  Doing something like this can be very useful for DBAs looking to create cost-effective monitoring solutions that require emailing result sets to themselves.  I ran across one scenario recently where a DBA was looking for help doing this very thing.  In this case, the query was quite simple.  He just wanted to get a list of databases with the size of those databases to be emailed.

Here is a quick and dirty of one method to do such a thing.

[codesyntax lang=”tsql”]

[/codesyntax]
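A quick-and-dirty call along those lines, with the profile name and recipient as placeholders:

```sql
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBMailProfile',    -- assumption: an existing Database Mail profile
    @recipients   = 'dba@example.com',  -- placeholder address
    @subject      = 'Database sizes',
    @query        = 'EXEC sp_databases;';
```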

As I said, this is a real quick and dirty example of how to send an email with query results.  The results of the query in the @query parameter (in this case) will be in the body of the email.  A slightly modified version of that first solution is as follows.

[codesyntax lang=”tsql”]

[/codesyntax]
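The modified version swaps the procedure call for an inline statement.  This sketch uses sys.master_files to express the same idea; the exact query from the post may differ:

```sql
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBMailProfile',    -- assumption: an existing Database Mail profile
    @recipients   = 'dba@example.com',  -- placeholder address
    @subject      = 'Database sizes',
    @query        = N'SELECT DB_NAME(database_id) AS DatabaseName,
                             SUM(size) * 8 / 1024 AS SizeMB
                      FROM sys.master_files
                      GROUP BY database_id;';
```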

This is only slightly modified because I took the guts of sp_databases and dumped them into this query; the only modification is that the remarks column was removed.  Why do this?  Well, to demonstrate two different methods of getting the same data from the @query parameter.  We can either pass a stored procedure to the parameter, or we can build an entire SQL statement and pass that to the parameter.

This is just a simple little tool that can be used by DBAs.  Enjoy!

Summit 2011

Categories: News, Professional, SSC
Comments: No Comments
Published on: August 11, 2011

Call me a slacker.  I have been postponing registering for Summit 2011.  I wanted to be sure that I had the week available in order to attend.

I finally did it!  I registered for Summit 2011.  I was stoked about it too.  Then I got the schedule for my sons’ cross country season and started worrying about missing the region championships.  I took another look at the season schedule, a few times actually, because I didn’t want it to be true.  Region Championships are that very week (Oct 12).  Now starts the major dilemma.  I sure hope he will be running in the championships.

For now, I am registered for the Summit.  Come October, my plans may change depending on my son’s performance and whether or not he will be competing at Region.

So, I hope to be seeing many of you at Summit.  If I am not there – you know why.
