Effects of Max Mem on Plan Cache

While reading a Microsoft article (which can be found here) to make sure I understood what can cause a query plan to be recompiled or removed from the plan cache, I got to thinking a bit.

The article lists the following as things that can cause a plan to be removed from cache or to be recompiled.

  • Execution plans remain in the procedure cache as long as there is enough memory to store them. When memory pressure exists, the Database Engine uses a cost-based approach to determine which execution plans to remove from the procedure cache. To make a cost-based decision, the Database Engine increases and decreases a current cost variable for each execution plan.
  • Changes made to a table or view referenced by the query (ALTER TABLE and ALTER VIEW).
  • Changes made to a single procedure, which would drop all plans for that procedure from the cache (ALTER PROCEDURE).
  • Changes to any indexes used by the execution plan.
  • Updates on statistics used by the execution plan, generated either explicitly from a statement, such as UPDATE STATISTICS, or generated automatically.
  • Dropping an index used by the execution plan.
  • An explicit call to sp_recompile.
  • Large numbers of changes to keys (generated by INSERT or DELETE statements from other users that modify a table referenced by the query).
  • For tables with triggers, if the number of rows in the inserted or deleted tables grows significantly.
  • Executing a stored procedure using the WITH RECOMPILE option.
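
As a quick illustration of the last two explicit triggers in that list, here is a minimal sketch (the procedure name dbo.SomeProc is hypothetical):

/* Mark all plans referencing the object as invalid; they will be
   recompiled on their next execution (hypothetical object name). */
EXEC sp_recompile N'dbo.SomeProc';
GO

/* Force a fresh compile for this execution only; the cached plan,
   if any, is neither reused nor replaced. */
EXEC dbo.SomeProc WITH RECOMPILE;
GO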

The first item removes a plan from cache, while the rest mark the plan as invalid so that a recompile is forced on the next execution of the SQL statement.  But then the question arises: why make the distinction between the two?  Doesn't removing a plan from cache in essence equate to a recompile further down the road when the query is executed again?  While semantically it is not a recompile, it sure sounds a lot like one.  The query was compiled, invalidated and removed from cache, and then compiled again the next time it was executed.

When thinking about all of this, how can we see that memory pressure can “invalidate” query plans?  That is actually quite simple.  We can see the same net effects with the following experiment.

WARNING: DO NOT DO THIS ON ANY ENVIRONMENT THAT IS NOT A SANDBOX ENVIRONMENT.

In this setup, I have a sandbox server with a simulated load that keeps plenty of plans in cache, and SQL Server is happy with about eight gigabytes of memory allocated to it.  To show that memory pressure can cause a bunch of plans to be invalidated (and removed from the plan cache), I am going to take memory away from SQL Server and, in essence, simulate a memory leak or external memory pressure.
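
The core of the experiment looks something like the following sketch. The memory values here are hypothetical; adjust them to fit your sandbox.

/* count the plans currently in cache */
SELECT COUNT(*) AS PlanCount
FROM sys.dm_exec_cached_plans;
GO

/* take most of the memory away from SQL Server (hypothetical values) */
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 1024;
RECONFIGURE;
GO

/* count the plans in cache again */
SELECT COUNT(*) AS PlanCount
FROM sys.dm_exec_cached_plans;
GO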

This shows that the settings have taken effect on SQL Server, all without a service restart (I keep hearing that myth).

Max Memory in SQL Server
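
To verify the change without relying on a screenshot, the configured and running values can be compared directly; when value and value_in_use match, the setting is active:

/* confirm the setting took effect with no restart */
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';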

Seeing that the memory settings take immediate effect, we can turn our attention to the query that was run.  In the experiment, I took a count of the number of plans in the plan cache first, then changed the max memory, and then took a count of the number of plans in cache again.  This next image shows the results of those two plan count queries.  It is obvious here that the number of plans in cache was severely impacted by taking TOO much memory away from SQL Server (this is why you should only do this on a sandbox server).

Max Mem effect on Plan Cache

If you happen to take too much memory away from SQL Server, you could end up in a severe memory pressure situation where connections are limited.  Even trying to increase max memory again could be severely hampered due to insufficient resources in the default resource pool.  Here is a blog post that demonstrates the effects of this error on connections (even as bad as SQL Server not starting up).

Since the problem is in the default resource pool, you could try a connection to the DAC and modify the max memory there.  If the pressure is severe enough, you may be limited in your ability to connect to the DAC as well.  In that case, you may need to start SQL Server in single user mode as referenced in that prior blog post.  Sometimes, it is enough to reduce the number of connections or the resource requirements to regain access to the server and change the max memory appropriately.  Suffice it to say, do not run this on a production type of box unless you are ready to cause a bit of an outage.  The intent here is solely to show that max memory can have an immediate impact on compiles and plans in cache.
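
As a sketch of that recovery path (the server name and memory value are placeholders), connect over the dedicated admin connection and raise max memory back to a sane value:

/* connect via the DAC first, e.g.: sqlcmd -S YourServer -A, then run: */
EXEC sys.sp_configure N'max server memory (MB)', 8192;
RECONFIGURE;
GO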

10 Comments
  1. Andy Warren says:

    Jason, I’m interested in the difference between removing the plan vs marking for recompile as you discussed. I wonder if marking for recompile is somehow less expensive due to the memory already being allocated? There’s no guarantee the plan would be the exact same size, so that seems unlikely?

    • Jason Brimhall says:

      Interesting question, Andy. That is something I was looking into for a follow-up post. I hope to be able to dive in and show the difference, if there is one.

  2. dave wentzel says:

    https://connect.microsoft.com/SQLServer/feedback/details/783842/sp-recompile-table-does-not-recompile-referencing-stored-procedures

    I opened that connect item a couple years ago because recompiling does not work as MS describes it. I have some documented repro cases where sp_recompile just seems to be ignored. MS commented that sp_recompile really does more of a “lazy” recompile. I dispute that and my repro scripts demonstrate that. Or at least they did a couple years ago.

    Only bringing this up because my application is extremely sensitive to recompiles.

    BTW, great article.

  3. JeremyH says:

    Changing the max server memory (and hardening it using reconfigure) flushes the plan cache, so you aren’t seeing the dramatic drop in cached plans due to having 1/8th of the memory. You would obviously still see some drop in the amount of cached plans, but not the dramatic drop you are representing here. You would see the same type of dramatic drop if you started your test with 1GB of RAM and then increased it to 8GB. It will still clear out the plan cache.

    • Jason Brimhall says:

      Thanks, and that is good info. I had never observed the plan cache flush on an increase in memory. I just ran a bunch of tests and did see a complete plan cache flush when upping the max memory by 8MB from the 9GB in use (just a small increase). And as soon as reconfigure was run, it did indeed flush the plan cache. Thanks for the info. But it is the reconfigure combined with the max memory setting. A reconfigure after most attribute changes (or even when no attribute has changed) will not trigger a plan cache flush.
      Try this and then run reconfigure; a plan cache flush will not occur: exec sp_configure 'remote admin connections', 1

  4. JeremyH says:

    That’s correct, and I thought I stated that clearly. I don’t see where I mentioned that the reconfigure statement flushes the cache… There is an msdn blog post that consolidates the list of instance-level changes that will flush the cache, but upon some brief googling, I was not able to come up with it again. If I have some more time later today, I will try to find it and post it here.

    • Jason Brimhall says:

      My point is simply that running reconfigure is what actually causes the plan flush. Changing max server memory by itself does not flush the cache. You can certainly change that setting and not run reconfigure, leaving it until the next server restart, thus avoiding a plan flush.
      Hardening the setting (via reconfigure) is required for the flush to occur. If you run a couple of tests you will see what I mean. Here is a sample test.
      SELECT COUNT(*) AS PlanCount
      FROM sys.dm_exec_cached_plans;
      GO

      /* increase memory by a small amount */
      EXEC sys.sp_configure N'max server memory (MB)', N'9200';
      GO

      /* get a plan count; cache is not flushed */
      SELECT COUNT(*) AS PlanCount
      FROM sys.dm_exec_cached_plans;
      GO

  5. JeremyE says:

    The Remarks section of the DBCC FREEPROCCACHE documentation is where you will find the list of reconfigure operations that will clear the plan cache.

    http://msdn.microsoft.com/en-us/library/ms174283.aspx
