Channel: Transact-SQL forum

Recompilation and parameter sniffing


Earlier I posted this question to the wrong forum (Mirroring); I apologize.

In SQL Server, parameter sniffing quite often causes problems. On average my stored procedures have at least 5 parameters.
The first call of an SP stores the execution plan for subsequent calls. If you are lucky, the first call uses typical parameter values and only calls with atypical values get a bad execution plan. But if the first call uses atypical values, then most of the executions run with a bad plan.

One of the biggest performance drops is also caused by sort warnings (the sort spills to disk), which are usually the result of a bad plan (though they can happen for other reasons too).
So, how to deal with it?

1. Use local variables or the OPTIMIZE FOR UNKNOWN hint. This only solves the case where the first call is atypical; calls with atypical values will still get a bad plan (even typical values can get a plan that is not quite as good as with RECOMPILE, but usually the difference is small).
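
A minimal sketch of option 1, assuming a hypothetical dbo.Orders table and procedure names:

```sql
-- Variant A: copy the parameter into a local variable so the optimizer
-- cannot sniff the caller's value and estimates from average density.
CREATE PROCEDURE dbo.GetOrders @CustomerId int
AS
BEGIN
    DECLARE @LocalCustomerId int = @CustomerId;
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
END;
GO

-- Variant B: the same effect via a hint, without the extra variable.
CREATE PROCEDURE dbo.GetOrders2 @CustomerId int
AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR UNKNOWN);
GO
```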

2. Create separate stored procedures for different parameter values. Since my SPs have 5 parameters on average, that would create so many combinations that it is almost impossible to manage. Even with one parameter it is hard to deal with many different parameter values.

3. Recompile the SP every time.
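
Option 3 can be sketched at two granularities (same hypothetical dbo.Orders table):

```sql
-- Whole procedure: the plan is never cached, every call recompiles.
CREATE PROCEDURE dbo.GetOrders @CustomerId int
WITH RECOMPILE
AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
GO

-- Finer-grained: recompile only the statement that suffers from sniffing,
-- leaving the rest of the procedure's plan cached.
CREATE PROCEDURE dbo.GetOrders3 @CustomerId int
AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);
GO
```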

4. Dynamic SQL, which caches a different plan for every parameter combination, but not for different parameter values within the same combination.
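
A sketch of option 4 (table and procedure names are hypothetical): sp_executesql caches one plan per distinct query text, so each combination of supplied parameters gets its own plan.

```sql
-- Build the WHERE clause only from the parameters that were actually
-- supplied; every distinct combination produces distinct query text.
CREATE PROCEDURE dbo.SearchOrders
    @CustomerId int  = NULL,
    @OrderDate  date = NULL
AS
BEGIN
    DECLARE @sql nvarchar(max) =
        N'SELECT OrderId, OrderDate FROM dbo.Orders WHERE 1 = 1';

    IF @CustomerId IS NOT NULL
        SET @sql += N' AND CustomerId = @CustomerId';
    IF @OrderDate IS NOT NULL
        SET @sql += N' AND OrderDate = @OrderDate';

    EXEC sys.sp_executesql @sql,
        N'@CustomerId int, @OrderDate date',
        @CustomerId, @OrderDate;
END;
```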

So I have found that recompiling every time is the best solution. Yes, it is CPU intensive, but still not as intensive as a query with a bad plan, or am I wrong?

Suppose an example:

The SP is executed 100 times per second. The plan is in cache; 90% of executions get the optimal plan and 10% get a bad plan that runs 100 times slower (I assume 100 times slower as an average).
If I use dummy maths:

90 + 10 × 100 = 1090

So it is the same time as 1090 executions of the SP with a good plan in cache.

If the recompilation time is 10 times the SP execution time, then I get:

100 + 100 × 10 = 1100

This is the same time as 1100 executions of the SP with a good plan in cache.

So the recompile time would have to be about 10 times the SP execution time before plan caching starts to pay off in this example.

But in reality the recompile time is often 10 times lower than the SP execution time. I have checked a couple of SPs, but can't find one where the recompile time even equals the SP execution time.
(I'm not sure how to measure recompile time, so I used SET STATISTICS TIME ON and compared executions with and without RECOMPILE.)
I have even found an example where the SP with RECOMPILE is faster than the SP executed from cache. I don't know how that is possible, but cache maintenance obviously takes some resources. However, I tested as a single user; with many users there can be some difference.
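
The comparison I used can be sketched like this (procedure name and parameter are hypothetical):

```sql
SET STATISTICS TIME ON;

EXEC dbo.GetOrders @CustomerId = 42;                 -- uses the cached plan
EXEC dbo.GetOrders @CustomerId = 42 WITH RECOMPILE;  -- forces a fresh compile

SET STATISTICS TIME OFF;
-- The "SQL Server parse and compile time" reported for the second call
-- approximates the recompile cost; the elapsed times of the two calls
-- can then be compared.
```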

Recompilation also takes a schema stability lock (Sch-S), which could be a problem when many users call the same SP at the same time.
By definition, many concurrent Sch-S locks can be held at once, and Sch-S is compatible with every other lock type except Sch-M, which is very rare (taken only when the table is modified). So many users can hold the lock at the same time.
Or is there some different scenario during recompilation where one user must wait for another to release a lock?
Then this could be a bottleneck, and depending on the number of users it might still be faster than caching; maybe then option 4 would be the best.
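
One way to check whether concurrent recompiles are actually waiting on each other is to look at the compile-related wait types accumulated by the server (a rough diagnostic sketch, not a definitive test):

```sql
-- Cumulative waits since the last restart or stats clear.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN (N'RESOURCE_SEMAPHORE_QUERY_COMPILE', N'LCK_M_SCH_S')
ORDER BY wait_time_ms DESC;
```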

I guess the default should be to recompile every time for procedures with parameters, and then deal only with the cases where this is not acceptable and causes problems.
I would like to hear other people's experiences and opinions.

Interesting reading:
Elephant and mouse

Thanks,
Simon

