SP Taking 15 Minutes, But the Same Query When Executed Returns Results in 1-2 Minutes

This is the classic footprint of parameter sniffing. See here for another discussion of it: SQL poor stored procedure execution plan performance - parameter sniffing

There are several possible fixes, including adding WITH RECOMPILE to your stored procedure, which works about half the time.

The recommended fix for most situations (though it depends on the structure of your query and sproc) is to NOT use your parameters directly in your queries, but rather store them into local variables and then use those variables in your queries.
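For example, here is a minimal sketch of that masking technique (the procedure and table names are hypothetical):

CREATE PROCEDURE dbo.GetOrdersBySite -- hypothetical procedure
    @FK_SITE int
AS
    -- Copy the parameter into a local variable. The optimizer cannot sniff a
    -- local variable's value at compile time, so it falls back to
    -- average-density estimates instead of a plan tuned to one sniffed value.
    DECLARE @FK_SITE_local int;
    SET @FK_SITE_local = @FK_SITE;

    SELECT *
    FROM dbo.ORDERS -- hypothetical table
    WHERE FK_SITE = @FK_SITE_local;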

SQL: Stored procedure takes 20/25 seconds to run while the exact same select is almost instant running in a query window

Thanks for updating.

You can always try the index suggestions - it should be very simple to benchmark whether they help. My immediate thought is that neither will, but practising how to benchmark in a controlled manner would be beneficial. Reading this question, we can tell that you've already tried numerous different things, and it's not really clear which changes helped the procedure behave more like the standalone query, which changes improved both the stored procedure and the standalone, and which made no difference but were kept anyway.
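For example, one simple way to benchmark each variant in a controlled manner is to capture timing and I/O statistics around the statement under test:

SET STATISTICS TIME ON;
SET STATISTICS IO ON;
-- run the statement or procedure under test here
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;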

You haven't provided the actual execution plan that you want this to behave like, but let's see how much we can improve it.

Your time is mainly going towards the nested loops join into the index seek on USER_ACCESS_MARKETS. This takes so long because it runs 539K times when SQL Server estimated it would only execute 55 times. The bad estimate may have started with the join between ORDERS_WINDOW_CATALOGUES and CATALOGUES: the optimizer believes it will produce 3 rows, but it really returns 119 (a difference that snowballs as more joins are added). There are quite a few filters at play in this join. Two are the main join conditions and look to be the clustered index (which should be okay); then there are the c.IS_HISTORIC <> 1 and c.FK_SITE = @FK_SITE_2 filters, which are probably responsible for the low cardinality estimate. My guess is that c.FK_SITE = @FK_SITE_2 matters most here. I would suggest you look at what this join looks like in your standalone version, where sniffing of the FK_SITE_2 variable will behave differently.

To address this cardinality estimate, you could create a temp table for CATALOGUES, like you've already done with markets_catalogues. This works if there is an efficient way of finding the rows that match these other filters, and it gives SQL Server a chance to gather fresh estimates after populating the temp table - correcting its estimate for the FK_SITE_2 filter.

SELECT * INTO #CATALOGUES_tmp FROM CATALOGUES WHERE FK_SITE = @FK_SITE_2 AND IS_HISTORIC <> 1

Instead of addressing the cardinality estimates (there appear to be several sources of misestimation here, and you would need to address them all), you could just target the nested loops into USER_ACCESS_MARKETS by using a temp table so that it has to be hash joined:

SELECT * INTO #USER_ACCESS_MARKETS_tmp FROM USER_ACCESS_MARKETS WHERE PFK_ENTERPRISE = @PFK_ENTERPRISE_2 AND (PFK_USER = @PK_USER_2 OR @PK_USER_2 IS NULL)

Or hint the join option - tell SQL Server it should just hash join to USER_ACCESS_MARKETS:

inner hash join USER_ACCESS_MARKETS uam on
m.PFK_ENTERPRISE = uam.PFK_ENTERPRISE AND m.PK_MARKET = uam.PFK_MARKET

Those temp tables might cause other cardinality misestimates to start influencing the plan more, so you'll need to re-check the actual execution plan after each modification - see where the time is now going and where the estimates are coming from.

Further optimization could come from noticing that about a second is spent aggregating 540K rows back down to 119 - this suggests that changing the joins to semi-joins, or forcing a DISTINCT earlier to cut down the unnecessary rows coming through, could help. If you could arrange things so that the slow nested loops join to USER_ACCESS_MARKETS runs only a fraction of the time, that would also bring huge benefits (and it could then remain a nested loops join).
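As a rough sketch of the semi-join idea (reusing the aliases and variables from the query below), the joins to USER_ACCESS_MARKETS and USERS could become an EXISTS test, so each market row is checked at most once:

WHERE EXISTS (
    SELECT 1
    FROM USER_ACCESS_MARKETS uam
    INNER JOIN USERS u ON uam.PFK_ENTERPRISE = u.PFK_ENTERPRISE
        AND uam.PFK_USER = u.PK_USER
    WHERE sq.PFK_ENTERPRISE = uam.PFK_ENTERPRISE
        AND sq.PK_MARKET = uam.PFK_MARKET
        AND (uam.PFK_USER = @PK_USER_2 OR @PK_USER_2 IS NULL)
)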

I've moved part of the statement into a subquery and placed a DISTINCT in just before the join to USER_ACCESS_MARKETS. It's hard to predict how much impact this will have, as I don't know anything about the PFK_MARKET column in markets_catalogues (the only additional column you care about from this join), but there's potential:

ALTER PROCEDURE [dbo].[BL_GET_OW_AND_CATALOGUES_BY_SITE_FOR_ACTUAL_POSITION] 
@PFK_ENTERPRISE int,
@FK_SITE int,
@PK_USER int
WITH RECOMPILE
AS
SET ARITHABORT ON;

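-- Mask the parameters as local variables so the incoming values are not
-- sniffed when the statement below is compiled.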
DECLARE @PFK_ENTERPRISE_2 int = @PFK_ENTERPRISE
DECLARE @FK_SITE_2 int = @FK_SITE
DECLARE @PK_USER_2 int = @PK_USER


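-- The DISTINCT in the subquery collapses duplicates before the row stream
-- reaches the expensive join to USER_ACCESS_MARKETS.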
SELECT sq.PK_ORDER_WINDOW, sq.OW_DESCRIPTION
FROM (
SELECT DISTINCT ow.PK_ORDER_WINDOW, ow.OW_DESCRIPTION, m.PFK_ENTERPRISE, m.PK_MARKET
FROM ORDERS_WINDOW ow
inner join ORDERS_WINDOW_CATALOGUES owc on ow.PFK_ENTERPRISE = owc.PFK_ENTERPRISE
and ow.PK_ORDER_WINDOW = owc.PFK_ORDER_WINDOW
inner join CATALOGUES c on owc.PFK_ENTERPRISE = c.PFK_ENTERPRISE
and owc.PFK_CATALOGUE = c.PK_CATALOGUE
inner join (SELECT * FROM markets_catalogues WHERE pfk_enterprise = @PFK_ENTERPRISE_2 and is_Active = 1 and FK_CATALOGUE_SETUP > 0) mc on
owc.PFK_ENTERPRISE = mc.PFK_ENTERPRISE and owc.PFK_CATALOGUE = mc.PFK_CATALOGUE
inner join MARKET m on
m.PFK_ENTERPRISE = mc.PFK_ENTERPRISE and m.PK_MARKET = mc.PFK_MARKET
WHERE (ow.PFK_ENTERPRISE = @PFK_ENTERPRISE_2) AND (ow.FK_ORDER_WINDOW_STATUS IN (1,2,3,5,6))
AND (c.IS_MAIN_CATALOG_RELATED = 1 OR c.FK_CATALOG_RELATED = 0)
AND c.FK_SITE = @FK_SITE_2
AND (c.IS_HISTORIC <> 1)
) sq
inner join USER_ACCESS_MARKETS uam on
sq.PFK_ENTERPRISE = uam.PFK_ENTERPRISE AND sq.PK_MARKET = uam.PFK_MARKET
AND (uam.PFK_USER = @PK_USER_2 OR @PK_USER_2 IS NULL)
inner join USERS u ON
uam.PFK_ENTERPRISE = u.PFK_ENTERPRISE AND uam.PFK_USER = u.PK_USER
group by sq.PK_ORDER_WINDOW, sq.OW_DESCRIPTION
ORDER BY sq.OW_DESCRIPTION ASC

I've also gone back to the version of the query without the temp table for markets_catalogues; you should try it both with and without putting that data into a temp table first.
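For reference, a sketch of that temp table variant, mirroring the inline subquery on markets_catalogues above:

SELECT * INTO #markets_catalogues_tmp
FROM markets_catalogues
WHERE pfk_enterprise = @PFK_ENTERPRISE_2 AND is_Active = 1 AND FK_CATALOGUE_SETUP > 0

You would then join to #markets_catalogues_tmp in place of the subquery.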

I wonder if any of these plans look like what you are getting with the standalone statement.

SQL Server: Query fast, but slow from procedure

I found the problem. Here are the scripts of the slow and fast versions of the stored procedure:

dbo.ViewOpener__RenamedForCruachan__Slow.PRC

SET QUOTED_IDENTIFIER OFF 
GO
SET ANSI_NULLS OFF
GO

CREATE PROCEDURE dbo.ViewOpener_RenamedForCruachan_Slow
@SessionGUID uniqueidentifier
AS

SELECT *
FROM Report_Opener_RenamedForCruachan
WHERE SessionGUID = @SessionGUID
ORDER BY CurrencyTypeOrder, Rank
GO

SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

dbo.ViewOpener__RenamedForCruachan__Fast.PRC

SET QUOTED_IDENTIFIER OFF 
GO
SET ANSI_NULLS ON
GO

CREATE PROCEDURE dbo.ViewOpener_RenamedForCruachan_Fast
@SessionGUID uniqueidentifier
AS

SELECT *
FROM Report_Opener_RenamedForCruachan
WHERE SessionGUID = @SessionGUID
ORDER BY CurrencyTypeOrder, Rank
GO

SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

If you didn't spot the difference, I don't blame you. The difference is not in the stored procedure at all. The difference that turns a fast 0.5-cost query into one that does an eager spool of 6 million rows is:

Slow: SET ANSI_NULLS OFF

Fast: SET ANSI_NULLS ON


This also makes sense, since the view has a join clause that says:

(table.column IS NOT NULL)

So there are NULLs involved.
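A minimal standalone illustration of the semantic difference the setting makes:

SET ANSI_NULLS OFF;
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END; -- returns 'equal'

SET ANSI_NULLS ON;
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END; -- returns 'not equal'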


The explanation is further confirmed by returning to Query Analyzer and running:

SET ANSI_NULLS OFF


DECLARE @SessionGUID uniqueidentifier
SET @SessionGUID = 'BCBA333C-B6A1-4155-9833-C495F22EA908'


SELECT *
FROM Report_Opener_RenamedForCruachan
WHERE SessionGUID = @SessionGUID
ORDER BY CurrencyTypeOrder, Rank

And the query is slow.


So the problem isn't that the query is being run from a stored procedure. The problem is that Enterprise Manager's connection default is ANSI_NULLS OFF, rather than ANSI_NULLS ON, which is Query Analyzer's default.

Microsoft acknowledges this in KB296769 (BUG: Cannot use SQL Enterprise Manager to create stored procedures containing linked server objects). The workaround is to include the SET ANSI_NULLS option in the stored procedure script:

SET ANSI_NULLS ON
GO
CREATE PROC spXXXX AS
....

SQL poor stored procedure execution plan performance - parameter sniffing

Basically yes - parameter sniffing in some patch levels of SQL Server 2005 is badly broken. I have seen plans that effectively never complete (within hours, even on a few-thousand-row data set) yet complete in seconds once the parameters are masked - and that was in cases where the parameter had always been the same value. At the same time I was dealing with this, I also found a lot of problems with LEFT JOIN/NULL checks not completing; I replaced them with NOT IN or NOT EXISTS, which resolved the plan to something that would complete. Again, a (very poor) execution plan issue. At the time, the DBAs would not give me SHOWPLAN access, and since I started masking every SP parameter I've not had any further execution plan issues where I would have to dig into non-completion.

In SQL Server 2008 you can use OPTIMIZE FOR UNKNOWN.
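A minimal sketch of the hint (the query shape here is hypothetical):

SELECT *
FROM dbo.SomeTable -- hypothetical table
WHERE SomeColumn = @SomeParameter
OPTION (OPTIMIZE FOR UNKNOWN);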

Generating the SQL query plan takes 5 minutes, the query itself runs in milliseconds. What's up?

Something will have caused the plan to require recompiling, such as a statistics update or a DDL change.
The list is here: Execution Plan Caching and Reuse

The query in its current form will always take 3-5 minutes to recompile: this can't be avoided.

Assuming you can't change the query itself (remove the PIVOT, use a trigger to maintain a "proper" table, etc.), you can only control when the recompilation happens.

Remus' plan guide answer is one way. I'd also look at my statistics maintenance and ensure it's done overnight, say, so the recompile only happens once at the start of the day.
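For example, a nightly SQL Agent job could refresh statistics so the expensive recompile is paid once, before users arrive (the table name is a hypothetical stand-in):

UPDATE STATISTICS dbo.PivotSourceTable WITH FULLSCAN; -- hypothetical table
-- or, more broadly, refresh everything that needs it:
EXEC sp_updatestats;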

Why is the query taking time when I select one column before *?

As I mention in the comment, the queries are actually quite different because of your ORDER BY clauses and your use of ordinal positions (Bad Habits to Kick : ORDER BY ordinal).

As such, your two queries should really be written like this:

SELECT TOP (1)
EventName,
* --This should also be all your columns in the table, including EventName again(?)
FROM dbo.DDLAudit --WITH (NOLOCK) --Do you have a "good" reason for using NOLOCK?
WHERE EventType = 'ALTER_PROCEDURE'
ORDER BY EventName DESC;

SELECT TOP (1)
* --This should also be all your columns in the table
FROM dbo.DDLAudit --WITH (NOLOCK) --Do you have a "good" reason for using NOLOCK?
WHERE EventType = 'ALTER_PROCEDURE'
ORDER BY {whatever the first column in the table dbo.DDLAudit is} DESC;

Ordering can be an expensive operation, especially if the data engine has no indexes to help it order the data. You state in the comments that the first column in your table DDLAudit is the PRIMARY KEY. You don't state whether it is a CLUSTERED PRIMARY KEY, but if it is, then the RDBMS has the ideal index to sort your data by. Even if it isn't your CLUSTERED INDEX, SQL Server still has something it can use to sort the data, likely with a key lookup.

If we assume that the PK is CLUSTERED, then the data engine's task is simply to scan that index backwards until it finds a row where EventType = 'ALTER_PROCEDURE', at which point it can "short circuit" the query and stop there.

For your other column, EventName, I suspect you don't have an index to help the RDBMS. As a result it needs to find every single row that satisfies EventType = 'ALTER_PROCEDURE', then sort all of that data on EventName in descending order, and finally return the first row.

If you want the first query to be faster, you'll need to ensure that it is adequately indexed (perhaps on EventType and EventName), with the other columns your query returns INCLUDEd so that the index covers it. Though I would also suggest rewriting the query to not use * and actually define the columns you want; you don't need EventName returned twice in your SELECT.
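As a hedged illustration, a covering index for the first query might look like this (the INCLUDEd column names are hypothetical stand-ins for the rest of the table):

CREATE NONCLUSTERED INDEX IX_DDLAudit_EventType_EventName -- hypothetical name
ON dbo.DDLAudit (EventType, EventName)
INCLUDE (PostTime, LoginName); -- hypothetical remaining columns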

SQL Server query runs more time from code than from Management Studio

To solve the crash we increased the timeout in IIS.
@Apostrofix posted this link with an unexpected solution that I wasn't familiar with before.

A quote from this post is:

The recommended fix for most situations (though it depends on the
structure of your query and sproc) is to NOT use your parameters
directly in your queries, but rather store them into local variables
and then use those variables in your queries.

What could be causing slow function import performance?

I've seen people have this issue when SQL Server has a bad cached execution plan. People seem to recommend running the following commands to fix it:

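-- Caution: these flush the entire instance's buffer pool and plan cache,
-- affecting every database on the server, not just the problem query.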
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE

See Stored procedure slow when called from web, fast from Management Studio for a more complete answer.


