Table Valued Function Where Did My Query Plan Go

Multi-statement table-valued functions (TVFs) are black boxes to the optimiser of the outer query. You can only see their IO, CPU, etc. from Profiler.

The TVF must run to completion and return all of its rows before any outer processing happens. That means, for example, that a WHERE clause on the outer query will not be pushed down into the function.

So if this TVF returns a million rows, all of them have to be sorted before the TOP 1 can be returned:

SELECT TOP 1 x FROM db.MyTVF() ORDER BY x DESC

Single-statement (inline) TVFs do not suffer from this because they are expanded like macros into the outer query. The example above could then use an index on x, for instance.
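To make the contrast concrete, here is a hedged sketch (table and function names are made up) of the same logic written both ways; only the inline form lets the optimiser satisfy TOP 1 ... ORDER BY x DESC with an index on x:

-- Multi-statement TVF: the table variable is fully populated before the outer
-- query sees a single row, so the whole result is sorted just to take TOP 1.
CREATE FUNCTION dbo.MyTVF_Multi ()
RETURNS @result TABLE (x int NOT NULL)
AS
BEGIN
    INSERT @result (x)
    SELECT x FROM dbo.SomeTable;    -- placeholder source table
    RETURN;
END;
GO

-- Inline TVF: expanded like a macro into the outer query, so an index on
-- dbo.SomeTable(x) can be read to return just the single top row.
CREATE FUNCTION dbo.MyTVF_Inline ()
RETURNS TABLE
AS RETURN
(
    SELECT x FROM dbo.SomeTable
);
GO

-- SELECT TOP 1 x FROM dbo.MyTVF_Inline() ORDER BY x DESC;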

See also: Does query plan optimizer works well with joined/filtered table-valued functions? and Relative Efficiency of JOIN vs APPLY in Microsoft SQL Server 2008.

To answer exactly: no, no, and no

I have very few multi-statement TVFs; where I do, they take plenty of parameters so that the filtering happens inside the UDF and only a small number of rows comes back.

Table Valued Function Killing My Query Performance

Returning a table variable makes it a multi-statement table-valued function, which can be bad for performance: it is treated like a table, but there are no statistics available for SQL Server to base a good execution plan on, so it estimates that the function returns a very small number of rows. If it actually returns a larger number of rows, the generated plan can be far from optimal.

Returning just a SELECT, on the other hand, makes it an inline table-valued function - think of it more like a view. In that case the underlying tables are brought into the main query and a better execution plan can be generated from proper statistics. You'll notice that the execution plan then does not mention the function at all: it has simply been merged into the main query.

There's a great reference on this on MSDN from the CSS SQL Server Engineers, which includes (quote):

But if you use multi-statement TVF, it’s treated as just like another table. Because there is no statistics available, SQL Server has to make some assumptions and in general provide low estimate. If your TVF returns only a few rows, it will be fine. But if you intend to populate the TVF with thousands of rows and if this TVF is joined with other tables, inefficient plan can result from low cardinality estimate.
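A quick way to see the estimate the quote describes (the function name below is a placeholder): run a query against a multi-statement TVF with the actual execution plan enabled and compare estimated versus actual rows on the Table Valued Function operator. The fixed guess is typically 1 row under the legacy cardinality estimator and 100 rows under the newer one.

-- dbo.MyTVF_Multi is a placeholder multi-statement TVF.
-- The Table Valued Function operator shows the fixed estimate no matter how
-- many rows it actually returns, which can drive a poor join strategy here.
SELECT t.x
FROM dbo.MyTVF_Multi() AS t
JOIN dbo.OtherTable AS o ON o.x = t.x;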

SQL Server 2005 Table-valued Function weird performance

From these query plans it looks like you could benefit from an index like this (if I inferred your DB schema right):

CREATE INDEX IX_call_company_date ON call (company_id, date)

In general this looks like a standard query optimization problem; the table-valued functions aren't actually making a difference here.

Performance of SELECT * in an Inline Table-Valued Function

The i in iTVF stands for inline - as you know. That means the engine will try to find the best execution plan as if the statement had been written directly into the query.

From this point of view there should be no difference whether you use

SELECT * FROM YourView WHERE idCity=@idCity

or

SELECT * FROM YourITVF(@idCity)

The engine should be clever enough to process only the columns that are actually needed, but - in general - it is better to use a fixed list of columns. (See the link in @a_horse_with_no_name's comment.)

Hint: when you wrap a view with SELECT * FROM ... (as you intend to), keep in mind that you have to refresh/recompile the iTVF if you alter the view.
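If the iTVF is not schema-bound, one way to refresh its metadata after an ALTER VIEW is sp_refreshsqlmodule (the function name below is a placeholder):

-- Refresh the stored metadata of the wrapping iTVF so SELECT * picks up the
-- view's new column list.
EXEC sys.sp_refreshsqlmodule @name = N'dbo.YourITVF';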

The problem might be that the engine has trouble resolving deeply nested structures and ends up not finding the best plan (it might not even see that an expensive computed column is not needed in the final result).

If your view is built on sub-views, and those sub-views are built from sub-sub-views, other iTVFs and so on, this will lead to sub-optimal plans.

A few days ago I had to tune a slow view which turned out to have 9(!) call levels - views in views in views - plus a lot of computed columns and so on. The engine was no longer able to see through that jungle.

So in short:

  • Try not to nest too deeply.
  • An iTVF can lead to more readable code (less repetition, meaningful names).
  • An iTVF can lead to better performance (it is expanded inline with a fixed set of parameters, but be aware of parameter sniffing).
  • I would not use an iTVF just to pass in a simple filter variable...
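As a minimal sketch of the points above (all names are placeholders), an iTVF wrapping a view with an explicit column list rather than SELECT *:

CREATE OR ALTER FUNCTION dbo.GetCityOrders (@idCity int)
RETURNS TABLE
AS RETURN
(
    -- Fixed column list instead of SELECT *: the plan only has to carry the
    -- columns that are actually needed, and altering the view cannot silently
    -- change the function's output shape.
    SELECT v.OrderID, v.OrderDate, v.Amount
    FROM dbo.YourView AS v
    WHERE v.idCity = @idCity
);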

Does query plan optimizer works well with joined/filtered table-valued functions?

In this case, it's an "inline table-valued function".
The optimiser simply expands (unnests) it into the outer query where useful, just as it would a view.

If the function is treated as a "black box" by the outer query, the quickest way to check is to compare the IO shown in SSMS with the IO shown in Profiler.
Profiler captures the "black box" IO that SSMS does not.
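For example (placeholder function name), the SSMS side of the comparison can be captured like this and set against the Reads column of the matching SQL:BatchCompleted event in Profiler:

SET STATISTICS IO ON;
SELECT * FROM dbo.MyMultiStatementTVF(1);
SET STATISTICS IO OFF;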

Blog post by Adam Machanic (his book is in my drawer at work).

Improve Table Value Function Performance

OK, looking at the execution plan you've provided, you're doing Key Lookups, which means you can probably tune an index to reduce the execution time here.

At the moment you're using the index Relationship_82_FK on the table InvoiceReceipt to return the two values ReceiptRecID and StockHoldingInvoiceID. Because this index doesn't contain all the information you need, SQL Server has to go back and look up the additional columns in your clustered index. Try altering the index Relationship_82_FK to have ReceiptAccHolder and ReceiptPrice as included columns.
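A hedged sketch of that index change (the key column shown is a placeholder; keep whatever key columns Relationship_82_FK already has):

-- Recreate the existing index with the extra columns INCLUDEd so the
-- Key Lookup against the clustered index is no longer needed.
CREATE NONCLUSTERED INDEX Relationship_82_FK
    ON dbo.InvoiceReceipt (StockHoldingInvoiceID)           -- existing key column(s), assumed
    INCLUDE (ReceiptRecID, ReceiptAccHolder, ReceiptPrice)  -- covering columns
    WITH (DROP_EXISTING = ON);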

Bad query plan with query using CTE and scalar functions

My guess is the performance problems are caused by the liberal use of scalar functions, which are routinely terrible from a performance perspective. Converting them to inline table-valued functions would be my first step. Make sure they are NOT multi-statement table-valued functions, though, or you might actually end up going backwards on performance. If a function is fairly simple you could just inline the logic directly, which is even better.
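As a hedged sketch of that conversion (the date-key calculation is an assumption, based on the function named in the edit below), the same logic as a scalar UDF and as an inline TVF consumed with CROSS APPLY:

-- Scalar UDF: invoked row by row and largely opaque to the optimiser
-- (unless scalar UDF inlining applies on SQL Server 2019+).
CREATE OR ALTER FUNCTION dbo.DateToDateKey (@d date)
RETURNS int
AS
BEGIN
    RETURN CONVERT(int, CONVERT(char(8), @d, 112));   -- yyyymmdd as an int key
END;
GO

-- Inline TVF: the expression is expanded into the calling query's plan.
CREATE OR ALTER FUNCTION dbo.DateToDateKeyTVF (@d date)
RETURNS TABLE
AS RETURN (SELECT CONVERT(int, CONVERT(char(8), @d, 112)) AS DateKey);
GO

-- Consumed with CROSS APPLY instead of the scalar call:
-- SELECT b.*
-- FROM dbo.BigTable AS b
-- CROSS APPLY dbo.DateToDateKeyTVF(b.SomeDate) AS dk
-- WHERE dk.DateKey = 20240101;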

EDIT

Using a scalar function in the join caused the plan to change from a more efficient one to one that joins without a predicate and filters afterwards. PeriodDateKey is produced in the CTE by the function DateToDatekey, and that is what is joined to BigTable. Replacing the function call with a functionally equivalent expression changed the plan and produced a big improvement.

Moving the function call outside of the CROSS APPLY produced no improvement, and while removing all references to scalar functions did yield a gain, it was not as significant as the one realized by the single change to PeriodDateKey.

Wrapper TSQL Table-valued Function is slow

Do it as an inline TVF, which is much, much faster:

CREATE FUNCTION [console].[getCalculosRequisita]
(
    @Disponivel BIGINT,
    @mediaDiaria FLOAT,
    @DiasStockArtigo INT,
    @DiasAntes INT,
    @SaidasPorMes FLOAT,
    @QtdEncomendada2Meses BIGINT,
    @StockAtual BIGINT,
    @QtdRequisitada BIGINT,
    @caixaMinima INT
)
RETURNS TABLE -- WITH SCHEMABINDING -- preferable, but then you can't change the underlying function
-- Note: an inline TVF cannot declare an explicit return column list;
-- the returned columns are defined by the SELECT below.
AS RETURN
(
    SELECT DiasAteRotura, AcaoRequisita, Aconselhada
    FROM [cartridge].[getCalculosRequisitaTSQL]
    (
        @Disponivel,
        @mediaDiaria,
        @DiasStockArtigo,
        @DiasAntes,
        @SaidasPorMes,
        @QtdEncomendada2Meses,
        @StockAtual,
        @QtdRequisitada,
        @caixaMinima
    ) AS t
);
GO

Obviously, if you do this then you cannot do any other inserts inside the function. In any case logging would be impossible, so I'm not sure what you were planning on doing there.

You have not given the code for the underlying function. Perhaps that can be done as an iTVF also.


