Shredding XML from Execution Plans

A very straightforward approach is the following (where @x holds your XML execution plan):

DECLARE @x XML=
N'<root>
<ElementE1 AttributA1="A1-text belongs to E1[1]" OneMore="xyz">E1-Text 2</ElementE1>
<ElementE1 AttributA1="A1-text belongs to E1[2]">E1-Text 2</ElementE1>
<ElementParent>
<subElement test="sub"/>
Free text
</ElementParent>
</root>';

DECLARE @idoc INT;
EXEC sp_xml_preparedocument @idoc OUTPUT, @x;
SELECT * FROM OPENXML (@idoc, '*');
EXEC sp_xml_removedocument @idoc;

The result (not all columns shown):

+----+----------+----------+--------------+------+--------------------------+
| id | parentid | nodetype | localname | prev | text |
+----+----------+----------+--------------+------+--------------------------+
| 0 | NULL | 1 | root | NULL | NULL |
+----+----------+----------+--------------+------+--------------------------+
| 2 | 0 | 1 | ElementE1 | NULL | NULL |
+----+----------+----------+--------------+------+--------------------------+
| 3 | 2 | 2 | AttributA1 | NULL | NULL |
+----+----------+----------+--------------+------+--------------------------+
| 13 | 3 | 3 | #text | NULL | A1-text belongs to E1[1] |
+----+----------+----------+--------------+------+--------------------------+
| 4 | 2 | 2 | OneMore | NULL | NULL |
+----+----------+----------+--------------+------+--------------------------+
| 14 | 4 | 3 | #text | NULL | xyz |
+----+----------+----------+--------------+------+--------------------------+
| 5 | 2 | 3 | #text | NULL | E1-Text 2 |
+----+----------+----------+--------------+------+--------------------------+
| 6 | 0 | 1 | ElementE1 | 2 | NULL |
+----+----------+----------+--------------+------+--------------------------+
| 7 | 6 | 2 | AttributA1 | NULL | NULL |
+----+----------+----------+--------------+------+--------------------------+
| 15 | 7 | 3 | #text | NULL | A1-text belongs to E1[2] |
+----+----------+----------+--------------+------+--------------------------+
| 8 | 6 | 3 | #text | NULL | E1-Text 2 |
+----+----------+----------+--------------+------+--------------------------+
| 9 | 0 | 1 | ElementParent| 6 | NULL |
+----+----------+----------+--------------+------+--------------------------+
| 10 | 9 | 1 | subElement | NULL | NULL |
+----+----------+----------+--------------+------+--------------------------+
| 11 | 10 | 2 | test | NULL | NULL |
+----+----------+----------+--------------+------+--------------------------+
| 16 | 11 | 3 | #text | NULL | sub |
+----+----------+----------+--------------+------+--------------------------+
| 12 | 9 | 3 | #text | 10 | Free text |
+----+----------+----------+--------------+------+--------------------------+

The id column clearly shows that the algorithm is breadth-first; there is no id=1 (for whatever reason), and nodetype allows you to distinguish between elements (1), attributes (2) and (floating) text (3). The prev column points to the preceding sibling up the chain. The missing columns are related to namespaces...

The FROM OPENXML approach is outdated, but this is one of the rare situations where it might still be very useful...

You get a list with IDs and parent IDs that you can query with a recursive CTE... It depends on what you want to do with this afterwards...
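For instance, the id/parentid pairs can be walked top-down to indent each node by its depth. A minimal sketch, reusing @x from the example above; the temp table #nodes is an illustrative name for the materialized OPENXML output:

```sql
-- Materialize the OPENXML node table first, then walk it recursively.
DECLARE @idoc INT;
EXEC sp_xml_preparedocument @idoc OUTPUT, @x;

SELECT id, parentid, nodetype, localname, [text]
INTO #nodes
FROM OPENXML(@idoc, '*');

EXEC sp_xml_removedocument @idoc;

WITH Tree AS
(
    -- anchor: the document root has no parent
    SELECT id, parentid, localname, [text], 0 AS depth
    FROM #nodes
    WHERE parentid IS NULL

    UNION ALL

    -- recursion: attach each node to its parent
    SELECT n.id, n.parentid, n.localname, n.[text], t.depth + 1
    FROM #nodes AS n
    INNER JOIN Tree AS t ON n.parentid = t.id
)
SELECT REPLICATE('  ', depth) + localname AS indented_name, [text]
FROM Tree;
```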

Tools for visualising XML execution plans as HTML

I couldn't find one so I made one myself

https://github.com/JustinPealing/html-query-plan

Screenshot

It's currently being used on the Stack Exchange Data Explorer, Paste the Plan and Azure Data Studio.

The Best Way to shred XML data into SQL Server database columns

I stumbled across this question whilst having a very similar problem; I'd been running a query processing a 7.5 MB XML file (approx. 10,000 nodes) for around 3.5-4 hours before finally giving up.

However, after a little more research I found that, having typed the XML using a schema and created an XML index (I'd bulk-inserted the files into a table), the same query completed in ~0.04 ms.

How's that for a performance improvement!

Code to create a schema:

IF EXISTS ( SELECT * FROM sys.xml_schema_collections where [name] = 'MyXmlSchema')
DROP XML SCHEMA COLLECTION [MyXmlSchema]
GO

DECLARE @MySchema XML
SET @MySchema =
(
SELECT * FROM OPENROWSET
(
BULK 'C:\Path\To\Schema\MySchema.xsd', SINGLE_CLOB
) AS xmlData
)

CREATE XML SCHEMA COLLECTION [MyXmlSchema] AS @MySchema
GO

Code to create the table with a typed XML column:

CREATE TABLE [dbo].[XmlFiles] (
[Id] [uniqueidentifier] NOT NULL,

-- Data from CV element
[Data] xml(CONTENT dbo.[MyXmlSchema]) NOT NULL,

CONSTRAINT [PK_XmlFiles] PRIMARY KEY NONCLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

Code to create Index

CREATE PRIMARY XML INDEX PXML_Data
ON [dbo].[XmlFiles] (Data)

There are a few things to bear in mind though. SQL Server's implementation of XML schemas doesn't support xsd:include. This means that if you have a schema which references other schemas, you'll have to copy all of them into a single schema and add that.

Also, I would get this error:

XQuery [dbo.XmlFiles.Data.value()]: Cannot implicitly atomize or apply 'fn:data()' to complex content elements, found type 'xs:anyType' within inferred type 'element({http://www.mynamespace.fake/schemas}:SequenceNumber,xs:anyType) ?'.

if I tried to navigate above the node I had selected with the nodes function. E.g.

SELECT
    C.value('CVElementId[1]', 'INT') AS [CVElementId],
    C.value('../SequenceNumber[1]', 'INT') AS [Level]
FROM
    [dbo].[XmlFiles]
CROSS APPLY
    [Data].nodes('/CVSet/Level/CVElement') AS T(C)

I found that the best way to handle this was to use OUTER APPLY to, in effect, perform an "outer join" on the XML.

SELECT
    C.value('CVElementId[1]', 'INT') AS [CVElementId],
    B.value('SequenceNumber[1]', 'INT') AS [Level]
FROM
    [dbo].[XmlFiles]
CROSS APPLY
    [Data].nodes('/CVSet/Level') AS T(B)
OUTER APPLY
    B.nodes('CVElement') AS S(C)

Hope that helps someone, as that's pretty much been my day.

Xml Shredding .Value Singleton

The call to .value() will only return a single value by definition.

If you need to enumerate a list of nodes, use .nodes() - that's what it's there for.

But please avoid using .nodes('*') - that's just a killer for performance; be as specific as possible with the XPath in your .nodes() call.
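For example (a sketch with made-up data and element names), prefer a specific path over a wildcard:

```sql
DECLARE @x XML = N'<root><item id="1"/><item id="2"/></root>';

-- Avoid: a wildcard path forces the engine to consider every node
-- SELECT n.value('@id', 'int') FROM @x.nodes('//*') AS T(n);

-- Prefer: a specific path lets the XML reader filter early
SELECT n.value('@id', 'int') AS id
FROM @x.nodes('/root/item') AS T(n);
```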

SQL Server XML performance optimization

I can give you one answer and one guess:

First I use a declared table variable to mock up your scenario:

DECLARE @tbl TABLE(s NVARCHAR(MAX));
INSERT INTO @tbl VALUES
(N'<root>
<SomeElement>This is first text of element1
<InnerElement>This is text of inner element1</InnerElement>
This is second text of element1
</SomeElement>
<SomeElement>This is first text of element2
<InnerElement>This is text of inner element2</InnerElement>
This is second text of element2
</SomeElement>
</root>')
,(N'<root>
<SomeElement>This is first text of elementA
<InnerElement>This is text of inner elementA</InnerElement>
This is second text of elementA
</SomeElement>
<SomeElement>This is first text of elementB
<InnerElement>This is text of inner elementB</InnerElement>
This is second text of elementB
</SomeElement>
</root>');

--This query will read the XML with a cast out of a sub-select. You might use a CTE instead, but this should be syntactic sugar only...

SELECT se.value(N'(.)[1]','nvarchar(max)') SomeElementsContent
,se.value(N'(InnerElement)[1]','nvarchar(max)') InnerElementsContent
,se.value(N'(./text())[1]','nvarchar(max)') ElementsFirstText
,se.value(N'(./text())[2]','nvarchar(max)') ElementsSecondText
FROM (SELECT CAST(s AS XML) FROM @tbl) AS tbl(TheXml)
CROSS APPLY TheXml.nodes(N'/root/SomeElement') AS A(se);

--The second part uses a table to write in the typed XML and read from there:

DECLARE @tbl2 TABLE(x XML)
INSERT INTO @tbl2
SELECT CAST(s AS XML) FROM @tbl;

SELECT se.value(N'(.)[1]','nvarchar(max)') SomeElementsContent
,se.value(N'(InnerElement)[1]','nvarchar(max)') InnerElementsContent
,se.value(N'(./text())[1]','nvarchar(max)') ElementsFirstText
,se.value(N'(./text())[2]','nvarchar(max)') ElementsSecondText
FROM @tbl2 t2
CROSS APPLY t2.x.nodes(N'/root/SomeElement') AS A(se);

Why is /text() faster than without /text()?

If you look at my example, the content of an element is everything from the opening tag down to the closing tag. The text() of an element is the floating text between these tags. You can see this in the results of the select above. The text() is actually stored as a separate portion in the tree structure (see the next section). Fetching it is a one-step action. Otherwise, a complex structure has to be analysed to find everything between the opening tag and its corresponding closing tag - even if there is nothing else than the text().

Why should I store XML in the appropriate type?

XML is not just text with some silly extra characters! It is a document with a complex structure. The XML is not stored as the text you see; it is stored in a tree structure. Whenever you cast a string that represents XML into real XML, this very expensive work must be done. And when the XML is presented to you (or to any other output), the representing string is (re)built from scratch.

Why is the pre-casted approach faster?

This is guessing...

In my example both approaches are quite equal and lead to (almost) the same execution plan.

SQL Server will not work everything down the way you might expect. This is not a procedural system where you state "do this, then do this, and after that do this!". You tell the engine what you want, and the engine decides how to do it best. And the engine is pretty good at this!

Before execution starts, the engine tries to estimate the costs of the various approaches. CONVERT (or CAST) is a rather cheap operation. It could be that the engine decides to work down the list of your calls and do the cast for each single need over and over, because it thinks that this is cheaper than the expensive creation of a derived table...

XQuery Plan Complexity

There are some mysteries in the query plan that need to be sorted out first: what does the Compute Scalar do, and why is there a Stream Aggregate?

The table valued function returns a node table of the shredded XML, one row for each shredded node. When you use typed XML, those columns are value, lvalue, lvaluebin and tid. Those columns are used in the compute scalar to calculate the actual value. The code in there looks a bit strange, and I can't say that I understand why it is as it is, but the gist of it is that the function xsd_cast_to_maybe_large returns the value, and there is code that handles the case where the value is 128 bytes or longer.

CASE WHEN datalength(
CONVERT_IMPLICIT(sql_variant,
CONVERT_IMPLICIT(nvarchar(64),
xsd_cast_to_maybe_large(XML Reader with XPath filter.[value],
XML Reader with XPath filter.[lvalue],
XML Reader with XPath filter.[lvaluebin],
XML Reader with XPath filter.[tid],(15),(5),(0)),0),0))>=(128)
THEN CONVERT_IMPLICIT(int,CASE WHEN datalength(xsd_cast_to_maybe_large(XML Reader with XPath filter.[value],
XML Reader with XPath filter.[lvalue],
XML Reader with XPath filter.[lvaluebin],
XML Reader with XPath filter.[tid],(15),(5),(0)))<(128)
THEN NULL
ELSE xsd_cast_to_maybe_large(XML Reader with XPath filter.[value],
XML Reader with XPath filter.[lvalue],
XML Reader with XPath filter.[lvaluebin],
XML Reader with XPath filter.[tid],(15),(5),(0))
END,0)
ELSE CONVERT_IMPLICIT(int,CONVERT_IMPLICIT(sql_variant,
CONVERT_IMPLICIT(nvarchar(64),
xsd_cast_to_maybe_large(XML Reader with XPath filter.[value],
XML Reader with XPath filter.[lvalue],
XML Reader with XPath filter.[lvaluebin],
XML Reader with XPath filter.[tid],(15),(5),(0)),0),0),0)
END

The same compute scalar for untyped XML is much simpler and actually understandable.

CASE WHEN datalength(XML Reader with XPath filter.[value])>=(128) 
THEN CONVERT_IMPLICIT(int,XML Reader with XPath filter.[lvalue],0)
ELSE CONVERT_IMPLICIT(int,XML Reader with XPath filter.[value],0)
END

If there are more than 128 bytes in value, fetch from lvalue; otherwise fetch from value. In the case of untyped XML, the returned node table outputs only the columns id, value and lvalue.

When you use typed XML the storage of the node values are optimized based on the datatype specified in the schema. Looks like it could either end up in value, lvalue or lvaluebin in the node table depending on what type of value it is and xsd_cast_to_maybe_large is there to help sort things out.

The stream aggregate does a min() over the values returned from the compute scalar. We know, and SQL Server (at least sometimes) knows, that only one row will ever be returned from the table valued function when you specify a singleton XPath in the value() function. The parser makes sure that we build the XPath correctly, but when the query optimizer looks at the estimated rows it sees 200 rows. The base estimate for the table valued function that parses XML is 10,000 rows, and then some adjustments are made based on the XPath used. In this case it ends up with 200 rows where there is only one. Pure speculation on my part, but the stream aggregate appears to be there to take care of this discrepancy. It will never aggregate anything, only pass through the one row that is returned, but it does affect the cardinality estimate for the entire branch and makes sure the optimizer uses an estimate of 1 row for that branch. That is of course really important when the optimizer chooses join strategies etc.

So what about 100 attributes? Yes, there will be 100 branches if you use the value() function 100 times. But there are some optimizations to be had here. I created a test rig to see which shape and form of the query would be fastest using 100 attributes over 10 rows.

The winner was to use untyped XML and not to use the nodes() function to shred on r.

select X.value('(/r/@a1)[1]', 'int') as a1,
X.value('(/r/@a2)[1]', 'int') as a2,
X.value('(/r/@a3)[1]', 'int') as a3
from @T

There is also a way to avoid the 100 branches by using pivot, but depending on what your actual query looks like it might not be possible. The data type coming out of the pivot must be the same. You could of course extract the values as strings and convert them to the appropriate types in the column list. It also requires that your table has a primary/unique key.

select a1, a2, a3
from (
select T.ID, -- primary key of @T
A.X.value('local-name(.)', 'nvarchar(50)') as Name,
A.X.value('.', 'int') as Value
from @T as T
cross apply T.X.nodes('/r/@*') as A(X)
) as T
pivot(min(T.Value) for Name in (a1, a2, a3)) as P

Query plan for pivot query, 10 rows 100 attributes:

Sample Image

Below is the results and the test rig I used. I tested with 100 attributes and 10 rows and all int attributes.

Result:

Test                                                Duration (ms)
-------------------------------------------------- -------------
untyped XML value('/r[1]/@a')                                195
untyped XML value('(/r/@a)[1]')                              108
untyped XML value('@a') cross apply nodes('/r')              131
untyped XML value('@a') cross apply nodes('/r[1]')           127
typed XML value('/r/@a')                                     185
typed XML value('(/r/@a)[1]')                                148
typed XML value('@a') cross apply nodes('/r')                176
untyped XML pivot                                             34
typed XML pivot                                               52

Code:

drop type dbo.TRABType
drop type dbo.TType;
drop xml schema collection dbo.RAB;

go

declare @NumAtt int = 100;
declare @Attribs nvarchar(max);

with xmlnamespaces('http://www.w3.org/2001/XMLSchema' as xsd)
select @Attribs = (
select top(@NumAtt) 'a'+cast(row_number() over(order by 1/0) as varchar(11)) as '@name',
'sqltypes:int' as '@type',
'required' as '@use'
from sys.columns
for xml path('xsd:attribute')
)
--CREATE XML SCHEMA COLLECTION RAB AS

declare @Schema nvarchar(max) =
'
<xsd:schema xmlns:schema="urn:schemas-microsoft-com:sql:SqlRowSet1" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:sqltypes="http://schemas.microsoft.com/sqlserver/2004/sqltypes" elementFormDefault="qualified">
<xsd:import namespace="http://schemas.microsoft.com/sqlserver/2004/sqltypes" schemaLocation="http://schemas.microsoft.com/sqlserver/2004/sqltypes/sqltypes.xsd" />
<xsd:element name="r" type="r"/>
<xsd:complexType name="r">[ATTRIBS]</xsd:complexType>
</xsd:schema>';

set @Schema = replace(@Schema, '[ATTRIBS]', @Attribs)

create xml schema collection RAB as @Schema

go

create type dbo.TType as table
(
ID int identity primary key,
X xml not null
);

go

create type dbo.TRABType as table
(
ID int identity primary key,
X xml(document rab) not null
);

go

declare @NumAtt int = 100;
declare @NumRows int = 10;

declare @X nvarchar(max);
declare @C nvarchar(max);
declare @M nvarchar(max);

declare @S1 nvarchar(max);
declare @S2 nvarchar(max);
declare @S3 nvarchar(max);
declare @S4 nvarchar(max);
declare @S5 nvarchar(max);
declare @S6 nvarchar(max);
declare @S7 nvarchar(max);
declare @S8 nvarchar(max);
declare @S9 nvarchar(max);

set @X = N'<r '+
(
select top(@NumAtt) 'a'+cast(row_number() over(order by 1/0) as varchar(11))+'="'+cast(row_number() over(order by 1/0) as varchar(11))+'" '
from sys.columns
for xml path('')
)+
'/>';

set @C =
stuff((
select top(@NumAtt) ',a'+cast(row_number() over(order by 1/0) as varchar(11))
from sys.columns
for xml path('')
), 1, 1, '')

set @M =
stuff((
select top(@NumAtt) ',MAX(CASE WHEN name = ''a'+cast(row_number() over(order by 1/0) as varchar(11))+''' THEN val END)'
from sys.columns
for xml path('')
), 1, 1, '')

declare @T dbo.TType;
insert into @T(X)
select top(@NumRows) @X
from sys.columns;

declare @TRAB dbo.TRABType;
insert into @TRAB(X)
select top(@NumRows) @X
from sys.columns;

-- value('/r[1]/@a')
set @S1 = N'
select T.ID'+
(
select top(@NumAtt) ', T.X.value(''/r[1]/@a'+cast(row_number() over(order by 1/0) as varchar(11))+''', ''int'')'
from sys.columns
for xml path('')
)+
' from @T as T
option (maxdop 1)';

-- value('(/r/@a)[1]')
set @S2 = N'
select T.ID'+
(
select top(@NumAtt) ', T.X.value(''(/r/@a'+cast(row_number() over(order by 1/0) as varchar(11))+')[1]'', ''int'')'
from sys.columns
for xml path('')
)+
' from @T as T
option (maxdop 1)';

-- value('@a') cross apply nodes('/r')
set @S3 = N'
select T.ID'+
(
select top(@NumAtt) ', T2.X.value(''@a'+cast(row_number() over(order by 1/0) as varchar(11))+''', ''int'')'
from sys.columns
for xml path('')
)+
' from @T as T
cross apply T.X.nodes(''/r'') as T2(X)
option (maxdop 1)';

-- value('@a') cross apply nodes('/r[1]')
set @S4 = N'
select T.ID'+
(
select top(@NumAtt) ', T2.X.value(''@a'+cast(row_number() over(order by 1/0) as varchar(11))+''', ''int'')'
from sys.columns
for xml path('')
)+
' from @T as T
cross apply T.X.nodes(''/r[1]'') as T2(X)
option (maxdop 1)';

-- value('/r/@a') typed XML
set @S5 = N'
select T.ID'+
(
select top(@NumAtt) ', T.X.value(''/r/@a'+cast(row_number() over(order by 1/0) as varchar(11))+''', ''int'')'
from sys.columns
for xml path('')
)+
' from @TRAB as T
option (maxdop 1)';

-- value('(/r/@a)[1]')
set @S6 = N'
select T.ID'+
(
select top(@NumAtt) ', T.X.value(''(/r/@a'+cast(row_number() over(order by 1/0) as varchar(11))+')[1]'', ''int'')'
from sys.columns
for xml path('')
)+
' from @TRAB as T
option (maxdop 1)';

-- value('@a') cross apply nodes('/r') typed XML
set @S7 = N'
select T.ID'+
(
select top(@NumAtt) ', T2.X.value(''@a'+cast(row_number() over(order by 1/0) as varchar(11))+''', ''int'')'
from sys.columns
for xml path('')
)+
' from @TRAB as T
cross apply T.X.nodes(''/r'') as T2(X)
option (maxdop 1)';

-- pivot
set @S8 = N'
select ID, '+@C+'
from (
select T.ID,
A.X.value(''local-name(.)'', ''nvarchar(50)'') as Name,
A.X.value(''.'', ''int'') as Value
from @T as T
cross apply T.X.nodes(''/r/@*'') as A(X)
) as T
pivot(min(T.Value) for Name in ('+@C+')) as P
option (maxdop 1)';

-- typed pivot
set @S9 = N'
select ID, '+@C+'
from (
select T.ID,
A.X.value(''local-name(.)'', ''nvarchar(50)'') as Name,
cast(cast(A.X.query(''string(.)'') as varchar(11)) as int) as Value
from @TRAB as T
cross apply T.X.nodes(''/r/@*'') as A(X)
) as T
pivot(min(T.Value) for Name in ('+@C+')) as P
option (maxdop 1)';

exec sp_executesql @S1, N'@T dbo.TType readonly', @T;
exec sp_executesql @S2, N'@T dbo.TType readonly', @T;
exec sp_executesql @S3, N'@T dbo.TType readonly', @T;
exec sp_executesql @S4, N'@T dbo.TType readonly', @T;
exec sp_executesql @S5, N'@TRAB dbo.TRABType readonly', @TRAB;
exec sp_executesql @S6, N'@TRAB dbo.TRABType readonly', @TRAB;
exec sp_executesql @S7, N'@TRAB dbo.TRABType readonly', @TRAB;
exec sp_executesql @S8, N'@T dbo.TType readonly', @T;
exec sp_executesql @S9, N'@TRAB dbo.TRABType readonly', @TRAB;

In SQL Server, deleting nodes using .modify() XQuery taking 38 minutes to execute

On my machine the delete took 1 hour 25 minutes and gave me this not so pretty query plan.

Sample Image

This plan finds all empty nodes (the ones to be deleted) and stores them in a Table Spool. Then, for each node in the entire document, there is a check whether that node is present in the spool (Nested Loops (Left Semi Join)) and, if it is, that node is excluded from the final result (Merge Join (Left Anti Semi Join)). The XML is then rebuilt from the remaining nodes in the UDX operator and assigned to the variable. The table spool is not indexed, so for each node that needs to be checked there will be a scan of the entire spool (or until a match is found).

That essentially means the performance of this algorithm is O(n*d), where n is the total number of nodes and d is the total number of deleted nodes.

There are a couple of possible workarounds.

First, and perhaps best, is to modify your XML query so it doesn't produce the empty nodes in the first place. That is totally possible if you create the XML with for xml, and perhaps not possible if you already have parts of the XML stored in a table.
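With FOR XML PATH, for example, NULL columns are simply omitted from the output, so converting empty values to NULL up front keeps the empty elements from being generated at all. A minimal sketch; the column names are illustrative:

```sql
-- NULL columns produce no element in FOR XML PATH output,
-- so NULLIF turns would-be empty elements into no elements at all.
SELECT 1 AS N1,
       2 AS N2,
       NULLIF('', '') AS N31, -- NULL, so no <N31/> is generated
       NULLIF('', '') AS N32
FOR XML PATH('Row');
-- yields <Row><N1>1</N1><N2>2</N2></Row>
```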

Another option is to shred the XML on Row (see the sample XML below), put the result in a table variable, modify the XML in the table variable, and then recreate the combined XML.

declare @T table(PaymentData xml);

insert into @T
select T.X.query('.')
from @PaymentData.nodes('Row') as T(X);

update @T
set PaymentData.modify('delete //*[not(node())]');

select T.PaymentData as '*'
from @T as T
for xml path('');

Sample Image

This gives you a performance characteristic of O(n*s*d), where n is the number of Row nodes, s is the number of sub-nodes per Row node and d is the number of deleted nodes per Row node.

A third option, which I really can't recommend, is to use an undocumented trace flag that removes the use of a spool in the plan. You could try it out in test, or perhaps capture the generated plan and use it in a plan guide.

declare @T table(PaymentData xml);

insert into @T values(@PaymentData);

update @T
set PaymentData.modify('delete //*[not(node())]')
option (querytraceon 8690);

select @PaymentData = PaymentData
from @T;

Query plan with trace flag:

Sample Image

Instead of 1 hour 25 minutes, this version took 4 seconds on my computer.

Shredding the XML to multiple rows to the table variable took in total 6 seconds to execute.

Not having to delete any rows at all is of course the fastest.

Sample data, 12,000 Row nodes with 32 sub-nodes each, of which 2 are empty, if you want to try this at home:

declare @PaymentData as xml;

set @PaymentData = (
select top(12000)
1 as N1, 1 as N2, 1 as N3, 1 as N4, 1 as N5, 1 as N6, 1 as N7, 1 as N8, 1 as N9, 1 as N10,
1 as N11, 1 as N12, 1 as N13, 1 as N14, 1 as N15, 1 as N16, 1 as N17, 1 as N18, 1 as N19, 1 as N20,
1 as N21, 1 as N22, 1 as N23, 1 as N24, 1 as N25, 1 as N26, 1 as N27, 1 as N28, 1 as N29, 1 as N30,
'' as N31,
'' as N32
from sys.columns as c1, sys.columns as c2
for xml path('Row')
);

Note: I have no idea why it only took 24 seconds to execute on one of your servers. I would advise you to recheck that the XML actually is identical. Or why not test using the XML sample I have provided for you.

Update:

For the shredding version, the problem with the spool in the delete query could move to the shredding query instead, leaving you with about the same bad performance. That is, however, not always true. I have seen plans where there is no spool and plans where there is a spool, and I don't know why it is there sometimes and not at other times.

I have also found that if I use a temp table instead, with select ... into, I don't get the spool in the shredding query.

select T.X.query('.') as PaymentData
into #T
from @PaymentData.nodes('Row') as T(X);

update #T
set PaymentData.modify('delete //*[not(node())]');
