Why are logical reads for windowed aggregate functions so high?
Logical reads are counted differently for worktables: there is one 'logical read' per row read. This does not mean that worktables are somehow less efficient than a 'real' spool table (quite the reverse); the logical reads are just in different units.
I believe the thinking was that counting hashed pages for worktable logical reads would not be very useful because these structures are internal to the server. Reporting rows spooled in the logical reads counter makes the number more meaningful for analysis purposes.
This insight makes clear why your formula works. The two secondary spools are each fully read twice, giving the 2 * COUNT(*) component, and the primary spool emits (number of group values + 1) rows, as explained in my blog entry, giving the (COUNT(DISTINCT CustomerID) + 1) component. The plus one is for the extra row the primary spool emits to signal that the final group has ended.
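As a minimal sketch of the kind of query this applies to (table and column names are assumed, not taken from the original question), the formula can be read off directly from the STATISTICS IO output:

```sql
-- Hypothetical example: a windowed aggregate without ORDER BY in the OVER clause,
-- which the optimizer implements with a common subexpression spool.
SET STATISTICS IO ON;

SELECT
    OrderID,
    CustomerID,
    COUNT(*) OVER (PARTITION BY CustomerID) AS OrdersPerCustomer
FROM dbo.Orders;

-- With N rows and G distinct CustomerID values, the worktable logical reads
-- reported by STATISTICS IO come to 2 * N + G + 1: the two secondary spools
-- are fully read twice (2 * N), and the primary spool emits one row per
-- group plus one extra end-of-groups row (G + 1).
```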
Paul
Large number of logical reads on single row delete
This answer solved the problem, now deletes work like a charm. I'm not sure if there are any downsides I should be aware of.
Difference in number of logical reads for similar set of data
This is due to the way you are creating the indexes (before loading the data for one table, and after loading it for the other) combined with the fill factor setting.
Fill factor, as we know, determines how full the leaf-level pages of an index are. Here you are asking for a fill factor of 20, which means fill each page only 20% and leave the remaining 80% free.
Emphasis needed:
The fill factor setting is honored only when the index is created or rebuilt.
First case:
Even though you create the index with a fill factor of 20, the table is empty at that point, so no free space is reserved; and, as noted above, the setting is not honored for subsequent inserts.
Querying table1's pages shows there are only 500 pages:
select object_name(object_id), index_depth, index_level,
       avg_fragmentation_in_percent, avg_page_space_used_in_percent, page_count
from sys.dm_db_index_physical_stats(db_id(), 0, -1, 0, 'Detailed')
where object_id = object_id('t')
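The first case can be sketched as follows (the table definition and row width are assumed, since the original DDL is not shown):

```sql
-- Case 1 (assumed names): index created BEFORE the data is loaded.
CREATE TABLE dbo.t (id int, payload char(800));
CREATE CLUSTERED INDEX ix_t ON dbo.t (id) WITH (FILLFACTOR = 20);
-- ... rows inserted here ...
-- The inserts ignore the fill factor and pack each leaf page completely,
-- so the table ends up on comparatively few pages.
```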
Second case:
You are creating the index after the data is inserted, so SQL Server honors the setting and rearranges the pages accordingly.
Table2's page count gives us 1000 pages:
select object_name(object_id), index_depth, index_level,
       avg_fragmentation_in_percent, avg_page_space_used_in_percent, page_count
from sys.dm_db_index_physical_stats(db_id(), 0, -1, 0, 'Detailed')
where object_id = object_id('t1')
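The second case, sketched under the same assumed names:

```sql
-- Case 2 (assumed names): index created AFTER the data is loaded.
CREATE TABLE dbo.t1 (id int, payload char(800));
-- ... insert the same rows as dbo.t ...
CREATE CLUSTERED INDEX ix_t1 ON dbo.t1 (id) WITH (FILLFACTOR = 20);
-- The index build honors FILLFACTOR = 20, leaving 80% of each leaf page
-- free, so the same rows spread over roughly five times as many pages.
```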
Even though both tables contain the same number of rows, the number of pages differs because of when each index was created. That is why you see different logical reads for two tables with the same data and structure.
If you query again after rebuilding both indexes, you will see the same logical reads.
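Rebuilding can be sketched like this (index and table names assumed, as above); a rebuild applies the stored fill factor to both indexes, after which the page counts, and therefore the logical reads, match:

```sql
-- Rebuild both indexes so FILLFACTOR = 20 is applied to each.
ALTER INDEX ix_t  ON dbo.t  REBUILD;
ALTER INDEX ix_t1 ON dbo.t1 REBUILD;
```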