SQL Count(*) Performance

Mikael Eriksson has a good explanation below of why the first query is fast:

SQL Server optimizes it into:

if exists(select * from BookChapters)

So it goes looking for the presence of one row instead of counting all the rows in the table.
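That existence check can also be written explicitly; a minimal sketch of the pattern, using the BookChapters table from the text (the output column name is illustrative):

--An EXISTS test lets the engine stop after finding a single row,
--instead of touching every row the way COUNT(*) must
IF EXISTS (SELECT * FROM BookChapters)
    SELECT 1 AS HasRows
ELSE
    SELECT 0 AS HasRows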

For the other two queries, SQL Server uses the following rule: to perform a query like SELECT COUNT(*), SQL Server will use the narrowest
non-clustered index to count the rows. If the table does not have any
non-clustered index, it will have to scan the table.
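If counting is a hot path, you can give the optimizer a deliberately narrow non-clustered index to scan; a sketch, assuming BookChapters has an integer key column named Id (both the column and index names here are illustrative):

--A narrow non-clustered index is much smaller than the table,
--so scanning it to count rows needs far fewer page reads
CREATE NONCLUSTERED INDEX IX_BookChapters_Count ON BookChapters (Id)

SELECT COUNT(*) FROM BookChapters --can now scan IX_BookChapters_Count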

Also, if your table has a clustered index, you can get your count even faster using the following query (borrowed from the article Get Row Counts Fast!):

--SQL Server 2005/2008
SELECT OBJECT_NAME(i.id) [Table_Name], i.rowcnt [Row_Count]
FROM sys.sysindexes i WITH (NOLOCK)
WHERE i.indid in (0,1)
ORDER BY i.rowcnt desc

--SQL Server 2000
SELECT OBJECT_NAME(i.id) [Table_Name], i.rows [Row_Count]
FROM sysindexes i (NOLOCK)
WHERE i.indid in (0,1)
ORDER BY i.rows desc

It uses the sysindexes system table. More info can be found here: SQL Server 2000, SQL Server 2005, SQL Server 2008, SQL Server 2012.

Here is another link, Why is my SELECT COUNT(*) running so slow?, with another solution. It shows the technique that Microsoft uses to quickly display the number of rows when you right-click a table and select Properties.

select sum(spart.rows)
from sys.partitions spart
where spart.object_id = object_id('YourTable')
and spart.index_id < 2

You should find that this returns very quickly no matter how many rows the table has.
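On SQL Server 2005 and later, the same metadata is also exposed through the sys.dm_db_partition_stats DMV; a sketch along the same lines (like the sys.partitions query, the count is maintained asynchronously and may lag slightly):

--Approximate row count from partition metadata
select sum(ps.row_count)
from sys.dm_db_partition_stats ps
where ps.object_id = object_id('YourTable')
and ps.index_id < 2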

If you are still using SQL Server 2000, you can use the sysindexes table to get the number.

select max(ROWS)
from sysindexes
where id = object_id('YourTable')

This number may be slightly off depending on how often SQL Server updates the sysindexes table, but it's usually correct (or at least close enough).

sql count performance

These would have the same performance. In most databases, COUNT(*) results in a scan of the table or an available index. Whether it uses an index instead of the table depends only on the query optimizer; if the optimizer is smart enough to use the index, it should be smart enough in both cases.

Using the available metadata tables, you can often get the number of rows in a table much more efficiently than with a COUNT(*) query.
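For example, in MySQL the metadata is exposed through information_schema; a sketch (the count is exact for MyISAM tables but only an estimate for InnoDB):

--Row count from table metadata instead of scanning the table
SELECT table_rows
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name = 'YourTable'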

Mysql count performance on very big tables

In the end, the fastest approach was to query the first X rows using C# and count the number of rows returned.

My application treats the data in batches. The amount of time between two batches depends on the number of rows that need to be treated.

SELECT pk FROM table WHERE fk = 1 LIMIT X

I got the result in 0.9 seconds.
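If you would rather stay in SQL than count rows client-side, the same capped-count idea can be expressed with a derived table; a sketch with X fixed at 1000 for illustration (backticks added because table is a reserved word in MySQL):

--Counts at most 1000 matching rows, so it stays fast on huge tables
SELECT COUNT(*)
FROM (SELECT pk FROM `table` WHERE fk = 1 LIMIT 1000) AS capped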

Thanks all for your ideas!

UPDATE vs COUNT vs SELECT performance

There are different resource types involved:

  • disk I/O (this is the most costly part of every DBMS)
  • buffer pressure: fetching a row will cause fetching a page from disk, which will need buffer memory to be stored in
  • work/scratch memory for intermediate tables, structures and aggregates.
  • "terminal" I/O to the front-end process.
  • cost of locking, serialisation, versioning and journaling
  • CPU cost: this is negligible in most cases (compared to disk I/O)

The UPDATE query in the question is the hardest: it will cause all disk pages for the table to be fetched, put into buffers, altered into new buffers and written back to disk. In normal circumstances, it will also cause other processes to be locked out, with contention and even more buffer pressure as a result.

The SELECT * query needs all the pages, too; and it needs to convert/format them all into frontend-format and send them back to the frontend.

The SELECT COUNT(*) is the cheapest on all resources. In the worst case, all the disk pages have to be fetched. If an index is present, fewer disk I/Os and buffers are needed. The CPU cost is still negligible (IMHO) and the "terminal" output is marginal.


