Counting Rows for All Tables at Once

From here: http://web.archive.org/web/20080701045806/http://sqlserver2000.databases.aspfaq.com:80/how-do-i-get-a-list-of-sql-server-tables-and-their-row-counts.html

SELECT
    [TableName] = so.name,
    [RowCount] = MAX(si.rows)
FROM
    sysobjects so,
    sysindexes si
WHERE
    so.xtype = 'U'
    AND si.id = OBJECT_ID(so.name)
GROUP BY
    so.name
ORDER BY
    2 DESC
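
The sysobjects and sysindexes views are SQL Server 2000-era compatibility views and are deprecated in later versions. On SQL Server 2005 and up, a rough modern equivalent is a sketch like the following, using sys.dm_db_partition_stats (which requires VIEW DATABASE STATE permission):

SELECT
    [TableName] = t.name,
    [RowCount] = SUM(ps.row_count)
FROM sys.tables t
INNER JOIN sys.dm_db_partition_stats ps
    ON ps.[object_id] = t.[object_id]
WHERE ps.index_id IN (0, 1) -- heap or clustered index; skip nonclustered indexes
GROUP BY t.name
ORDER BY 2 DESC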

How to fetch the row count for all tables in a SQL Server database

The following SQL will get you the row count of all tables in a database:

CREATE TABLE #counts
(
    table_name varchar(255),
    row_count int
)

EXEC sp_MSForEachTable @command1='INSERT #counts (table_name, row_count) SELECT ''?'', COUNT(*) FROM ?'
SELECT table_name, row_count FROM #counts ORDER BY table_name, row_count DESC
DROP TABLE #counts

The output will be a list of tables and their row counts.

If you just want the total row count across the whole database, running the following before the DROP TABLE:

SELECT SUM(row_count) AS total_row_count FROM #counts

will get you a single value for the total number of rows in the whole database.

Count total rows of all tables in a SQL Server database

You can take a look at the following article:

Different approaches of counting number of rows in a table

This is my favorite one:

SELECT SCHEMA_NAME(t.[schema_id]) AS [table_schema]
    ,OBJECT_NAME(p.[object_id]) AS [table_name]
    ,SUM(p.[rows]) AS [row_count]
FROM [sys].[partitions] p
INNER JOIN [sys].[tables] t ON p.[object_id] = t.[object_id]
WHERE p.[index_id] < 2
GROUP BY p.[object_id]
    ,t.[schema_id]
ORDER BY 1, 2 ASC

This one finds the total number of rows in the whole database:

SELECT
    SUM(p.[rows]) AS [row_count]
FROM [sys].[partitions] p
INNER JOIN [sys].[tables] t ON p.[object_id] = t.[object_id]
WHERE p.[index_id] < 2

Get record counts for all tables in a MySQL database

SELECT SUM(TABLE_ROWS) 
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = '{your_db}';

Note from the docs though: For InnoDB tables, the row count is only a rough estimate used in SQL optimization. You'll need to use COUNT(*) for exact counts (which is more expensive).
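
If exact numbers are required, one common workaround is to let INFORMATION_SCHEMA generate a COUNT(*) statement per table and then run the generated output (a sketch; '{your_db}' is the same placeholder as above, and the trailing UNION ALL on the last generated line must be removed by hand):

SELECT CONCAT(
    'SELECT ''', TABLE_NAME, ''' AS table_name, COUNT(*) AS exact_rows FROM `',
    TABLE_SCHEMA, '`.`', TABLE_NAME, '` UNION ALL'
) AS stmt
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = '{your_db}'
  AND TABLE_TYPE = 'BASE TABLE';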

Select count(*) from multiple tables

SELECT (
    SELECT COUNT(*)
    FROM tab1
) AS count1,
(
    SELECT COUNT(*)
    FROM tab2
) AS count2
FROM dual
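
FROM dual is an Oracle requirement; SQL Server, PostgreSQL, and MySQL accept the same scalar subqueries without it (assuming the same tab1 and tab2 tables):

SELECT
    (SELECT COUNT(*) FROM tab1) AS count1,
    (SELECT COUNT(*) FROM tab2) AS count2;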

Count rows in all tables that meet conditions

My answer gets the column metadata, builds a COUNT(*) statement for each matching table, executes it in a loop, and publishes the results to a table variable:

if object_id('tempdb..#LoopList') is not null
    drop table #LoopList

-- List every table that has the column of interest, numbered for the loop
select
    s.[name] as SchemaName
    ,t.[name] as TableName
    ,row_number() over (order by t.[object_id] asc) as RowNumber
into #LoopList
from sys.columns as c
inner join sys.tables as t
    on c.[object_id] = t.[object_id]
inner join sys.schemas as s
    on t.[schema_id] = s.[schema_id]
where c.[name] = 'Dataareaid'

declare
    @a int = 1                                       -- loop counter
    ,@b int = (select max(RowNumber) from #LoopList) -- number of tables to visit
    ,@c nvarchar(max)                                -- quoted schema.table name
    ,@d nvarchar(max)                                -- dynamic COUNT(*) statement
    ,@e int                                          -- row count for the current table

declare @count table (RowCounter int)

declare @resultsTable table (TableName nvarchar(500), RowCounter int)

while @a <= @b
begin
    delete from @count

    -- Build the quoted two-part name, e.g. [dbo].[MyTable]
    set @c = concat((select quotename(SchemaName) from #LoopList where RowNumber = @a)
        ,'.'
        ,(select quotename(TableName) from #LoopList where RowNumber = @a)
        )

    set @d = concat(N'select count(*) from '
        ,@c
        ,N' where Dataareaid = ''FR'''
        )

    -- Run the dynamic statement and capture its single-row result
    insert into @count (RowCounter)
    exec sp_executesql @d

    set @e = (select top 1 RowCounter from @count)

    insert into @resultsTable (TableName, RowCounter)
    values (@c, @e)

    set @a += 1;
end

select * from @resultsTable
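
On SQL Server 2017 and later, the same result can be built without a loop by concatenating one UNION ALL batch with STRING_AGG. This is a sketch under the same assumption that every matching table has a Dataareaid column:

declare @sql nvarchar(max);

select @sql = string_agg(
    convert(nvarchar(max),
        concat('select ''', quotename(s.[name]), '.', quotename(t.[name]),
            ''' as TableName, count(*) as RowCounter from ',
            quotename(s.[name]), '.', quotename(t.[name]),
            ' where Dataareaid = ''FR''')),
    ' union all ')
from sys.columns as c
inner join sys.tables as t on c.[object_id] = t.[object_id]
inner join sys.schemas as s on t.[schema_id] = s.[schema_id]
where c.[name] = 'Dataareaid';

exec sp_executesql @sql;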

How do you find the row count for all your tables in Postgres

There are three ways to get this sort of count, each with its own tradeoffs.

If you want a true count, you have to execute the SELECT statement like the one you used against each table. This is because PostgreSQL keeps row visibility information in the row itself, not anywhere else, so any accurate count can only be relative to some transaction. You're getting a count of what that transaction sees at the point in time when it executes. You could automate this to run against every table in the database, but you probably don't need that level of accuracy or want to wait that long.

WITH tbl AS
    (SELECT table_schema,
            TABLE_NAME
     FROM information_schema.tables
     WHERE TABLE_NAME NOT LIKE 'pg_%'
       AND table_schema IN ('public'))
SELECT table_schema,
       TABLE_NAME,
       (xpath('/row/c/text()', query_to_xml(format('select count(*) as c from %I.%I', table_schema, TABLE_NAME), FALSE, TRUE, '')))[1]::text::int AS rows_n
FROM tbl
ORDER BY rows_n DESC;

The second approach notes that the statistics collector tracks roughly how many rows are "live" (not deleted or obsoleted by later updates) at any time. This value can be off by a bit under heavy activity, but is generally a good estimate:

SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;

That can also show you how many rows are dead, which is itself an interesting number to monitor.

The third way is to note that the system ANALYZE command, which is executed by the autovacuum process regularly as of PostgreSQL 8.3 to update table statistics, also computes a row estimate. You can grab that one like this:

SELECT nspname AS schemaname, relname, reltuples
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
  AND relkind = 'r'
ORDER BY reltuples DESC;

Which of these queries is better to use is hard to say. Normally I make that decision based on whether there's more useful information I also want to use inside of pg_class or inside of pg_stat_user_tables. For basic counting purposes just to see how big things are in general, either should be accurate enough.
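
When both kinds of information are useful at once, the two sources can be joined; this is a sketch putting the ANALYZE estimate from pg_class next to the live and dead counts from pg_stat_user_tables:

SELECT s.schemaname,
       s.relname,
       c.reltuples::bigint AS analyze_estimate,
       s.n_live_tup,
       s.n_dead_tup
FROM pg_stat_user_tables s
JOIN pg_class c ON c.oid = s.relid
ORDER BY s.n_live_tup DESC;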


