Get Record Counts For All Tables in MySQL Database

Get record counts for all tables in MySQL database

SELECT SUM(TABLE_ROWS) 
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = '{your_db}';

Note from the docs, though: for InnoDB tables, TABLE_ROWS is only a rough estimate used in SQL optimization. You'll need COUNT(*) for an exact count, which is more expensive.
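For example, assuming your database contains a table named orders (both the database and table names below are placeholders), you can compare the estimate with an exact count:

-- Estimated count from the data dictionary (fast, but approximate for InnoDB)
SELECT TABLE_ROWS
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = '{your_db}' AND TABLE_NAME = 'orders';

-- Exact count (slower; scans the table or an index)
SELECT COUNT(*) FROM orders;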

MySQL: Summarize all table row-counts in a single query

The first example here is a stored procedure that performs the entire process in one step, as far as the user is concerned.

BEGIN

  # zgwp_tables_rowcounts
  # Output columns: TableName, RowCount
  # Returns a result set listing all base tables and their row counts
  # for the current database.

  # Allow a long enough concatenated string when there are many tables.
  SET SESSION group_concat_max_len = 1000000;

  SET @sql = NULL;
  SET @dbname = DATABASE();

  # Build one long "SELECT ... UNION SELECT ..." statement, one SELECT per table.
  SELECT
    GROUP_CONCAT(
      CONCAT(
        'SELECT ''', t.TABLE_NAME, ''' AS TableName, COUNT(*) AS RowCount FROM `',
        t.TABLE_NAME, '` '
      )
      ORDER BY t.TABLE_NAME ASC
      SEPARATOR 'UNION '
    ) AS Qry
  FROM
    information_schema.`TABLES` AS t
  WHERE
    t.TABLE_SCHEMA = @dbname AND
    t.TABLE_TYPE = 'BASE TABLE'
  INTO @sql;

  PREPARE stmt FROM @sql;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;

END

Notes:

  • The SELECT ... INTO @sql builds the necessary query string, and PREPARE ... EXECUTE runs it.

  • The group_concat_max_len session variable is raised so that GROUP_CONCAT can return a long enough result string; the default limit is only 1024 characters.
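If you're creating the procedure from the mysql command line rather than a GUI that handles the CREATE PROCEDURE wrapper for you, something along these lines should work, using the procedure name from the comment in the body:

DELIMITER $$

CREATE PROCEDURE zgwp_tables_rowcounts()
BEGIN
  # body as shown above, from SET SESSION ... through DEALLOCATE PREPARE
END$$

DELIMITER ;

CALL zgwp_tables_rowcounts();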

The above procedure is useful for a quick look in an admin environment like Navicat, or on the command line. However, although it returns a result set, as far as I am aware it can't be referenced from another view or query, presumably because MySQL cannot determine, before running it, what result sets it produces, let alone what columns they have.

So it is still handy to be able to produce quickly, without manual editing, the standalone SELECT...UNION statement itself, which can be used as a view; for example, if you want to join the row counts to other per-table information. Herewith another stored procedure:

BEGIN

  # zgwp_tables_rowcounts_view_statement
  # Output column: SelectStatement
  # Returns a single row and column containing a (possibly lengthy)
  # SELECT...UNION statement that, if used as a view, will output
  # TableName, RowCount for all base tables in the current database.

  SET SESSION group_concat_max_len = 1000000;
  SET @dbname = DATABASE();

  SELECT
    GROUP_CONCAT(
      CONCAT(
        'SELECT ''', t.TABLE_NAME, ''' AS TableName, COUNT(*) AS RowCount FROM `',
        t.TABLE_NAME, '` ', CHAR(10)
      )
      ORDER BY t.TABLE_NAME ASC
      SEPARATOR 'UNION '
    ) AS SelectStatement
  FROM
    information_schema.`TABLES` AS t
  WHERE
    t.TABLE_SCHEMA = @dbname AND
    t.TABLE_TYPE = 'BASE TABLE';

END

Notes:

  • Very similar to the first procedure in concept. I added a linebreak (CHAR(10)) to each subsidiary "SELECT...UNION" statement, for convenience when viewing or editing the generated statement (an example is sketched after these notes).

  • You could create this as a function and return the SelectStatement, if that's more convenient for your environment.
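As an illustration, for a hypothetical database containing just the tables customers and orders, the generated statement would look something like the following and could be pasted straight into a CREATE VIEW (the view name table_rowcounts is just a placeholder):

CREATE VIEW table_rowcounts AS
SELECT 'customers' AS TableName, COUNT(*) AS RowCount FROM `customers`
UNION
SELECT 'orders' AS TableName, COUNT(*) AS RowCount FROM `orders`;

The view can then be joined on TableName to whatever other per-table information you keep.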

Hope that helps.

MySQL - How to count all rows per table in one query

SELECT
  TABLE_NAME,
  TABLE_ROWS
FROM
  `information_schema`.`tables`
WHERE
  `table_schema` = 'YOUR_DB_NAME';

How to fetch the row count for all tables in a SQL Server database

The following SQL will get you the row count of all tables in a database:

CREATE TABLE #counts
(
    table_name varchar(255),
    row_count int
)

-- sp_MSForEachTable runs the command once per table, substituting each table name for the ? placeholder
EXEC sp_MSForEachTable @command1 = 'INSERT #counts (table_name, row_count) SELECT ''?'', COUNT(*) FROM ?'
SELECT table_name, row_count FROM #counts ORDER BY table_name, row_count DESC
DROP TABLE #counts

The output will be a list of tables and their row counts.

If you just want the total row count across the whole database, appending the following (before the DROP TABLE #counts line):

SELECT SUM(row_count) AS total_row_count FROM #counts

will get you a single value for the total number of rows in the whole database.

How do you find the row count for all your tables in Postgres

There are three ways to get this sort of count, each with its own tradeoffs.

If you want a true count, you have to execute a SELECT statement like the one you used against each table. This is because PostgreSQL keeps row visibility information in the row itself, not anywhere else, so any accurate count can only be relative to some transaction. You're getting a count of what that transaction sees at the point in time when it executes. You could automate this to run against every table in the database, but you probably don't need that level of accuracy or want to wait that long.

WITH tbl AS (
  SELECT table_schema, table_name
  FROM information_schema.tables
  WHERE table_name NOT LIKE 'pg_%'
    AND table_schema IN ('public')
)
SELECT
  table_schema,
  table_name,
  -- query_to_xml runs the generated count(*) for each table; xpath pulls the value back out
  (xpath('/row/c/text()',
         query_to_xml(format('SELECT count(*) AS c FROM %I.%I', table_schema, table_name),
                      FALSE, TRUE, '')))[1]::text::int AS rows_n
FROM tbl
ORDER BY rows_n DESC;

The second approach notes that the statistics collector tracks roughly how many rows are "live" (not deleted or obsoleted by later updates) at any time. This value can be off by a bit under heavy activity, but is generally a good estimate:

SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;

That can also show you how many rows are dead, which is itself an interesting number to monitor.
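For instance, the same view exposes n_dead_tup, so a variant along these lines shows the dead-row estimate alongside the live one:

SELECT schemaname, relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;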

The third way is to note that the system ANALYZE command, which is executed by the autovacuum process regularly as of PostgreSQL 8.3 to update table statistics, also computes a row estimate. You can grab that one like this:

SELECT
  nspname AS schemaname, relname, reltuples
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE
  nspname NOT IN ('pg_catalog', 'information_schema') AND
  relkind = 'r'
ORDER BY reltuples DESC;

Which of these queries is better to use is hard to say. Normally I make that decision based on whether there's more useful information I also want to use inside of pg_class or inside of pg_stat_user_tables. For basic counting purposes just to see how big things are in general, either should be accurate enough.
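If, for instance, you want the statistics collector's live-tuple figure alongside the planner's reltuples estimate, the two sources can be joined on the table's OID (exposed as relid in pg_stat_user_tables), roughly like this:

SELECT s.schemaname, s.relname, s.n_live_tup, c.reltuples::bigint AS reltuples
FROM pg_stat_user_tables AS s
JOIN pg_class AS c ON c.oid = s.relid
ORDER BY s.n_live_tup DESC;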

Exact count of all rows in MySQL database

I think the only accurate (and slower) way is to run this for every single table:

SELECT COUNT(*) FROM Table
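If you want one exact total for the whole database, the GROUP_CONCAT approach used in the stored procedures above can generate those per-table counts and wrap them in a SUM. A rough sketch (assumes the current database has at least one base table):

SET SESSION group_concat_max_len = 1000000;

-- Build "SELECT SUM(cnt) FROM (SELECT COUNT(*) ... UNION ALL ...)" for all base tables
SELECT CONCAT(
         'SELECT SUM(cnt) AS total_rows FROM (',
         GROUP_CONCAT('SELECT COUNT(*) AS cnt FROM `', TABLE_NAME, '`'
                      SEPARATOR ' UNION ALL '),
         ') AS t'
       )
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE() AND TABLE_TYPE = 'BASE TABLE'
INTO @sql;

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;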

