Getting List of Tables, and Fields in Each, in a Database


Is this what you are looking for?

Using OBJECT CATALOG VIEWS

SELECT T.name AS Table_Name,
       C.name AS Column_Name,
       P.name AS Data_Type,
       P.max_length AS Size,
       CAST(P.precision AS VARCHAR) + '/' + CAST(P.scale AS VARCHAR) AS Precision_Scale
FROM sys.objects AS T
JOIN sys.columns AS C ON T.object_id = C.object_id
-- Join on user_type_id rather than system_type_id; joining on system_type_id
-- duplicates rows when user-defined types alias a system type (e.g. sysname/nvarchar).
JOIN sys.types AS P ON C.user_type_id = P.user_type_id
WHERE T.type_desc = 'USER_TABLE';

Using INFORMATION SCHEMA VIEWS

SELECT TABLE_SCHEMA,
       TABLE_NAME,
       COLUMN_NAME,
       ORDINAL_POSITION,
       COLUMN_DEFAULT,
       DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH,
       NUMERIC_PRECISION,
       NUMERIC_PRECISION_RADIX,
       NUMERIC_SCALE,
       DATETIME_PRECISION
FROM INFORMATION_SCHEMA.COLUMNS;

Reference: My blog - http://dbalink.wordpress.com/2008/10/24/querying-the-object-catalog-and-information-schema-views/

How to get all tables and column names in SQL?

Your question isn't entirely clear, but you can get all of it with this code:

SELECT * FROM INFORMATION_SCHEMA.COLUMNS
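
For example, to restrict the output to a single table (the table name here is just a placeholder):

SELECT COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'YourTable';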

How do I get list of all tables in a database using TSQL?

SQL Server 2000, 2005, 2008, 2012, 2014, 2016, 2017 or 2019:

SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE='BASE TABLE'

To show only tables from a particular database:

SELECT TABLE_NAME
FROM [database_name].INFORMATION_SCHEMA.TABLES  -- substitute your database name
WHERE TABLE_TYPE = 'BASE TABLE'

Or,

SELECT TABLE_NAME 
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
AND TABLE_CATALOG = 'dbName'  -- for MySQL, use: TABLE_SCHEMA = 'dbName'

PS: For SQL Server 2000:

SELECT * FROM sysobjects WHERE xtype='U' 

Get column names of all tables in SQL

You can use INFORMATION_SCHEMA.COLUMNS:

select c.*
from INFORMATION_SCHEMA.COLUMNS c;

This has the name, type, and a lot of other information for all tables. Note that it covers a single database, not the whole server.
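
If you do need the same listing across every database on the server, one possible approach is dynamic SQL over sys.databases. A minimal sketch (the variable-concatenation idiom is common, though its ordering is formally undefined):

-- Build one SELECT per online database, each reading that
-- database's INFORMATION_SCHEMA.COLUMNS, then run the union.
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql
    + N' UNION ALL SELECT ' + QUOTENAME(name, '''') + N' AS db_name,'
    + N' TABLE_NAME, COLUMN_NAME, DATA_TYPE'
    + N' FROM ' + QUOTENAME(name) + N'.INFORMATION_SCHEMA.COLUMNS'
FROM sys.databases
WHERE state_desc = 'ONLINE';

SET @sql = STUFF(@sql, 1, LEN(N' UNION ALL'), N'');  -- strip the leading UNION ALL
EXEC sp_executesql @sql;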

PHP/MySQL: get all tables and columns in a database

For security reasons you should have a whitelist of databases/tables you want to generate reports from. Querying for all tables assumes that all future tables will need to be part of this system.

You can query for the columns in each table using SHOW COLUMNS FROM tableName and iterate over the results, as in the sketch below.
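
A minimal MySQL sketch (tableName is a placeholder; in PHP you would run these through your usual database layer):

SHOW TABLES;                  -- one row per table in the current database
SHOW COLUMNS FROM tableName;  -- field name, type, nullability, key, and default per column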

How to loop through all tables and fields in each table to get percentage of missing values

The following SQL query generates one query per column in a database that counts total rows and rows where the value is NULL.

You can load this into a variable and loop through it in SSIS, running the statement from each row one at a time and logging the results from that query out to another table.

SELECT OBJECT_SCHEMA_NAME(C.object_id) AS TableSchema,
       OBJECT_NAME(C.object_id) AS TableName,
       C.name AS ColumnName,
       'SELECT COUNT(*) AS TotalRows, COUNT(IIF([' + C.name + '] IS NULL, 1, NULL)) AS NullRows
        FROM [' + OBJECT_SCHEMA_NAME(C.object_id) + '].[' + OBJECT_NAME(C.object_id) + ']' AS CountQuery
FROM sys.columns AS C
INNER JOIN sys.tables AS T
    ON C.object_id = T.object_id
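
If you would rather stay in T-SQL than SSIS, a hedged sketch of the loop (the logging step is left as a comment, since the log table is up to you):

DECLARE @sql NVARCHAR(MAX);

DECLARE count_queries CURSOR LOCAL FAST_FORWARD FOR
    SELECT 'SELECT COUNT(*) AS TotalRows, COUNT(IIF([' + C.name + '] IS NULL, 1, NULL)) AS NullRows
            FROM [' + OBJECT_SCHEMA_NAME(C.object_id) + '].[' + OBJECT_NAME(C.object_id) + ']'
    FROM sys.columns AS C
    INNER JOIN sys.tables AS T ON C.object_id = T.object_id;

OPEN count_queries;
FETCH NEXT FROM count_queries INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;  -- in practice, INSERT the results into a logging table here
    FETCH NEXT FROM count_queries INTO @sql;
END;
CLOSE count_queries;
DEALLOCATE count_queries;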

Is creating a new table for each `list` the best way to speed up database queries here?

Decisions like this always depend on how you expect your final queries to look.
Your initial solution works well in most cases, provided you put indexes on the lookup columns; you can then join the tables together on the ids when you run your searches. By putting list items into a single table you have the advantage of normalizing the data easily, so a specific item only takes up space in your database once (see the sketch below).
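
A minimal sketch of that normalized layout, assuming PostgreSQL syntax; all table and column names here are illustrative, not from the question:

CREATE TABLE lists (
    id    integer PRIMARY KEY,
    title text NOT NULL
);

CREATE TABLE items (
    id   integer PRIMARY KEY,
    name text NOT NULL UNIQUE  -- each distinct item is stored once
);

CREATE TABLE list_items (
    list_id integer NOT NULL REFERENCES lists (id),
    item_id integer NOT NULL REFERENCES items (id),
    PRIMARY KEY (list_id, item_id)
);

-- Index the lookup column so searches by item stay fast as the table grows.
CREATE INDEX list_items_item_idx ON list_items (item_id);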

Sometimes you might split a table by category so that each piece holds a subset of the data, say all of the items that start with a particular letter, but you wouldn't do that sort of thing until the table reached a certain threshold. Your multiple-table solution does work, but you are going to need unions across lots of tables if you want to export the data together in a single query.

If you never need to look up what the individual items are and just want to export them as-is, you could consider jsonb, which lets you put a JSON binary object directly into your row alongside the list details. You can query the items inside the json, but it is not as efficient as an indexed database column for quick lookups.
Using your example you would end up with a single table:

id | title | list_items
---+-------+--------------------
 0 | "foo" | ['hello','foobar']
 1 | "bar" | ['world']

How to get a list of tables used for each query in the query history in Snowflake

An alternative approach using rlike and information_schema.tables.

You could extend this further by looking at the number of rows per table (high = fact, low = dimension) and the number of times each is accessed.

select query_text, array_agg(DISTINCT TABLE_NAME::string)
from
    (select top 100 query_text
     from table(information_schema.query_history())
     where EXECUTION_STATUS = 'SUCCESS') a
left outer join
    (select TABLE_NAME from INFORMATION_SCHEMA.TABLES group by TABLE_NAME) b
on upper(a.query_text) rlike '.*('||upper(b.table_name)||').*'
group by query_text
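
For the fact/dimension idea, a hedged sketch using the ROW_COUNT column that Snowflake's INFORMATION_SCHEMA.TABLES already exposes (the size threshold is arbitrary):

select TABLE_NAME,
       ROW_COUNT,
       iff(ROW_COUNT > 1000000, 'likely fact (large)', 'likely dimension (small)') as guessed_role
from INFORMATION_SCHEMA.TABLES;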

Extended Version:

I noticed there are some issues with the answer above. First, it doesn't let you run the explain plan for more than one query at a time. Second, if the query hits the result cache, it fails to return any objects.

So I'm extending my initial answer as follows.

  1. Create a couple of views that read all the databases and provide a central authority on all tables/views/objects/query histories.
  2. Run the generated SQL, which creates the two views. It again uses rlike, but substitutes database and schema names from the query_history when they are not present.

I've added credits used and elapsed time to the two views to make further extensions easier.

You can validate this yourself by checking the explain plan as above; if you don't see identical tables, check the SQL and you'll most likely find that the cache has been used.
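
For reference, a hedged example of checking the plan for one statement; the object name is a placeholder:

explain using text
select * from my_db.my_schema.my_table;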

Would be great to hear if anyone finds this useful.


Step 1: Create the two views:

show databases;
select RTRIM( 'create or replace view bob as ( '||listagg('select CONCAT_WS(\'.\',TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME) table_name_3,CONCAT_WS(\'.\',TABLE_SCHEMA, TABLE_NAME) table_name_2,TABLE_NAME, ROW_COUNT, BYTES from ' ||"name" ||'.INFORMATION_SCHEMA.TABLES union all ') ,' union all')||')' tabs,
RTRIM( 'create or replace view bobby as ( '||listagg('select QUERY_ID, query_text ,DATABASE_NAME, SCHEMA_NAME, CREDITS_USED_CLOUD_SERVICES , TOTAL_ELAPSED_TIME from table( '||"name" ||'.information_schema.query_history()) where EXECUTION_STATUS = \'SUCCESS\' union all ') ,' union all')||')' tabs2
from table(result_scan( LAST_QUERY_ID()));

Step 2: Run this SQL:

select
    QUERY_TEXT,
    query_id,
    CREDITS_USED,
    TOTAL_ELAPSED,
    array_agg(TABLE_NAME_3) tables_used
from
    (select
         QUERY_TEXT,
         query_id,
         TABLE_NAME,
         rlike((a.query_text), '.*(\\s.|\\.){1}('||(bob.TABLE_NAME)||'(\\s.*|$))', 'is') aa,
         rlike((a.query_text), '.*(\\s.|\\.){1}('||(bob.TABLE_NAME_2)||'(\\s.*|$))', 'is') bb,
         rlike((a.query_text), '.*(\\s.){1}('||(bob.TABLE_NAME_3)||'(\\s.*|$))', 'is') cc,
         bob.TABLE_NAME_3,
         count(1) cnt,
         max(CREDITS_USED_CLOUD_SERVICES) CREDITS_USED,
         max(TOTAL_ELAPSED_TIME) TOTAL_ELAPSED
     from
         BOBBY a
     left outer join
         BOB
     on
         rlike((a.query_text), '.*(\\s.|\\.){1}('||(bob.TABLE_NAME)||'(\\s.*|$))', 'is')
         or rlike((a.query_text), '.*(\\s.|\\.){1}('||(bob.TABLE_NAME_2)||'(\\s.*|$))', 'is')
         or rlike((a.query_text), '.*(\\s.|\\.){1}('||(bob.TABLE_NAME_3)||'(\\s.*|$))', 'is')
     where
         TABLE_NAME is not null
         and ( cc
               or iff(bb, upper(DATABASE_NAME||'.'||TABLE_NAME) = bob.TABLE_NAME_3, false)
               or iff(aa, upper(DATABASE_NAME||'.'||SCHEMA_NAME||'.'||TABLE_NAME) = bob.TABLE_NAME_3, false)
             )
     group by
         1,2,3,4,5,6,7)
group by
    1,2,3,4;

