How to Get the Size of a Varchar[N] Field in One SQL Statement

How to get the size of a varchar[n] field in one SQL statement?


select column_name, data_type, character_maximum_length    
from information_schema.columns
where table_name = 'myTable'
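
If you only need one specific column, the same view can also be filtered on column_name; a minimal variation (the table and column names are placeholders):

SELECT character_maximum_length
FROM information_schema.columns
WHERE table_name = 'myTable'      -- placeholder table name
  AND column_name = 'myColumn';   -- placeholder column name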

Retrieve the maximum length of a VARCHAR column in SQL Server

Use the built-in LEN() and MAX() functions on the description column (DESC is a reserved word in T-SQL, so it is bracketed here):

SELECT MAX(LEN([DESC])) FROM table_name;

Note that this scans the whole table, so it can be slow if the table is very large.
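
If you want the declared maximum and the longest value actually stored side by side, a query along these lines should work in SQL Server (the table and column names follow the examples above and are placeholders):

SELECT c.character_maximum_length AS declared_max,
       (SELECT MAX(LEN([DESC])) FROM table_name) AS longest_stored_value
FROM information_schema.columns AS c
WHERE c.table_name = 'table_name'
  AND c.column_name = 'DESC';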

Best practices for SQL varchar column length

No DBMS I know of has any "optimization" that will make a VARCHAR with a 2^n length perform better than one with a max length that is not a power of 2.

I think early SQL Server versions actually treated a VARCHAR with length 255 differently than one with a higher maximum length. I don't know if this is still the case.

For almost all DBMS, the storage actually required is determined only by the number of characters you put into the column, not by the max length you define. So from a storage point of view (and most probably a performance one as well), it makes no difference whether you declare a column as VARCHAR(100) or VARCHAR(500).

You should see the max length provided for a VARCHAR column as a kind of constraint (or business rule) rather than a technical/physical thing.

For PostgreSQL the best setup is to use text without a length restriction and a CHECK CONSTRAINT that limits the number of characters to whatever your business requires.

If that requirement changes, altering the check constraint is much faster than altering the table (because the table does not need to be rewritten).
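
A minimal sketch of that setup in PostgreSQL (the table, column, and constraint names are made up for illustration):

CREATE TABLE customer (
    name text,
    CONSTRAINT name_max_length CHECK (char_length(name) <= 100)
);

-- when the business rule changes, only the constraint is replaced;
-- the stored rows are not rewritten
ALTER TABLE customer DROP CONSTRAINT name_max_length;
ALTER TABLE customer ADD CONSTRAINT name_max_length CHECK (char_length(name) <= 500);

Adding the new constraint still validates the existing rows, but that is a scan, not a rewrite of the table.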

The same approach can be applied to Oracle and others; in Oracle it would be VARCHAR2(4000) instead of text, though.

I don't know if there is a physical storage difference between VARCHAR(max) and e.g. VARCHAR(500) in SQL Server. But apparently there is a performance impact when using varchar(max) as compared to varchar(8000).

See this link (posted by Erwin Brandstetter as a comment).

Edit 2013-09-22

Regarding bigown's comment:

In Postgres versions before 9.2 (which was not available when I wrote the initial answer), a change to the column definition did rewrite the whole table, see e.g. here. Since 9.2 this is no longer the case, and a quick test confirmed that increasing the column size for a table with 1.2 million rows indeed took only 0.5 seconds.

For Oracle this seems to be true as well, judging by the time it takes to alter a big table's varchar column. But I could not find any reference for that.

For MySQL the manual says "In most cases, ALTER TABLE makes a temporary copy of the original table". My own tests confirm that: running an ALTER TABLE on a table with 1.2 million rows (the same as in my test with Postgres) to increase the size of a column took 1.5 minutes. In MySQL, however, you cannot use the workaround of a check constraint to limit the number of characters in a column.

For SQL Server I could not find a clear statement on this, but the execution time to increase the size of a varchar column (again on the 1.2 million row table from above) indicates that no rewrite takes place.
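
For reference, the kind of column-size change timed in the paragraphs above looks roughly like this in each system (table and column names are hypothetical):

-- PostgreSQL
ALTER TABLE big_table ALTER COLUMN some_col TYPE varchar(500);

-- MySQL
ALTER TABLE big_table MODIFY some_col VARCHAR(500);

-- SQL Server
ALTER TABLE big_table ALTER COLUMN some_col VARCHAR(500);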

Edit 2017-01-24

It seems I was (at least partially) wrong about SQL Server. See this answer from Aaron Bertrand, which shows that the declared length of an nvarchar or varchar column makes a huge difference to performance.

Retrieve the size of a varchar field in SQL Server using C#

The information_schema.columns view gives you that information. Try:

SELECT table_catalog,
       table_name,
       column_name,
       data_type,
       character_maximum_length
FROM information_schema.columns
WHERE data_type = 'varchar'
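
If you also care about char, nchar and nvarchar columns and want to restrict the result to one table, a variation like this should work (the table name is a placeholder):

SELECT table_name, column_name, data_type, character_maximum_length
FROM information_schema.columns
WHERE table_name = 'myTable'   -- placeholder table name
  AND data_type IN ('varchar', 'nvarchar', 'char', 'nchar');

Note that character_maximum_length is reported as -1 for varchar(max) and nvarchar(max) columns.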

Querying a WHERE condition on character length?

Sorry, I wasn't sure which SQL platform you're talking about:

In MySQL:

$query = ("SELECT * FROM $db WHERE conditions AND LENGTH(col_name) = 3");

In MSSQL:

$query = ("SELECT * FROM $db WHERE conditions AND LEN(col_name) = 3");

The LENGTH() (MySQL) or LEN() (MSSQL) function will return the length of a string in a column that you can use as a condition in your WHERE clause.

Edit

I know this is really old, but I thought I'd expand my answer because, as Paulo Bueno rightly pointed out, you most likely want the number of characters as opposed to the number of bytes. Thanks Paulo.

So, for MySQL there's CHAR_LENGTH(). The following example highlights the difference between LENGTH() and CHAR_LENGTH():

CREATE TABLE words (
word VARCHAR(100)
) ENGINE INNODB DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_unicode_ci;

INSERT INTO words(word) VALUES('快樂'), ('happy'), ('hayır');

SELECT word, LENGTH(word) as num_bytes, CHAR_LENGTH(word) AS num_characters FROM words;

+--------+-----------+----------------+
| word   | num_bytes | num_characters |
+--------+-----------+----------------+
| 快樂   |         6 |              2 |
| happy  |         5 |              5 |
| hayır  |         6 |              5 |
+--------+-----------+----------------+

Be careful if you're dealing with multi-byte characters.
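
On SQL Server the analogous pair is LEN() for the character count (trailing spaces are not counted) and DATALENGTH() for the byte count; a sketch against a table like the one above:

SELECT word,
       DATALENGTH(word) AS num_bytes,      -- bytes actually stored
       LEN(word)        AS num_characters  -- characters, trailing spaces excluded
FROM words;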

Changing the maximum length of a varchar column?

You need

ALTER TABLE YourTable ALTER COLUMN YourColumn <<new_datatype>> [NULL | NOT NULL]

But remember to specify NOT NULL explicitly if desired.

ALTER TABLE YourTable ALTER COLUMN YourColumn VARCHAR (500) NOT NULL;

If you leave it unspecified as below...

ALTER TABLE YourTable ALTER COLUMN YourColumn VARCHAR (500);

Then the column will default to allowing nulls even if it was originally defined as NOT NULL. That is, omitting the nullability specification in an ALTER TABLE ... ALTER COLUMN is always treated as:

ALTER TABLE YourTable ALTER COLUMN YourColumn VARCHAR (500) NULL;

This behaviour is different from that used for new columns created with ALTER TABLE (or at CREATE TABLE time). There the default nullability depends on the ANSI_NULL_DFLT settings.
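
A quick way to observe that behaviour on SQL Server (hypothetical table; COLUMNPROPERTY reports the resulting nullability):

CREATE TABLE YourTable (YourColumn VARCHAR(100) NOT NULL);

-- nullability omitted: the column becomes nullable again
ALTER TABLE YourTable ALTER COLUMN YourColumn VARCHAR(500);

-- expected to return 1, per the behaviour described above
SELECT COLUMNPROPERTY(OBJECT_ID('YourTable'), 'YourColumn', 'AllowsNull') AS allows_null;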


