Work Around SQL Server Maximum Columns Limit 1024 and 8Kb Record Size

SQL Server Maximum Capacity Limits

  Bytes per short string column               8,000
  Bytes per GROUP BY, ORDER BY                8,060
  Bytes per row                               8,060
  Columns per index key                       16
  Columns per foreign key                     16
  Columns per primary key                     16
  Columns per nonwide table                   1,024
  Columns per wide table                      30,000
  Columns per SELECT statement                4,096
  Columns per INSERT statement                4,096
  Columns per UPDATE statement (wide tables)  4,096

When the combined width of varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns exceeds 8,060 bytes per row, consider the following:

Surpassing the 8,060-byte row-size limit might affect performance because SQL Server still maintains a limit of 8 KB per page. When a combination of varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns exceeds this limit, the SQL Server Database Engine moves the record column with the largest width to another page in the ROW_OVERFLOW_DATA allocation unit, while maintaining a 24-byte pointer on the original page. Moving large records to another page occurs dynamically as records are lengthened based on update operations. Update operations that shorten records may cause records to be moved back to the original page in the IN_ROW_DATA allocation unit. Also, querying and performing other select operations, such as sorts or joins on large records that contain row-overflow data slows processing time, because these records are processed synchronously instead of asynchronously.

Therefore, when you design a table with multiple varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns, consider the percentage of rows that are likely to flow over and the frequency with which this overflow data is likely to be queried. If there are likely to be frequent queries on many rows of row-overflow data, consider normalizing the table so that some columns are moved to another table. This can then be queried in an asynchronous JOIN operation.
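To see this behavior, here is a minimal sketch (the table and column names are illustrative): the combined maximum width of the two varchar columns is about 10,000 bytes, so SQL Server creates the table with a warning, and a sufficiently long row is pushed off-row.

CREATE TABLE dbo.OverflowDemo
(
    Id   int IDENTITY(1,1) PRIMARY KEY,
    ColA varchar(5000),
    ColB varchar(5000)
);

-- Fits in-row: well under the 8,060-byte limit.
INSERT INTO dbo.OverflowDemo (ColA, ColB)
VALUES (REPLICATE('a', 100), REPLICATE('b', 100));

-- Exceeds 8,060 bytes: the widest column is moved to the
-- ROW_OVERFLOW_DATA allocation unit, leaving a 24-byte pointer in-row.
INSERT INTO dbo.OverflowDemo (ColA, ColB)
VALUES (REPLICATE('a', 5000), REPLICATE('b', 5000));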

  • The length of individual columns must still fall within the limit of
    8,000 bytes for varchar, nvarchar, varbinary, sql_variant, and CLR
    user-defined type columns. Only their combined lengths can exceed the
    8,060-byte row limit of a table.
  • The sum of other data type columns, including char and nchar data,
    must fall within the 8,060-byte row limit. Large object data is also
    exempt from the 8,060-byte row limit.
  • The index key of a clustered index cannot contain varchar columns
    that have existing data in the ROW_OVERFLOW_DATA allocation unit. If
    a clustered index is created on a varchar column and the existing
    data is in the IN_ROW_DATA allocation unit, subsequent insert or
    update actions on the column that would push the data off-row will
    fail. For more information about allocation units, see Table and
    Index Organization.
  • You can include columns that contain row-overflow data as key or
    nonkey columns of a nonclustered index.
  • The record-size limit for tables that use sparse columns is 8,018
    bytes. When the converted data plus existing record data exceeds
    8,018 bytes, MSSQLSERVER ERROR 576 is returned. When columns are
    converted between sparse and nonsparse types, the Database Engine
    keeps a copy of the current record data, which temporarily doubles
    the storage required for the record.
  • To obtain information about tables or indexes that might contain
    row-overflow data, use the sys.dm_db_index_physical_stats dynamic
    management function.
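As a sketch, the following query (run against the hypothetical demo table from above) reports page counts per allocation unit, so you can see whether any row-overflow pages exist; the 'DETAILED' mode is needed to populate record_count:

SELECT index_id, alloc_unit_type_desc, page_count, record_count
FROM sys.dm_db_index_physical_stats
     (DB_ID(), OBJECT_ID(N'dbo.OverflowDemo'), NULL, NULL, 'DETAILED')
WHERE alloc_unit_type_desc = N'ROW_OVERFLOW_DATA';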

Creating a table with n columns of datatype NVARCHAR(500)

CREATE PROC [dbo].[CreateMaxColTable_Nvarchar500]
    (@TableName nvarchar(100), @NumofCols int)
AS
BEGIN
    -- Builds and executes a CREATE TABLE statement with @NumofCols
    -- columns named X1..Xn, each typed NVARCHAR(500).
    -- Assumes @TableName is a valid, unqualified table name.
    DECLARE @i   INT;
    DECLARE @SQL NVARCHAR(MAX);

    SET @i = 1;
    SET @SQL = N'CREATE TABLE ' + @TableName + N' (';

    WHILE @i <= @NumofCols
    BEGIN
        SET @SQL = @SQL + N'X' + CAST(@i AS nvarchar(10)) + N' NVARCHAR(500), ';
        SET @i = @i + 1;
    END

    -- Strip the trailing comma and space, then close the column list.
    SET @SQL = LEFT(@SQL, LEN(@SQL) - 1) + N')';

    EXEC (@SQL);
END
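For example, to create a table with 1,000 NVARCHAR(500) columns (the table name is a placeholder):

EXEC [dbo].[CreateMaxColTable_Nvarchar500] 'MaxColTable', 1000;

Note that such a table is created with a warning: its maximum possible row size far exceeds 8,060 bytes, so wide rows will be pushed into ROW_OVERFLOW_DATA as described above.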

For more information, you can visit these links:

http://msdn.microsoft.com/en-us/library/ms186981%28SQL.105%29.aspx

http://technet.microsoft.com/en-us/library/ms143432.aspx

But could you describe the scenario in which you need a table with so many columns?
You should consider redesigning the database.

Is there a way around the 8k row length limit in SQL Server?

If you are using SQL Server 2005, 2008, or 2012, you should be able to use NVARCHAR(MAX) or NTEXT, which can hold far more than 8,000 characters. NVARCHAR(MAX) gives you up to 2^31 - 1 bytes (about one billion characters):

http://msdn.microsoft.com/en-us/library/ms186939(v=sql.90).aspx
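As a minimal sketch (names are illustrative), a MAX column does not count its full width against the 8,060-byte row limit, because values too large to fit in-row are stored off-row automatically:

CREATE TABLE dbo.BigTextDemo
(
    Id    int IDENTITY(1,1) PRIMARY KEY,
    Notes nvarchar(max)   -- moved off-row when it does not fit in-row
);

-- 100,000 characters: far beyond what an in-row column could hold.
INSERT INTO dbo.BigTextDemo (Notes)
VALUES (REPLICATE(CONVERT(nvarchar(max), N'x'), 100000));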

Need to exceed the 8k record limit when using wide columns/sparse tables

Not possible, because SQL Server stores rows on 8K pages. The only way around this is to store some of the data off-row (e.g. using MAX or other LOB types for some of your columns). To your application, this will still look like it's on the same row, even though physically it is stored in a completely different area of the disk.

If your sparse column set alone exceeds the limit, sorry, you'll need to look at a different way to store the data: either not pivoted, as EAV, or simply as two tables joined by a key, each containing half of the column set. For the latter, you can make the split relatively transparent to users by using views and/or funneling all data access / DML through stored procedures that understand the division.
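As a sketch of the two-table split (all names are illustrative), a view can reunite both halves so readers still see one logical row:

CREATE TABLE dbo.WideData_Part1
(
    RowId  int PRIMARY KEY,
    Col001 nvarchar(500),
    Col002 nvarchar(500)
    -- ... first half of the column set
);

CREATE TABLE dbo.WideData_Part2
(
    RowId  int PRIMARY KEY
           REFERENCES dbo.WideData_Part1 (RowId),
    Col501 nvarchar(500),
    Col502 nvarchar(500)
    -- ... second half of the column set
);
GO

CREATE VIEW dbo.WideData
AS
SELECT p1.RowId, p1.Col001, p1.Col002, p2.Col501, p2.Col502
FROM dbo.WideData_Part1 AS p1
JOIN dbo.WideData_Part2 AS p2 ON p2.RowId = p1.RowId;
GO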

Handling data with ~1000 variables, preferably using SQL

You are asking for the best approach. I can give an answer (how I solved it), but I can't say whether it is the 'best' way in your environment. My problem was collecting inventory data from many thousands of PCs (no, not the NSA, just kidding).

My solution was an attribute-value design (my rows were PCs; yours would be files):

Table File:
one row per file, PK FILE_ID

Table File_data:
one row per attribute of a file, PK (FILE_ID, ATTR_ID), with ATTR_NAME, ATTR_VALUE (and optionally ATTR_TYPE)

The table File_data grew fairly large (more than 1e6 rows), but the database handled that quickly.
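As a sketch of this design in T-SQL (names follow the description above; the column types are assumptions):

CREATE TABLE dbo.[File]
(
    FILE_ID int PRIMARY KEY
    -- further descriptive columns per file
);

CREATE TABLE dbo.File_data
(
    FILE_ID    int           NOT NULL REFERENCES dbo.[File] (FILE_ID),
    ATTR_ID    int           NOT NULL,
    ATTR_NAME  nvarchar(100) NOT NULL,
    ATTR_VALUE nvarchar(500) NULL,
    ATTR_TYPE  nvarchar(20)  NULL,
    PRIMARY KEY (FILE_ID, ATTR_ID)
);

-- Pivot back to a wide shape only for the attributes a query needs.
SELECT f.FILE_ID,
       MAX(CASE WHEN d.ATTR_NAME = N'Size'  THEN d.ATTR_VALUE END) AS [Size],
       MAX(CASE WHEN d.ATTR_NAME = N'Owner' THEN d.ATTR_VALUE END) AS [Owner]
FROM dbo.[File] AS f
JOIN dbo.File_data AS d ON d.FILE_ID = f.FILE_ID
GROUP BY f.FILE_ID;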

HTH

EDIT:

My answer above was rather short; here is some additional information on my (still working) solution:

The 'per info source' table has more than the two fields PK and FILE_ID, e.g. ISOURCE and ITYPE, where ISOURCE and ITYPE describe where the data came from (I had many sources) and what basic information type it is. This helps to give queries some structure: I did not need to include data from 'switches' or 'monitors' when searching for USB devices (edit: today, probably yes).

The attribute table had more fields, too. I will mention the two fields ISOURCE and ITYPE again: yes, the same names as above, but with a slightly different meaning, though with the same idea behind them.

What you would have to put into these fields depends entirely on your data.

I am sure that if you take a closer look at what information you have to collect, you will find suitable key values for it.


