SQL Insert Query Performance

SQL Server insert performance

Sounds like the inserts are causing SQL Server to recalculate the indexes. One possible solution would be to drop the index, perform the insert, and re-add the index. With your attempted solution, even if you tell it to ignore constraints, it will still need to keep the index updated.
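
A minimal T-SQL sketch of that drop/insert/recreate pattern, assuming a hypothetical dbo.Orders table with a nonclustered index named IX_Orders_CustomerId (all names are placeholders):

-- Drop the nonclustered index so the inserts don't have to maintain it
DROP INDEX IX_Orders_CustomerId ON dbo.Orders;

-- Perform the bulk of the inserts
INSERT INTO dbo.Orders (CustomerId, OrderDate, Amount)
SELECT CustomerId, OrderDate, Amount
FROM dbo.OrdersStaging;

-- Recreate the index once the data is in place
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);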

SQL speed up performance of insert?

To get the best possible performance you should (a T-SQL sketch of these steps follows the list):

  • Remove all triggers and constraints on the table
  • Remove all indexes, except for those needed by the insert
  • Ensure your clustered index is such that new records will always be inserted at the end of the table (an identity column will do just fine). This prevents page splits (where SQL Server must move data around because an existing page is full)
  • Set the fill factor to 0 or 100 (they are equivalent) so that no space in the table is left empty, reducing the number of pages that the data is spread across.
  • Change the recovery model of the database to Simple, reducing the overhead for the transaction log.
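
A rough T-SQL sketch of those steps, assuming a hypothetical dbo.Readings table with an identity-based clustered primary key PK_Readings and a spare nonclustered index IX_Readings_SensorId (all names are placeholders):

-- Disable triggers and check constraints for the duration of the load
DISABLE TRIGGER ALL ON dbo.Readings;
ALTER TABLE dbo.Readings NOCHECK CONSTRAINT ALL;

-- Drop nonclustered indexes that aren't needed by the insert
DROP INDEX IX_Readings_SensorId ON dbo.Readings;

-- Pack pages full by rebuilding the clustered index with fill factor 100
ALTER INDEX PK_Readings ON dbo.Readings REBUILD WITH (FILLFACTOR = 100);

-- Reduce transaction log overhead
ALTER DATABASE MyDb SET RECOVERY SIMPLE;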

Are multiple clients inserting records in parallel? If so then you should also consider the locking implications.

Note that SQL Server can suggest indexes for a given query either by executing the query in SQL Server Management Studio or via the Database Engine Tuning Advisor. You should do this to make sure you haven't removed an index which SQL Server was using to speed up the INSERT.
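
If you want to check for suggested indexes from a query window instead, one option (a sketch, not a full diagnostic) is the missing-index DMVs, which accumulate suggestions as your workload runs:

SELECT TOP (20)
       d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks DESC;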

If this still isn't fast enough then you should consider grouping up inserts and using BULK INSERT instead (or something like the bcp utility or SqlBulkCopy, both of which use BULK INSERT under the covers). This will give the highest throughput when inserting rows.
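
A hedged sketch of a BULK INSERT, assuming the rows have first been staged in a delimited file (the path, table name, and batch size are placeholders):

BULK INSERT dbo.Readings
FROM 'C:\load\readings.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK,           -- table lock so the load can be minimally logged
    BATCHSIZE = 10000  -- commit every 10,000 rows
);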

Also see Optimizing Bulk Import Performance - much of the advice in that article also applies to "normal" inserts.

SQL Server INSERT Performance (SQL, Azure SQL Database)

The query runtime appears to be dominated by IO waits.

Here are the wait stats

[Wait statistics screenshot]

PAGEIOLATCH_EX and PAGEIOLATCH_SH are waits on data pages being read from disk (for modification and for reading, respectively), and LOG_RATE_GOVERNOR is essentially also an IO wait, waiting on writes to the log file. The IO and log write limits on a 20 DTU database are quite small, and the standard tier DTU model provisions only 1-4 IOPS/DTU, so that's under 100 IOPS.
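
You can see the same picture yourself by querying the database-scoped wait stats DMV (a minimal sketch):

-- Top waits for this database since the stats were last cleared
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;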

So you can either

  1. Write less data

-by eliminating columns, especially the nvarchar(max) column if it's large

-by compressing the data using Page Compression or a Clustered Columnstore index, or by using the COMPRESS T-SQL function for the nvarchar(max) column if it is large (see the sketch after this list)

or


  2. Provide more resources

-by scaling to a higher DTU or vCore configuration, or to a serverless configuration with elastic scale

-by moving to Hyperscale, which provides 100 MB/s log throughput at every service level

-by moving this database into an elastic pool where it can share a larger pool of resources with other databases
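
A hedged T-SQL sketch of a few of these options, assuming a hypothetical dbo.Events table whose large text column has been switched to varbinary(max) so it can hold COMPRESS output, and a database named MyDb; all names and the service objective are placeholders:

-- Option 1: write less data
ALTER TABLE dbo.Events REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Compress the large payload on the way in, decompress on the way out
INSERT INTO dbo.Events (EventId, Payload)
VALUES (1, COMPRESS(N'{"some":"large json document"}'));

SELECT EventId, CAST(DECOMPRESS(Payload) AS nvarchar(max)) AS Payload
FROM dbo.Events;

-- Option 2: provide more resources (run from master)
ALTER DATABASE MyDb MODIFY (SERVICE_OBJECTIVE = 'S3');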

Table partitioning won't reduce the amount of writes, and In-Memory OLTP is only available in the Premium/Business Critical tier, which already has higher IOPS.

Insert Into From Select Performance SQL Server

Drop all the indexes on TableA, then insert again:

INSERT INTO TableA
SELECT * FROM TableB

Indexes are known to slow down insert statements.
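
If dropping them isn't convenient, nonclustered indexes can also be disabled and rebuilt around the load; a sketch, assuming TableA has a nonclustered index named IX_TableA_Col1 (the name is a placeholder):

-- Disable the nonclustered index so the insert doesn't maintain it
ALTER INDEX IX_TableA_Col1 ON TableA DISABLE;

INSERT INTO TableA
SELECT * FROM TableB;

-- Rebuilding brings the index back online with the new rows included
ALTER INDEX IX_TableA_Col1 ON TableA REBUILD;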

How to do very fast inserts to SQL Server 2008

ExecuteNonQuery with an INSERT statement, or even a stored procedure, will get you into the thousands of inserts per second range on Express. 4000-5000 inserts/sec are easily achievable; I know this for a fact.

What usually slows down individual inserts is the wait time for the log flush, and you need to account for that. The easiest solution is to simply batch commits, e.g. commit every 1000 inserts, or every second. This fills up the log pages and amortizes the cost of the log flush wait over all the inserts in a transaction.
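
The same principle, illustrated in T-SQL rather than client code (a sketch against a hypothetical dbo.Measurements table):

SET NOCOUNT ON;
DECLARE @i int = 0;

BEGIN TRANSACTION;
WHILE @i < 100000
BEGIN
    INSERT INTO dbo.Measurements (Reading) VALUES (RAND() * 100);
    SET @i += 1;

    -- Commit every 1000 inserts so one log flush covers the whole batch
    IF @i % 1000 = 0
    BEGIN
        COMMIT TRANSACTION;
        BEGIN TRANSACTION;
    END
END
IF @@TRANCOUNT > 0 COMMIT TRANSACTION;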

With batch commits you'll probably bottleneck on disk log write performance, which there is nothing you can do about short of changing the hardware (going RAID 0 stripe on the log).

If you hit earlier bottlenecks (unlikely) then you can look into batching statements, i.e. sending one single T-SQL batch with multiple inserts in it. But this seldom pays off.

Of course, you'll need to reduce the size of your writes to a minimum, meaning reduce the width of your table to the minimally needed columns, eliminate non-clustered indexes, eliminate unneeded constraints. If possible, use a Heap instead of a clustered index, since Heap inserts are significantly faster than clustered index ones.

There is little need to use the fast insert interface (i.e. SqlBulkCopy). Using ordinary INSERTs and ExecuteNonQuery on batch commits, you'll exhaust the drive's sequential write throughput long before you need to deploy bulk insert. Bulk insert is needed on fast SAN-connected machines, and you mention Express, so that's probably not the case. There is a perception to the contrary out there, but it is simply because people don't realize that bulk insert gives them batch commit, and it's the batch commit that speeds things up, not the bulk insert.

As with any performance test, make sure you eliminate randomness, and preallocate the database and the log; you don't want to hit a database or log growth event during test measurements or in production, that is an amateur mistake.
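
Preallocation is a one-time sizing of the data and log files; a sketch, assuming a database named MyDb with default logical file names (adjust both to your own):

-- Size the files up front so no autogrowth events occur during the load
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb, SIZE = 20GB);
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, SIZE = 5GB);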


