SQL Server INSERT into huge table is slow
OK, here's what I would do:
Check to see if you need both indexes [IX_ACCOUNT_ID_POINTS_CODE] and [IX_ACCOUNT_ID] as they may be redundant.
Before you do the INSERT, disable the trigger and drop the foreign keys. Do the INSERT, setting the fields normally set by the trigger and ensuring that the FK columns' values are valid. Re-enable the trigger, and re-create the foreign keys WITH NOCHECK.
I would leave the indexes on as you are inserting less than 0.2% of the total row count so it's probably faster to update them in-place rather than to drop and rebuild them.
SQL insert very slow
You're right in that indexing will do nothing to improve the insert performance (if anything it might hurt it due to extra overhead).
If inserts are slow it could be due to external factors such as the IO performance of the hardware running your SQL Server instance or it could be contention at the database or table level due to other queries. You'll need to get the performance profiler running to determine the cause.
If you're performing the inserts sequentially, you may want to look into performing a bulk insert operation instead which will have better performance characteristics.
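A bulk load from a file might look like the following sketch; the file path, table name, and delimiters are assumptions, and BATCHSIZE/TABLOCK are common throughput options rather than required ones:

```sql
-- Hypothetical table and file path. TABLOCK takes a bulk-update lock on the
-- table, and BATCHSIZE commits every N rows instead of one huge transaction.
BULK INSERT dbo.MyTable
FROM 'C:\data\rows.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    BATCHSIZE = 10000,
    TABLOCK
);
```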
And finally, some food for thought, if you're doing 10K inserts every 5 seconds you might want to consider a NoSQL database for bulk storage since they tend to have better performance characteristics for this type of application where you have large and frequent writes.
Slow insert: select from view
Checking the execution plan is the first step, as others have said. Given that the INSERT (rather than the query) is causing the delay, you could troubleshoot that further. Here are some things you can try:
- Try using Statistics IO to find out more, as answered here.
- Attempt an INSERT using static data (e.g. INSERT INTO [SomeSmallTable] VALUES (1, 2, '...etc');). This will tell you whether the issue affects any INSERT statement, or only inserts from the view specifically.
- Check how much data the view is returning. 4s may or may not be reasonable, depending on how many rows are being inserted.
- Check the table design to see how it is using primary keys, foreign keys, composite keys, indexes, triggers, etc. Some of these features optimise a table's design for selecting, but make insertion slower as a trade-off. A good answer about this can be found here.
- If you know it's not a load issue (because you're the only one using this database), check whether something else might be restricting resources on the machine you're using (other resource-intensive tasks, any other queries happening at the same time, scheduled jobs within SQL Server, etc.) You can use SQL Server Profiler to watch the queries in real time.
- If slow performance is not limited to this particular query, then there are other general design considerations you can look into.
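The first two checks above can be combined in one session; [SomeSmallTable] comes from the answer, while [SomeView] and the column values are placeholders:

```sql
-- Report logical/physical reads and elapsed time for each statement.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Static-value INSERT: if this alone is slow, the problem is the table
-- itself (indexes, triggers, contention), not the view.
INSERT INTO [SomeSmallTable] VALUES (1, 2, 'test');

-- Compare against the view-based INSERT to isolate the view's cost.
INSERT INTO [SomeSmallTable]
SELECT * FROM [SomeView];
```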
Insert into taking extremely long
Thanks for all the suggestions. @Larnu especially for making me look into my function. I've managed to get around my issue.
I found the cause of the slow insert to be the user-defined function I had in my INSERT statement. I then moved to using a sequence, which was a lot better but still a bit slow.
I then came across rowversion, a binary value that isn't human-readable but can be ordered, so it works great. It can also be applied to all my existing records in the DB without needing to reinsert or update them.
My inserts are now in the region of 1.9ms per record. Not super amazing, but good enough for my purposes, considering the number of columns.
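As a sketch of that approach (table and column names are hypothetical): adding a rowversion column backfills every existing row automatically, and the value is database-wide monotonically increasing, so it can order rows by modification sequence.

```sql
-- rowversion is generated by SQL Server on every insert/update; adding the
-- column populates existing rows without any explicit re-insert or update.
ALTER TABLE dbo.MyTable ADD RowVer rowversion;

-- Most recently inserted/modified rows first.
SELECT TOP (10) *
FROM dbo.MyTable
ORDER BY RowVer DESC;
```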
SQL speed up performance of insert?
To get the best possible performance you should:
- Remove all triggers and constraints on the table
- Remove all indexes, except for those needed by the insert
- Ensure your clustered index is such that new records will always be inserted at the end of the table (an identity column will do just fine). This prevents page splits (where SQL Server must move data around because an existing page is full)
- Set the fill factor to 0 or 100 (they are equivalent) so that no space in the table is left empty, reducing the number of pages that the data is spread across.
- Change the recovery model of the database to Simple, reducing the overhead for the transaction log.
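Two of the items above can be sketched directly in T-SQL; the database and table names here are placeholders:

```sql
-- Simple recovery model: minimal transaction-log overhead (note this
-- breaks point-in-time restore, so it suits dev/staging or bulk loads).
ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- Ever-increasing identity clustered key: new rows always append at the
-- end of the table, avoiding page splits. FILLFACTOR = 100 packs pages full.
CREATE TABLE dbo.Events (
    Id      int IDENTITY(1,1) NOT NULL,
    Payload nvarchar(400) NOT NULL,
    CONSTRAINT PK_Events PRIMARY KEY CLUSTERED (Id) WITH (FILLFACTOR = 100)
);
```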
Are multiple clients inserting records in parallel? If so, you should also consider the locking implications.
Note that SQL Server can suggest indexes for a given query, either when executing the query in SQL Server Management Studio or via the Database Engine Tuning Advisor. You should do this to make sure you haven't removed an index which SQL Server was using to speed up the INSERT.
If this still isn't fast enough, then you should consider grouping up inserts and using BULK INSERT instead (or something like the bcp utility or SqlBulkCopy, both of which use BULK INSERT under the covers). This will give the highest throughput when inserting rows.
Also see Optimizing Bulk Import Performance - much of the advice in that article also applies to "normal" inserts.