How to Update Top 100 Records in SQL Server

How can I update the top 100 records in SQL Server?

Note, the parentheses are required for UPDATE statements:

UPDATE TOP (100) table1
SET field1 = 1;

SQL UPDATE TOP () or UPDATE with SELECT TOP

The first statement will be faster, but without an ORDER BY the top 150 records are chosen arbitrarily, so the records updated by the two queries might not be the same. And since you are splitting the updates into batches, your approach may not update all records.

I would do this with the following approach, which is more consistent than yours:

;WITH cte AS
(
    SELECT TOP (350) value1,
                     value2,
                     value3
    FROM database1
    WHERE value1 = '123'
    ORDER BY ID -- or any other column to order the result
)
UPDATE cte
SET value1 = '',
    value2 = '',
    value3 = '';

Also, you don't have to worry about transaction log size when updating a couple of thousand records; there is no need for batches here.

How to UPDATE TOP(n) with ORDER BY giving a predictable result?

SQL Server allows you to update a derived table, CTE or view:

UPDATE x
SET IsDone = 1
OUTPUT
    inserted.Id,
    inserted.Etc
FROM (
    SELECT TOP (N) *
    FROM QueueTable
    WHERE IsDone = 0
    ORDER BY CreatedDate ASC
) x;

No need to compute a set of IDs first. This is faster and usually has more desirable locking behavior.

How can I select the top 100 rows from a table and update a column value to 'in progress' for all those selected rows?

You can use an updatable CTE:

with u_cte as (
    select col3, row_number() over (order by ?) as seq
    from table t
    where col3 is null
)
update u_cte
set col3 = 'in progress'
where seq <= 100;

Replace `?` with the column that defines the desired ordering, e.g. `order by id`.

T-SQL update top n rows for each group with n variable for each group (cross apply alternative)

You could try using a simple JOIN instead of a correlated subquery:

WITH cte AS
(
    SELECT d.id, d.col, d.dest, s.source
    FROM (SELECT *,
                 rn = ROW_NUMBER() OVER (PARTITION BY col ORDER BY id)
          FROM #desttable) d
    JOIN #sourcetable s
      ON d.col = s.col
     AND d.rn <= s.rownum
)
UPDATE cte
SET dest = source;

SELECT *
FROM #desttable;



You should post your real data sample, data structures, and query plans. Otherwise we can only guess how to improve it.

How to update large table with millions of rows in SQL Server?


  1. You should not be updating 10k rows in a set unless you are certain that the operation is getting Page Locks (due to multiple rows per page being part of the UPDATE operation). The issue is that Lock Escalation (from either Row or Page to Table locks) occurs at 5000 locks. So it is safest to keep it just below 5000, just in case the operation is using Row Locks.

  2. You should not be using SET ROWCOUNT to limit the number of rows that will be modified. There are two issues here:

    1. It has been deprecated since SQL Server 2005 was released (11 years ago):

      Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in a future release of SQL Server. Avoid using SET ROWCOUNT with DELETE, INSERT, and UPDATE statements in new development work, and plan to modify applications that currently use it. For a similar behavior, use the TOP syntax.

    2. It can affect more than just the statement you are dealing with:

      Setting the SET ROWCOUNT option causes most Transact-SQL statements to stop processing when they have been affected by the specified number of rows. This includes triggers. The ROWCOUNT option does not affect dynamic cursors, but it does limit the rowset of keyset and insensitive cursors. This option should be used with caution.

    Instead, use the TOP () clause (see the short illustration after this list).

  3. There is no purpose in having an explicit transaction here. It complicates the code and you have no handling for a ROLLBACK, which isn't even needed since each statement is its own transaction (i.e. auto-commit).

  4. Assuming you find a reason to keep the explicit transaction, then you do not have a TRY / CATCH structure. Please see my answer on DBA.StackExchange for a TRY / CATCH template that handles transactions:

    Are we required to handle Transaction in C# Code as well as in Store procedure
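
To illustrate point 2: SET ROWCOUNT silently caps other statements in the session until it is reset, while TOP () is scoped to a single statement. A quick sketch of the two patterns side by side (the table and column names are hypothetical):

-- Deprecated: caps most subsequent statements in the session, including triggers
SET ROWCOUNT 4999;
UPDATE dbo.SomeTable
SET SomeColumn = 'new value'
WHERE SomeColumn <> 'new value';
SET ROWCOUNT 0; -- easy to forget; the session stays capped if you do

-- Preferred: the limit applies to this one statement only
UPDATE TOP (4999) dbo.SomeTable
SET SomeColumn = 'new value'
WHERE SomeColumn <> 'new value';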

I suspect that the real WHERE clause is not being shown in the example code in the Question, so simply relying upon what has been shown, a better model (please see note below regarding performance) would be:

DECLARE @Rows INT,
        @BatchSize INT; -- keep below 5000 to be safe

SET @BatchSize = 2000;

SET @Rows = @BatchSize; -- initialize just to enter the loop

BEGIN TRY
    WHILE (@Rows = @BatchSize)
    BEGIN
        UPDATE TOP (@BatchSize) tab
        SET tab.Value = 'abc1'
        FROM TableName tab
        WHERE tab.Parameter1 = 'abc'
          AND tab.Parameter2 = 123
          AND tab.Value <> 'abc1' COLLATE Latin1_General_100_BIN2;
        -- Use a binary Collation (ending in _BIN2, not _BIN) to make sure
        -- that you don't skip differences that compare the same due to
        -- insensitivity of case, accent, etc, or linguistic equivalence.

        SET @Rows = @@ROWCOUNT;
    END;
END TRY
BEGIN CATCH
    RAISERROR(stuff);
    RETURN;
END CATCH;

By testing @Rows against @BatchSize, you can avoid that final UPDATE query (in most cases) because the final set is typically some number of rows less than @BatchSize, in which case we know that there are no more to process (which is what you see in the output shown in your answer). Only in those cases where the final set of rows is equal to @BatchSize will this code run a final UPDATE affecting 0 rows.

I also added a condition to the WHERE clause to prevent rows that have already been updated from being updated again.

NOTE REGARDING PERFORMANCE

I emphasized "better" above (as in, "this is a better model") because this has several improvements over the O.P.'s original code, and works fine in many cases, but is not perfect for all cases. For tables of at least a certain size (which varies due to several factors so I can't be more specific), performance will degrade as there are fewer rows to fix if either:

  1. there is no index to support the query, or
  2. there is an index, but at least one column in the WHERE clause is a string data type that does not use a binary collation, hence a COLLATE clause is added to the query here to force the binary collation, and doing so invalidates the index (for this particular query).

This is the situation that @mikesigs encountered, thus requiring a different approach. The updated method copies the IDs for all rows to be updated into a temporary table, then uses that temp table to INNER JOIN to the table being updated on the clustered index key column(s). (It's important to capture and join on the clustered index columns, whether or not those are the primary key columns!).

Please see @mikesigs answer below for details. The approach shown in that answer is a very effective pattern that I have used myself on many occasions. The only changes I would make are:

  1. Explicitly create the #targetIds table rather than using SELECT INTO...
  2. For the #targetIds table, declare a clustered primary key on the column(s).
  3. For the #batchIds table, declare a clustered primary key on the column(s).
  4. For inserting into #targetIds, use INSERT INTO #targetIds (column_name(s)) SELECT and remove the ORDER BY as it's unnecessary.

So, if you don't have an index that can be used for this operation, and can't temporarily create one that will actually work (a filtered index might work, depending on your WHERE clause for the UPDATE query), then try the approach shown in @mikesigs answer (and if you use that solution, please up-vote it).
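
For reference, here is a minimal sketch of that temp-table pattern with the four changes above applied. The table, key, and filter names (TableName, ID, Parameter1, Parameter2, Value) are hypothetical stand-ins borrowed from the earlier example; adapt them to your schema, and make sure ID is the clustered index key:

CREATE TABLE #targetIds (ID INT NOT NULL PRIMARY KEY CLUSTERED);
CREATE TABLE #batchIds (ID INT NOT NULL PRIMARY KEY CLUSTERED);

-- Capture the clustered index key of every row that needs updating
INSERT INTO #targetIds (ID)
SELECT t.ID
FROM TableName t
WHERE t.Parameter1 = 'abc'
  AND t.Parameter2 = 123
  AND t.Value <> 'abc1';

WHILE EXISTS (SELECT 1 FROM #targetIds)
BEGIN
    -- Move the next batch of keys (kept below the 5000-lock escalation point)
    DELETE TOP (4500) FROM #targetIds
    OUTPUT DELETED.ID INTO #batchIds (ID);

    -- Join on the clustered index key so each row is located by a seek
    UPDATE tab
    SET tab.Value = 'abc1'
    FROM TableName tab
    INNER JOIN #batchIds b
        ON b.ID = tab.ID;

    TRUNCATE TABLE #batchIds;
END;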

How to update 4,000,000 random records in SQL Server

You can use NEWID() in ORDER BY like this:

UPDATE x
SET [DateKey] = 2
FROM (SELECT TOP (4000000) *
      FROM [FACT_INTERNATIONAL]
      ORDER BY NEWID()) AS x;

Updating 4 million records in one statement is very expensive, and SQL Server may throw an error (e.g., fill the transaction log) after a while. In my opinion, try it with fewer records at a time.
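
That said, if you truly need all 4,000,000 rows updated, you could combine NEWID() with the batching pattern from the earlier answer. A rough sketch, assuming the filter [DateKey] <> 2 reliably identifies not-yet-updated rows (and noting that re-sorting by NEWID() on every iteration is itself expensive on a large table):

DECLARE @Remaining INT = 4000000,
        @BatchSize INT = 4999, -- stay under the 5000-lock escalation point
        @Rows INT = 1;

WHILE (@Remaining > 0 AND @Rows > 0)
BEGIN
    UPDATE x
    SET [DateKey] = 2
    FROM (SELECT TOP (@BatchSize) *
          FROM [FACT_INTERNATIONAL]
          WHERE [DateKey] <> 2 -- don't re-pick rows already updated
          ORDER BY NEWID()) AS x;

    SET @Rows = @@ROWCOUNT;
    SET @Remaining = @Remaining - @Rows;

    IF (@Remaining < @BatchSize)
        SET @BatchSize = @Remaining; -- final, smaller batch
END;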

How can I update top 100 rows in DB2

This is doable, although you may not get the results you expect...

First, always remember that SQL is inherently UNORDERED. This means that there is no such thing as the 'top' rows unless you explicitly define what you mean; otherwise, your results are 'random' (sort of).

Regardless, this is doable, presuming you have some sort of unique key on the table:

UPDATE table1
SET field1 = 1
WHERE table1Key IN (SELECT table1Key
                    FROM table1
                    WHERE field1 <> 1
                    ORDER BY field1
                    FETCH FIRST 100 ROWS ONLY)

Why do you only want to update 100 rows at a time? What sort of problem are you really trying to solve?


