Preventing Deadlocks in SQL Server

How to prevent deadlock in concurrent T-SQL transactions?

I think the comment above from @DaleK helped me the most. I will quote it:

While it's a great ambition to try and avoid all deadlocks... it's not
always possible... and you can't prevent all future deadlocks from
happening, because as more rows are added to tables, query plans change.
Any application code should have some form of retry mechanism to
handle this. – Dale K

So I decided to implement some form of retry mechanism to handle this.
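
For reference, here is a minimal sketch of what such a retry loop can look like in T-SQL. The dbo.[User] update, the three-attempt limit and the one-second back-off are just illustrative assumptions; the same pattern can equally live in the application code (catch the SqlException, check that its Number is 1205, and re-run the whole transaction):

    DECLARE @retry INT = 3;

    WHILE @retry > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;

            -- the work that occasionally deadlocks goes here (illustrative)
            UPDATE dbo.[User]
            SET LastName = 'Smith'
            WHERE ID = 123;

            COMMIT TRANSACTION;
            SET @retry = 0;                           -- success, leave the loop
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;

            IF ERROR_NUMBER() = 1205 AND @retry > 1   -- 1205 = deadlock victim
            BEGIN
                SET @retry = @retry - 1;
                WAITFOR DELAY '00:00:01';             -- back off, then try again
            END
            ELSE
            BEGIN
                THROW;            -- retries exhausted, or a different error
            END
        END CATCH
    END;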

Avoiding Deadlocks within SQL transaction

I don't see any reason to make the SELECT query part of the transaction in order to solve the deadlock or timeout issue. Setting the ReadUncommitted isolation level on the first SQL connection myConnection, as you were considering, is also not the right approach. I see two possible solutions:

  1. First solution: Setting the isolation level IsolationLevel.ReadUncommitted on the transaction myTrans you have started will not help. If you are comfortable with dirty reads, then you should instead set this isolation level on the second SQL connection myConnection2, the one you establish to fire the SELECT query on the User table. To set the isolation level for that SELECT query through myConnection2 you can use the WITH (NOLOCK) table-level hint, so your query will look like:

    string sSelect = "SELECT FirstName, LastName FROM [User] WITH (NOLOCK) WHERE ID = 123";

    You can get more details here.
    Also, read about the consequences of dirty reads here, which are a side effect of using this particular isolation level.

  2. Second solution: The default isolation level of SQL Server is Read Committed, so when you fire a query through a new SQL connection named myConnection2 it works at the ReadCommitted isolation level. The default behavior of ReadCommitted is a blocking read, i.e. if there are uncommitted changes on a table (which may still be committed or rolled back by an active transaction), then your SELECT statement on the User table will be blocked. It waits for the transaction to finish so that it can read the newly updated data, or the original data in the case of a rollback. If it did not do such a blocking read, it would end up doing a dirty read, which is a well-known concurrency issue with databases.

    If you do not want your SELECT statements to be blocked and are happy to keep reading the last committed value of a row, then there is a database-level setting in SQL Server named READ_COMMITTED_SNAPSHOT. Here is how you can enable it using a SQL script:

    ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON

    Quoting Pinal Dave from his article here:

If you are having problem with blocking between readers (SELECT) and
writers (INSERT/UPDATE/DELETE), then you can enable this property
without changing anything from the application. Which means
application would still run under read committed isolation and will
still read only committed data.

Note: This is a database-level setting and will affect all transactions on your database that use the READ COMMITTED isolation level.
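
If you go this route, you can confirm that the setting actually took effect (MyDatabase is just the placeholder name from the script above). Note that the ALTER DATABASE statement needs exclusive access to the database, so it can hang while other connections are open:

    -- is_read_committed_snapshot_on returns 1 once the setting is enabled
    SELECT name, is_read_committed_snapshot_on
    FROM sys.databases
    WHERE name = N'MyDatabase';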

In my opinion you should go with the first solution. There are also a few key points which you should keep in mind to avoid deadlocks in SQL Server queries. Quoting Pinal Dave from here:

  • Minimize the size of transaction and transaction times.
  • Always access server objects in the same order each time in application.
  • Avoid cursors, while loops, or process which requires user input while it is running.
  • Reduce lock time in application.
  • Use query hints to prevent locking if possible (NoLock, RowLock)
  • Select deadlock victim by using SET DEADLOCK_PRIORITY.
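
For the last point, choosing the deadlock victim is a one-line session setting; a minimal sketch (which session you demote is of course your call) would be:

    -- Mark this session as the preferred deadlock victim: a LOW-priority
    -- session loses to NORMAL/HIGH ones when the engine has to pick a victim.
    -- DEADLOCK_PRIORITY accepts LOW, NORMAL, HIGH or an integer from -10 to 10.
    SET DEADLOCK_PRIORITY LOW;

    -- ...the statements of the less important batch (reports, cleanup jobs,
    -- bulk maintenance) then run in this session as usual.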

How to prevent deadlock in SQL Server stored procedure?

I'd need to see the table and index DDL and the full deadlock graph to be sure, but you probably just need to lock the target row on the initial read, e.g.:

ALTER PROCEDURE [dbo].[InsertDDM_UserDashboard]
    @p_email                VARCHAR(255),
    @p_dashboardPreferences VARCHAR(4000),
    @p_userDefaultDashboard VARCHAR(500)
AS
BEGIN
    BEGIN TRANSACTION;

    -- UPDLOCK + HOLDLOCK takes an update lock on the target row (or key range)
    -- and holds it until the transaction ends, so two concurrent callers cannot
    -- both pass the existence check and then deadlock on the insert/update.
    IF NOT EXISTS (SELECT *
                   FROM [dbo].[DDM_UserProfile] WITH (UPDLOCK, HOLDLOCK)
                   WHERE [Email] = @p_email)
    BEGIN
        INSERT INTO [dbo].[DDM_UserProfile]
            ([Email], [DashboardPreferences], [DefaultDashboard])
        VALUES
            (@p_email, @p_dashboardPreferences, @p_userDefaultDashboard);
    END
    ELSE
    BEGIN
        UPDATE [dbo].[DDM_UserProfile]
        SET [DashboardPreferences] = @p_dashboardPreferences,
            [DefaultDashboard]     = @p_userDefaultDashboard
        WHERE [Email] = @p_email;
    END

    COMMIT TRANSACTION;
END
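
Calling it is then a plain EXEC; the parameter values below are made up purely for illustration:

    EXEC [dbo].[InsertDDM_UserDashboard]
         @p_email                = 'jane.doe@example.com',
         @p_dashboardPreferences = '{"theme":"dark"}',
         @p_userDefaultDashboard = 'Sales';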

Avoiding deadlock by using NOLOCK hint

Occasional deadlocks on a locking RDBMS such as SQL Server or Sybase are to be expected.

You can code the client to retry, as recommended by MSDN's "Handling Deadlocks".
Basically, examine the SqlException and, maybe half a second later, try again.

Otherwise, you should review your code so that all tables are accessed in the same order. Or you can use SET DEADLOCK_PRIORITY to control which session becomes the victim.

On MSDN for SQL Server there is "Minimizing Deadlocks", which starts:

Although deadlocks cannot be completely avoided

This also mentions "Use a Lower Isolation Level", which I don't like (nor do many SQL folk here on SO) and which is what your question is asking about. Don't do it is the answer... :-)

  • What can happen as a result of using (nolock) on every SELECT in SQL Server?
  • https://dba.stackexchange.com/q/2684/630

Note: MVCC-type RDBMSs (Oracle, Postgres) don't have this problem (see http://en.wikipedia.org/wiki/ACID#Locking_vs_multiversioning), but MVCC has other issues.


