Lock Escalation - What's Happening Here


"Lock escalation" is how SQL Server handles locking for large updates. When SQL Server is going to change a lot of rows, it's more efficient for the database engine to take fewer, larger locks (e.g. the entire table) instead of locking many smaller things (e.g. individual rows).

But this can be problematic when you have a huge table, because taking a lock on the entire table may block other queries for a long time. That's the tradeoff: many fine-grained locks are more expensive to manage than a few coarse-grained ones (or a single one), and having multiple queries locking different parts of a table creates the possibility of deadlock if one process ends up waiting on another.

There is a table-level option, LOCK_ESCALATION, new in SQL 2008, which allows control of lock escalation. The default, TABLE, allows locks to escalate all the way to the table level. DISABLE prevents lock escalation to the entire table in most cases. AUTO allows table locks unless the table is partitioned, in which case locks only escalate up to the partition level.
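As a sketch, the three settings can be applied like this (the table name is hypothetical):

```sql
-- TABLE (the default): locks may escalate all the way to a full table lock
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = TABLE);

-- AUTO: escalate to the partition level if the table is partitioned,
-- otherwise to the table level
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);

-- DISABLE: prevent escalation to the table level in most cases
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);
```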

I suspect that the IDE adds this setting when re-creating a table because TABLE is the default in SQL 2008. Note that LOCK_ESCALATION isn't supported in SQL 2005, so you'll need to strip it out if you try to run the script on a 2005 instance. Also, since TABLE is the default, you can safely remove that line before re-running your script.

Also note that in SQL 2005, before this setting existed, all locks could escalate to the table level; in other words, TABLE was the only behavior available on SQL 2005.

SQL Server lock escalation issue

Could anyone explain in simple way why lock escalation is needed and what is the so-called lock overhead please?

When you update a table and lock a row, you need to record that fact somewhere: this row has been updated and is locked.

When you update a million rows, you need to do this a million times, and therefore need space to keep a million locks.

SQL Server keeps its list of locks in memory, while Oracle keeps them on disk, in the data blocks themselves.

This is probably because Oracle is old (older than me), and SQL Server is young compared to Oracle.

Keeping temporary resources (like locks) in permanent storage is not an obvious solution from a designer's point of view. Just one consequence worth mentioning: in Oracle you may need a disk write to perform a SELECT FOR UPDATE.

Oracle's core features were developed in the early '80s, when keeping things in memory was not an option at all. They had to use disk space somehow.

If disk space was to be used anyway, you had to place a lock somewhere on disk.

And where to keep a lock for a row if not within the row itself?

The developers of SQL Server's lock system, when designing the RDBMS that was originally Sybase, decided to store temporary things (i.e. locks) in temporary storage (i.e. RAM).

But Oracle's design is always balanced: if you have 1,000,000 rows in your database, then you have storage space for 1,000,000 locks; if you have a billion rows, you can store a billion locks, and so on.

SQL Server's design is flawed in this sense, because your RAM and disk space may be unbalanced. You may easily have 16 GB of RAM and several terabytes of disk space, and your memory simply cannot hold all the locks.

That's why, when the lock count reaches a certain limit, SQL Server decides to escalate the locks: instead of keeping locks for, say, 10 individual rows in a data page (which requires 10 lock structures), it locks the whole data page (which requires just 1).
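One way to watch this happen is to query the lock DMV while a large update runs in another session; this is a rough sketch, and the session id is a made-up placeholder. If the KEY (row) and PAGE locks disappear and a single OBJECT lock remains, escalation has occurred:

```sql
-- Lock granularity held by one session (session id 55 is hypothetical)
SELECT resource_type,      -- OBJECT, PAGE, KEY (row), etc.
       request_mode,       -- e.g. X, IX
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = 55
GROUP BY resource_type, request_mode;
```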

Oracle, on the other hand, when updating a row, just writes the lock right into the data page.

That's why Oracle's locks are row-level.

Oracle doesn't "manage" locks in the common sense of the word: you can't, say, get a list of locked pages in Oracle.

When a transaction needs to update a row, it just goes to the row and sees if it's locked.

If it is, it looks up which transaction holds the lock (this information is contained in the lock descriptor in the data page) and adds itself to that transaction's notification queue: when the locking transaction ends, the original one gets notified and acquires the lock.

From a concurrency point of view, lock escalation is strictly a poor man's solution: it adds nothing to concurrency. A lock can end up covering rows that were never even touched.

From the performance's point of view, doing things in memory is, of course, faster than doing them on the disk.

But since Oracle caches the data blocks, and the operations described above are performed in memory anyway, the performance is the same, or close to it.

Does disabling lock escalation put more stress on the data mdf?

We have a large multi-use application suffering from OBJECT and PAGE
blocks on a number of tables.

...

The disk i/o is already strained.
Will additional, individual locks require more i/o than the automatic
table locks?

You misunderstand lock escalation; this is clear from the parts of your question quoted above.

Lock escalation goes from rows to table or from pages to table (I'm excluding partitions, as that's not your case), so if you currently have page locks, that is NOT lock escalation.

Lock granularity is chosen by the server unless you use hints (ROWLOCK, PAGLOCK), and if it chooses page locks there is no escalation. If it then removes all the page locks and substitutes them with a table lock, that means lock escalation occurred.
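For illustration, hints override the server's choice of granularity like this (table and column names are hypothetical; use hints sparingly):

```sql
-- Ask for row locks rather than letting the server decide
UPDATE dbo.Orders WITH (ROWLOCK)
SET Status = 'Shipped'
WHERE OrderId = 42;

-- Ask for page locks instead
UPDATE dbo.Orders WITH (PAGLOCK)
SET Status = 'Shipped'
WHERE Region = 'EU';
```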

The second thing that's wrong is your assumption that locks have anything to do with I/O. They don't. Locks are held in memory and have nothing to do with reads. You can check this article to see how CPU usage and query duration increase when locks are more granular: Why ROWLOCK Hints Can Make Queries Slower and Blocking Worse in SQL Server.

You should understand what causes your lock escalation.

Lock Escalation Thresholds

Lock escalation is triggered when lock escalation is not disabled on
the table by using the ALTER TABLE SET LOCK_ESCALATION option, and
when either of the following conditions exists:

  • A single Transact-SQL statement acquires at least 5,000 locks on a single nonpartitioned table or index.
  • A single Transact-SQL statement acquires at least 5,000 locks on a single partition of a partitioned table and the
    ALTER TABLE SET LOCK_ESCALATION option is set to AUTO.
  • The number of locks in an instance of the Database Engine exceeds memory or configuration thresholds.

Lock Escalation (Database Engine)

So if you are hitting the 5,000-locks-per-statement threshold, you should split your operations into smaller batches.
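A minimal batching sketch, assuming a hypothetical dbo.Orders table with an Archived flag: each statement touches well under 5,000 rows, so no single statement accumulates enough locks to trigger escalation.

```sql
-- Archive rows in batches small enough to stay under the escalation threshold
DECLARE @batch int = 2000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@batch) dbo.Orders
    SET Archived = 1
    WHERE Archived = 0;

    -- Stop once a batch comes up short: nothing (or little) left to do
    IF @@ROWCOUNT < @batch BREAK;
END;
```

Keeping each batch in its own transaction also lets the lock memory be released between batches.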

And if you are under memory pressure, disabling lock escalation will make your situation even worse.

UPDATE

I've found this description of locks in the book Microsoft SQL Server 2012 Internals (Developer Reference) by Kalen Delaney, Bob Beauchemin, Conor Cunningham, Jonathan Kehayias, Paul S. Randal, and Benjamin Nevarez:

Locks aren’t on-disk structures. You won’t find a lock field directly
on a data page or a table header, and the metadata that keeps track of
locks is never written to disk. Locks are internal memory structures:
They consume part of the memory used for SQL Server. A lock is
identified by lock resource, which is a description of the resource
that’s locked (a row, index key, page, or table). To keep track of the
database, the type of lock, and the information describing the locked
resource, each lock requires 64 bytes of memory on a 32-bit system and
128 bytes of memory on a 64-bit system. This 64-byte or 128-byte
structure is called a lock block.
... The lock manager maintains a lock hash table. Lock resources,
contained within a lock block, are hashed to determine a target hash
slot in the hash table. All lock blocks that hash to the same slot are
chained together from one entry in the hash table. Each lock block
contains a 15-byte field that describes the locked resource. The lock
block also contains pointers to lists of lock owner blocks. Each of
the three states has a separate list for lock owners.
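Putting the numbers from that excerpt together: the 5,000-lock statement threshold corresponds to only about 5,000 × 128 bytes ≈ 640 KB of lock blocks on a 64-bit system, so escalation kicks in long before lock memory itself becomes a problem. A quick sketch to see the current lock footprint per resource type (the 128-byte figure is taken from the excerpt above):

```sql
-- Approximate lock-block memory per resource type on a 64-bit instance
SELECT resource_type,
       COUNT(*)                AS lock_count,
       COUNT(*) * 128 / 1024.0 AS approx_kb
FROM sys.dm_tran_locks
GROUP BY resource_type
ORDER BY lock_count DESC;
```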

Hope it helps.

How to efficiently use LOCK_ESCALATION in SQL Server 2008

Lock escalation triggers when a statement holds more than 5,000 locks on a SINGLE object. A statement holding 3,000 locks each on two different indexes of the same table will not trigger escalation.

When a lock escalation is attempted and a conflicting lock exists on the object, the attempt is aborted and retried after another 1,250 locks are held (held, not acquired).

So if your updates are performed on individual rows and you have a supporting index on the column, then lock escalation is not your issue.

You will be able to verify this using the Locks-> lock escalation event from profiler.
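If you'd rather not use Profiler, an Extended Events session can capture the same information; this is a sketch, and the session name is made up:

```sql
-- Capture lock escalations with the sql text and session that caused them
CREATE EVENT SESSION [TrackLockEscalation] ON SERVER
ADD EVENT sqlserver.lock_escalation
    (ACTION (sqlserver.sql_text, sqlserver.session_id))
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [TrackLockEscalation] ON SERVER STATE = START;
```

Query the ring buffer target (or watch the live data viewer in SSMS) while your workload runs to see which statements escalate.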

I suggest you capture the deadlock trace to identify the actual cause of the deadlock.

How do I force lock escalation (to flush out deadlock issues) during testing?

One approach is to:

  1. In the test environment, run a script like the following to disable row and page locking on all indexes (so everything goes directly to a table lock)
  2. Run the tests
  3. Run the script again setting it back to normal locking
--
-- Script to disable/enable row & page locking on dev/test environment to flush out deadlock issues
-- executes statement like this for each table in the database:
-- ALTER INDEX indexname ON tablename SET ( ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF )
--
-- DO NOT RUN ON A PRODUCTION DATABASE!!!!
--
set nocount on
declare @newoption varchar(3)
--------------------------------------------------------------------
-- Change variable below to 'ON' or 'OFF' --------------------------
-- 'OFF' means row & page locking is disabled and everything
-- triggers a table lock
-- 'ON' means row & page locking is enabled and the server chooses
-- how to escalate the locks (this is the default setting)
set @newoption = 'OFF'
--------------------------------------------------------------------

DECLARE @TableName varchar(300)
DECLARE @IndexName varchar(300)
DECLARE @sql varchar(max)

DECLARE inds CURSOR FAST_FORWARD FOR
SELECT tablename, indname
FROM (
select top 100 percent
so.name as tablename
, si.indid
, si.name as indname
, INDEXPROPERTY( si.id, si.name, 'IsPageLockDisallowed') as IsPageLockDisallowed
, INDEXPROPERTY( si.id, si.name, 'IsRowLockDisallowed') as IsRowLockDisallowed
from sysindexes si
join sysobjects so on si.id = so.id
where si.status & 64 = 0
and objectproperty(so.id, 'IsMSShipped') = 0
and si.name is not null
and so.name not like 'aspnet%'
and so.name not like 'auditLog%'
order by so.name, si.indid
) t

OPEN inds
FETCH NEXT FROM inds INTO @TableName, @IndexName

WHILE @@FETCH_STATUS = 0
BEGIN

SET @sql = 'ALTER INDEX [' + @IndexName + '] ON [dbo].[' + @TableName + '] SET ( ALLOW_ROW_LOCKS = ' + @newoption + ', ALLOW_PAGE_LOCKS = ' + @newoption +' )'
PRINT @sql
EXEC(@sql)

FETCH NEXT FROM inds INTO @TableName, @IndexName
END

CLOSE inds
DEALLOCATE inds

PRINT 'Done'

Other notes:

  • I got the core of the above script from somebody else's article, but I've long-since forgotten where. Apologies for the lack of attribution.
  • Note that the above script will overwrite any existing custom lock escalation settings on your tables. You can check for these by first running the inner select ("SELECT TOP 100 PERCENT so.name...") to see the existing settings.

