How to Release Possible Postgres Row Locks

How to release possible Postgres row locks?

What version of PostgreSQL are you running? The following assumes 8.1.8 or later (it may apply to earlier versions too, I don't know).

I presume that you mean that phpPgAdmin timed out -- the PostgreSQL backend will take as long as it takes to complete a query/update. In that case, it's possible that the original session is still alive and the UPDATE query is still running. I suggest running the following query (taken from chapter 24 of the PostgreSQL docs) on the machine that hosts the PostgreSQL server process, to see whether the session is still alive:

ps auxwww|grep ^postgres

Several rows should appear: 1 for the postmaster master process, and 1 each for "writer", "stats buffer", and "stats collector" processes. Any remaining lines are for processes serving DB connections. These lines will contain the username and database name.

Hopefully, from that you can see whether the session you performed the original UPDATE in is still hanging around. Although in theory you could find more detailed info by SELECTing from the system view pg_stat_activity, by default PostgreSQL is not set up to populate the most useful fields (such as current_query and query_start). See chapter 24 for how to enable this in the future.
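
Note that on current PostgreSQL versions (9.2 and later) pg_stat_activity is populated by default, and the columns are named query and pid rather than current_query and procpid. A quick sketch of checking from SQL instead of ps:

SELECT pid, usename, datname, state, query_start, query
FROM   pg_stat_activity;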

If you see the session is still there, kill it. You will need to be logged in as the user running the process (usually postgres) or root to do so -- if you don't run the server yourself, get your DBA to do this for you.
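
If you would rather do it from SQL than from the shell, and you have sufficient privileges, pg_terminate_backend() is the equivalent of killing the backend process (12345 below is a placeholder -- use the pid you found above):

SELECT pg_terminate_backend(12345);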

One more thing: for reading and updating rows, PostgreSQL relies on multiversion concurrency control (MVCC) rather than read locks. Each writing transaction creates new row versions, which become the current versions when the transaction commits, provided they don't conflict with updates made in the meantime by other transactions. So I suspect the "hanging" you're seeing is caused by something else -- though what, I'm not sure. (Have you checked the obvious things, like whether the disk partition containing the DB is full?)

Lock row, release later

You can have only one transaction in a database session, so the question as such is moot.

But I assume that you do not really want to run a transaction, you want to block access to a certain row for a while.

It is usually not a good idea to use regular database locks for such a purpose (the exception are advisory locks, which serve exactly that purpose, but are not tied to table rows). The problem is that long database transactions keep autovacuum from doing its job.

I recommend that you add a status column to the table and change the status rather than locking the row. That would serve the same purpose in a more natural fashion and make your problem go away.

If you are concerned that the status flag might not get cleared due to application logic problems, replace it with a visible_from column of type timestamp with time zone that initially contains -infinity. Instead of locking the row, set the value to current_timestamp + INTERVAL '5 minutes'. Only select rows that fulfill WHERE visible_from < current_timestamp. That way the “lock” will automatically expire after 5 minutes.
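
A minimal sketch of that pattern, with illustrative table and column names (jobs, id) rather than anything from the question:

ALTER TABLE jobs
   ADD COLUMN visible_from timestamp with time zone NOT NULL DEFAULT '-infinity';

-- instead of locking the row, hide it for 5 minutes:
UPDATE jobs
SET    visible_from = current_timestamp + interval '5 minutes'
WHERE  id = 42;

-- readers only consider rows whose "lock" has expired:
SELECT *
FROM   jobs
WHERE  visible_from < current_timestamp;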

PostgreSQL obtain and release LOCK inside stored function

In Postgres 11 or later, consider a PROCEDURE which allows transaction control. See:

  • Do stored procedures run in database transaction in Postgres?
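
A minimal sketch of the procedure route, assuming Postgres 11+ and a hypothetical tasks table; note that COMMIT inside a procedure only works when the CALL is not wrapped in an outer transaction block:

CREATE PROCEDURE claim_and_process(p_id int)
LANGUAGE plpgsql AS
$$
BEGIN
   UPDATE tasks SET claimed = true WHERE id = p_id;   -- takes a row lock
   COMMIT;                                            -- releases the lock here, mid-procedure
   -- everything below runs in a new transaction; the row lock is already gone
   UPDATE tasks SET processed_at = now() WHERE id = p_id;
END;
$$;

CALL claim_and_process(1);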

With functions, there is no way. Functions in Postgres are atomic (always inside a transaction) and locks are released at the end of a transaction.

You might be able to work around this with advisory locks. But those are not the same thing. All competing transactions have to play along. Concurrent access that is not aware of advisory locks will spoil the party.

Code example on dba.SE:

  • Postgres UPDATE ... LIMIT 1

Or you might get somewhere with "cheating" autonomous transactions with dblink:

  • How do I do large non-blocking updates in PostgreSQL?
  • Does Postgres support nested or autonomous transactions?

Or you re-assess your problem and split it up into a couple of separate transactions.

Is it possible to automatically release a lock in PostgreSQL?

This is not a deadlock; it is a lost-connection problem.

A deadlock occurs when two transactions try to lock the resources previously locked by each other. PostgreSQL detects these situations.

In your case, master locks a resource, slave waits for master, and master waits for the user input which it never receives because the connection is lost.

Whenever PostgreSQL detects a lost connection, it rolls back the transaction automatically.

To control how quickly a lost connection is detected, you can use the following PostgreSQL settings:

tcp_keepalives_idle (integer)

On systems that support the TCP_KEEPIDLE socket option, specifies the number of seconds between sending keepalives on an otherwise idle connection. A value of zero uses the system default. If TCP_KEEPIDLE is not supported, this parameter must be zero. This parameter is ignored for connections made via a Unix-domain socket.

tcp_keepalives_interval (integer)

On systems that support the TCP_KEEPINTVL socket option, specifies how long, in seconds, to wait for a response to a keepalive before retransmitting. A value of zero uses the system default. If TCP_KEEPINTVL is not supported, this parameter must be zero. This parameter is ignored for connections made via a Unix-domain socket.

tcp_keepalives_count (integer)

On systems that support the TCP_KEEPCNT socket option, specifies how many keepalives may be lost before the connection is considered dead. A value of zero uses the system default. If TCP_KEEPCNT is not supported, this parameter must be zero. This parameter is ignored for connections made via a Unix-domain socket.
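
For example, these can be set in postgresql.conf or via ALTER SYSTEM (the values below are purely illustrative, in seconds):

ALTER SYSTEM SET tcp_keepalives_idle = 60;      -- start sending keepalives after 60 s of idle
ALTER SYSTEM SET tcp_keepalives_interval = 10;  -- retransmit every 10 s without a response
ALTER SYSTEM SET tcp_keepalives_count = 5;      -- declare the connection dead after 5 lost keepalives
SELECT pg_reload_conf();                        -- reload so the new defaults take effect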

PostgreSQL select for update lock, new rows

The issue is that each command only sees rows that have been committed before the query started. There are various possible solutions ...

Stricter isolation level

You can solve this with a stricter isolation level, but that's relatively expensive.

Laurenz already provided a solution for this.

Just start a new command

Keep the (cheap) default isolation level READ COMMITTED, and just start a new command.

Only a few rows to lock

When only locking a handful of rows, the dead simple solution is to repeat the same SELECT ... FOR UPDATE. The second iteration sees newly committed rows and locks them additionally.

There is a theoretical race condition with additional transactions that might lock new rows before the waiting transaction does. That would result in a deadlock. Highly unlikely, but to be absolutely sure, lock rows in consistent order:

BEGIN;  -- default READ COMMITTED

SELECT FROM table_a WHERE is_latest ORDER BY id FOR UPDATE; -- consistent order
SELECT * FROM table_a WHERE is_latest ORDER BY id FOR UPDATE; -- just repeat !!

-- DO SOME APP LOGIC TO TEST VALIDITY

-- pseudo-code
IF all_good
   UPDATE table_a SET is_latest = false WHERE ...;
   INSERT INTO table_a (is_latest, ...) VALUES (true, ...);
   COMMIT;
ELSE
   ROLLBACK;
END;

A partial index on (id) WHERE is_latest would be ideal.
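
For instance, a sketch using the names from the example above:

CREATE INDEX table_a_latest_idx ON table_a (id) WHERE is_latest;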

More rows to lock

For more than a handful of rows, I would instead create a dedicated one-row token table. A bullet-proof implementation could look like this, run as admin or superuser:

CREATE TABLE public.single_task_x (just_me bool CHECK (just_me) PRIMARY KEY DEFAULT true);
INSERT INTO public.single_task_x VALUES (true);
REVOKE ALL ON public.single_task_x FROM public;
GRANT SELECT, UPDATE ON public.single_task_x TO public; -- or just to those who need it

See:

  • How to allow only one row for a table?

Then:

BEGIN;  -- default READ COMMITTED

SELECT FROM public.single_task_x FOR UPDATE;
SELECT * FROM table_a WHERE is_latest; -- FOR UPDATE? ①

-- DO SOME APP LOGIC TO TEST VALIDITY

-- pseudo-code
IF all_good
   UPDATE table_a SET is_latest = false WHERE ...;
   INSERT INTO table_a (is_latest, ...) VALUES (true, ...);
   COMMIT;
ELSE
   ROLLBACK;
END;

A single lock is cheaper.

① You may or may not want to lock additionally, to defend against other writes, possibly with a weaker lock ....

Either way, all locks are released at the end of the transaction automatically.

Advisory lock

Or use an advisory lock. pg_advisory_xact_lock() persists for the duration of the transaction:

BEGIN;  -- default READ COMMITTED

SELECT pg_advisory_xact_lock(123);
SELECT * FROM table_a WHERE is_latest;

-- do stuff

COMMIT; -- or ROLLBACK;

Make sure to use a unique token for your particular task. 123 in my example. Consider a look-up table if you have many different tasks.

To release the lock at a different point in time (not when the transaction ends), consider a session-level lock with pg_advisory_lock(). Then you can (and must) unlock manually with pg_advisory_unlock() - or close the session.

Both of these wait for the locked resource. There are alternative functions returning false instead of waiting ...
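
For the transaction-level flavor, that non-waiting variant is pg_try_advisory_xact_lock():

SELECT pg_try_advisory_xact_lock(123);  -- true if acquired, false immediately if somebody else holds it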

Remove locks without a pid in postgres

Try identifying exactly who is blocking you with this statement:

SELECT blocked_locks.pid           AS blocked_pid,
       blocked_activity.usename    AS blocked_user,
       blocking_locks.pid          AS blocking_pid,
       blocking_activity.usename   AS blocking_user,
       blocked_activity.query      AS blocked_statement,
       blocking_activity.query     AS current_statement_in_blocking_process,
       blocked_activity.application_name  AS blocked_application,
       blocking_activity.application_name AS blocking_application
FROM   pg_catalog.pg_locks blocked_locks
JOIN   pg_catalog.pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid
JOIN   pg_catalog.pg_locks blocking_locks
       ON  blocking_locks.locktype = blocked_locks.locktype
       AND blocking_locks.database      IS NOT DISTINCT FROM blocked_locks.database
       AND blocking_locks.relation      IS NOT DISTINCT FROM blocked_locks.relation
       AND blocking_locks.page          IS NOT DISTINCT FROM blocked_locks.page
       AND blocking_locks.tuple         IS NOT DISTINCT FROM blocked_locks.tuple
       AND blocking_locks.virtualxid    IS NOT DISTINCT FROM blocked_locks.virtualxid
       AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
       AND blocking_locks.classid       IS NOT DISTINCT FROM blocked_locks.classid
       AND blocking_locks.objid         IS NOT DISTINCT FROM blocked_locks.objid
       AND blocking_locks.objsubid      IS NOT DISTINCT FROM blocked_locks.objsubid
       AND blocking_locks.pid          != blocked_locks.pid
JOIN   pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
WHERE  NOT blocked_locks.granted;

blocking_pid is the pid you want to terminate.

You can check whether your statement is blocked by somebody with a shorter query:

t=# select datname,pid,usename,query from pg_stat_activity where waiting;
 datname |  pid  | usename  |        query
---------+-------+----------+---------------------
 t       | 30930 | postgres | select * from so24;
(1 row)

Update: in 9.6 the column waiting was replaced with wait_event and wait_event_type, so the query becomes

select datname,pid,usename,query from pg_stat_activity where wait_event is not null;

Uncommitted Postgresql Row lock

As soon as the function has ended and the transaction is finished, the locks are gone.

How do table locks and row locks interact with each other in PostgreSQL?

Table locks don't interact with row locks at all.

Table locks are taken before the operation starts to prevent conflicting activity on a table-wide level (for example, dropping a table which somebody else is using).

Row locks are taken as the rows are processed. They prevent for example concurrent updates on the same row, while it is perfectly fine for concurrent transactions to update different rows in the same table at the same time.
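
A small sketch with a hypothetical accounts table, run in two concurrent sessions:

-- session 1
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- row lock on id = 1

-- session 2, at the same time
BEGIN;
UPDATE accounts SET balance = balance + 10 WHERE id = 2;  -- different row: proceeds immediately
UPDATE accounts SET balance = balance + 10 WHERE id = 1;  -- same row: blocks until session 1 commits or rolls back

Both sessions also hold an ordinary ROW EXCLUSIVE table lock on accounts, which does not conflict with itself -- only with table-wide operations such as ALTER TABLE or DROP TABLE.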


