Flock Locking Order

flock locking order?

If there are multiple processes waiting for an exclusive lock, it's not specified which one succeeds in acquiring it first. Don't rely on any particular ordering.

Having said that, the current kernel code wakes them in the order they blocked. This comment is in fs/locks.c:

/* Insert waiter into blocker's block list.
 * We use a circular list so that processes can be easily woken up in
 * the order they blocked. The documentation doesn't require this but
 * it seems like the reasonable thing to do.
 */

If you want to have a set of processes run in order, don't use flock(). Use SysV semaphores (semget() / semop()).

Create a semaphore set that contains one semaphore for each process after the first, and initialise them all to 0 (SysV semaphore values cannot be negative). Each process after the first should do a semop() on its own semaphore with a sem_op value of -1 - since the value is already 0 and cannot go below it, this blocks. After the first process is complete, it should do a semop() on the second process's semaphore with a sem_op value of 1 - this will wake the second process. After the second process is complete, it should do a semop() on the third process's semaphore with a sem_op value of 1, and so on.

Does flock lock the file across processes?

On Linux (and other UNIX-like) systems, flock() is purely an advisory lock. It will prevent other processes from obtaining a conflicting flock() lock on the same file, but it will not prevent the file from being modified or removed.

On Windows systems, flock() is a mandatory lock, and will prevent modifications to the file.

Does flock maintain a queue when there are multiple processes waiting for a lock?

I tested this scenario with a working example script and found that the waiting jobs were processed in an apparently random order.

Linux flock, how to just lock a file?

To lock the file:

exec 3>filename # open filename (creating or truncating it) on FD 3
flock -x 3      # lock FD 3; this blocks until the lock is available

To release the lock:

exec 3>&-       # close the file handle

You can also do it the way the flock man page describes:

{
  flock -x 3
  ...other stuff here...
} 3>filename

...in which case the file is automatically closed when the block exits. (A subshell could be used here instead, with ( ) rather than { }, but that should be a deliberate decision: subshells have a performance penalty, and variable modifications and other state changes are scoped to the subshell itself.)


If you're running a new enough version of bash, you don't need to manage file descriptor numbers by hand:

# this requires bash 4.1 or newer, for automatic FD allocation
exec {lock_fd}>filename # open filename, store the FD number in lock_fd
flock -x "$lock_fd"     # pass that FD number to flock
exec {lock_fd}>&-       # later: close the FD by number, releasing the lock

...now, for your function, we're going to need associative arrays and automatic FD allocation (and, to allow the same file to be locked and unlocked from different paths, GNU readlink) -- so this won't work with older bash releases:

declare -A lock_fds=()                    # store FDs in an associative array
getLock() {
  local file=$(readlink -f "$1")          # declare locals; canonicalize name
  local op=$2
  case $op in
    LOCK_UN)
      [[ ${lock_fds[$file]} ]] || return  # if not locked, do nothing
      exec {lock_fds[$file]}>&-           # close the FD, releasing the lock
      unset "lock_fds[$file]"             # ...and clear the map entry.
      ;;
    LOCK_EX)
      [[ ${lock_fds[$file]} ]] && return  # if already locked, do nothing
      local new_lock_fd                   # don't leak this variable
      exec {new_lock_fd}>"$file"          # open the file...
      flock -x "$new_lock_fd"             # ...lock the FD...
      lock_fds[$file]=$new_lock_fd        # ...and store the locked FD.
      ;;
  esac
}

If you're on a platform where GNU readlink is unavailable, I'd suggest replacing the readlink -f call with realpath from sh-realpath by Michael Kropat (relying only on widely-available readlink functionality, not GNU extensions).

Perl flock on Linux: what if many processes wait to open a locked file?

Only one process can hold an exclusive lock at any given time.

As such, one of the following should be true for you:

  • The file you are locking is located on an NFS filesystem. flock doesn't support these.
  • The calls to flock you describe occurred in the same process. flock can't be used to exclude threads of the same process.
  • The lock was released. Keep in mind the unlock could have happened in a forked process.

Change flock (util-linux) source so that locks are closed on exec()

In the event that you want to create a local variant of the flock utility, rather than to replace the system's flock, this is what you're dealing with:

On modern Linux, the flock(1) program is a wrapper around the flock(2) system call (older Linux has a flock(3) library function instead, which has different semantics). The documentation for flock(2) explicitly and unequivocally says:

Locks created by flock() are preserved across an execve(2).

There is no provision for flock locks to be closed on exec. However, it also says:

Locks created by flock() are associated with an open file table entry. This means that duplicate file descriptors (created by, for example, fork(2) or dup(2)) refer to the same lock, and this lock may be modified or released using any of these descriptors. Furthermore, the lock is released either by an explicit LOCK_UN operation on any of these duplicate descriptors, or when all such descriptors have been closed.

You therefore have the alternative of using fcntl() to mark the file descriptor to be closed on exec. When the FD is closed (on exec), the flock lock will be removed, too, provided that there are no other handles on the same open file description. Obviously, however, the FD is closed as part of this approach, which may be an undesirable result. Also, it is ineffective at releasing the lock if other handles on the same open file description survive the exec (which could be viewed as a feature).


