Synchronize Shell Script Execution

Synchronize shell script execution

The "lockfile" command provides what you're trying to do for shell scripts without the race condition. The command was written by the procmail folks specifically for this sort of purpose and is available on most BSD/Linux systems (as procmail is available for most environments).

Your test becomes something like this:

lockfile -r 3 "$FLAC.lock"
if test $? -eq 0 ; then
    flac -dc "$FLAC" | lame${lame_opts} \
        --tt "$TITLE" \
        --tn "$TRACKNUMBER" \
        --tg "$GENRE" \
        --ty "$DATE" \
        --ta "$ARTIST" \
        --tl "$ALBUM" \
        --add-id3v2 \
        - "$MP3"
fi
rm -f "$FLAC.lock"

Alternatively, you could make lockfile keep retrying indefinitely so that you don't need to test the return code; instead, test for the existence of the output file to decide whether to run flac.
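A sketch of that alternative (assuming procmail's lockfile is installed, and the same $FLAC/$MP3 variables as above):

```shell
# With no -r option, lockfile retries indefinitely until it gets the lock
lockfile "$FLAC.lock"
if [ ! -f "$MP3" ]; then
    # another worker may already have produced the output file
    flac -dc "$FLAC" | lame${lame_opts} - "$MP3"
fi
rm -f "$FLAC.lock"
```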

Synchronizing four shell scripts to run one after another in unix

You are experiencing a classic race condition. To solve it, you need a "lock" (or something similar) shared between your 4 scripts.

There are several ways to implement this. One way to do this in bash is by using the flock command, and an agreed-upon filename to use as a lock. The flock man page has some usage examples which resemble this:

(
    flock -x 200  # try to acquire an exclusive lock on the file
    # do whatever check you want. You are guaranteed to be the only one
    # holding the lock
    if [ -f "$paramfile" ]; then
        :  # do something
    fi
) 200>/tmp/lock-life-for-all-scripts
# The lock is automatically released when the above block is exited

You can also ask flock to fail right away if the lock can't be acquired, or to fail after a timeout (e.g. to print "still trying to acquire the lock" and restart).
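For example (a sketch using util-linux flock; the lock path and messages here are arbitrary):

```shell
LOCK=/tmp/flock-demo.lock

# -n: fail immediately if another process already holds the lock
(
    flock -n 200 || { echo "lock busy, giving up"; exit 1; }
    echo "acquired without waiting"
) 200>"$LOCK"
status_nonblock=$?

# -w 5: wait at most 5 seconds for the lock before giving up
(
    flock -w 5 200 || { echo "timed out waiting for lock"; exit 1; }
    echo "acquired within the timeout"
) 200>"$LOCK"
status_timeout=$?
```

Since nothing else holds the lock here, both attempts succeed with exit status 0; run two copies of a script built this way to see -n fail fast.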

Depending on your use case, you could also put the lock on the 'informatica' binary itself (in that case, be sure to use 200< rather than 200>, so the file is opened for reading instead of being truncated for writing).

Run shell script from Java Synchronously

You want to wait for the Process to finish; that is exactly what waitFor() does:

public void executeScript() {
    try {
        ProcessBuilder pb = new ProcessBuilder("myscript.sh");
        Process p = pb.start();        // start the process
        int exitCode = p.waitFor();    // block until it finishes
        if (exitCode == 0) {
            System.out.println("Script executed successfully");
        } else {
            System.out.println("Script exited with code " + exitCode);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Synchronize a Linux system command and a while-loop in Python

Just add an & at the end, to make the process detach to the background:

os.system("raspivid -t 20000 -o /home/pi/test.h264 &")

According to the bash man page:

If a command is terminated by the control operator &, the shell
executes the command in the background in a subshell. The shell does
not wait for the command to finish, and the return status is 0.

Also, if you want to minimize the time it takes for the loop to start after executing raspivid, you should allocate your data and indx prior to the call:

data = np.zeros(20000, dtype="float")
indx = 0
os.system("raspivid -t 20000 -o /home/pi/test.h264 &")
while True:
    # ....

Update:
As discussed further in the comments, there is really no need to start the loop "at the same time" as raspivid (whatever that might mean). If you are trying to read data from the I2C bus and make sure you don't miss any, you are best off starting the read operation before running raspivid. That way, however large the delay between the two commands, you are certain not to miss any data.

Taking this into consideration, your code could look something like this:

data = np.zeros(20000, dtype="float")
indx = 0
os.system("(sleep 1; raspivid -t 20000 -o /home/pi/test.h264) &")
while True:
    # ....

This is the simplest version in which we add a delay of 1 second before running raspivid, so we have time to enter our while loop and start waiting for I2C data.

This works, but it is hardly production-quality code. For a better solution, run the data-acquisition function in one thread and raspivid in a second thread, preserving the launch order (the reading thread starts first).

Something like this:

import os
import queue
import threading

# we will store all data in a queue so we can process
# it at a custom speed, without blocking the reading
q = queue.Queue()

# thread for getting the data from the sensor;
# it puts the data in the queue for processing
def get_data(q):
    for cnt in range(20000):
        # assuming readbysensor() is a
        # blocking function
        sens = readbysensor()
        q.put(sens)

# thread for processing the results
def process_data(q):
    for cnt in range(20000):
        data = q.get()
        # do something with data here
        q.task_done()

t_get = threading.Thread(target=get_data, args=(q,))
t_process = threading.Thread(target=process_data, args=(q,))
t_get.start()
t_process.start()

# when everything is set and ready, run raspivid
os.system("raspivid -t 20000 -o /home/pi/test.h264 &")

# wait for the threads to finish
t_get.join()
t_process.join()

# at this point all processing is completed
print("We are all done!")

Quick-and-dirty way to ensure only one instance of a shell script is running at a time

Here's an implementation that uses a lockfile and echoes a PID into it. This protects the script even if its process is killed before it can remove the pidfile:

LOCKFILE=/tmp/lock.txt
if [ -e "${LOCKFILE}" ] && kill -0 "$(cat "${LOCKFILE}")" 2>/dev/null; then
    echo "already running"
    exit
fi

# make sure the lockfile is removed when we exit and then claim it
trap "rm -f ${LOCKFILE}; exit" INT TERM EXIT
echo $$ > "${LOCKFILE}"

# do stuff
sleep 1000

rm -f "${LOCKFILE}"

The trick here is kill -0, which doesn't deliver any signal but just checks whether a process with the given PID exists. The call to trap ensures that the lockfile is removed even when your process is killed (except by kill -9 / SIGKILL, which cannot be trapped).
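If flock (from util-linux) is available, a similar guard needs no PID check at all, because the kernel releases the lock automatically when the process exits, even on kill -9. A sketch, with an arbitrary lock path:

```shell
LOCKFILE=/tmp/myscript.lock

# keep FD 200 open on the lock file for the lifetime of the script
exec 200>"$LOCKFILE"

# -n: fail immediately instead of blocking if another instance holds the lock
if flock -n 200; then
    got_lock=1
    # ... do stuff; the lock is released whenever this process exits
else
    got_lock=0
    echo "already running"
fi
```

Unlike the pidfile version, this cannot misfire when an unrelated process happens to reuse the recorded PID.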

Writing an rsync shell script on Mac to continuously sync local directory with remote web server

Here's what I would do:

First, test that you can use ssh (login with user name and password):

$ ssh example.com
^D

Create an SSH key:

$ ssh-keygen

(don't enter a password)

This will create the ~/.ssh/id_rsa (private key) and ~/.ssh/id_rsa.pub (public key files)

You'll need to transfer the public key (id_rsa.pub) to your remote server (example.com) and then, on the remote server, do the following (the ssh-copy-id utility can do all of this in one step):

$ cat id_rsa.pub >> ~/.ssh/authorized_keys
$ rm id_rsa.pub
^D

This adds the public key to the set of authorised keys.

You'll now be able to use ssh to connect to your remote server without having to use a username and password.

Next, use the rsync command; the following should suffice:

$ rsync -avz -e ssh \
    --exclude '*.ht*' --exclude '*.sublime-*' --exclude 'cache/' \
    --exclude 'administrator/cache' \
    someUser@example.com:/directory/on/server /directory/on/local

(the trailing backslashes let the command span multiple lines)

Now, once you're satisfied that this works for you, put that command into a shell script (rsync_script.sh).

Then, you can use launchctl to schedule it:

In ~/Library/LaunchAgents/, create com.example.rsync.plist

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.rsync</string>
    <key>KeepAlive</key>
    <true/>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/sh</string>
        <string>/path/to/rsync_script.sh</string>
    </array>
    <key>StartInterval</key>
    <integer>30</integer>
</dict>
</plist>

A couple of gotchas:

  • Make sure that the rsync_script.sh is executable, i.e. do chmod 755 /path/to/the/rsync_script.sh
  • Make sure that the user who created the SSH keys is the same user that sets up the launchd plist.
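
Once the plist is in place, you can load it immediately rather than waiting for the next login (macOS-only commands, shown here as a sketch):

```shell
# load the job; it will then run on its StartInterval schedule
launchctl load ~/Library/LaunchAgents/com.example.rsync.plist

# later, to stop the job:
launchctl unload ~/Library/LaunchAgents/com.example.rsync.plist
```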

