Run a Shell Command When a File Is Added


I don't know how people are uploading content to this folder, but you might want to use something lower-tech than monitoring the directory with inotify.

If the protocol is FTP and you have access to your FTP server's log, I suggest tailing that log to watch for successful uploads. This event-triggered approach will be faster, more reliable, and lighter on the system than polling with traditional cron, and more portable and easier to debug than something using inotify.

The way you handle this will of course depend on your FTP server. I have one running vsftpd whose logs include lines like this:

Fri May 25 07:36:02 2012 [pid 94378] [joe] OK LOGIN: Client "10.8.7.16"
Fri May 25 07:36:12 2012 [pid 94380] [joe] OK UPLOAD: Client "10.8.7.16", "/path/to/file.zip", 8395136 bytes, 845.75Kbyte/sec
Fri May 25 07:36:12 2012 [pid 94380] [joe] OK CHMOD: Client "10.8.7.16", "/path/to/file.zip 644"

The UPLOAD line is only added once vsftpd has successfully saved the file. You could parse it in a shell script along these lines:

#!/bin/sh

tail -F /var/log/vsftpd.log | while read -r line; do
    if echo "$line" | grep -q 'OK UPLOAD:'; then
        # The second comma-separated field is the path; strip the
        # leading space and the quotes vsftpd puts around it.
        filename=$(echo "$line" | cut -d, -f2 | sed 's/^ "//;s/"$//')
        if [ -s "$filename" ]; then
            : # do something with "$filename"
        fi
    fi
done

If you're using an HTTP upload tool, see whether it keeps a text log of incoming files. If it doesn't, consider adding some sort of logging function to it, so that it produces a log you can tail.
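As a hedged sketch of that idea: assuming the upload tool writes an access log in a common combined-log-like format (the line below is hypothetical; the real format depends on your tool), the same tail-and-filter pattern applies. Here the filtering step is shown on a single sample line:

```shell
# Hypothetical access-log line; adjust field positions to your tool's format
line='203.0.113.5 - - [25/May/2012:07:36:12] "POST /upload/file.zip HTTP/1.1" 201'

# Print the request path for successful (2xx) POST/PUT requests
echo "$line" | awk '$5 ~ /POST|PUT/ && $NF ~ /^2[0-9][0-9]$/ {print $6}'
# /upload/file.zip
```

In real use you would feed `tail -F access.log` into the same awk filter and act on each path it emits.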

How to run a shell script when a file or directory changes?

Use inotify-tools.

The linked GitHub page has a number of examples; here is one of them.

#!/bin/sh

cwd=$(pwd)

inotifywait -mr \
    --timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
    -e close_write /tmp/test |
while read -r date time dir file; do
    changed_abs=${dir}${file}
    changed_rel=${changed_abs#"$cwd"/}

    rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
        usernam@example.com:/backup/root/dir && \
    echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done

Execute shell command line by line from a file

You can convert your file to a list of substitution commands by removing all occurrences of sed -i ' and ' ./db.sql.

Using process substitution, the list can then be processed as a file passed to the sed -f option.

sed -i -f <(sed "s/[^']*'//;s/'.*//" file) ./db.sql
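To make the transformation concrete, here is a small sketch with hypothetical file names (`/tmp/cmds.txt` standing in for `file`): the inner sed strips everything up to the first quote and from the last quote on, leaving only the bare substitution expressions.

```shell
# Hypothetical input: each line wraps one substitution, as in the question
cat > /tmp/cmds.txt <<'EOF'
sed -i 's/foo/bar/g' ./db.sql
sed -i 's/baz/qux/' ./db.sql
EOF

# Extract just the s/.../.../ expressions
sed "s/[^']*'//;s/'.*//" /tmp/cmds.txt
# s/foo/bar/g
# s/baz/qux/

# These can then be applied to db.sql in a single pass:
# sed -i -f <(sed "s/[^']*'//;s/'.*//" /tmp/cmds.txt) ./db.sql
```

Note that this assumes each line contains exactly one single-quoted sed expression; quotes inside the expression would confuse the stripping step.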

Executing a bash script upon file creation

How about incron? It can trigger commands on file/directory changes.

sudo apt-get install incron

Example:

<path> <mask> <command>

Where <path> can be a directory (meaning the directory and/or the files directly in that directory are watched, but not files in its subdirectories) or a file.

<mask> can be one of the following:

IN_ACCESS           File was accessed (read) (*)
IN_ATTRIB           Metadata changed (permissions, timestamps, extended attributes, etc.) (*)
IN_CLOSE_WRITE      File opened for writing was closed (*)
IN_CLOSE_NOWRITE    File not opened for writing was closed (*)
IN_CREATE           File/directory created in watched directory (*)
IN_DELETE           File/directory deleted from watched directory (*)
IN_DELETE_SELF      Watched file/directory was itself deleted
IN_MODIFY           File was modified (*)
IN_MOVE_SELF        Watched file/directory was itself moved
IN_MOVED_FROM       File moved out of watched directory (*)
IN_MOVED_TO         File moved into watched directory (*)
IN_OPEN             File was opened (*)

<command> is the command that should be run when the event occurs. The following wildcards may be used inside the command specification:

$$   dollar sign
$@   watched filesystem path (see above)
$#   event-related file name
$%   event flags (textually)
$&   event flags (numerically)

If you watch a directory, then $@ holds the directory path and $# the file that triggered the event. If you watch a file, then $@ holds the complete path to the file and $# is empty.

Working Example:

$ echo spatel | sudo tee -a /etc/incron.allow
$ echo root | sudo tee -a /etc/incron.allow

Start Daemon:

$ sudo /etc/init.d/incrond start

Edit incrontab file

$ incrontab -e
/home/spatel IN_CLOSE_WRITE touch /tmp/incrontest-$#

Test it

$ touch /home/spatel/alpha

Result:

$ ls -l /tmp/*alpha*
-rw-r--r-- 1 spatel spatel 0 Feb 4 12:32 /tmp/incrontest-alpha

Note: on Ubuntu you may need to activate inotify at boot time by adding the following to the kernel line in GRUB's menu.lst file:

kernel /boot/vmlinuz-2.6.26-1-686 root=/dev/sda1 ro inotify=yes

Execute command on all files in a directory

The following bash code passes each file in /dir to command, with $file representing the current file:

for file in /dir/*
do
    cmd [option] "$file" >> results.out
done

Example

el@defiant ~/foo $ touch foo.txt bar.txt baz.txt
el@defiant ~/foo $ for i in *.txt; do echo "hello $i"; done
hello bar.txt
hello baz.txt
hello foo.txt

Run text file as commands in Bash

You can put those commands in a shell script, make it executable with chmod +x scriptname.sh, and then run it with

./scriptname.sh

Writing such a bash script is very simple.

Mockup sh file:

#!/bin/sh
sudo command1
sudo command2
.
.
.
sudo commandn

How do you run a command, e.g. chmod, for each line of a file?

Read a file line by line and execute commands: 4+ answers

This is because there is more than one answer...


  1. Shell command line expansion
  2. xargs dedicated tool
  3. while read with some remarks
  4. while read -u using dedicated fd, for interactive processing (sample)
  5. running shell with inline generated script

Regarding the OP's request (running chmod on all targets listed in a file), xargs is the indicated tool. But for some other applications, for a small number of files, and so on, the alternatives below may fit better.

0. Read the entire file as command line arguments.

If your file is not too big and all files are well named (without spaces or other special characters like quotes), you could use shell command line expansion. Simply:

chmod 755 $(<file.txt)

For a small number of files (lines), this is the lightest-weight option.
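As a quick illustration (file names here are hypothetical), this is how the expansion behaves, and why well-named files matter:

```shell
printf 'a.txt\nb.txt\n' > /tmp/files.txt

# $(<file) is bash's fork-free equivalent of $(cat file); the result
# undergoes word splitting, so each whitespace-run becomes one argument.
# echo stands in for chmod 755 so we can see the final command line:
echo chmod 755 $(</tmp/files.txt)
# chmod 755 a.txt b.txt
```

A line like "my file.txt" would be split into two separate arguments here, which is exactly why this form requires well-named files.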

1. xargs is the right tool

For a larger number of files, or almost any number of lines in your input file...

For many coreutils tools, like chown, chmod, rm, cp -t ...:

xargs chmod 755 <file.txt

If file.txt contains special characters and/or a lot of lines:

xargs -0 chmod 755 < <(tr \\n \\0 <file.txt)
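As an aside (assuming GNU xargs), the -d option splits the input on newlines directly, so the tr pipeline above isn't needed and names containing spaces survive intact:

```shell
printf 'plain.txt\nwith space.txt\n' > /tmp/list.txt

# GNU xargs only: one argument per input line, spaces preserved.
# echo stands in for chmod 755 so the built command line is visible:
xargs -d '\n' echo chmod 755 < /tmp/list.txt
# chmod 755 plain.txt with space.txt
```

The -0 form remains the most portable null-delimited option, but -d '\n' is the simplest when you know you have GNU findutils.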

If your command needs to be run exactly once for each entry:

xargs -0 -n 1 chmod 755 < <(tr \\n \\0 <file.txt)

This is not needed for this sample, since chmod accepts multiple files as arguments, but it matches the title of the question.

For some special cases, you could even define the location of the file argument in commands generated by xargs:

xargs -0 -I '{}' -n 1 myWrapper -arg1 -file='{}' wrapCmd < <(tr \\n \\0 <file.txt)

Test with seq 1 5 as input

Try this:

xargs -n 1 -I{} echo Blah {} blabla {}.. < <(seq 1 5)
Blah 1 blabla 1..
Blah 2 blabla 2..
Blah 3 blabla 3..
Blah 4 blabla 4..
Blah 5 blabla 5..

where your command is executed once per line.

2. while read and variants.

As OP suggests,

cat file.txt |
while read in; do
    chmod 755 "$in"
done

will work, but there are 2 issues:

  • cat | is a useless fork, and

  • | while ... ; done runs in a subshell, whose environment disappears after ; done.

So this could be better written:

while read in; do
    chmod 755 "$in"
done < file.txt
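The subshell point is easy to demonstrate: a counter incremented inside the piped loop is lost when the loop ends, while the redirected loop keeps it (a minimal sketch using a throwaway file):

```shell
printf 'a\nb\n' > /tmp/lines.txt

count=0
cat /tmp/lines.txt | while read -r in; do count=$((count + 1)); done
echo "after pipe: $count"        # still 0: the loop ran in a subshell

count=0
while read -r in; do count=$((count + 1)); done < /tmp/lines.txt
echo "after redirect: $count"    # 2: the loop ran in the current shell
```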

But

  • You may be warned about $IFS and read flags:

help read

read: read [-r] ... [-d delim] ... [name ...]
...
Reads a single line from the standard input... The line is split
into fields as with word splitting, and the first word is assigned
to the first NAME, the second word to the second NAME, and so on...
Only the characters found in $IFS are recognized as word delimiters.
...
Options:
...
-d delim continue until the first character of DELIM is read,
rather than newline
...
-r do not allow backslashes to escape any characters
...
Exit Status:
The return code is zero, unless end-of-file is encountered...

In some cases, you may need to use

while IFS= read -r in; do
    chmod 755 "$in"
done <file.txt

for avoiding problems with strange filenames. And maybe if you encounter problems with UTF-8:

while LANG=C IFS= read -r in; do
    chmod 755 "$in"
done <file.txt
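To see what IFS= and -r actually buy you, compare the two loops on names with leading blanks and backslashes (the file names are hypothetical):

```shell
printf '  indented name\nback\\slash\n' > /tmp/odd.txt

# Plain read: $IFS strips leading blanks, backslash acts as an escape
while read in; do echo "[$in]"; done < /tmp/odd.txt
# [indented name]
# [backslash]

# IFS= read -r: every byte of the line survives
while IFS= read -r in; do echo "[$in]"; done < /tmp/odd.txt
# [  indented name]
# [back\slash]
```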

While you use a redirection from standard input for reading file.txt, your script cannot read other input interactively (you cannot use standard input for anything else anymore).

3. while read, using dedicated fd.

Syntax: while read ...; done <file.txt redirects standard input to come from file.txt, which means you cannot interact with the user until the loop finishes.

Using a dedicated file descriptor instead lets you work with more than one input simultaneously; you could merge two files (like here: scriptReplay.sh). Or, if you plan to create an interactive tool, you have to avoid using standard input for the file and use some alternative file descriptor.

Constant file descriptors are:

  • 0 for standard input
  • 1 for standard output
  • 2 for standard error.

3.1 POSIX shell first

You could see them by:

ls -l /dev/fd/

or

ls -l /proc/$$/fd/

From there, choose an unused number between 0 and 63 (more, in fact, depending on system limits) as your file descriptor.

For this demo, I will use file descriptor 7:

while read <&7 filename; do
    ans=
    while [ -z "$ans" ]; do
        read -p "Process file '$filename' (y/n)? " foo
        [ "$foo" ] && [ -z "${foo#[yn]}" ] && ans=$foo || echo '??'
    done
    if [ "$ans" = "y" ]; then
        echo Yes
        echo "Processing '$filename'."
    else
        echo No
    fi
done 7<file.txt

If you want to read your input file in several distinct steps, you have to use:

exec 7<file.txt      # Without spaces between `7` and `<`!
# ls -l /dev/fd/

read <&7 headLine
while read <&7 filename; do
    case "$filename" in
        *'----' ) break ;;  # Break the loop when a line ends with four dashes.
    esac
    ....
done

read <&7 lastLine

exec 7<&-            # This will close file descriptor 7.
# ls -l /dev/fd/

3.2 Same under bash

Under bash, you can let the shell choose any free fd for you and store it in a variable:
exec {varname}</path/to/input:

while read -ru ${fle} filename; do
    ans=
    while [ -z "$ans" ]; do
        read -rp "Process file '$filename' (y/n)? " -sn 1 foo
        [ "$foo" ] && [ -z "${foo/[yn]}" ] && ans=$foo || echo '??'
    done
    if [ "$ans" = "y" ]; then
        echo Yes
        echo "Processing '$filename'."
    else
        echo No
    fi
done {fle}<file.txt

Or

exec {fle}<file.txt
# ls -l /dev/fd/
read -ru ${fle} headline

while read -ru ${fle} filename; do
    [[ -n "$filename" ]] && [[ -z ${filename//*----} ]] && break
    ....
done

read -ru ${fle} lastLine

exec {fle}<&-
# ls -l /dev/fd/

4. Filtering the input file to create shell commands

sed <file.txt 's/.*/chmod 755 "&"/' | sh

This doesn't minimize forks, but it can be useful for more complex (or conditional) operations:

sed <file.txt 's/.*/if [ -e "&" ];then chmod 755 "&";fi/' | sh

sed 's/.*/[ -f "&" ] \&\& echo "Processing: \\"&\\"" \&\& chmod 755 "&"/' \
file.txt | sh

Run bash commands from txt file

Just do bash file:

$ cat file 
date
echo '12*12' | bc

$ bash file
Mon Nov 26 15:34:00 GMT 2012
144

In case of aliases, just run bash -i file.

No need to worry about file extensions or execution rights.


