How to Get "Instant" Output of "tail -f" as Input

Ending tail -f started in a shell script

The best answer I can come up with is this:

  1. Put a timeout on the read: tail -f logfile | read -t 30 line
  2. Start tail with --pid=$$ so that it exits when the bash process finishes.

It'll cover all cases I can think of (server hangs with no output, server exits, server starts correctly).

Don't forget to start your tail before the server.

tail -n0 -F logfile 2>/dev/null | while read -t 30 line

The -F makes tail 'read' the file even if it doesn't exist yet (it starts reading when the file appears). The -n0 skips anything already in the file, so you can keep appending to the logfile instead of overwriting it each time, and do standard log rotation on it.
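
Putting the two together, a minimal wait loop might look like this (the 'Started' marker is just an example of what your server might log on startup):

tail -n0 -F --pid=$$ logfile 2>/dev/null | while read -t 30 line
do
    case $line in
        *Started*) echo 'Server started'; break ;;
    esac
done

If nothing arrives for 30 seconds, read -t fails and the loop ends; --pid=$$ makes the tail go away when the script itself exits.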

EDIT:
OK, so here is a rather crude 'solution' if you're using tail. There are probably better solutions using something other than tail, but I'll give tail this much: it gets you out of the broken-pipe problem quite nicely. A 'tee' that can handle SIGPIPE would probably work better. Having the Java process actively drop an 'I'm alive' file of some sort on the file system is probably even easier to wait for (see the sketch after the function below).

function startServer() {
    touch logfile

    # 30-second timeout.
    sleep 30 &
    timerPid=$!

    # tail exits on its own when the timer process dies (--pid).
    tail -n0 -F --pid=$timerPid logfile | while read line
    do
        if echo "$line" | grep -q 'Started'; then
            echo 'Server Started'
            # Stop the timer early.
            kill $timerPid
        fi
    done &

    startJavaprocess > logfile &

    # Wait for the timer to expire (or be killed).
    wait $timerPid
}
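
For comparison, the 'I'm alive' file-drop idea mentioned above can be as simple as polling for a marker (the /tmp/server.alive path is hypothetical):

# hypothetical: the Java process creates this file once it is up
for i in $(seq 1 30); do
    if [ -e /tmp/server.alive ]; then
        echo 'Server Started'
        break
    fi
    sleep 1
done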

How do I implement 'tail -f' with timeout-on-read in Perl?

Have you tried File::Tail to handle the actual tailing instead of trying to coerce <STDIN> to do the job?

Or, if that piece does work, in what way is this failing?
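
If File::Tail does fit, its select interface gives you the timeout-on-read directly; here is a rough sketch (the log name and the 30-second timeout are placeholders):

perl -MFile::Tail -e '
    my $tail = File::Tail->new(name => "logfile", maxinterval => 1);
    while (1) {
        my ($nfound, $timeleft, @pending) =
            File::Tail::select(undef, undef, undef, 30, $tail);
        last unless $nfound;    # 30 seconds with no new data: give up
        print $_->read for @pending;
    }'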

tail -f | awk and end tail once data is found

Based on the question How to break a tail -f command in bash, you could try:

#! /bin/bash

XMLF=/appl/logs/abc.log

# $1: pattern to search for (defaults to "xml")
# $2: output file (defaults to a timestamped xml_*.xml)
aa_pam=${1-xml}
[[ ${2-xml} = "xml" ]] && tof=xml_$(date +%Y%m%d%H%M%S).xml || tof=$2

# Feed tail through a FIFO so the pipeline can be torn down once awk exits.
mkfifo log.pipe
tail -f "$XMLF" > log.pipe & tail_pid=$!

awk -v par1="$aa_pam" -v tof="$tof" -f t.awk < log.pipe
kill $tail_pid
rm log.pipe

where t.awk is:

# A new record starts: flush the previous one if it matched.
/<\?xml version=/ {
    if (Print_SW == 1)
        p_out(Cnt_line)
    Print_SW = 0
    Cnt_line = 0
}

# Buffer every line of the current record.
{
    Trap_arry[++Cnt_line] = $0
}

# The record contains the search pattern: mark it for output.
$0 ~ par1 {
    Print_SW = 1
}

# The record ends: flush it if it matched.
/<\/XYZ_999/ {
    if (Print_SW == 1)
        p_out(Cnt_line)
    Print_SW = 0
    Cnt_line = 0
}

# Print lines 1..n of the buffered record through tee, then stop awk
# so the surrounding script can kill the tail.
function p_out(n, i) {
    for (i = 1; i <= n; i++)
        print Trap_arry[i] | ("tee " tof)
    exit 1
}
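
Assuming the driver script above is saved as trap_xml.sh (the script name and search pattern are made up), a run would look like:

./trap_xml.sh 'ORDER-12345'             # output: xml_YYYYmmddHHMMSS.xml
./trap_xml.sh 'ORDER-12345' out.xml     # output: out.xml

The exit 1 in p_out ends awk as soon as one matching record has been written; the script then kills the background tail and removes the FIFO.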

tail -f | sed to file doesn't work

It's sed's buffering. Use sed -u. From man sed:

-u, --unbuffered

load minimal amounts of data from the input files and flush the
output buffers more often

And here's a test for it (creates files foo and bar):

$ for i in {1..3} ; do echo a $i ; sleep 1; done >> foo &
[1] 12218
$ tail -f foo | sed -u 's/a/b/' | tee -a bar
b 1
b 2
b 3

Be quick or increase the {1..3} to suit your skillz.

Cygwin reading input piped in from tail -f

  1. By default stdin/stdout are line-buffered if they are a terminal and block-buffered otherwise. That affects not just your program (gets actually returns as soon as a line is available, and you are printing whole lines), but also grep, which needs its --line-buffered flag (see the example after this list).
  2. Sed should be able to do the work for you. Try just:

    tail -f serverlog | sed -une 's/myMessage/\a&/p'

    (-u sets unbuffered; hopefully Cygwin supports it, I am checking on Linux)
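
For the grep route from item 1, the same pipeline with line buffering forced on grep would be:

tail -f serverlog | grep --line-buffered myMessage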

Perl: Reading from a 'tail -f' pipe via STDIN

tail -f doesn't generally buffer output, but grep probably does. Move the "grep" functionality into your Perl one-liner:

tail -f snmplistener.log | perl -ne 'print "LINE: $_" if /IPaddress/'
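
If the one-liner's output is itself piped into yet another command, Perl's stdout becomes block-buffered too; setting $| turns on autoflush (some_consumer is just a placeholder):

tail -f snmplistener.log | perl -ne 'BEGIN { $| = 1 } print "LINE: $_" if /IPaddress/' | some_consumer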

