Bash Tail the Newest File in Folder Without Variable

tail -n 100 "$(ls -at | head -n 1)"

You do not need ls to actually print timestamps; you just need to sort by them (ls -t). I kept the -a option because it was in your original code, but note that it is not necessary unless your logfiles are "dot files", i.e. start with a . (which they shouldn't). Be aware that -a also lists the . and .. entries, and the directory itself is often the most recently modified entry, so consider dropping -a entirely.

Using ls this way saves you from parsing the output with sed and tail -c. (And you should not try to parse the output of ls.) Just pick the first file in the list (head -n 1), which is the newest. Putting it in quotation marks should save you from the more common "problems" like spaces in the filename. (If you have newlines or similar in your filenames, fix your filenames. :-D )

Instead of saving into a variable, you can use command substitution in-place.
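For example (a minimal, self-contained sketch; the filenames and contents are made up), these two forms are equivalent:

```shell
# Demo: pick the newest file inline, without an intermediate variable.
tmp=$(mktemp -d) && cd "$tmp" || exit 1
printf 'old\n' > a.log
printf 'new\n' > b.log
touch -t 202001010000 a.log   # backdate a.log so b.log is the newest

# With a variable:
newest=$(ls -t | head -n 1)
tail -n 100 "$newest"

# Same thing with in-place command substitution:
tail -n 100 "$(ls -t | head -n 1)"
```

Both commands print the contents of b.log.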

How to follow (tail -f) the latest file (matching a pattern) in a directory and call as alias with parameter

tail -f "$(find . -maxdepth 1 -name "logfile*" -printf "%Ts/%f\n" | sort -n | tail -1 | cut -d/ -f2)"

Tail the result of the find command: search for files prefixed with logfile in the current directory and print each file's epoch modification time together with its name, separated by a forward slash. Pipe this through sort, take the latest entry with tail -1, and then strip the timestamp with cut so that only the filename remains.
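To see what each stage produces, here is a small demo (GNU find; the file names are illustrative):

```shell
tmp=$(mktemp -d) && cd "$tmp" || exit 1
touch logfile1 logfile2
touch -t 202001010000 logfile1   # backdate logfile1 so logfile2 is newest

# Stage 1: epoch mtime and filename, slash-separated
find . -maxdepth 1 -name "logfile*" -printf "%Ts/%f\n"
# e.g.:
# 1577833200/logfile1
# 1700000000/logfile2

# Full pipeline: sort numerically, keep the last line, cut off the timestamp
find . -maxdepth 1 -name "logfile*" -printf "%Ts/%f\n" |
  sort -n | tail -1 | cut -d/ -f2
# prints: logfile2
```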

Get most recent file in a directory on Linux

ls -Art | tail -n 1

This will return the latest modified file or directory. Not very elegant, but it works.

Used flags:

-A list all files except . and ..

-r reverse order while sorting

-t sort by time, newest first
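A quick demo of those flags (filenames are made up; note that -A also picks up dot files, which is why .hidden wins here):

```shell
tmp=$(mktemp -d) && cd "$tmp" || exit 1
touch visible .hidden
touch -t 202001010000 visible   # backdate visible so .hidden is newest

# -t newest first, -r reverses to oldest first, so tail -n 1 is the newest
ls -Art | tail -n 1
# prints: .hidden
```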

How to recursively find the latest modified file in a directory?

find . -type f -printf '%T@ %p\n' \
| sort -n | tail -1 | cut -f2- -d" "

For a huge tree, it might be hard for sort to keep everything in memory.

%T@ gives you the modification time as a Unix timestamp, sort -n sorts numerically, tail -1 takes the last line (highest timestamp), and cut -f2- -d" " cuts away the first field (the timestamp) from the output.

Edit: Just as -printf is probably GNU-only, ajreal's usage of stat -c is too. Although it is possible to do the same on BSD, the formatting options are different (-f "%m %N", it would seem).

And I missed the plural part; if you want more than just the latest file, simply bump up the tail argument.
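For instance, to list the three most recently modified files (oldest of the three first), change tail -1 to tail -3. A self-contained run, with made-up filenames:

```shell
tmp=$(mktemp -d) && cd "$tmp" || exit 1
mkdir sub
for i in 1 2 3 4; do
  touch -t "20200101000$i" "sub/f$i"   # f4 is the newest
done

find . -type f -printf '%T@ %p\n' | sort -n | tail -3 | cut -f2- -d" "
# prints:
# ./sub/f2
# ./sub/f3
# ./sub/f4
```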

tail the last modified file and monitor for new files in bash

It seems you need something like this:

#!/bin/bash

TAILPID=0
WATCHFILE=""

trap 'kill $(jobs -p)' EXIT # Make sure we clean up our mess (background processes) on exit

while true
do
    NEWFILE=$(ls --sort=time | head -n 1)
    if [ "$NEWFILE" != "$WATCHFILE" ]; then
        echo "New file has been modified"
        echo "Now watching: $NEWFILE"
        WATCHFILE=$NEWFILE
        if [ "$TAILPID" -ne 0 ]; then
            # Kill the old tail
            kill $TAILPID
            wait $TAILPID &> /dev/null # suppress the "Terminated" message
        fi
        tail -f "$NEWFILE" &
        TAILPID=$! # Store the tail PID so we can kill it later
    fi
    sleep 1
done

How to get the second latest file in a folder in Linux

To do diff of the last (lately modified) two files:

ls -t | head -n 2 | xargs diff
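Note that xargs splits on whitespace, so this breaks if either of the two newest filenames contains a space. A quote-safe variant of the same idea, written with command substitution (demo filenames are made up):

```shell
tmp=$(mktemp -d) && cd "$tmp" || exit 1
printf 'line1\n' > "old file.txt"
printf 'line2\n' > "new file.txt"
touch -t 202001010000 "old file.txt"   # backdate so "new file.txt" is newest

# diff the second-newest file against the newest, quoting each name;
# diff exits non-zero when the files differ, hence the || true
diff "$(ls -t | head -n 2 | tail -n 1)" "$(ls -t | head -n 1)" || true
# prints:
# 1c1
# < line1
# ---
# > line2
```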

Bash : How to tail latest file in continuous loop

#!/bin/bash
# ^^^^ -- **NOT** /bin/sh

substring=";A database exception has occurred: FATAL DBERR: SQL_ERROR: ORA-00001: unique constraint (IX_TEST1) violated"
newest=
timeout=10 # number of seconds of no input after which to look for a newer file

# sets a global shell variable called "newest" when run
# to keep overhead down, this avoids invoking any external commands
find_newest() {
    set -- "${DIR?The variable DIR is required}"/env/log/LOG*
    [[ -e $1 || -L $1 ]] || return 1
    newest=$1; shift
    while (( $# )); do
        [[ $1 -nt $newest ]] && newest=$1
        shift
    done
}

while :; do
    find_newest # check for newer files
    # if the newest file isn't the one we're already following...
    if [[ $tailing_from_file != "$newest" ]]; then
        exec < <(tail -f -- "$newest") # start a new copy of tail following the newer one
        tailing_from_file=$newest # and record that file's name
    fi
    if read -t "$timeout" -r line && [[ $line = *"$substring"* ]]; then
        echo "Do something here"
    fi
done

Get the newest file based on timestamp

For those who just want an answer, here it is:

ls | sort -n -t _ -k 2 | tail -1

Here's the thought process that led me here.

I'm going to assume the [RANGE] portion could be anything.

Start with what we know.

  • Working Directory: /incoming/external/data
  • Format of the Files: [RANGE]_[YYYYMMDD].dat

We need to find the most recent [YYYYMMDD] file in the directory, and we need to store that filename.

Available tools (I'm only listing the relevant tools for this problem ... identifying them becomes easier with practice):

  • ls
  • sed
  • awk (or nawk)
  • sort
  • tail

I guess we don't need sed, since we can work with the entire output of the ls command. Using ls, awk, sort, and tail we can get the correct file like so (bear in mind that you'll have to check the syntax against what your OS will accept):

NEWESTFILE=`ls | awk -F_ '{print $1 $2}' | sort -n -k 2,2 | tail -1`

Then it's just a matter of putting the underscore back in, which shouldn't be too hard.

EDIT: I had a little time, so I got around to fixing the command, at least for use in Solaris.

Here's the convoluted first pass (this assumes that ALL files in the directory are in the same format: [RANGE]_[yyyymmdd].dat). I'm betting there are better ways to do this, but this works with my own test data (in fact, I found a better way just now; see below):

ls | awk -F_ '{print $1 " " $2}' | sort -n -k 2 | tail -1 | sed 's/ /_/'

... while writing this out, I discovered that you can just do this:

ls | sort -n -t _ -k 2 | tail -1

I'll break it down into parts.

ls

Simple enough ... gets the directory listing, just filenames. Now I can pipe that into the next command.

awk -F_ '{print $1 " " $2}'

This is the awk command. It allows you to take an input line and modify it in a specific way. Here, all I'm doing is specifying that awk should break the input wherever there is an underscore (_). I do this with the -F option. This gives me two halves of each filename. I then tell awk to output the first half ($1), followed by a space (" "), followed by the second half ($2). Note that the space was the part that was missing from my initial suggestion. Also, this step is unnecessary, since you can specify a separator in the sort command below.

Now the output is split into [RANGE] [yyyymmdd].dat on each line. Now we can sort this:

sort -n -k 2

This takes the input and sorts it based on the 2nd field. The sort command uses whitespace as a separator by default. While writing this update, I found the documentation for sort, which allows you to specify the separator, so AWK and SED are unnecessary. Take the ls and pipe it through the following sort:

sort -n -t _ -k 2

This achieves the same result. Now you only want the last file, so:

tail -1

If you used awk to separate the fields (which just adds extra complexity, so don't do it), you can replace the space with an underscore again with sed:

sed 's/ /_/'

Some good info here, but I'm sure most people aren't going to read down to the bottom like this.
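A self-contained run of the final one-liner, with made-up filenames in the [RANGE]_[YYYYMMDD].dat format:

```shell
tmp=$(mktemp -d) && cd "$tmp" || exit 1
touch ABC_20191231.dat DEF_20200101.dat GHI_20210315.dat

# split on _ and sort numerically by the second field (the date)
ls | sort -n -t _ -k 2 | tail -1
# prints: GHI_20210315.dat
```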

Delete all but the most recent X files in bash

For Linux (GNU tools), an efficient & robust way to keep the n newest files in the current directory while removing the rest:

n=5

find . -maxdepth 1 -type f -printf '%T@ %p\0' |
sort -z -nrt ' ' -k1,1 |
sed -z -e "1,${n}d" -e 's/[^ ]* //' |
xargs -0r rm -f
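A self-contained run of the GNU pipeline (filenames are made up), keeping the n=2 newest of five files:

```shell
tmp=$(mktemp -d) && cd "$tmp" || exit 1
n=2
for i in 1 2 3 4 5; do
  touch -t "20200101000$i" "f$i"   # f5 is the newest
done

find . -maxdepth 1 -type f -printf '%T@ %p\0' |
sort -z -nrt ' ' -k1,1 |
sed -z -e "1,${n}d" -e 's/[^ ]* //' |
xargs -0r rm -f

ls   # remaining files: f4 f5
```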

For BSD, find doesn't have the -printf predicate, stat can't output NULL bytes, and sed + awk can't handle NULL-delimited records.

Here's a solution that doesn't support newlines in paths but that safeguards against them by filtering them out:

#!/bin/bash
n=5

find . -maxdepth 1 -type f ! -path $'*\n*' -exec stat -f '%.9Fm %N' {} + |
sort -nrt ' ' -k1,1 |
awk -v n="$n" -F'^[^ ]* ' 'NR > n {printf "%s%c", $2, 0}' |
xargs -0 rm -f

note: I'm using bash because of the $'\n' notation. For sh you can define a variable containing a literal newline and use it instead.
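A sketch of that sh-compatible workaround (the bad filename here is only for demonstration):

```shell
# POSIX sh: put a literal newline in a variable instead of using bash's $'\n'
nl='
'
tmp=$(mktemp -d) && cd "$tmp" || exit 1
touch plain
touch "bad${nl}name"   # a filename containing a newline

# Same filter as the bash version's ! -path $'*\n*'
find . -maxdepth 1 -type f ! -path "*${nl}*"
# prints: ./plain
```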


Solution for UNIX & Linux (inspired by AIX/HP-UX/SunOS/BSD/Linux ls -b):

Some platforms don't provide find -printf, nor stat, nor support NULL-delimited records with stat/sort/awk/sed/xargs. That's why perl is probably the most portable way to tackle the problem: it is available by default on almost every OS.

I could have written the whole thing in perl, but I didn't. I only use it as a substitute for stat and for encoding-decoding-escaping the filenames. The core logic is the same as in the other solutions above and is implemented with POSIX tools.

BTW, perl's default stat has a resolution of one second, but starting from perl-5.8.9 you can get sub-second resolution with the stat function of the Time::HiRes module (when the OS and the filesystem support it). That's what I'm using here; if your perl doesn't provide it, you can remove the -MTime::HiRes=stat from the command line.

n=5

find . '(' -name '.' -o -prune ')' -type f \
-exec perl -MTime::HiRes=stat -l -e '
foreach (@ARGV) {
@st = stat($_);
if ( @st > 0 ) {
s/([\\\n])/sprintf( "\\%03o", ord($1) )/ge;
print sprintf( "%.9f %s", $st[9], $_ );
}
else { print STDERR "stat: $_: $!"; }
}
' {} + |

sort -nrt ' ' -k1,1 |

sed -e "1,${n}d" -e 's/[^ ]* //' |

perl -l -ne '
s/\\([0-7]{3})/chr(oct($1))/ge;
s/(["\n])/"\\$1"/g;
print "\"$_\"";
' |

xargs -E '' sh -c '[ "$#" -gt 0 ] && rm -f "$@"' sh

Explanations:

  • For each file found, the first perl gets the modification time and outputs it along with the encoded filename (each newline and backslash character is replaced with its backslash-escaped octal code, \012 and \134 respectively).

  • Now each timestamp/filename record is guaranteed to be single-line, so POSIX sort and sed can safely work with this stream.

  • The second perl decodes the filenames and escapes them for POSIX xargs.

  • Lastly, xargs calls rm to delete the files. The sh command is a trick that prevents xargs from running rm when there are no files to delete.


