Implement tail with awk
The problem is in

for (i = NR-num; i <= NR; i++)
    print vect[$i]

In awk, $i refers to field i of the current record, not the variable i (it is the shell, not awk, that uses $ to dereference variables). Use just plain i, and start at NR-num+1 so that exactly num lines are printed:

for (i = NR-num+1; i <= NR; i++)
    print vect[i]
The full code that worked for me is:
#!/usr/bin/awk -f
BEGIN {
    num = ARGV[1]
    # Make that arg empty so awk doesn't interpret it as a file name.
    ARGV[1] = ""
}
{
    vect[NR] = $0
}
END {
    for (i = NR-num+1; i <= NR; i++)
        print vect[i]
}
You should probably add some code to the END block to handle the case when NR < num.
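One way to handle that case is to clamp the starting index in the END block. A minimal sketch (num is passed with -v here instead of via ARGV, purely to keep it a one-liner; the three sample lines are made up):

```shell
# Print the last num lines, but cope with files shorter than num lines.
printf 'a\nb\nc\n' | awk -v num=5 '
{ vect[NR] = $0 }
END {
    start = NR - num + 1
    if (start < 1) start = 1   # fewer lines than requested: print them all
    for (i = start; i <= NR; i++)
        print vect[i]
}'
# -> a
# -> b
# -> c
```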
How to filter tail output through awk and grep?
Don't use grep; do the pattern match in awk:
tail -f /var/log/syslog | awk '/Fieldname/ {print $2,$1,$9,$3,"\033[1;36m"$17 "\033[0m","\033[1;33m"$23 "\033[0m","\033[1;36m"$19 "\033[0m","\033[1;33m"$24 "\033[0m","\033[1;38m"$26"\033[0m","\033[1;32m"$13"\033[0m","\033[1;31m"$20 "\033[0m";}'
If you really need to use grep, you can use the --line-buffered option so it doesn't buffer the output:
tail -f /var/log/syslog | grep --line-buffered Fieldname | awk '{print $2,$1,$9,$3,"\033[1;36m"$17 "\033[0m","\033[1;33m"$23 "\033[0m","\033[1;36m"$19 "\033[0m","\033[1;33m"$24 "\033[0m","\033[1;38m"$26"\033[0m","\033[1;32m"$13"\033[0m","\033[1;31m"$20 "\033[0m";}'
If you want to grep the output of awk, you should call fflush() after printing each line to flush the buffer immediately:
tail -f /var/log/syslog | awk '{print $2,$1,$9,$3,"\033[1;36m"$17 "\033[0m","\033[1;33m"$23 "\033[0m","\033[1;36m"$19 "\033[0m","\033[1;33m"$24 "\033[0m","\033[1;38m"$26"\033[0m","\033[1;32m"$13"\033[0m","\033[1;31m"$20 "\033[0m"; fflush();}' | grep Fieldname
How to get data from file with tail and awk
I have solved the problem.
Before:
cmd = "awk '{ arg=$2 } END {sub(/\.\..*$/,arg); print arg}' scan.txt"
x = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
AvgAPI.lastscanned = x.stdout.read()
Now:
import re
import time

Line_len = 1200
SEEK_END = 2
file = open('scan.txt', "r")
file.seek(-Line_len, SEEK_END)
data_scanfile_not_cleaned = str(file.read(Line_len)).split(" ")[1].strip()
if not data_scanfile_not_cleaned.startswith('/'):
    file.close()
    AvgAPI.lastscanned = ""
    time.sleep(0.1)
else:
    data_scanfile_re = re.sub(r'[~\s+(\d+)%]', '', data_scanfile_not_cleaned)
    data_scanfile_strip = data_scanfile_re.strip("[.]")
    data_scanfile = data_scanfile_strip.strip("[K")
    AvgAPI.lastscanned = data_scanfile
    file.close()
    time.sleep(0.1)
There are some minor flaws with the new solution, but it works satisfactorily.
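For comparison, the tail/awk approach from the question title can also be done entirely in the shell. This sketch assumes the value of interest sits in the second field of the last line of scan.txt, as in the snippets above (the sample file contents are made up):

```shell
# Hypothetical sample data standing in for scan.txt
printf 'scanning /etc/passwd 10%%\nscanning /etc/hosts 55%%\n' > scan.txt
# Grab field 2 of the last line only
last=$(tail -n 1 scan.txt | awk '{print $2}')
echo "$last"   # -> /etc/hosts
```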
tail -f | awk and end tail once data is found
Based on this question, How to break a tail -f command in bash, you could try:
#! /bin/bash
XMLF=/appl/logs/abc.log
aa_pam=${1-xml}
[[ ${2-xml} = "xml" ]] && tof=xml_$(date +%Y%m%d%H%M%S).xml || tof=$2
mkfifo log.pipe
tail -f "$XMLF" > log.pipe & tail_pid=$!
awk -vpar1="$aa_pam" -vtof="$tof" -f t.awk < log.pipe
kill $tail_pid
rm log.pipe
where t.awk is:
/<\?xml version=/ {
    if (Print_SW == 1)
        p_out()
    Print_SW = 0
    Cnt_line = 0
}
{
    Trap_arry[++Cnt_line] = $0
}
$0 ~ par1 {
    Print_SW = 1
}
/<\/XYZ_999/ {
    if (Print_SW == 1)
        p_out()
    Print_SW = 0
    Cnt_line = 0
}
# i is a local of p_out; Cnt_line, Trap_arry and tof are globals
function p_out(i) {
    for (i = 1; i <= Cnt_line; i++)
        print Trap_arry[i] | ("tee " tof)
    exit 1
}
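To see the block-extraction logic on its own, here is a condensed, self-contained sketch of the same idea on fabricated input (the XYZ_999 tag comes from the script above; the sample lines and the match-me pattern are made up, and the tee/fifo plumbing is left out):

```shell
printf '%s\n' '<?xml version="1.0"?>' '<XYZ_999>' 'payload match-me' '</XYZ_999>' |
awk -v par1="match-me" '
/<\?xml version=/ { Print_SW = 0; Cnt_line = 0 }   # new block starts: reset state
{ Trap_arry[++Cnt_line] = $0 }                     # buffer every line of the block
$0 ~ par1 { Print_SW = 1 }                         # pattern seen inside this block
/<\/XYZ_999/ {                                     # block ends: dump it if it matched
    if (Print_SW == 1) {
        for (i = 1; i <= Cnt_line; i++)
            print Trap_arry[i]
        exit                                       # stop, letting the driver kill tail
    }
    Print_SW = 0; Cnt_line = 0
}'
```

The whole four-line block, including both tags, is printed because the pattern occurred between them.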
Filter logs with awk for last 100 lines
awk doesn't know where the end of a file is until it has finished reading it, but you can read the file twice: the first pass finds the end, the second treats only the lines in scope. You could also keep the last X lines in a buffer, but that is heavier on memory and processing. Notice that the file has to be mentioned twice at the end of the command for this to work.
awk 'FNR==NR{LL=NR-500;next} FNR>LL && /ERROR/{print FNR":"$0}' my_log my_log
With explanation:
awk '# first read
FNR==NR {
    # the window starts after the total line count minus 500
    LL = NR-500
    # skip to the next line (still in the first pass)
    next
}
# second read (the first pass never reaches this rule because of next)
# if the line number is inside the window AND the line contains ERROR, print it
FNR > LL && /ERROR/ { print FNR ":" $0 }
' my_log my_log
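The same two-pass idiom, shrunk to a 3-line window over a 5-line fabricated file so the effect is easy to check (the file name and its contents are made up):

```shell
f=$(mktemp)
printf 'ERROR one\nok two\nERROR three\nok four\nERROR five\n' > "$f"
# First pass computes LL = total lines - 3; second pass prints ERROR lines after it.
awk 'FNR==NR { LL = NR-3; next }
     FNR > LL && /ERROR/ { print FNR ":" $0 }' "$f" "$f"
# -> 3:ERROR three
# -> 5:ERROR five
rm -f "$f"
```

Line 1 also contains ERROR but falls outside the 3-line window, so it is not printed.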
GNU sed has no "N lines before the end" address form, so the closest sed equivalent goes through tail:
tail -n 500 my_log | sed -n '/ERROR/p'
capture last line of file as integer variable and use in awk command
There's a space at the beginning of the last line, so the command becomes

awk -v a= 99 '{print $2/a}' conf.txt

This sets a to an empty string, treats 99 as the awk script, and the rest as file names. Remove the spaces from $div:

div=${div// /}
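A quick check of the fix (the numbers are made up): after stripping the stray space, the value reaches awk intact:

```shell
div=" 99"          # as read from the file, with a leading space
div=${div// /}     # strip all spaces -> "99"
printf '4 198\n' | awk -v a="$div" '{ print $2 / a }'
# -> 2
```

Note that ${div// /} is a bash-ism; in plain POSIX sh you would pipe through tr -d ' ' instead.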