How to View Log Files in Linux and Apply Custom Filters While Viewing

How can I view log files in Linux and apply custom filters while viewing?

Try the multitail tool. As well as letting you view multiple logs at once, I'm pretty sure it lets you apply regex filters interactively.
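If multitail is installed, a minimal sketch (assuming its -e regex option behaves as it does on my system) would be to watch two logs side by side, each filtered to lines matching "error":

multitail -e "error" /var/log/syslog -e "error" /var/log/auth.log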

Filter log file entries based on date range

Yes, there are multiple ways to do this. Here is how I would go about it. For starters, there is no need to pipe the output of cat; just open the log file with awk:

awk -vDate=`date -d'now-2 hours' +[%d/%b/%Y:%H:%M:%S` '$4 > Date {print Date, $0}' access_log

Assuming your log looks like mine (they're configurable), the date is stored in field 4 and is bracketed. What I am doing above is finding everything within the last 2 hours. Note the -d'now-2 hours', literally "now minus 2 hours", which for me produces something like this: [10/Oct/2011:08:55:23
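You can run the embedded date command on its own to check the exact string it will produce (shown here with the format quoted, which is safer at an interactive prompt):

$ date -d'now-2 hours' '+[%d/%b/%Y:%H:%M:%S'
[10/Oct/2011:08:55:23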

So what I am doing is storing the formatted value of two hours ago and comparing it against field four. The conditional expression should be straightforward. I am then printing the Date, followed by the output field separator (OFS, a space in this case), followed by the whole line $0. You could use your previous expression and just print $1 (the IP addresses):

awk -vDate=`date -d'now-2 hours' +[%d/%b/%Y:%H:%M:%S` '$4 > Date {print $1}' access_log | sort | uniq -c | sort -n | tail

If you wanted to use a range, specify two date variables and construct your expression appropriately.

So if you wanted to find something between 2 and 4 hours ago, your expression might look something like this:

awk -vDate=`date -d'now-4 hours' +[%d/%b/%Y:%H:%M:%S` -vDate2=`date -d'now-2 hours' +[%d/%b/%Y:%H:%M:%S` '$4 > Date && $4 < Date2 {print Date, Date2, $4}' access_log
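If the nested backticks get hard to read, the same range filter can be written with $() command substitution and quoted variables; a sketch:

from=$(date -d'now-4 hours' '+[%d/%b/%Y:%H:%M:%S')
to=$(date -d'now-2 hours' '+[%d/%b/%Y:%H:%M:%S')
awk -v Date="$from" -v Date2="$to" '$4 > Date && $4 < Date2 {print Date, Date2, $4}' access_log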

Here is a question I answered regarding dates in bash that you might find helpful:
Print date for the monday of the current week (in bash)

Loop to filter out lines from apache log files

If you have a variable and you append _clean to its name, that's a new variable, and not the value of the old one with _clean appended. To fix that, use curly braces:

$ var=file.log
$ echo "<$var>"
<file.log>
$ echo "<$var_clean>"
<>
$ echo "<${var}_clean>"
<file.log_clean>

Without it, your pipeline tries to redirect to the empty string, which results in an error. Note that "$logs"_clean would also work.

As for your pipeline, you could combine that into a single grep command:

grep -Ev 'term_to_grep|term_to_grep_2|term_to_grep_3|term_to_grep_n' "$logs" > "${logs}_clean"

No cat needed, only a single invocation of grep.
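For comparison, the kind of loop-and-pipe construction this replaces might have looked something like the following (a hypothetical reconstruction, not the original script):

cat "$logs" | grep -v term_to_grep | grep -v term_to_grep_2 | grep -v term_to_grep_3 | grep -v term_to_grep_n > "${logs}_clean"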

Or you could stick all your terms into a file:

$ cat excludes
term_to_grep_1
term_to_grep_2
term_to_grep_3
term_to_grep_n

and then use the -f option:

grep -vf excludes "$logs" > "${logs}_clean"

If your terms are strings and not regular expressions, you might be able to speed this up by using -F ("fixed strings"):

grep -vFf excludes "$logs" > "${logs}_clean"

I think GNU grep checks that for you on its own, though.
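If you want to verify whether -F helps on your data, you can time both variants yourself; a quick sanity check rather than a rigorous benchmark:

time grep -vf excludes "$logs" > /dev/null
time grep -vFf excludes "$logs" > /dev/null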

Best way of monitoring multiple log files

I finally found the one that suits my needs.

I'm sharing this in case anyone else wants to use the same solution.

Thanks to sourav19: I followed your advice, and even though it took me 8-10 hours to install and configure everything, it's really what I want.

I had to buy a DigitalOcean droplet; it cost me $20 to get 4 GB of RAM, but I think that's much cheaper than the other log monitoring applications, which are way too expensive.

Before installing Docker, we have to enable a Virtual Private Cloud (VPC) by following this article; we will use the provided IP address for our Docker containers so they can communicate with each other.

I used a dockerized ELK image (sebp/elk); the link is here.

All we need to do is clone the dockerized ELK repository to our server, go inside the cloned folder, build it from the Dockerfile, and run the container:

docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 \
  -v /var/log:/var/lib/elasticsearch --name elk sebp/elk
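Once the container is running, you can confirm Elasticsearch is answering before opening Kibana (assuming the default port mapping above):

curl http://localhost:9200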

Then open Kibana in the browser at http://your_site:5601.

After that, install Filebeat on the other server that has the log files you want to monitor; Filebeat will ship the logs to the ELK stack so you can view them in Kibana. Follow these instructions to install it, and then configure it here.
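For reference, a minimal Filebeat configuration pointing at the ELK container might look like the sketch below. This is an assumption, not the exact setup from the linked instructions: the keys vary by Filebeat version, and your_elk_host is a placeholder for the ELK server's VPC address.

# write a minimal filebeat.yml (the usual location on deb/rpm installs)
cat > /etc/filebeat/filebeat.yml <<'EOF'
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log
output.logstash:
  hosts: ["your_elk_host:5044"]
EOF
systemctl restart filebeat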

If everything is okay, we will see the logs in Kibana.

Filter output of script to a file, and the full output to another file

How about this?

script.py | tee "fulldata_$date.log" | grep 'MAAS' > "filtered_$date.log"

tee captures the unfiltered data first, while grep still gets to see everything and filters it down.

You could invert it with process substitution, something like

script.py | tee >(grep 'MAAS' > "filtered_$date.log") > "fulldata_$date.log"

but that feels neither simpler nor easier to read (and process substitution requires bash or zsh rather than plain sh).


