Shell Script to Find The Nth Occurrence of a String and Print The Line Number

How to get the line number of the nth match?

$ awk '/cat/{c++} c==2{print NR;exit}' file
3

Count the cat matches, print the line number, and exit once the required match count is reached.
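The match count can be parameterized with -v instead of hard-coding it. A minimal sketch (the animals.txt sample file is hypothetical):

```shell
# Hypothetical sample: 'cat' appears on lines 2, 3, and 5
printf 'dog\ncat\ncat\ndog\ncat\n' > animals.txt
n=2
awk -v n="$n" '/cat/ && ++c == n {print NR; exit}' animals.txt
# → 3
```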

How to find the nth match's line number, print it, and store it in a variable in a makefile?

Using read to get data from Make's input seems like a terrible idea, but if you're going to do that, you have to reference the variable in the same shell that reads it. That is:

FILENAME = list.txt
initial:
	@read choice; \
	awk '/PERSON/ && ++n == c {print NR; exit}' c="$$choice" $(FILENAME)
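An end-to-end demo of the recipe above can be sketched like this; the list.txt contents are hypothetical, and note that the recipe lines in the generated Makefile must be tab-indented:

```shell
# Hypothetical sample data: PERSON appears on lines 1 and 3
printf 'PERSON alice\nbob\nPERSON carol\n' > list.txt
{
  printf 'FILENAME = list.txt\n'
  printf 'initial:\n'
  printf '\t@read choice; \\\n'
  printf '\tawk '\''/PERSON/ && ++n == c {print NR; exit}'\'' c="$$choice" $(FILENAME)\n'
} > Makefile
echo 2 | make initial   # prints 3, the line of the 2nd PERSON
```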

Bash tool to get nth line from a file

Piping head into tail will be slow for a huge file. I would suggest sed like this:

sed 'NUMq;d' file

Where NUM is the number of the line you want to print; so, for example, sed '10q;d' file to print the 10th line of file.

Explanation:

NUMq will quit immediately when the line number is NUM.

d will delete the line instead of printing it; this is inhibited on the last line because the q causes the rest of the script to be skipped when quitting.

If you have NUM in a variable, you will want to use double quotes instead of single:

sed "${NUM}q;d" file
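For example, with a hypothetical five-line sample.txt:

```shell
printf 'alpha\nbeta\ngamma\ndelta\nepsilon\n' > sample.txt
NUM=3
sed "${NUM}q;d" sample.txt
# → gamma
```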

How to get the substring between the nth and (n+1)th occurrence of pipe '|' in a shell script

You can use the following command:

Line=3;
awk -v n="$Line" -F'|' 'NR == n {print $5;exit;}' file


This will produce the requested output when 3 is passed as input:

r3_c2
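The field number can also be derived from the input instead of hard-coding $5: the substring between the nth and (n+1)th '|' is field n+1. A sketch with a hypothetical sample line:

```shell
n=3
echo 'r1_c1|r2_c1|r3_c1|r3_c2|r5_c1' | awk -v n="$n" -F'|' '{print $(n+1)}'
# → r3_c2
```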

How to determine the line number of the last occurrence of a string in a file using a shell script?

Using awk:

awk '/Fedora/ { ln = FNR } END { print ln }' grub.cfg

Using grep:

grep -n 'Fedora' grub.cfg | tail -n1 | cut -d: -f1

Using the shell (tested in bash only):

unset ln lnr
while read -r; do
    ((lnr++))
    case "$REPLY" in
        *Fedora*) ln="$lnr" ;;
    esac
done < grub.cfg
echo "$ln"
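A quick sanity check of the awk version, using a hypothetical stand-in for grub.cfg where Fedora last appears on line 4:

```shell
printf 'menuentry Ubuntu\nmenuentry Fedora\nset default\nmenuentry Fedora\n' > grub_sample.cfg
awk '/Fedora/ { ln = FNR } END { print ln }' grub_sample.cfg
# → 4
```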

find text in file starting from nth line onwards, using shell script

It would be best to do this entirely in awk since you are already using awk to slice the file.

Example:

tgt="def"
n=3

awk -v tgt="$tgt" -v n="$n" '
    BEGIN { flag = "false" }
    FNR >= n && index($0, tgt) {
        flag = "true"
        exit
    }
    END { print flag }' file

Alternatively, you can make a pipe and then inspect $? to see if grep found your match:

tgt="def"
n=2

tail -n "+$n" file | grep "$tgt" >/dev/null

Now $? will be 0 if the grep finds the pattern and 1 if it is not found. Then you can set a flag like so:

flag="false"
tail -n "+$n" file | grep "$tgt" >/dev/null
[ $? -eq 0 ] && flag="true"

Now flag is set true / false based on the grep. The command tail -n +[some number] file will print the file contents from the absolute line number onward.

For large files, the awk is significantly more efficient since it exits on the first match.


Edit based on update.

The issue is setting a Bash flag to true or false based on a process.

The $? Special Parameter is set based on the exit status of the most recently executed foreground pipeline. So pick your method to slice the file and detect the string, then set the flag in your script based on $? immediately after the pipe. Be aware that testing $? itself sets a new $? -- so you either need to capture its value in a variable before the test or use the exit status directly in the pipeline itself.

These methods work:

1) Capture $? and test:

awk -v tgt="$tgt" -v n="$n" -v flag=1 '
    FNR >= n && index($0, tgt) {
        flag = 0
        exit
    }
    END { exit flag }
' ./testtext.txt
res=$?
[ $res -eq 1 ] && flagz=false || flagz=true

2) Capture a string result and test that:

res=$(awk -v tgt="$tgt" -v n="$n" -v flag="false" '
    FNR >= n && index($0, tgt) {
        flag = "true"
        exit
    }
    END { print flag }' ./testtext.txt)

[ "$res" = "false" ] && flagz=false || flagz=true

3) Use a pipe and test in the pipe:

tail -n "+$n" file | grep "$tgt" >/dev/null && flagz=true || flagz=false 

My preference is 3 for small files and 2 for big files.
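Method 3 end-to-end, with a hypothetical three-line file where the target appears at or after line n:

```shell
printf 'abc\ndef\nghi\n' > file
tgt="def"; n=2
tail -n "+$n" file | grep "$tgt" >/dev/null && flagz=true || flagz=false
echo "$flagz"
# → true
```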


