Print a file, skipping the first X lines, in Bash
You'll need tail. Some examples:
$ tail great-big-file.log
< Last 10 lines of great-big-file.log >
If you really need to SKIP a particular number of "first" lines, use
$ tail -n +<N+1> <filename>
< filename, excluding first N lines. >
That is, if you want to skip N lines, you start printing line N+1. Example:
$ tail -n +11 /tmp/myfile
< /tmp/myfile, starting at line 11, or skipping the first 10 lines. >
If you want to just see the last so many lines, omit the "+":
$ tail -n <N> <filename>
< last N lines of file. >
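To make the difference concrete, here is a small self-contained demonstration, using seq to fabricate a numbered file (any file works the same way):

```shell
# Create a 5-line sample file: lines "1" through "5".
seq 5 > /tmp/sample.txt

# Skip the first 2 lines: start printing at line 3.
tail -n +3 /tmp/sample.txt    # prints 3, 4, 5

# Without the "+": print only the last 2 lines.
tail -n 2 /tmp/sample.txt     # prints 4, 5
```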
How to skip a line every two lines starting by skipping the first line?
You have to invert your sed command: it should be n;p instead of p;n:
Your code:
for x in {1..20}; do echo $x ; done | sed -n 'p;n'
1
3
5
7
9
11
13
15
17
19
The version with sed inverted:
for x in {1..20}; do echo $x ; done | sed -n 'n;p'
Output:
2
4
6
8
10
12
14
16
18
20
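If you prefer awk, the same even/odd selection can be written with NR modulo 2 (a sketch equivalent to the sed one-liners above):

```shell
# Keep odd-numbered lines (like sed -n 'p;n'):
for x in {1..20}; do echo $x; done | awk 'NR % 2 == 1'

# Keep even-numbered lines (like sed -n 'n;p'):
for x in {1..20}; do echo $x; done | awk 'NR % 2 == 0'
```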
Skip first match in file using Bash
You can add a variable firstline (initialize it as true). When both sn and descr have matched, set the variable to false; otherwise, print.
EDIT: Alternative.
You can use tr and sed for manipulating the file. First make sure that all lines (except the first) start with DESCR:
tr -d "\n" < file | sed 's/DESCR/\n&/g; $ s/$/\n/'
The first line is without DESCR; the second one is the one you want to ignore.
So process this stream from the third line:
tr -d "\n" < file | sed 's/DESCR/\n&/g; $ s/$/\n/' |
sed -rn '3,$ s/DESCR: "([^"]+).*SN: ([^[:space:]]+).*/\1,\2/p'
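To see the pipeline in action, here is a sketch with a made-up input file; the exact DESCR/SN layout below is an assumption, so adjust the regex to your actual format:

```shell
# Fabricate a sample: a header line, then two records whose fields
# were wrapped across lines in the original file.
printf 'HEADER\nDESCR: "first"\nSN: AAA1\nDESCR: "second"\nSN: BBB2\n' > /tmp/descr.txt

# Join everything into one line, re-split before each DESCR,
# then extract name,serial pairs starting from the third line
# (line 1 is the header, line 2 is the record we skip).
tr -d "\n" < /tmp/descr.txt | sed 's/DESCR/\n&/g; $ s/$/\n/' |
  sed -rn '3,$ s/DESCR: "([^"]+).*SN: ([^[:space:]]+).*/\1,\2/p'
# prints: second,BBB2
```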
unix split skip first n of lines
Use tail -n +1001 to get lines starting from the 1001st line:
cat *.txt | tail -n +1001 | split --lines=1000
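Scaled down for a quick check, the same idea (skip the first N lines, then chunk the rest) looks like this; the chunk size and the part_ prefix are just illustrative, and --lines assumes GNU split:

```shell
cd "$(mktemp -d)"
seq 25 > input.txt           # 25 numbered lines

# Skip the first 10 lines, then split the remaining 15 into 5-line chunks.
tail -n +11 input.txt | split --lines=5 - part_

ls part_*                    # three files: part_aa part_ab part_ac
head -n 1 part_aa            # 11, the first surviving line
```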
How to print the first line of multiple files?
I'm guessing your version of head doesn't support the -q option; what does man head (run on your Mac) show?
A small awk solution:
awk '{ print ; nextfile}' *
As a file is processed the first line is printed, then we skip to the next file; net result is that we only print the first line of each file.
Keep in mind this could generate some odd results depending on what matches * ... binary files? directories?
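A quick sanity check of the awk approach, with two throw-away files (the names are arbitrary):

```shell
printf 'alpha\nbeta\n' > /tmp/f1.txt
printf 'gamma\ndelta\n' > /tmp/f2.txt

# Print line 1 of each file, then jump straight to the next file.
awk '{ print; nextfile }' /tmp/f1.txt /tmp/f2.txt
# prints:
# alpha
# gamma
```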
How can I remove the first line of a text file using bash/sed script?
Try tail:
tail -n +2 "$FILE"
-n x: print just the last x lines. tail -n 5 would give you the last 5 lines of the input. The + sign inverts the argument and makes tail print everything but the first x-1 lines. tail -n +1 prints the whole file, tail -n +2 everything but the first line, and so on.
GNU tail is much faster than sed. tail is also available on BSD, and the -n +2 flag is consistent across both tools. Check the FreeBSD or OS X man pages for more.
The BSD version can be much slower than sed, though. I wonder how they managed that; tail should just read a file line by line, while sed does pretty complex operations involving interpreting a script, applying regular expressions, and the like.
Note: You may be tempted to use
# THIS WILL GIVE YOU AN EMPTY FILE!
tail -n +2 "$FILE" > "$FILE"
but this will give you an empty file. The reason is that the redirection (>) happens before tail is invoked by the shell:
- The shell truncates $FILE
- The shell creates a new process for tail
- The shell redirects stdout of the tail process to $FILE
- tail reads from the now-empty $FILE
If you want to remove the first line inside the file, you should use:
tail -n +2 "$FILE" > "$FILE.tmp" && mv "$FILE.tmp" "$FILE"
The && will make sure that the file doesn't get overwritten when there is a problem.
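Putting the safe in-place pattern together on a throw-away file:

```shell
printf 'header\nrow1\nrow2\n' > /tmp/data.txt

# Write everything but line 1 to a temp file, then replace the
# original only if tail succeeded.
tail -n +2 /tmp/data.txt > /tmp/data.txt.tmp && mv /tmp/data.txt.tmp /tmp/data.txt

cat /tmp/data.txt
# row1
# row2
```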
Print first few and last few lines of file through a pipe with ... in the middle
An awk:
awk -v head=2 -v tail=2 'FNR==NR && FNR<=head
FNR==NR && cnt++==head {print "..."}
NR>FNR && FNR>(cnt-tail)' file file
Or, if a single pass is important (and memory allows), you can use perl:
perl -0777 -lanE 'BEGIN{$head=2; $tail=2;}
END{say join("\n", @F[0..$head-1],("..."),@F[-$tail..-1]);}' file
Or, an awk that is one pass:
awk -v head=2 -v tail=2 'FNR<=head
{lines[FNR]=$0}
END{
print "..."
for (i=FNR-tail+1; i<=FNR; i++) print lines[i]
}' file
Or, there's nothing wrong with being caveman-direct about it:
head -2 file; echo "..."; tail -2 file
Any of these prints:
1
2
...
9
10
In terms of efficiency, here are some stats.
For small files (i.e., less than 10 MB or so) all of these finish in under 1 second, and the 'caveman' approach takes 2 ms.
I then created a 1.1 GB file with seq 99999999 >file
- Two-pass awk: 50 seconds
- One-pass perl: 10 seconds
- One-pass awk: 29 seconds
- 'Caveman': 2 ms
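If the input arrives on a pipe (so head file; tail file is not an option) and it is too big to buffer whole, a ring-buffer variant of the one-pass awk keeps only tail lines in memory. This is a sketch; it assumes the input has more than head + tail lines, otherwise some lines print twice:

```shell
seq 10 | awk -v head=2 -v tail=2 '
  NR <= head { print }          # emit the first head lines as they pass
  { ring[NR % tail] = $0 }      # remember only the most recent tail lines
  END {
    print "..."
    for (i = NR - tail + 1; i <= NR; i++) print ring[i % tail]
  }'
# prints 1, 2, ..., 9, 10 (one per line)
```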