Take Nth Column in a Text File

If I recall correctly:

cat filename.txt | awk '{ print $2, $4 }'

or, without the extra cat, as mentioned in the comments:

awk '{ print $2, $4 }' filename.txt
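A quick sanity check with hypothetical whitespace-separated input. Note that a comma between the fields (`$2, $4`) makes awk separate them with the output field separator (a space by default), while `$2 $4` would concatenate them with nothing in between:

```shell
printf 'one two three four\n' | awk '{ print $2, $4 }'
# prints: two four
```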

Print the 1st and every nth column of a text file using awk

Awk field numbers, strings, and array indices all start at 1, not 0, so when you do:

for (i=0;i<=3;i++) printf "%s ",$i 

the first iteration prints $0 which is the whole record.

You're on the right track with:

$ awk -F"\t" '{OFS="\t"} { for(i=5;i<10177;i+=14) printf ($i) }' test2.txt > test3.txt

but never pass input data as the only argument to printf: printf will treat it as a format string with no data (rather than what you want, a plain string format plus your data), and that will fail cryptically if/when your input contains formatting characters like %s or %d. So always use printf "%s", $i, never printf $i.
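For example, the safe form handles data containing format characters without complaint (hypothetical input line):

```shell
# "%s" is the format; the data is passed as a separate argument
echo '100% safe' | awk '{ printf "%s\n", $0 }'
# prints: 100% safe
```

With `printf $0` instead, the `%` in the data would be interpreted as the start of a conversion specifier.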

The problem you're having with Excel, I would guess, is that you're double-clicking the file and hoping Excel knows what to do with it (it won't, unlike if this were a CSV). You can import tab-separated files into Excel after it's open, though - google that.

You want something like:

awk '
BEGIN { FS=OFS="\t" }
{
    for (i=1; i<=3; i++) {
        printf "%s%s", (i>1?OFS:""), $i
    }
    for (i=5; i<=NF; i+=14) {
        printf "%s%s", OFS, $i
    }
    print ""
}
' file
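To see what the script does, here is a small hypothetical tab-separated file (only 6 fields, so the second loop picks up just field 5 before i exceeds NF):

```shell
# toy input: one row, fields a..f, tab-separated
printf 'a\tb\tc\td\te\tf\n' > file
awk '
BEGIN { FS=OFS="\t" }
{
    for (i=1; i<=3; i++) printf "%s%s", (i>1?OFS:""), $i
    for (i=5; i<=NF; i+=14) printf "%s%s", OFS, $i
    print ""
}
' file
# prints fields 1-3 and 5, tab-separated: a  b  c  e
```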

I highly recommend the book Effective Awk Programming, 4th Edition, by Arnold Robbins.

Read a txt file, convert nth column, save to separate file

Using the csv module

import csv

lines = []
with open('myFile.csv', 'r') as f1:
    reader = csv.reader(f1, delimiter=',')
    for row in reader:
        lines.append(row)

for x in lines:
    x[1] = str(int(x[1]) * 3)

lines = [",".join(line) for line in lines]

with open('newFile.csv', 'w') as f2:
    f2.write("\n".join(lines))

First, all lines of the CSV file are parsed into lists and collected in the list lines.

Then, the second element of each line is converted to an int, multiplied by 3, and converted back to a str so it can be written to newFile later.

Each line is then joined back into a comma-separated string.

Lastly, we join the list lines into a single newline-separated string and write it to newFile.csv.

How to print the Nth column of a text file with AWK using argv

awk -v x=2 '{print $x}'

or in a shell script:


#!/bin/sh
num=$1
awk -v x="$num" '{print $x}' < /tmp/in > /tmp/out
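For example, with hypothetical input on stdin instead of /tmp/in:

```shell
# pass the column number into awk via -v
printf 'a b c d\n' | awk -v x=3 '{print $x}'
# prints: c
```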

Get the nth column out of a text document (Python 3)

Why not do it directly in bash?

Using cut

# something like that
$ cat /var/log/dpkg.log | grep 'install' | cut -f4 -d" "

The field number for -f<number> can differ: my log has a status column in between, so for me it's -f5. The -d parameter sets the delimiter to a space instead of the default tab.

Exclude unwanted output via grep -v

And if you want to exclude something like <none> in the output, you can extend the command with inverted grep (grep -v) like this:

# something like that
$ cat /var/log/dpkg.log | grep 'install' | cut -f4 -d" " | grep -v '<none>'

It's easy to pipe additional grep -v commands onto the end of the whole command to exclude more patterns (this could also be done with a single regular expression, but chaining is easier to understand).

Removing duplicates at the end with sort and uniq

If you have duplicates in the output, you can also remove them using sort and uniq.

# something like that
$ cat /var/log/dpkg.log | grep 'install' | cut -f4 -d" " | grep -v '<none>' | sort | uniq
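With a few hypothetical lines in the dpkg.log shape (date, time, action, package, version), the full pipeline behaves like this:

```shell
# toy stand-in for /var/log/dpkg.log; pkg1 appears twice, one line has <none>
printf '%s\n' \
  '2023-01-01 10:00:00 install pkg1 1.0' \
  '2023-01-01 10:01:00 install <none> 2.0' \
  '2023-01-01 10:02:00 install pkg1 1.0' \
  '2023-01-01 10:03:00 install pkg2 1.5' > log
grep 'install' log | cut -f4 -d" " | grep -v '<none>' | sort | uniq
# prints:
# pkg1
# pkg2
```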

Python

If you really want to do it with Python, you can do something like this:

# the with statement is not really necessary, but recommended.
with open("/var/log/dpkg.log") as logfile:
    for line in logfile:
        # covers also 'installed', 'half-installed', …
        # for deeper processing you can use the re module, but it's very likely not necessary
        if "install" in line.split()[3]:  # or [4]
            # your code here
            print(line)

reading the Nth column of a .txt

Perhaps:

real, dimension(25) :: temp
real :: keep
read (20,*) temp
keep = temp(19)

how to check if a file is sorted on nth column in unix?

sort is not guaranteed to be stable, but some implementations (GNU sort among them) support an option that forces stability. Try adding -s:

sort -sc -t, -k2,2 test.csv

but note that I would expect the first out-of-order line to be Ravindra Jadeja India, since the 2nd field of that line is the empty string, which should sort before "India".
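A quick way to see the -c exit status in action, with hypothetical two-line files (sort -c exits 0 when the file is ordered on the key and non-zero otherwise):

```shell
printf 'a,1\nb,2\n' > sorted.csv
printf 'a,2\nb,1\n' > unsorted.csv
sort -sc -t, -k2,2 sorted.csv && echo "sorted"
sort -sc -t, -k2,2 unsorted.csv 2>/dev/null || echo "not sorted"
# prints:
# sorted
# not sorted
```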

Replace each nth occurrence of 'foo' and 'bar' on two distincts columns by numerically respective nth line of a supplied file in respective columns

With your shown samples, please try the following answer. Written and tested in GNU awk.

awk -F'\\[|\\] \\[|\\]' '
FNR==NR{
    foo[FNR]=$2
    bar[FNR]=$3
    next
}
NF{
    gsub(/\<foo\>/,foo[++count])
    gsub(/\<bar\>/,bar[count])
}
1
' source.txt FS=" " target.txt
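For instance, with toy files in the shapes the answer assumes (source lines like "[...] [...]", target lines containing the words foo and bar; GNU awk is required for the \< \> word anchors):

```shell
# hypothetical sample files
printf '[v1] [w1]\n[v2] [w2]\n' > source.txt
printf 'foo bar\nfoo bar\n' > target.txt
gawk -F'\\[|\\] \\[|\\]' '
FNR==NR{ foo[FNR]=$2; bar[FNR]=$3; next }
NF{ gsub(/\<foo\>/,foo[++count]); gsub(/\<bar\>/,bar[count]) }
1' source.txt FS=" " target.txt
# prints:
# v1 w1
# v2 w2
```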

Explanation: a detailed, annotated version of the above.

awk -F'\\[|\\] \\[|\\]' '           ##Setting field separator as [ OR ] [ OR ] here.
FNR==NR{                            ##Condition FNR==NR is TRUE while source.txt is being read.
    foo[FNR]=$2                     ##Creating foo array with index FNR and value of 2nd field.
    bar[FNR]=$3                     ##Creating bar array with index FNR and value of 3rd field.
    next                            ##next skips all further statements from here.
}
NF{                                 ##If line is NOT empty then do following.
    gsub(/\<foo\>/,foo[++count])    ##Globally substituting foo with foo array value at index count.
    gsub(/\<bar\>/,bar[count])      ##Globally substituting bar with bar array value at index count.
}
1                                   ##Printing the line here.
' source.txt FS=" " target.txt      ##Mentioning Input_file names here.



EDIT: Adding the following solution as well, which handles any number of occurrences of [...] in the source and matches them in the target file too. Since this is a working solution for the OP (confirmed in the comments), adding it here. Fair warning: this will fail when source.txt contains a &.

awk '
FNR==NR{
    while(match($0,/\[[^]]*\]/)){
        arr[++count]=substr($0,RSTART+1,RLENGTH-2)
        $0=substr($0,RSTART+RLENGTH)
    }
    next
}
{
    line=$0
    while(match(line,/\(?[[:space:]]*(\<foo\>|\<bar\>)[[:space:]]*\)?/)){
        val=substr(line,RSTART,RLENGTH)
        sub(val,arr[++count1])
        line=substr(line,RSTART+RLENGTH)
    }
}
1
' source.txt target.txt

