Overwrite Input File Using Awk

AWK: replace and write a column value in the input file

Awk isn't designed to edit things in-place. It's designed to process data and write it to stdout (or another file). You can do something like this:

$ awk 'BEGIN {FS=OFS="\t"} {if ($3 > 100) $3 = $3/100; print}' test.stat > test.stat.new \
&& mv test.stat test.stat.old && mv test.stat.new test.stat

how to write the output to the same file using awk

Not possible per se. You need a second temporary file because you can't read and overwrite the same file. Something like:

awk '(PROGRAM)' testfile.txt > testfile.tmp && mv testfile.tmp testfile.txt

The mktemp program is useful for generating unique temporary file names.
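
A minimal sketch of that pattern with mktemp; '(PROGRAM)' is still just a placeholder for your actual awk script:

# create the temporary file in the current directory so the final mv stays on one filesystem
tmp=$(mktemp testfile.XXXXXX) &&
awk '(PROGRAM)' testfile.txt > "$tmp" &&
mv "$tmp" testfile.txt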

There are some hacks for avoiding a temporary file, but they rely mostly on caching and read buffers and quickly get unstable for larger files.
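
For illustration only, one such hack buffers the entire awk output in the shell via command substitution before the redirection truncates the file; the whole result has to fit in memory and trailing blank lines are lost, so the temporary-file approach above remains the safe choice:

# fragile: everything is held in memory before testfile.txt is truncated and rewritten
printf '%s\n' "$(awk '(PROGRAM)' testfile.txt)" > testfile.txt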

awk print overwrite strings

Your input file contains carriage returns (\r aka control-M). Run dos2unix on it before running a UNIX tool on it.
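
For example (dos2unix rewrites the file in place; the tr variant is an alternative if dos2unix isn't installed):

$ dos2unix file
$ tr -d '\r' < file > file.fixed && mv file.fixed file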

I don't know what you're using paste for, though, and you shouldn't be using awk for this at all; it's just a job for a simple shell command, e.g. (remove the echo once you've tested it):

$ < file xargs -n 1 -I {} echo mv "{}" "../dir"
mv file1 ../dir
mv file2 ../dir
mv file3 ../dir

Can I overwrite output to the screen with awk?

You can use ANSI escape sequences with awk. Try this:

seq 1 100000 | awk '{print $1 "\033[1A"}'

ESC[ValueA (Cursor Up):

Moves the cursor up by the specified number of lines without changing columns. If the cursor is already on the top line, ANSI.SYS ignores this sequence.


To solve the problem raised by Jlliagre, you can do:

seq 100000 -1 1  | awk '{print "\033[2J\033[;H" $1}'

It clears the screen and moves the cursor to the top-left corner of the screen.

combine 2 awk or sed statements into one and save the existing file

You can inline a multi-line awk script, or you can put the two statements in a file. Use distinct variable names for each 'pass':

awk '
$1 ~ /^data/ {
    # splice the timestamps into the fixed-width fields at columns m..n-1 and m2..n2-1
    $0 = sprintf("%s%*s%s",substr($0,1,m-1),n-m,"'$fifteen_min_ago'",substr($0,n))
    $0 = sprintf("%s%*s%s",substr($0,1,m2-1),n2-m2,"'$ctime'",substr($0,n2))
}
{ print }
' m=51 n=80 m2=97 n2=126 file > file.new &&
mv file.new file

Note that there are other (simpler) ways to achieve the replacement described in the question; this one just stays closest to the approach outlined there.
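
For example, a minimal sketch using awk's sub() with the same "start_date"/"end_date" patterns as the sed version below; it replaces the quoted values wherever they appear instead of relying on fixed column positions:

awk -v ct="$ctime" -v ago="$fifteen_min_ago" '
{
    # swap in the current timestamp and the 15-minutes-ago timestamp
    sub(/"start_date": "[^"]*"/, "\"start_date\": \"" ct "\"")
    sub(/"end_date": "[^"]*"/, "\"end_date\": \"" ago "\"")
    print
}
' file > file.new && mv file.new file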

With sed, the replacements are easier:

ctime=$(date +%Y-%m-%dT%H:%M:%S.%3N-00:00)
fifteen_min_ago=$(date -d "15 mins ago" +%Y-%m-%dT%H:%M:%S.%3N-00:00)

sed -e 's/"start_date": "[^"]*"/"start_date": "'$ctime'"/' \
-e 's/"end_date": "[^"]*"/"end_date": "'$fifteen_min_ago'"/' < file > file.new && mv file.new file

Save modifications in place with awk

GNU Awk 4.1.0 (released in 2013) and later have an "inplace" extension that provides in-place file editing:

[...] The "inplace" extension, built using the new facility, can be used to simulate the GNU "sed -i" feature. [...]

Example usage:

$ gawk -i inplace '{ gsub(/foo/, "bar") }; { print }' file1 file2 file3

To keep the backup:

$ gawk -i inplace -v INPLACE_SUFFIX=.bak '{ gsub(/foo/, "bar") }
> { print }' file1 file2 file3

awk/gawk seems to overwrite columns on result

As pointed out by tripleee, the issue is likely due to DOS line terminators. A simple fix is to strip the offending characters with tr before feeding the input to awk for processing:

< in.txt tr -dc '[:print:]\n' | gawk '{print $2 " " $1}'

In the example above, tr -dc '[:print:]\n' deletes everything except printable characters and newlines before the data reaches awk.


