Why a Linux Redirect Truncates the File

Why does a Linux redirect truncate the file?

Perl accepts the -i flag for in-place editing. With it, you can process a file with a Perl program and have the result written back to the same file immediately.
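
For instance, a minimal sketch (the file name and substitution pattern here are hypothetical); the -i.bak form keeps a backup of the original:

# Replace 'old' with 'new' throughout data.txt; the unedited
# file is saved as data.txt.bak
perl -i.bak -pe 's/old/new/g' data.txt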

Truncating a file while it's being used (Linux)

Take a look at the utility split(1), part of GNU Coreutils.
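
For example, a sketch with assumed file names, breaking a large log into 1000-line pieces:

# Split big.log into 1000-line pieces named part_aa, part_ab, ...
split -l 1000 big.log part_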

Why does my output redirection not write to file?

You cannot read from and write to the same file in one command, because the shell truncates the output file to zero bytes before it even executes the command line.
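
A quick way to see this, assuming a scratch copy of the file (don't try it on data you care about): the shell truncates the target before the command even starts.

# The redirection empties 'columns' before cat gets to read it.
cat columns > columns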

Use the in-place editing option of sed:

sed -i.bak 's/ //g; s/|/./g' columns

Bash - stdout text gets truncated when redirected to a file

You can try setting the COLUMNS variable before executing commands in bash;

feel the difference:

COLUMNS=80 dpkg -l
and
COLUMNS=300 dpkg -l
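
On many dpkg versions the listing falls back to a default width (often 80 columns) when stdout is not a terminal, so pinning COLUMNS also helps when redirecting; a sketch with a hypothetical output file:

# Wide, untruncated listing even though stdout is a file.
COLUMNS=300 dpkg -l > packages.txt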

How can I use a file in a command and redirect output to the same file without truncating it?

You cannot do that, because bash processes the redirections first and only then executes the command. So by the time grep looks at file_name, it is already empty. You can use a temporary file, though.

#!/bin/sh
# Filter into a temporary file, then copy the result back over the
# original so file_name keeps its inode and permissions.
tmpfile=$(mktemp)
grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name > "$tmpfile"
cat "$tmpfile" > file_name
rm -f "$tmpfile"

As shown, mktemp creates the temporary file safely, but note that mktemp is not specified by POSIX.
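
If mktemp is unavailable, a rough fallback is a PID-based name; note that $$ gives only weak uniqueness, nothing like mktemp's safety guarantees:

# Hypothetical fallback when mktemp does not exist.
tmpfile="${TMPDIR:-/tmp}/filter.$$"
grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name > "$tmpfile"
cat "$tmpfile" > file_name
rm -f "$tmpfile"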

Truncate/Delete the log file contents which are generated via output redirection

A long-running process that opens a file in write mode (via '>' or otherwise) tracks the offset of its next write. Even if the file is truncated to size 0, the next write resumes at the last offset. Most likely, based on the description, the long-running process continues to log at the old offset, effectively leaving a long run of zero bytes at the start of the file.

  • Verify by inspecting the file: did the initial content disappear?
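
One way to check, assuming the log is named sysout.log: dump the first bytes and look for NUL characters where the old content used to be.

# A write-mode logger that resumed after truncation leaves \0 bytes here.
head -c 64 sysout.log | od -c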

The solution is simple: instead of logging in write mode, use append mode.

# Start with a clean file
rm -f sysout.log
# Force append mode for both stdout and stderr.
java -jar my_app.jar >> sysout.log 2>&1 &

...
truncate -s 0 sysout.log
# New data is now written at the start of the file, because append
# mode always writes at the current end of file.

Note that all writing processes and file descriptors should use append mode.
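
A minimal sketch of the difference, using a hypothetical scratch file demo.log: with append mode (O_APPEND), every write lands at the current end of file, so truncation leaves no hole.

exec 3>> demo.log   # open fd 3 in append mode
echo one >&3
: > demo.log        # truncate to zero bytes
echo two >&3        # append mode writes at the new EOF (offset 0)
od -c demo.log      # shows only "t w o \n", no NUL padding
exec 3>&-           # close fd 3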

Problem with Bash output redirection

Redirecting from a file through a pipeline back to the same file is unsafe; if file.txt is overwritten by the shell when setting up the last stage of the pipeline before tail starts reading off the first stage, you end up with empty output.
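
The problematic pattern looks something like this (a hypothetical pipeline, since the original command isn't shown here):

# Unsafe: the shell may truncate file.txt before cat reads it.
cat file.txt | tail -1 > file.txt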

Do the following instead:

tail -1 file.txt >file.txt.new && mv file.txt.new file.txt

...well, actually, don't do that in production code; particularly if you're in a security-sensitive environment and running as root, the following is more appropriate:

tempfile="$(mktemp file.txt.XXXXXX)"
chown --reference=file.txt -- "$tempfile"
chmod --reference=file.txt -- "$tempfile"
tail -1 file.txt >"$tempfile" && mv -- "$tempfile" file.txt

Another approach (avoiding temporary files, unless <<< implicitly creates them on your platform) is the following:

lastline="$(tail -1 file.txt)"; cat >file.txt <<<"$lastline"

(The above implementation is bash-specific, but works in cases where echo does not -- such as when the last line contains "--version", for instance).

Finally, one can use sponge from moreutils, which soaks up all of its input before opening and writing the output file:

tail -1 file.txt | sponge file.txt

Error with Linux uniq command with redirection

In bash, you should generally avoid redirecting output to a file that is also used as input by the same command. The redirection (with >) is set up before the command (in this case, uniq) runs, so the file is truncated first and unexpected behavior can occur. Some commands like sed and perl have options such as -i that allow in-place file manipulation, but it's usually safest to assume redirecting back to the input file is not allowed. The dirty workaround is something like this:

uniq 1.txt > tmpfile
mv tmpfile 1.txt
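
A slightly safer variant of the same workaround uses mktemp, as earlier on this page, so the temporary name cannot collide:

# Sketch; note that mv replaces 1.txt, so its original
# permissions and ownership are not preserved.
tmpfile=$(mktemp)
uniq 1.txt > "$tmpfile" && mv "$tmpfile" 1.txt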

