Bash output stream write to a file
The output is being buffered because the C standard library changes the output buffering mode depending on whether or not stdout is a terminal device. If it's a terminal device (according to isatty(3)), then stdout is line-buffered: it gets flushed every time a newline character gets written. If it's not a terminal device, then it's fully buffered: it only gets flushed whenever a certain amount of data (usually something on the order of 4 KB to 64 KB) gets written.
So, when you redirect the command's output to a file using the shell's > redirection operator, it's no longer outputting to a terminal and it buffers its output. A program can change its buffering mode with setvbuf(3) and friends, but the program has to cooperate to do this. Many programs have command line options to make them line-buffered, e.g. grep(1)'s --line-buffered option. See if your command has a similar option.
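For example, with GNU grep (the file names here are just placeholders):

tail -f app.log | grep --line-buffered ERROR > matches.log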
If you don't have such an option, you can try using a tool such as unbuffer(1) to unbuffer the output stream, but it doesn't always work and isn't a standard utility, so it's not always available.
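A minimal sketch, assuming unbuffer (from the expect package) is installed and mycommand stands in for your program:

# unbuffer runs the command on a pseudo-terminal, so isatty(3) reports true
# and the C library keeps stdout line-buffered
unbuffer mycommand > output.log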
write to output stream and returning value from shell script function
Would this work?:
#this is a function that returns a value (on stderr), as well as
#printing some progress messages (on stdout)
function logic(){
    echo "start of logic"
    echo "perform logic, to get value"
    echo "ok" >&2    # the "return value" is written to stderr
}
function smain(){
    # fd 3 is a copy of the script's stdout, so logic's progress messages
    # still reach the terminal, while its stderr is routed into the
    # command substitution and captured in $result
    { local result=$( { { logic ; } 1>&3 ; } 2>&1); } 3>&1
    echo "result is >$result<"
    if [ "$result" == "ok" ]; then
        echo "script successful"
    else
        echo "script failed"
    fi
}
smain
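Running the script should print the progress messages from logic, followed by the captured value:

start of logic
perform logic, to get value
result is >ok<
script successful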
Linux bash: grep from stream and write to file
If just grep, without writing to the file, works, you encountered a buffering "problem". Unless a program implements I/O buffering manually, buffering is handled by the libc: if the program's stdout is a terminal, buffering is line-based; if not, the libc buffers output until the buffer reaches a size limit.
On Linux, meaning with glibc, you can use the stdbuf command to configure that buffering:
tail -f A.log | stdbuf -oL grep "keyword" >> B.log
-oL specifies that the output stream should be line-buffered.
PHP writing to a file from an output stream only writes the first few lines
It sounds like the SSH connection hasn't returned, so the fwrite() loop hasn't finished and the file isn't closed. As a result, some of the output may be buffered. Try flushing the buffer after each write:
// comparing against false avoids stopping early on a line that
// evaluates as falsy (e.g. a bare "0")
while (($line = fgets($stream_out)) !== false) {
    fwrite($fopenText, $line);
    fflush($fopenText);    // force the buffered data out after each line
}
Bash script - Modify output of command and print into file
Create a function to execute commands and capture stderr and stdout into variables.
function execCommand(){
    local command="$@"
    # The command's stderr streams out first, then its captured stdout is
    # emitted between NUL bytes; the two NUL-delimited reads split that
    # stream back into the STDERR and STDOUT variables.
    {
        IFS=$'\n' read -r -d '' STDERR;
        IFS=$'\n' read -r -d '' STDOUT;
    } < <((printf '\0%s\0' "$($command)" 1>&2) 2>&1)
}
function testCommand(){
    grep foo bar
    echo "return code $?"
}
execCommand testCommand
echo err: $STDERR
echo out: $STDOUT
execCommand "touch /etc/foo"
echo err: $STDERR
echo out: $STDOUT
execCommand "date"
echo err: $STDERR
echo out: $STDOUT
output
err: grep: bar: No such file or directory
out: return code 2
err: touch: cannot touch '/etc/foo': Permission denied
out:
err:
out: Mon Jan 31 16:29:51 CET 2022
Now you can modify $STDERR and $STDOUT:
execCommand testCommand && { echo "$STDERR" > err.log; echo "$STDOUT" > out.log; }
For an explanation of the technique, see the answer from madmurphy.
How to redirect output to a file and stdout
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written to stdout; tee then copies that combined stream to the given output file as well as to the screen.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
Displaying stdout on screen and a file simultaneously
I can't say why tail lags, but you can use tee:
tee, which redirects output to multiple files, copies standard input to standard output and also to any files given as arguments. This is useful when you want not only to send some data down a pipe, but also to save a copy.
Example: <command> | tee <outputFile>
detect output stream on linux shell script
When stdout is not connected to a terminal, it's fully buffered by default. So if you want to be able to detect output immediately (as suggested by the sleep(1); in the code) you need to flush the buffer after printing.
#include <stdio.h>
#include <unistd.h>    /* for sleep() */

int main(void){
    int i;
    for (i = 0; i < 100; i++){
        printf("data: %d\n", i);
        fflush(stdout);    /* push the line out even when stdout is a pipe */
        sleep(1);          /* delay 1s */
    }
    return 0;
}
Then you can pipe the output of the program to something in the script and it will detect the output without waiting for the program to finish.
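A minimal sketch of the consuming side, assuming the program above is compiled to a binary named producer:

./producer | while IFS= read -r line; do
    echo "got: $line"    # runs as soon as each flushed line arrives
done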
Redirect all output to file in Bash
That part is written to stderr, use 2> to redirect it. For example:
foo > stdout.txt 2> stderr.txt
or if you want in same file:
foo > allout.txt 2>&1
Note: this works in (ba)sh, check your shell for proper syntax
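Bash (but not plain POSIX sh) also accepts &> as a shorthand for redirecting both streams at once:

foo &> allout.txt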
save stream output as multiple files
You're doing it the hard way.
for f in one two three; do pull "$f" > "$f.json" & done
Unless something in the script is not compatible with multiple simultaneous copies, this will make the process faster as well. If it is, just change the & to ;.
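That is, the sequential version would be:

for f in one two three; do pull "$f" > "$f.json"; done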
Update
Try just always writing the individual files. If you also need to be able to send them to stdout, just cat the file afterwards, or use tee when writing it.
If that's not ok, then you will need to clearly identify and parse the data blocks. For example, if the start of a section is THE ONLY place { appears as the first character on a line, that's a decent sentinel value. Split your output to files using that.
For example, throw this into another script:
awk 'NR==FNR { ndx=1; split($0,fn); name=""; next; } /^{/ { name=fn[ndx++]; } { if (length(name)) print $0 > name".json"; }' <( echo "$@" ) <( pull "$@" )
Call that script with one two three and it should do what you want.
Explanation
awk '...' <( echo "$@" ) <( pull "$@" )
This executes two commands and returns their outputs as "files", streams of input for awk to process. The first just puts the list of arguments provided on one line for awk to load into an array. The second executes your pull script with those args, which provides the streaming output you already get.
NR==FNR { ndx=1; split($0,fn); name=""; next; }
This tells awk to initialize a file-controlling index, read the single line from the echo command (the args) and split them into an array of desired filename bases, then skip the rest of processing for that record (it isn't "data", it's metadata, and we're done with it). We initialize name to an empty string so that we can check for length - otherwise those leading blank lines would end up in .json, which probably isn't what you want.
/^{/ { name=fn[ndx++]; }
This tells awk that each time it sees { as the very first character on a line, it should set the output filename base to the entry at the current index (which we initialized to 1 above) and increment the index for the next time.
{ if (length(name)) print $0 > name".json"; }
This tells awk to print each line to a file named after whatever the current index is pointing at, with ".json" appended. if (length(name)) throws away the leading blank line(s) before the first block of JSON.
The result is that each new set will trigger a new filename from your given arguments.
That work for you?
In Use
$: ls *.json
ls: cannot access '*.json': No such file or directory
$: pull one two three # my script to simulate output
{ ...one... }
{
...two...
}
{ ...three... }
$: splitstream one two three # the above command in a file to receive args
$: grep . one* two* three* # now they exist
one.json:{ ...one... }
two.json:{
two.json: ...two...
two.json:}
three.json:{ ...three... }