Capture Output of a Bash Command, Parse It and Store into Different Bash Variables

Capture output of a bash command, parse it and store into different bash variables

$ read IPETH0 IPLO <<< $(ifconfig | awk '/inet[[:space:]]/ { print $2 }')
$ echo "${IPETH0}"
192.168.23.2
$ echo "${IPLO}"
127.0.0.1

This assumes the order of the eth0 and lo interfaces, but it shows the basic idea.
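The same capture can also be done into an array, which scales to any number of interfaces without guessing variable names in advance. A hedged sketch (same ifconfig/awk pipeline as above, requires bash 4+ for readarray; IPS is just an illustrative name):

$ readarray -t IPS < <(ifconfig | awk '/inet[[:space:]]/ { print $2 }')
$ echo "${IPS[0]}"
192.168.23.2
$ echo "${IPS[1]}"
127.0.0.1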

How do I set a variable to the output of a command in Bash?

In addition to backticks `command`, command substitution can be done with $(command) or "$(command)", which I find easier to read, and allows for nesting.

OUTPUT=$(ls -1)
echo "${OUTPUT}"

MULTILINE=$(ls \
-1)
echo "${MULTILINE}"

Quoting (") does matter to preserve multi-line variable values; it is optional on the right-hand side of an assignment, as word splitting is not performed, so OUTPUT=$(ls -1) would work fine.

How to capture output of a shell command into variables

Here's one way to do it:

count=1
for i in $(tar -tzf testfile.tar); do eval "file${count}=$i"; count=$((count+1)); done

It will give you something like this:

file1=dir/
file2=dir/file2.txt
file3=dir/file1.txt

In my example, I created a dir folder, which technically isn't in yours, hence the first line dir/.
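If the numbered file1, file2, ... variables are not a hard requirement, a less fragile sketch (assuming the same testfile.tar and bash 4+ for mapfile) reads the archive listing into an array instead of using eval:

mapfile -t files < <(tar -tzf testfile.tar)
echo "${files[0]}"   # dir/
echo "${files[1]}"   # dir/file2.txt
echo "${files[2]}"   # dir/file1.txt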

Capturing multiple line output into a Bash variable

Actually, RESULT contains what you want — to demonstrate:

echo "$RESULT"

What you show is what you get from:

echo $RESULT

As noted in the comments, the difference is that (1) the double-quoted version of the variable (echo "$RESULT") preserves internal spacing of the value exactly as it is represented in the variable — newlines, tabs, multiple blanks and all — whereas (2) the unquoted version (echo $RESULT) replaces each sequence of one or more blanks, tabs and newlines with a single space. Thus (1) preserves the shape of the input variable, whereas (2) creates a potentially very long single line of output with 'words' separated by single spaces (where a 'word' is a sequence of non-whitespace characters; there needn't be any alphanumerics in any of the words).
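A small sketch to see both behaviours side by side (the value assigned to RESULT here is made up):

RESULT=$(printf 'one   two\n\nthree')
echo "$RESULT"   # keeps the run of spaces, the blank line and the newlines
echo $RESULT     # prints: one two three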

How to capture the output of a bash command into a variable when using pipes and apostrophe?

To capture the output of a command in shell, use command substitution: $(...). Thus:

pid=$(ps -ef | grep -v color=auto | grep raspivid | awk '{print $2}')

Notes

  • When making an assignment in shell, there must be no spaces around the equal sign.

  • When defining shell variables for local use, it is best practice to use lower case or mixed case. Variables that are important to the system are defined in upper case and you don't want to accidentally overwrite one of them.
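A minimal illustration of both notes (now is just an illustrative name):

now=$(date)      # fine: no spaces around '=', lower-case name for a local variable
# now = $(date)  # wrong: the shell would try to run a command named 'now'
# PATH=$(date)   # legal, but clobbers a variable the system relies on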

Simplification

If the goal is to get the PID of the raspivid process, then the grep and awk can be combined into a single process:

pid=$(ps -ef | awk '/[r]aspivid/{print $2}')

Note the simple trick that excludes the awk process itself from the output: instead of searching for raspivid we search for [r]aspivid. The regular expression [r]aspivid matches only the literal string raspivid, while the awk command line contains the string [r]aspivid, which it does not match. Hence the awk process, which would otherwise appear in the ps listing, is removed from the output.

The Flexibility of awk

For the purpose of showing how awk can replace multiple calls to grep, consider this scenario: suppose that we want to find lines that contain raspivid but that do not contain color=auto. With awk, both conditions can be combined logically:

pid=$(ps -ef  | awk '/raspivid/ && !/color=auto/{print $2}')

Here, /raspivid/ requires a match with raspivid. The && symbol means logical "and". The ! before the regex /color=auto/ means logical "not". Thus, /raspivid/ && !/color=auto/ matches only on lines that contain raspivid but not color=auto.
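As an aside (not part of the original answer), where the procps pgrep utility is available it sidesteps the ps parsing entirely:

pid=$(pgrep raspivid)      # PID(s) of processes named raspivid
pid=$(pgrep -f raspivid)   # match against the full command line instead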

How to store the output of a command in a variable at the same time as printing the output?

Use tee to copy the output straight to the terminal (/dev/tty) while stdout is still captured by the command substitution:

$ var=$(echo hi | tee /dev/tty)
hi
$ echo $var
hi
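A hedged variant: /dev/tty needs a controlling terminal, so if the snippet might also run non-interactively (for example from cron), duplicating the output to stderr works as well on Linux and most modern systems:

$ var=$(echo hi | tee /dev/stderr)
hi
$ echo $var
hi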

How to parse the output of `ls -l` into multiple variables in bash?

A for loop splits on any whitespace (space, tab, or newline), so IFS needs to be set to newline before the loop (there are a lot of questions about ...):

IFS=$'\n' && for i in $(ncftpls -l 'ftp://theftpserver/path/to/files' | awk '{print $9, $5}'); do

echo "$i" | awk '{print $NF}'    # filesize
echo "$i" | awk '{NF--; print}'  # filename
# filenames may contain spaces, so the size goes in the last column
# and the filename is everything before it

done

A better way, I think, is to use while instead of for:

ls -l | while read -r i
do
    echo "$i" | awk '{print $9, $5}'

    # split them if you want
    x=$(echo "$i" | awk '{print $5}')
    y=$(echo "$i" | awk '{print $9}')

done
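One caveat worth adding (not from the original answer): because the while loop sits on the right-hand side of a pipe, bash runs it in a subshell, so x and y vanish once the loop finishes. Feeding the loop through process substitution keeps the variables in the current shell, roughly:

while read -r i
do
    x=$(echo "$i" | awk '{print $5}')
    y=$(echo "$i" | awk '{print $9}')
done < <(ls -l)
echo "$x $y"   # size and name of the last entry listed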

Capture stdout and stderr into different variables


Ok, it got a bit ugly, but here is a solution:

unset t_std t_err
eval "$( (echo std; echo err >&2) \
2> >(readarray -t t_err; typeset -p t_err) \
> >(readarray -t t_std; typeset -p t_std) )"

where (echo std; echo err >&2) needs to be replaced by the actual command. Its stdout is saved line by line into the array $t_std, omitting the trailing newlines (the -t), and its stderr into $t_err.

If you don't like arrays you can do

unset t_std t_err
eval "$( (echo std; echo err >&2 ) \
2> >(t_err=$(cat); typeset -p t_err) \
> >(t_std=$(cat); typeset -p t_std) )"

which pretty much mimics the behavior of var=$(cmd) except for the value of $? which takes us to the last modification:

unset t_std t_err t_ret
eval "$( (echo std; echo err >&2; exit 2 ) \
2> >(t_err=$(cat); typeset -p t_err) \
> >(t_std=$(cat); typeset -p t_std); t_ret=$?; typeset -p t_ret )"

Here $? is preserved in $t_ret.
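A quick usage sketch for that last form, showing where each piece ends up given the example command above:

echo "stdout: $t_std"   # stdout: std
echo "stderr: $t_err"   # stderr: err
echo "status: $t_ret"   # status: 2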

Tested on Debian wheezy using GNU bash, Version 4.2.37(1)-release (i486-pc-linux-gnu).


