The 'Eval' Command in Bash and Its Typical Uses

eval takes a string as its argument, and evaluates it as if you'd typed that string on a command line. (If you pass several arguments, they are first joined with spaces between them.)
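For example (a minimal sketch):

cmd='ls -l /tmp'
eval "$cmd"       # evaluates the string: runs ls -l /tmp
eval ls -l /tmp   # several arguments: joined with spaces, then evaluated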

${$n} is a syntax error in bash. Inside the braces, you can only have a variable name, with some possible prefixes and suffixes, but you can't have arbitrary bash syntax, and in particular you can't use variable expansion. There is a way of saying “the value of the variable whose name is in this variable”, though:

$ set -- one
$ n=1
$ echo ${!n}
one

$(…) runs the command specified inside the parentheses in a subshell (i.e. in a separate process that inherits all settings such as variable values from the current shell), and gathers its output. So echo $($n) runs $n as a shell command, and displays its output. Since $n evaluates to 1, $($n) attempts to run the command 1, which does not exist.
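A quick transcript of that failure (the exact error text may vary):

$ n=1
$ echo $($n)
bash: 1: command not found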

eval echo \${$n} runs the parameters passed to eval. After expansion, the parameters are echo and ${1}. So eval echo \${$n} runs the command echo ${1}.
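Step by step, assuming the same setup as above ($1 is one and n is 1):

$ eval echo \${$n}   # after the first expansion, eval sees: echo ${1}
one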

Note that most of the time, you must use double quotes around variable substitutions and command substitutions (i.e. anytime there's a $): "$foo", "$(foo)". Always put double quotes around variable and command substitutions unless you know you need to leave them off. Without the double quotes, the shell performs field splitting (i.e. it splits the value of the variable or the output from the command into separate words) and then treats each word as a wildcard pattern. For example:

$ ls
file1 file2 otherfile
$ set -- 'f* *'
$ echo "$1"
f* *
$ echo $1
file1 file2 file1 file2 otherfile
$ n=1
$ eval echo \${$n}
file1 file2 file1 file2 otherfile
$ eval echo \"\${$n}\"
f* *
$ echo "${!n}"
f* *

eval is not used very often. In some shells, the most common use is to obtain the value of a variable whose name is not known until runtime. In bash, this is not necessary thanks to the ${!VAR} syntax. eval is still useful when you need to construct a longer command containing operators, reserved words, etc.
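For example (a sketch; the pipeline fragment is illustrative):

# Plain expansion would pass '|' as a literal argument to cut;
# eval re-parses it as a pipeline operator.
post='| sort | uniq -c'
eval "cut -d: -f7 /etc/passwd $post"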

Why should eval be avoided in Bash, and what should I use instead?

There's more to this problem than meets the eye. We'll start with the obvious: eval has the potential to execute "dirty" data. Dirty data is any data that has not been rewritten as safe-for-use-in-situation-XYZ; in our case, it's any string that has not been formatted so as to be safe for evaluation.

Sanitizing data appears easy at first glance. Assuming we're throwing around a list of options, bash already provides a great way to sanitize individual elements, and another way to sanitize the entire array as a single string:

function println
{
    # Send each element as a separate argument, starting with the second element.
    # Arguments to printf:
    #   1 -> "$1\n"
    #   2 -> "$2"
    #   3 -> "$3"
    #   4 -> "$4"
    #   etc.

    printf "$1\n" "${@:2}"
}

function error
{
    # Send the first element as one argument, and the rest of the elements as a
    # combined argument.
    # Arguments to println:
    #   1 -> '\e[31mError (%d): %s\e[m'
    #   2 -> "$1"
    #   3 -> "${*:2}"

    println '\e[31mError (%d): %s\e[m' "$1" "${*:2}"
    exit "$1"
}

# This...
error 1234 Something went wrong.
# And this...
error 1234 'Something went wrong.'
# Result in the same output (as long as $IFS has not been modified).

Now say we want to add an option to redirect output as an argument to println. We could, of course, just redirect the output of println on each call, but for the sake of example, we're not going to do that. We'll need to use eval, since variables can't be used to redirect output.

function println
{
    eval printf "$2\n" "${@:3}" $1
}

function error
{
    println '>&2' '\e[31mError (%d): %s\e[m' "$1" "${*:2}"
    exit $1
}

error 1234 Something went wrong.

Looks good, right? The problem is that eval parses the command line twice (in any shell). On the first pass, one layer of quoting is removed; with those quotes gone, variable content gets executed as code.
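Here is a contrived sketch of that second pass in action (the touch target is purely illustrative):

danger='$(touch /tmp/pwned)'   # imagine this arrived from user input
eval printf "$danger\n"        # the first parse leaves $(...) intact as text;
                               # eval's second parse then executes it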

We can fix this by letting the variable expansion take place within the eval. All we have to do is single-quote everything, leaving the double-quotes where they are. One exception: we have to expand the redirection prior to eval, so that has to stay outside of the quotes:

function println
{
    eval 'printf "$2\n" "${@:3}"' $1
}

function error
{
    println '>&2' '\e[31mError (%d): %s\e[m' "$1" "${*:2}"
    exit $1
}

error 1234 Something went wrong.

This should work. It's also safe as long as $1 in println is never dirty.
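To make "dirty" concrete: if $1 in println were ever attacker-influenced, its unquoted expansion would still be evaluated (a hypothetical value, for illustration only):

println '>/dev/null; touch /tmp/pwned' '%s' 'hi'   # eval also runs the injected touch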

Now hold on just a moment: I use that same unquoted syntax with sudo all of the time! Why does it work there, and not here? Why did we have to single-quote everything? sudo is a bit more modern: it knows to enclose each argument that it receives in quotes, though that is an over-simplification. eval simply concatenates everything.

Unfortunately, there is no drop-in replacement for eval that treats arguments like sudo does, as eval is a shell built-in; this is important, as it takes on the environment and scope of the surrounding code when it executes, rather than creating a new stack and scope like a function does.

eval Alternatives

Specific use cases often have viable alternatives to eval. Here's a handy list. command represents what you would normally send to eval; substitute in whatever you please.

No-op

A simple colon is a no-op in bash:

:
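A couple of places where it comes in handy (illustrative):

while :; do date; sleep 60; done   # ':' serves as an always-true command
: "${EDITOR:=vi}"                  # expansion side effects (here, assigning a
                                   # default) with no command actually run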

Create a sub-shell

( command )   # Standard notation
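State changes inside the parentheses don't leak into the parent shell:

( cd /tmp && pwd )   # prints /tmp
pwd                  # still the original working directory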

Execute output of a command

Never rely on an external command. You should always be in control of the return value. Put these on their own lines:

$(command)   # Preferred
`command`    # Old style: should be avoided, and often considered deprecated

# Nesting:
$(command1 "$(command2)")
`command1 "\`command2\`"`   # Careful: in the old style, \ only escapes $, \,
                            # and `; the special case \` results in nesting.
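Typical use is capturing a command's output in a variable (a small sketch):

today="$(date +%Y-%m-%d)"   # capture stdout of date
echo "Report for $today"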

Redirection based on variable

In calling code, map &3 (or anything higher than &2) to your target:

exec 3<&0          # Redirect from stdin
exec 3>&1          # Redirect to stdout
exec 3>&2          # Redirect to stderr
exec 3> /dev/null  # Don't save output anywhere
exec 3> file.txt   # Redirect to file
exec 3> "$var"     # Redirect to file stored in $var -- only works for files!
exec 3<&0 4>&1     # Input and output!

If it were a one-time call, you wouldn't have to redirect the entire shell:

func arg1 arg2 3>&2

Within the function being called, redirect to &3:

command <&3        # Redirect stdin
command >&3        # Redirect stdout
command 2>&3       # Redirect stderr
command &>&3       # Redirect stdout and stderr
command >&3 2>&1   # idem, for older bash versions: stdout goes to &3, then
                   # stderr goes to where stdout now points -- order matters
command 2>&1 >&3   # NOT the same: stderr goes to the original stdout, and
                   # only stdout goes to &3
command <&3 >&4    # Input and output!
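Putting the halves together, the println/error pair from earlier can drop eval entirely; this sketch goes back to println's original signature and passes the redirection as a descriptor mapping:

function println
{
    # The caller maps &3 to the real target; all output goes there.
    printf "$1\n" "${@:2}" >&3
}

function error
{
    println '\e[31mError (%d): %s\e[m' "$1" "${*:2}" 3>&2
    exit "$1"
}

error 1234 Something went wrong.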

Variable indirection

Scenario:

VAR='1 2 3'
REF=VAR

Bad:

eval "echo \"\$$REF\""

Why? If REF contains a double quote, this will break and open the code to exploits. It's possible to sanitize REF, but it's a waste of time when you have this:

echo "${!REF}"

That's right, bash has variable indirection built-in as of version 2. It gets a bit trickier than eval if you want to do something more complex:

# Add to scenario:
VAR_2='4 5 6'

# We could use:
local ref="${REF}_2"
echo "${!ref}"

# Versus the bash < 2 method, which might be simpler to those accustomed to eval:
eval "echo \"\$${REF}_2\""

Regardless, the new method is more intuitive, though it might not seem that way to experienced programmers who are used to eval.

Associative arrays

Associative arrays are implemented intrinsically in bash 4. One caveat: they must be created using declare.

declare -A VAR    # Local
declare -gA VAR   # Global

# Use spaces between the parentheses and the contents; I've heard reports of
# subtle bugs on some versions when they are omitted, having to do with spaces
# in keys.
declare -A VAR=( ['']='a' [0]='1' ['duck']='quack' )

VAR+=( ['alpha']='beta' [2]=3 )   # Combine arrays

VAR['cow']='moo'   # Set a single element
unset VAR['cow']   # Unset a single element

unset VAR      # Unset an entire array
unset VAR[@]   # Unset an entire array
unset VAR[*]   # Unset each element with a key corresponding to a file in the
               # current directory; if * doesn't expand, unset the entire array

local KEYS=( "${!VAR[@]}" )   # Get all of the keys in VAR
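With that in place, enumerating keys and values is straightforward (a short sketch):

declare -A VAR=( ['duck']='quack' ['cow']='moo' )
for key in "${!VAR[@]}"; do
    printf '%s -> %s\n' "$key" "${VAR[$key]}"
done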

In older versions of bash, you can use variable indirection:

VAR=( )  # This will store our keys.

# Store a value with a simple key.
# You will need to declare it in a global scope to make it global prior to
# bash 4. In bash 4, use the -g option.
declare "VAR_$key"="$value"
VAR+=( "$key" )
# Or, if your version is lacking +=
VAR=( "${VAR[@]}" "$key" )

# Recover a simple value.
local var_key="VAR_$key"        # The name of the variable that holds the value
local var_value="${!var_key}"   # The actual value -- requires bash 2
# For < bash 2, eval is required for this method. Safe as long as $key is not
# dirty.
eval "local var_value=\"\$$var_key\""

# If you don't need to enumerate the indices quickly, and you're on bash 2+,
# retrieval can be cut down to one line per operation:
declare "VAR_$key"="$value"                 # Store
var_key="VAR_$key"; echo -n "${!var_key}"   # Retrieve

# If you're using more complex values, you'll need to hash your keys:
function mkkey
{
    local key="`mkpasswd -5R0 "$1" 00000000`"
    echo -n "${key##*$}"
}

local var_key="VAR_`mkkey "$key"`"
# ...

Bash: I am unable to run the eval command inside a double while loop

The problem is that interact in the Expect script switches to reading from standard input. Since stdin is redirected to $FILE2 at that point, it reads everything in that file. When the inner loop repeats, there's nothing left in the file, so the loop terminates.

You need to save the script's original standard input, and redirect ssh_connect's input to that.

#!/bin/bash
exec 3<&0   # duplicate stdin on FD 3
while read line1
do
    while read line2
    do
        eval "ssh_connect $line1 $line2" <&3
    done < "$FILE2"
done < "$FILE1"
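Alternatively, each loop can read from its own descriptor, leaving stdin untouched for ssh_connect (a sketch, assuming ssh_connect takes the two fields as arguments, which is what the eval amounts to after word splitting):

while read -u 4 line1
do
    while read -u 5 line2
    do
        ssh_connect "$line1" "$line2"
    done 5< "$FILE2"
done 4< "$FILE1"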

Why do I have to use eval in bash?

Redirections are simply not recognized in the words that result from expanding a variable; after word splitting, only pathname expansion and similar steps are performed on them.

With your example, >/dev/null is treated literally there as if you ran:

">/dev/null" "false"

In which case, > doesn't have a special meaning.

That is the reason why eval is needed to re-evaluate the resulting arguments as a new command. Whatever you see with echo is what you get with eval.

# foo='>/dev/null'
# echo $foo true
>/dev/null true

So sometimes you also need to quote to prevent word splitting and other expansions before evaluation:

echo "$foo true"

Linux bash - eval vs. sh -c

xargs can only run an external executable -- something which the operating system's execv() family of calls can execute. (Keep in mind that xargs itself is not part of your shell but an external program, and thus it has no access to the shell that started it).

The shell builtin eval is not an executable. Thus, xargs cannot run it. (By its nature, it invokes code in the current shell. xargs is not a shell, so it has no ability to interpret shell scripts, and there is no "current shell" when xargs is the active process).

sh is an executable (starting a new shell). Thus, xargs can start it, and it can be used in contexts where no current shell exists.


By the way -- you tagged this question bash, but sh is not bash. If you want to run bash code in a separately invoked process, use bash -c, not sh -c; since sh -c starts whatever shell sh points to, you can get a different shell, at a different version, with different capabilities (and even if your /bin/sh is a symlink to /bin/bash, bash turns off some functionality when started as sh).

why does eval mess up printf '\n', and where to find info on eval

eval printf '\n'

The shell reads this command as three words: eval, printf and \n. The single quotes don’t exist anymore after parsing these words.

The eval command now gets this to evaluate: printf \n, and, since the backslash is not enclosed in quotes, it is just discarded. Therefore this is equivalent to:

eval printf n
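The usual fix is an extra layer of quoting, so that eval's parse still sees a quoted backslash (a sketch):

eval "printf '\n'"   # eval evaluates: printf '\n' -- prints a newline
eval printf "'\n'"   # same effect: the inner quotes survive the first parse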

Eval command doesn't work at all, but it doesn't error

It seems like you put a number as your ID. Discord.js IDs are strings, so you should put your ID into a string:

if (message.author.id !== "821682594830614578") {
    return;
}

Why does bash eval return zero when backgrounded command fails?

The ampersand (for backgrounding) seems to cause the problem.

That is correct.

The shell cannot know a command's exit code until the command completes. When you put a command in background, the shell does not wait for completion. Hence, it cannot know the (future) return status of the command in background.

This is documented in man bash:

If a command is terminated by the control operator &, the shell
executes the command in the background in a subshell. The shell does
not wait for the command to finish, and the return status is 0.

In other words, the return code after putting a command in background is always 0 because the shell cannot predict the future return code of a command that has not yet completed.

If you want to find the return status of commands in the background, you need to use the wait command.

Examples

The command false always sets a return status of 1:

$ false ; echo status=$?
status=1

Observe, though, what happens if we background it:

$ false & echo status=$?
[1] 4051
status=0

The status is 0 because the command was put in background and the shell cannot predict its future exit code. If we wait a few moments, we will see:

$ 
[1]+ Exit 1 false

Here, the shell is notifying us that the background task completed and its return status was just as it should be: 1.

In the above, we did not use eval. If we do, nothing changes:

$ eval 'false &' ; echo status=$?
[1] 4094
status=0
$
[1]+ Exit 1 false

If you do want the return status of a backgrounded command, use wait. For example, this shows how to capture the return status of false:

$ false & wait $!; echo status=$?
[1] 4613
[1]+ Exit 1 false
status=1

