Command Substitution Doesn't Work in Script Text Passed Over Ssh

$(ls -A $test_dir) is being executed locally on the client, not on the server. You need to escape the $. You should also wrap the substitution in escaped double quotes (\"...\"), so that the test still behaves correctly when the output is empty or contains whitespace.

    if [ \"\$(ls -A $test_dir)\" ]; then

Often the best way to execute multiline commands is to use scp to copy a script to the remote machine, then use ssh to execute the script. Mixing local and remote expansion of variables and command substitutions gets complicated, especially when you need to quote them.
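The local-versus-remote expansion rule can be demonstrated without a server: in this sketch, `bash -c "..."` stands in for `ssh host "..."`, and the directory created by mktemp is a made-up stand-in for `$test_dir`.

```shell
#!/bin/sh
# Sketch: bash -c stands in for ssh; the double-quoted command string is
# expanded locally first, and only escaped constructs reach the child shell.
test_dir=$(mktemp -d)        # hypothetical directory, like $test_dir above
touch "$test_dir/somefile"

# $test_dir expands locally; \$(...) survives and runs in the child shell.
bash -c "if [ \"\$(ls -A $test_dir)\" ]; then echo not-empty; fi"
# prints: not-empty

rm -r "$test_dir"
```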

Remote command substitution fails

You are trying to pass a kill command to a server over ssh.

Unfortunately, all substitutions are done on the client side, not on the server side. The error you are getting from cat is generated on den16, not on BUILD_HOST. Normally you would use single quotes to defer expansion to the remote machine, but since you rely on local shell variables here, keep the double quotes and use a pipe so that the kill itself runs on BUILD_HOST:

[rundeck@den16 ~]$ ssh ${SSH_USER}@${BUILD_HOST} "cat ${WLS_E1DOMAIN_LOC}/nodemanager.process.id | xargs kill -9"
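Which side expands what can be checked safely on one machine: in this sketch `bash -c` stands in for the ssh invocation, `xargs echo kill -9` replaces the real kill, and the path and PID are invented for the demo.

```shell
#!/bin/sh
# Safe stand-in for the ssh pipeline above: ${WLS_E1DOMAIN_LOC} expands on
# the client (because of the double quotes), while cat and xargs run in the
# child shell. xargs echo shows the kill that *would* be executed.
WLS_E1DOMAIN_LOC=$(mktemp -d)                             # hypothetical path
echo 12345 > "$WLS_E1DOMAIN_LOC/nodemanager.process.id"   # hypothetical PID

bash -c "cat ${WLS_E1DOMAIN_LOC}/nodemanager.process.id | xargs echo kill -9"
# prints: kill -9 12345

rm -r "$WLS_E1DOMAIN_LOC"
```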

Why does this shell script work for one instance and not the other?

You did most of it right, but forgot to escape the command-substitution construct $(..) in the here-document. Not doing so makes the command expand in the local shell rather than on the remote host.

Also, because the substitution then runs locally, the escaped \$relative inside it reaches the find command as the literal string $relative, which it cannot resolve, i.e. the following happens on the local machine:

find \$relative
# ^^^^ since $relative won't expand, find throws an error

So you need to escape the command-substitution constructs as a whole, to move the entire here-doc expansion to the remote host.

ssh -T username@host << EOF
relative="\$HOME/Documents"
command=\$(find "\$relative" -name GitHub)
command2=\$(echo "\$relative")
echo "HERE: \$command"
echo "HERE: \$command2"
EOF

Alternatively, use a form of here-document that performs no expansion at all in its text: simply quote the delimiting identifier, as 'EOF'.

ssh -T username@host <<'EOF'
relative="$HOME/Documents"
command=$(find "$relative" -name GitHub)
command2=$(echo "$relative")
echo "HERE: $command"
echo "HERE: $command2"
EOF

Running a script through ssh fails when the same script succeeds locally

Parsing ps to find a process is fragile and error-prone. Your example is a nice illustration why:

  • An unrelated process (the bash process started by ssh) contains the process name as part of its command line, and is accidentally picked up by your ps parser.
  • That unrelated process is only removed by your grep -v grep when you make the command line include the word "grep".

Instead, just use pgrep or pkill. These tools list/kill processes based on the executable name and are therefore far more robust than parsing ps.
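A minimal sketch of that approach, assuming pgrep from procps is available; `sleep 300` stands in for the real long-running process:

```shell
#!/bin/sh
# Find a process by executable name with pgrep instead of parsing ps.
sleep 300 &            # stand-in for the long-running process
pid=$!

pgrep -x sleep         # lists PIDs whose executable name is exactly "sleep"

kill "$pid"            # clean up; `pkill -x sleep` would kill *every*
                       # process named sleep, so prefer a specific PID
```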

Command substitution in shell script without globbing

If I understand correctly, you want the user to provide a shell command as a command-line argument, which will be executed by the script, and is expected to produce an SQL string, which will be processed (upper-cased) and echoed to stdout.

The first thing to say is that there is no point in having the user provide a shell command that the script just blindly executes. If the script applied some kind of modification/preprocessing of the command before it executed it then perhaps it could make sense, but if not, then the user might as well execute the command himself and pass the output to the script as a command-line argument, or via stdin.

But that being said, if you really want to do it this way, then there are two things that need to be said. Firstly, this is the proper form to use:

out=$(eval "$cmd");

A fairly advanced understanding of the shell grammar and expansion rules is required to fully understand the rationale for the above syntax, but basically executing $cmd and executing eval "$cmd" have subtle differences that make the $cmd form inappropriate for executing a given shell command string.

Just to give some detail that will hopefully clarify the above point, there are seven kinds of expansion that are performed by the shell in the following order when processing input: (1) brace expansion, (2) tilde expansion, (3) parameter and variable expansion, (4) arithmetic expansion, (5) command substitution, (6) word splitting, and (7) pathname expansion. Notice that variable expansion happens somewhat in the middle of that sequence, and thus the variable-expanded shell command (which was provided by the user) will not receive the benefit of the prior expansion types. Other issues are that leading variable assignments, pipelines, and command list tokens will not be executed correctly under the $cmd form, because they are parsed and processed prior to variable expansion (actually prior to all expansions) as well.

By running the command through eval, properly double-quoted, you ensure that the full shell parsing/processing/execution algorithm will be applied to the shell command string that was given by the user of your script.
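A minimal illustration of the difference (the command string here is made up): it contains a variable assignment and a pipeline, both of which are recognized during parsing, before any expansion, so the bare $cmd form cannot execute them.

```shell
#!/bin/sh
# eval "$cmd" re-parses the string from scratch; bare $cmd only undergoes
# expansion, word splitting and pathname expansion.
cmd='msg=hello; echo "$msg" | tr a-z A-Z'

$cmd 2>/dev/null || echo 'bare $cmd fails: tries to run a command named "msg=hello;"'

eval "$cmd"       # full parsing applies: prints HELLO
```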

The second thing to say is this: If you try the above proper form in your script, you will find that it has not solved your problem. You will still get SELECT FILEA FILEB FILEC FROM TABLE as output.

The reason is this: Since you've decided you want to accept an arbitrary shell command from the user of your script, it is now the user's responsibility to properly quote all metacharacters that may be embedded in that piece of code. It does not make sense for you to accept a shell command as a command-line argument, but somehow change the processing rules for shell commands so that certain metacharacters will no longer be metacharacters when the given shell command is executed. Actually, you could do something like that, perhaps using set -o noglob as you discovered, but then that must become a contract between the script and the user of the script; the user must be made aware of exactly what the precise processing rules will be when the command is executed so that he can properly use the script.
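A sketch of such a noglob contract (the command string is invented for the demo): pathname expansion is suspended around the eval, so a bare * in the SQL survives.

```shell
#!/bin/sh
# Suspend globbing while evaluating the user-supplied command, so that *
# in its output is not expanded into filenames.
cmd='echo select * from table'   # hypothetical user-supplied command

set -o noglob                    # turn pathname expansion off
out=$(eval "$cmd")
set +o noglob                    # turn it back on

echo "$out" | tr a-z A-Z
# prints: SELECT * FROM TABLE
```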

Under this design, the user could call the script as follows (notice the extra layer of quoting for the shell command string evaluation; could alternatively backslash-escape just the asterisk):

$ sh foo.sh "echo 'select * from table'";

I'd like to return to my earlier comment about the overall design; it doesn't really make sense to do it this way. It makes more sense to take the text-to-process itself, not a shell command that is expected to produce the text-to-process.

Here is how that could be done:

## take the text-to-process via a command-line argument
sql="$1";

## process and echo it
echo "$sql" | tr a-z A-Z;

(I also removed the -s option of tr, which really doesn't make sense here.)

Notice that the script is simpler now, and usage is also simpler:

$ sh foo.sh 'select * from table';

How can I use read to substitute user input inside a bash command substitution?

echo "You entered: $(IFS= read -rp "Enter some text: "; printf '%s' "$REPLY")"

From help read:

read ... [name ...]

Reads a single line from the standard input ... The line is split into fields as with word splitting, and the first word is assigned to the first NAME, the second word to the second NAME, and so on, with any leftover words assigned to the last NAME. Only the characters found in $IFS are recognized as word delimiters.

If no NAMEs are supplied, the line read is stored in the REPLY variable.

  • IFS= ...: Don't trim leading and trailing spaces
  • -r: Don't mangle backslashes
  • printf '%s' "$REPLY": If you don't supply any NAME, the line read is stored in the $REPLY variable. However, read doesn't print it, so the command substitution would otherwise expand to nothing; you have to print it explicitly, for example with printf

Note that if you use read inside $(...), the variable is lost as soon as you leave the substitution. Better approach:

IFS= read -rp "Enter some text: " var
echo "You entered: $var"
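The reason the variable is lost can be shown in two lines: everything inside $(...) runs in a subshell, and assignments made there never reach the parent shell (the names here are made up).

```shell
#!/bin/sh
# Assignments inside $(...) happen in a subshell and vanish with it.
out=$(answer=42; echo "$answer")   # answer exists only inside the subshell

echo "captured: $out"              # prints: captured: 42
echo "parent: ${answer:-unset}"    # prints: parent: unset
```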

Pass commands as input to another command (su, ssh, sh, etc)

A shell script is a sequence of commands. The shell will read the script file, and execute those commands one after the other.

In the usual case, there are no surprises here; but a frequent beginner error is to assume that some command will take over from the shell and itself start executing the following commands in the script file, instead of the shell which is currently running the script. That's not how it works.

Basically, scripts work exactly like interactive commands, but it is worth spelling out how. Interactively, the shell reads a command (from standard input), runs that command (with input from standard input), and when it's done, it reads another command (from standard input).

Now, when executing a script, standard input is still the terminal (unless you used a redirection) but the commands are read from the script file, not from standard input. (The opposite would be very cumbersome indeed - any read would consume the next line of the script, cat would slurp all the rest of the script, and there would be no way to interact with it!) The script file only contains commands for the shell instance which executes it (though you can of course still use a here document etc to embed inputs as command arguments).
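The separation of the two streams is easy to demonstrate: the text piped into the shell below is consumed by read as data, while the commands come from the -c string (just as they would come from a script file).

```shell
#!/bin/sh
# A script's commands and its standard input are separate streams:
# the piped-in text is available to read, not executed as a command.
printf 'hello world\n' | sh -c '
    read line                   # consumes stdin, not the next command
    echo "got: $line"
'
# prints: got: hello world
```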

In other words, these "misunderstood" commands (su, ssh, sh, sudo, bash etc) when run alone (without arguments) will start an interactive shell, and in an interactive session, that's obviously fine; but when run from a script, that's very often not what you want.

All of these commands have ways to accept commands by ways other than in an interactive terminal session. Typically, each command supports a way to pass it commands as options or arguments:

su root -c 'who am i'
ssh user@remote uname -a
sh -c 'who am i; echo success'

Many of these commands will also accept commands on standard input:

printf 'uname -a; who am i; uptime' | su
printf 'uname -a; who am i; uptime' | ssh user@remote
printf 'uname -a; who am i; uptime' | sh

which also conveniently allows you to use here documents:

ssh user@remote <<'____HERE'
uname -a
who am i
uptime
____HERE

sh <<'____HERE'
uname -a
who am i
uptime
____HERE

For commands which accept a single command argument, that command can be sh or bash with multiple commands:

sudo sh -c 'uname -a; who am i; uptime'

As an aside, you generally don't need an explicit exit because the command will terminate anyway when it has executed the script (sequence of commands) you passed in for execution.


