Bash Ignoring Error for a Particular Command

Bash ignoring error for a particular command

The solution:

particular_script || true

Example:

$ cat /tmp/1.sh
particular_script()
{
    false
}

set -e

echo one
particular_script || true
echo two
particular_script
echo three

$ bash /tmp/1.sh
one
two

three will never be printed.

Also, I want to add that when pipefail is on, it is enough for one command in the pipeline to have a non-zero exit code for the shell to treat the entire pipeline as having a non-zero exit code (with pipefail off, it must be the last one).

$ set -o pipefail
$ false | true ; echo $?
1
$ set +o pipefail
$ false | true ; echo $?
0
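
Combined with set -e, this means a failure anywhere in the pipeline aborts the script. A minimal sketch:

set -e
set -o pipefail

echo before
false | true    # pipeline exit status is 1 because of pipefail, so set -e aborts here
echo after      # never reached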

Ignoring specific errors in a shell script

In order to cause bash to ignore errors for specific commands you can say:

some-arbitrary-command || true

This would make the script continue. For example, if you have the following script:

$ cat foo
set -e
echo 1
some-arbitrary-command || true
echo 2

Executing it would return:

$ bash foo
1
foo: line 3: some-arbitrary-command: command not found
2

In the absence of || true in the command line, it'd have produced:

$ bash foo
1
foo: line 3: some-arbitrary-command: command not found

Quote from the manual:

The shell does not exit if the command that fails is part of the
command list immediately following a while or until keyword, part of
the test in an if statement, part of any command executed in a && or
|| list except the command following the final && or ||, any command
in a pipeline but the last, or if the command’s return status is being
inverted with !. A trap on ERR, if set, is executed before the shell
exits.
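
As a small illustration of those exemptions, none of the failing commands below terminate a script that runs under set -e:

set -e

if false; then echo unreachable; fi     # failure in the test of an if statement
while false; do echo unreachable; done  # failure in the list following while
false || echo "handled with ||"         # part of a || list, not the final command
false | true                            # any command in a pipeline but the last (pipefail off)
! true                                  # status is 1, but commands inverted with ! are exempt
echo "still here"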

EDIT: In order to change the behaviour so that execution continues only if some-arbitrary-command reports "file not found" as part of its error, you can say:

[[ $(some-arbitrary-command 2>&1) =~ "file not found" ]]

As an example, execute the following (no file named MissingFile.txt exists):

$ cat foo 
#!/bin/bash
set -u
set -e
foo() {
    rm MissingFile.txt
}
echo 1
[[ $(foo 2>&1) =~ "No such file" ]]
echo 2
$(foo)
echo 3

This produces the following output:

$ bash foo 
1
2
rm: cannot remove `MissingFile.txt': No such file or directory

Note that echo 2 was executed but echo 3 wasn't.

How to ignore failure of command called through command builtin?

Why does command false || echo fail?

Seems like this is a bug in bash versions below 4.0.

I downloaded the old versions 3.2.57 and 4.0, compiled them on Linux, and ran your script. I could reproduce your problem in 3.2.57. In 4.0 everything worked as expected.

Strangely, I couldn't find a corresponding note in bash's list of changes, but if you search for set -e you will find multiple other bugfixes regarding the behavior of set -e in other versions, for instance:

This document details the changes between this version, bash-4.4-rc1, and
the previous version, bash-4.4-beta.

[...]

o. Fixed a bug that caused set -e to be honored in cases of builtins invoking other builtins when it should be ignored.

How to fix the problem?

The best way would be to use a more recent version of bash. Even on macOS this shouldn't be a problem. You can compile it yourself or install it with something like brew.

Other than that, you can use workarounds like leaving out command, or adding a subshell: ( command false; ) || echo ignore failure (courtesy of Nate Eldredge). In either case, things get quite cumbersome. Since you don't know exactly when the bug happens, you cannot be sure that you correctly worked around it every time.
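
For what it's worth, a rough sketch of the subshell workaround (it only makes a difference on the affected pre-4.0 versions):

#!/bin/bash
set -e

# On an affected bash, `command false || echo ...` could abort the script here;
# wrapping the builtin invocation in a subshell sidesteps the bug:
( command false; ) || echo "ignore failure"
echo "still running"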

With set -e, is it possible to ignore errors for certain commands?

Add || : (or anything else that is guaranteed not to fail, but : is the simplest) to the end of the command.

Though many people would simply tell you that set -e isn't worth it because it isn't as useful as you might think (and causes issues like this) and manual error checking is a better policy.

(I'm not in that camp yet though the more I run into issues like this the more I think I might get there one day.)

From thatotherguy's comment explaining one of the major issues with set -e:

The problem with set -e isn't that you have to be careful about which commands you want to allow to fail. The problem is that it doesn't compose. If someone else reads this post and uses source yourfile || : to source this script while allowing failure, suddenly yourfile will no longer stop on errors anywhere. Not even on failing lines after an explicit set -e.
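
A quick sketch of that composition problem, with a hypothetical yourfile and caller script:

$ cat yourfile
set -e
false                       # yourfile expects to stop here
echo "yourfile kept going"

$ cat caller
source ./yourfile || :      # the || : context suppresses errexit inside yourfile as well
echo "caller continues"

$ bash caller
yourfile kept going
caller continues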

Bash ignore error and get return code

do_work || {
    status=$?
    echo "Error"
}
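
Spelled out as a runnable sketch, with do_work as a stand-in that fails with code 3:

#!/bin/bash
set -e

do_work() {
    return 3                # stand-in for a command that may fail
}

do_work || {
    status=$?               # exit code of do_work
    echo "Error: do_work exited with $status"
}

echo "script continues, status was $status"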

How can I show an error message for a particular command, if the bash script terminates due to set -e

After clarification, it appears that the requirement is to exit the script if any error happens, but that the commands described as "badcommand" in the question might or might not fail.

In this answer, I am naming the commands simply first_command etc, to reflect the fact they might or might not fail.

The set -e command, as suggested in the question, will indeed terminate the script if an error occurs, and the trap ... ERR installs a handler which will run after an error (and before the script exits where set -e has been used).

In this case, you should:

  • wait until the trap is required before installing it (it does not need to be done at/near the start of the script)

  • disable the trap again when it is no longer required, using trap - ERR

so that commands to enable and disable the trap surround the command for which the trap is required.

For example:

#!/bin/bash

set -e

log_report() {
echo "Error on line $1"
}

echo "starting ..."
first_command

trap 'log_report $LINENO' ERR

echo "running"
second_command

trap - ERR

echo "still running"
third_command

This will exit if any command fails (because of the set -e at the top), but the trap will only be run if second_command fails.

(Note also that set -e similarly does not need to be applied at the start of the script. It can be enabled at any point, and disabled again using set +e. But in this example, it appears that the exit-on-error behaviour is required throughout.)
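
For example, a small sketch of toggling it around a single command (possibly_failing_command is just a placeholder):

#!/bin/bash
set -e

echo "checks enabled"

set +e                      # temporarily allow failures
possibly_failing_command
rc=$?                       # inspect the status yourself
set -e                      # re-enable exit-on-error

echo "continued with rc=$rc"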

Bash wait command ignoring specified process IDs

GetFileSpace ... &

You are running the whole function as a subprocess, so the script immediately moves on to the next step, and PID is unset in the parent because it is only set inside the subprocess.

Do not run it in the background.

GetFileSpace ...   # no & on the end.
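
If the function genuinely has to run in the background, the parent shell can capture the PID itself via $! and wait on it, roughly:

GetFileSpace ... &          # same call as above, still backgrounded
pid=$!                      # $! is set in the parent, unlike a variable assigned inside the subprocess
wait "$pid"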

Notes: Consider using xargs or GNU parallel. Prefer lower case for script local variables. Quote variable expansions. Use shellcheck to check for such errors.

work() {
    tmp=$(du -hs "$2")
    echo "$tmp" >> "./${1}_filespace.txt"
}
export -f work
for i in "${directories[@]}"; do
    printf "$i %s\n" /home/"${1}"/data/*
done | xargs -n2 -P "$(nproc)" bash -c 'work "$@"' _

Note that when the job is I/O bound, running multiple processes (especially without an upper bound) doesn't really help much if it's all on one disk.

Bash throwing error continously which cannot be canceled with CTRL + C

When you hold the Termux icon, it should display a menu that makes it possible to run "Failsafe". There, you can mv .bashrc .bashrc.bad (or .profile or whatever causes the problem) and then run a normal session.

Consider a specific exit code not a failure and proceed

There are some problems with cmd1 || $(($?==253 ? true : false) && cmd2:

  • A ) is missing after false.
  • You don't want $(( ... )) but (( ... )). The former would execute the result of the expression (which is a number!) as a command. The latter just evaluates the expression and fails if the result is 0 and succeeds otherwise. Note that this is the opposite of how exit codes work.
  • true and false are not commands here, but variables. If an undefined variable is used inside (( ... )) its value is always 0. Therefore the command ((... ? true : false)) always fails.

Here is what you could have written instead:

cmd1 || (($?==253 ? 1 : 0)) && cmd2

Test:

prompt$ true || (($?==253 ? 1 : 0)) && echo ok
ok
prompt$ false || (($?==253 ? 1 : 0)) && echo ok
prompt$ ( exit 253; ) || (($?==253 ? 1 : 0)) && echo ok
ok

However, the ternary operator isn't really needed here. The following would be equivalent:

cmd1 || (($?==253)) && cmd2

Despite that, an if would probably be better. If you don't want to use one inline, you can write a function for that:

# allow all exit codes given as the first argument
# multiple exit codes can be comma separated
allow() {
    ok=",$1,"
    shift
    "$@"
    [[ "$ok" = *,$?,* ]]
}
allow 0,253 cmd1 && cmd2
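
For instance, substituting a command that exits with 253:

prompt$ allow 0,253 bash -c 'exit 253' && echo ok
ok
prompt$ allow 0,253 bash -c 'exit 1' && echo ok
prompt$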

Or define a wrapper just for this one command:

cmd1() {
    command cmd1 "$@" || (( $? == 253 ))
}
cmd1 && cmd2

