Bash detect an error in loop then continue processing
Does it help?
...
# process list of formulas that have been installed
for i in $(< "$FILE"); do
    echo "Installing $i ..."
    # attempt to install formula; on error, report it and process the next formula
    brew install "$i" || continue
done
...
Note that if a formula name contains blanks, the for loop will split it into separate words. It might be better to write:
...
# process list of formulas that have been installed
while read -r i; do
    # skip blank lines
    test -z "$i" && continue
    echo "Installing $i ..."
    # attempt to install formula; on error, report it and process the next formula
    # (</dev/null keeps brew from consuming the rest of $FILE on stdin)
    brew install "$i" </dev/null || continue
done < "$FILE"
...
How do I keep repeating an iteration in a for loop if a particular error is shown in Bash?
Just execute the command in a while loop:
for i in ...; do
while ! command on file "$i"; do sleep 1; done
done
The sleep is useful if your command is failing immediately: it can be painful if you let it restart as quickly as it can.
If you need to check for a particular error message, you can easily modify this:
for i in ...; do
while command on file "$i" 2>&1 |
grep -q 'Error, line 3: tried 100000 potential loci for entry'
do sleep 1; done
done
This discards all output from command, but it can be modified relatively easily to retain it (e.g. while command on file "$i" 2>&1 | awk '/Error msg/ {rv=1} {print} END {exit !rv}'; do sleep 1; done).
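If the command may never recover, it also helps to cap the number of retries. A small runnable sketch of that idea, where try_command is a made-up stand-in that simulates a command failing twice before succeeding (not part of the original answer):

```shell
#!/bin/bash
attempts=0
max_attempts=5

# stand-in for a flaky command: fails on the first two calls, succeeds on the third
try_command() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

until try_command; do
    if [ "$attempts" -ge "$max_attempts" ]; then
        echo "giving up after $attempts attempts" >&2
        exit 1
    fi
    sleep 0.1   # back off so a fast-failing command doesn't spin
done
echo "succeeded after $attempts attempts"
```

The until loop is just the inverted form of while !, with the retry cap checked inside the body.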
Exit script with error code based on loop operations in bash
Alternative 1
#!/bin/bash
for i in $(seq 1 6); do
    if test "$i" -eq 4; then
        z=1
    fi
done
if [[ $z == 1 ]]; then
    exit 1
fi
With files
#!/bin/bash
# note: this creates a file named "ab" (not "a" and "b"), so cat fails for a and b
touch ab c d e
for i in a b c d e; do
    cat "$i"
    if [[ $? -ne 0 ]]; then
        fail=1
    fi
done
if [[ $fail == 1 ]]; then
    exit 1
fi
The special parameter $? holds the exit status of the last command. A nonzero value represents a failure, so just store that in a variable and check it after the loop.
Strictly, $? holds the exit status of the previous pipeline, if present. If the command is killed by a signal, $? will be 128+SIGNAL, for example 128+2 = 130 in the case of SIGINT (Ctrl+C).
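The signal case can be demonstrated directly. SIGTERM is used here instead of SIGINT because non-interactive shells ignore SIGINT in background jobs:

```shell
#!/bin/bash
false
echo "status after false: $?"   # 1

# kill a background job with SIGTERM (signal 15) and collect its status via wait
sleep 5 &
pid=$!
kill -TERM "$pid"
wait "$pid"
status=$?
echo "status after SIGTERM: $status"   # 143 = 128 + 15
```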
Overkill solution with trap
#!/bin/bash
#!/bin/bash
trap 'echo "X $FAIL"; [[ $FAIL -eq 1 ]] && exit 22' EXIT
touch ab c d e
for i in c d e a b; do
    cat "$i" || FAIL=1   # export is unnecessary; the trap runs in the same shell
    echo "F $FAIL"
done
Skip bash while loop if function inside fails a condition
First, write the functions so that they return a nonzero status if they fail, zero if they succeed (actually, you should be doing this anyway as a general good practice). Something like this:
function2() {
    if some condition that uses $test_name fails; then
        echo "test condition failed in function2" >&2  # error messages should go to stderr
        return 1
    fi
    # Code here will only be executed if the test succeeded
    do_something || return 1
    # Code here will only be executed if the test AND do_something both succeeded
    do_something_optional  # no error check here means it'll continue even if this fails
    do_something_else || {
        echo "do_something_else failed in function2" >&2
        return 1
    }
    return 0  # Optional: by default the function returns the status of its
              # last command, which must have succeeded to reach this point
}
Note that you can mix styles here (if vs || vs whatever) as the situation warrants. In general, use the style that's clearest, since your biggest enemy is confusion about what the code's doing.
Then, in the main function, you can check each sub-function's exit status and return early if any of them fail:
main_function() {
    do something to "$test_name" || return 1  # BTW, you should double-quote variable references
    function2 "$test_name" || return 2        # optional: use different statuses for different problems
    function3 "$test_name" || return 1
}
If you need to skip the end of the main loop, that's where you'd use continue:
while IFS= read -r line; do
    if [[ ! "$line" =~ ^# && -n "$line" ]]; then
        test_name=$line
        main_function "$test_name" || continue
        echo "Finished processing: $line" >&2  # non-error status messages also go to stderr
    fi
done < "$OS_LIST"
Stop on first error
Maybe you want set -e:
www.davidpashley.com/articles/writing-robust-shell-scripts.html#id2382181:
This tells bash that it should exit the script if any statement returns a non-true return value. The benefit of using -e is that it prevents errors snowballing into serious issues when they could have been caught earlier. Again, for readability you may want to use set -o errexit.
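A quick way to see the effect, running the failing script in a child shell so the demonstration itself keeps going:

```shell
#!/bin/bash
# the child script stops at `false` because of set -e, so "after" is never printed
out=$(bash -c 'set -e; echo before; false; echo after')
status=$?
echo "output: $out"          # output: before
echo "exit status: $status"  # exit status: 1
```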
In a Bash script, how can I exit the entire script if a certain condition occurs?
Try this statement:
exit 1
Replace 1 with an appropriate error code. See also Exit Codes With Special Meanings.
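In context, a typical pattern is to guard on a condition near the top of the script. The condition here is illustrative (/dev/null stands in for whatever file your script actually depends on):

```shell
#!/bin/bash
required=/dev/null   # hypothetical: a file the script needs in order to proceed
if [ ! -e "$required" ]; then
    echo "required file missing, aborting" >&2
    exit 1           # replace 1 with an appropriate error code
fi
echo "precondition ok, continuing"
```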
How to wait in bash for several subprocesses to finish, and return exit code !=0 when any subprocess ends with code !=0?
wait
also (optionally) takes the PID
of the process to wait for, and with $!
you get the PID
of the last command launched in the background.
Modify the loop to store the PID
of each spawned sub-process into an array, and then loop again waiting on each PID
.
# run processes and store pids in an array
# (procs is assumed to be an array holding the commands to run, one per index)
for i in $n_procs; do
    "${procs[i]}" &
    pids[i]=$!
done

# wait on every pid; return nonzero if any sub-process failed
fail=0
for pid in "${pids[@]}"; do
    wait "$pid" || fail=1
done
exit $fail
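A self-contained version with placeholder jobs (the real worker commands are replaced by sleep and false purely for demonstration):

```shell
#!/bin/bash
pids=()
# launch a few background jobs; the second one fails on purpose
for cmd in "sleep 0.1" "false" "sleep 0.2"; do
    $cmd &
    pids+=($!)
done

# wait on every pid and remember whether any of them failed
fail=0
for pid in "${pids[@]}"; do
    wait "$pid" || fail=1
done
echo "fail=$fail"
```

Because wait "$pid" returns that child's exit status, the loop catches a failure from any of the jobs, not just the last one.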
Bash for loop stops after one iteration without error
If you need set -e in other parts of the script, you have to keep grep from stopping it: grep returns a nonzero status when it finds no match, which set -e treats as a fatal error:
# cat $file | grep "2015-11-0"$i > $i.json
grep "2015-11-0$i" "$file" > "$i.json" || :
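To see why the || : matters, here is a minimal reproduction with a made-up temp file; under set -e, a non-matching grep would otherwise kill the whole script:

```shell
#!/bin/bash
set -e
tmp=$(mktemp)
printf 'alpha\nbeta\n' > "$tmp"

# no match: grep exits 1, but `|| :` turns that into success so the script survives
grep 'gamma' "$tmp" > /dev/null || :

echo "still running"
rm -f "$tmp"
```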