Any way to exit bash script, but not quitting the terminal
The "problem" really is that you're sourcing and not executing the script. When you source a file, its contents will be executed in the current shell, instead of spawning a subshell. So everything, including exit, will affect the current shell.
Instead of using exit, you will want to use return.
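As a minimal sketch (the /tmp/check.sh path and the /etc precondition are arbitrary choices for the demonstration), a sourced script can bail out with return while remaining compatible with direct execution:

```shell
# write a small demonstration script (hypothetical path /tmp/check.sh)
cat > /tmp/check.sh <<'EOF'
# When sourced, return leaves the script without killing the shell.
# When executed directly, return fails (it is only valid in a function
# or a sourced script), so the || exit branch takes over instead.
if [ ! -d /etc ]; then
    echo "precondition failed" >&2
    return 1 2>/dev/null || exit 1
fi
echo "precondition OK"
EOF

. /tmp/check.sh && echo "shell still alive"
```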
Return an exit code without closing shell
You can use x"${BASH_SOURCE[0]}" == x"$0" to test whether the script was sourced or executed (false if sourced, true if executed) and return or exit accordingly.
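A sketch of that guard (the /tmp/demo.sh path and the exit code 3 are arbitrary choices for the demonstration):

```shell
cat > /tmp/demo.sh <<'EOF'
# At top level, BASH_SOURCE[0] equals $0 only when the script is executed.
if [ x"${BASH_SOURCE[0]}" == x"$0" ]; then
    echo "executed directly"
    exit 3
else
    echo "sourced"
    return 3
fi
EOF

bash /tmp/demo.sh;  echo "executed status: $?"
( . /tmp/demo.sh ); echo "sourced status: $?"
```

Either way the caller sees status 3, but only the executed case terminates a shell process of its own.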
In a Bash script, how can I exit the entire script if a certain condition occurs?
Try this statement:
exit 1
Replace 1 with an appropriate error code. See also Exit Codes With Special Meanings.
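For example, a guard near the top of a script (the config path here is a hypothetical placeholder):

```shell
cat > /tmp/checked.sh <<'EOF'
#!/bin/bash
# Abort the whole script with a non-zero status if a precondition fails.
if [ ! -f /nonexistent/required.conf ]; then
    echo "required.conf not found" >&2
    exit 1
fi
echo "config found"
EOF

bash /tmp/checked.sh
echo "exit status: $?"
```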
Is there some command that would guarantee stop of further processing but not exit terminal?
What I was looking for is kill -INT -$$, which sends SIGINT to the current process group and interrupts the current processing but does not exit the current shell (unlike exit 1). This allows kill -INT -$$ to be used at the interactive command line.
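A safe, self-contained way to see SIGINT interrupt processing without ending the shell (for the demonstration we signal just $$ rather than the whole process group -$$, and install a trap so the demo shell survives):

```shell
bash -c '
    trap "echo interrupted" INT   # catch SIGINT instead of dying
    kill -INT $$                  # send SIGINT to this shell
    echo "shell still running"    # execution continues after the trap
'
```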
Is it possible to execute a bash script after killing a terminal?
How would I execute some script when the terminal is killed?
I interpret this as "execute a script when the terminal window is closed". To do so, add the following to your .bashrc or .bash_profile:
trap '[ -t 0 ] || command to execute' EXIT
Of course you can replace command to execute with source ~/.bash_exit and put all the commands inside the file .bash_exit in your home directory.
The special EXIT trap is executed whenever the shell exits (e.g. by closing the terminal, but also by pressing Ctrl-D at the prompt, executing exit, and so on).
[ -t 0 ] checks whether stdin is connected to a terminal. Because of the ||, the next command is executed only if that test fails, which it does when the terminal is closed, but not for the other common ways of exiting bash (e.g. pressing Ctrl-D at the prompt or executing exit).
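You can simulate the closed-terminal case by running a shell whose stdin is not a terminal; the || branch then fires on exit:

```shell
# stdin comes from /dev/null, so [ -t 0 ] fails and the trap body runs
bash -c 'trap "[ -t 0 ] || echo goodbye" EXIT; echo working' </dev/null
```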
Failed attempts (read only if you are trying to find an alternative)
In the terminals I have heard of, bash always receives a SIGHUP signal when the window is closed. Sometimes there are even two SIGHUPs; one from the terminal, and one from the kernel when the pty (pseudoterminal) is closed. However, sometimes both SIGHUPs are lost in interactive sessions, because bash's readline temporarily uses its own traps. Strangely enough, the SIGHUPs always seem to get caught when there is an EXIT trap; even if that EXIT trap does nothing.
However, I strongly advise against setting any trap on SIGHUP. Bash processes non-EXIT traps only after the current command has finished. If you ran sh -c 'while true; do true; done' and closed the terminal, bash would continue to run in the background as if you had used disown or nohup.
How to quit terminal after script exits?
The approach you're trying won't work for the following reason:
The shell for your terminal session is one process; normally, when you execute a shell script, the terminal session shell starts a second shell process and runs the script under that. So the exit at the end of your script tells the second shell to terminate - which it would do anyway, having reached the end of the script. While you could try killing the first shell process from the second (see the comment about $PPID in the other answer), this wouldn't be very good form.
For your script to work the way you expect, you'll need to get your terminal session shell to run the commands in your script by using bash's builtin source command - type source /path/to/your/script gedit, or . /path/to/your/script gedit for short.
Because of the way you'd need to execute this, putting the script on your PATH wouldn't help - but you could create an alias (a "shortcut" that expands to a series of shell commands) to make running the script easier: add a line like alias your_alias_name='. /path/to/your/script' to your ~/.bashrc.
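A sketch of the whole arrangement, using a hypothetical /tmp/goto_tmp script that only has an effect when run in the current shell. Note that non-interactive shells need expand_aliases for aliases to work at all; in an interactive session the plain alias line in ~/.bashrc suffices:

```shell
# hypothetical script: changes the *current* shell's working directory
cat > /tmp/goto_tmp <<'EOF'
cd /tmp
echo "now in $PWD"
EOF

shopt -s expand_aliases           # aliases are off in scripts by default
alias goto_tmp='. /tmp/goto_tmp'  # what you would put in ~/.bashrc
goto_tmp
```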
Why does a process started by a bash script not exit after the terminal has been closed?
What Jürgen Hötzel says in his answer is true, but that's not exactly what's happening.
Case 1:
When the gnome-terminal is closed, the tty (including pty) driver gets a disconnect event for the tty and sends SIGHUP to the controlling process associated with it. Here the controlling process is bash, and bash receives the SIGHUP. Then, according to the bash manual:
The shell exits by default upon receipt of a SIGHUP. Before exiting, an interactive shell resends the SIGHUP to all jobs, running or stopped. Stopped jobs are sent SIGCONT to ensure that they receive the SIGHUP. If the huponexit shell option has been set with shopt, bash sends a SIGHUP to all jobs when an interactive login shell exits.
So for Case 1, bash resends the SIGHUP to the background job sleep, which is killed by the signal (this is the default disposition for SIGHUP).
(The huponexit option mentioned by Jürgen Hötzel only affects interactive login shells which exit voluntarily. But in Case 1, bash is killed by SIGHUP. And the bash running in gnome-terminal is not necessarily a login shell, though it is an interactive shell.)
Case 2:
Here there are 2 bash processes involved:
- bash#1: The bash process which is running in the gnome-terminal.
- bash#2: The bash process which runs the t.sh script.
When t.sh (bash#2) is running, sleep 1000 & starts as a background job of bash#2. After t.sh (bash#2) exits, the sleep continues running but becomes an orphan process; the init process adopts it, and sleep's PPID becomes 1 (the PID of the init process).
When the gnome-terminal is closed, bash#1 receives SIGHUP and resends SIGHUP to all its jobs. But the sleep is not a job of bash#1, so it never receives the SIGHUP and keeps running after bash#1 exits.
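This reparenting can be observed directly. (On systems with a subreaper, such as systemd user sessions or containers, the new parent may not be PID 1, but it is in any case no longer the vanished bash.)

```shell
# start a background sleep from a short-lived bash, like t.sh does;
# redirecting sleep's output lets the outer $() return immediately
pid=$(bash -c 'sleep 5 >/dev/null 2>&1 & echo $!')
sleep 1                                   # let the intermediate bash exit
ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')
echo "sleep PID $pid now has parent PID $ppid"
kill "$pid" 2>/dev/null                   # clean up
```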
Automatic exit from Bash shell script on error
Use the set -e builtin:
#!/bin/bash
set -e
# Any subsequent(*) commands which fail will cause the shell script to exit immediately
Alternatively, you can pass -e on the command line:
bash -e my_script.sh
You can also disable this behavior with set +e.
You may also want to employ all or some of the -e, -u, -x and -o pipefail options, like so:
set -euxo pipefail
-e exits on error, -u errors on undefined variables, -x prints each command before executing it, and -o (for option) pipefail makes a pipeline fail if any command in it fails. Some gotchas and workarounds are documented well here.
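The interaction between -e and pipefail is worth seeing concretely. By default a pipeline's status is that of its last command, so -e alone never notices the false here:

```shell
bash -c 'false | true';                  echo "default:  $?"  # 0: last command wins
bash -c 'set -o pipefail; false | true'; echo "pipefail: $?"  # 1: any failure counts
bash -c 'set -e; false | true; echo ran'                      # -e alone: pipeline "succeeded"
bash -c 'set -eo pipefail; false | true; echo ran'; echo "both: $?"  # exits before echo
```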
(*) Note:
The shell does not exit if the command that fails is part of the
command list immediately following a while or until keyword,
part of the test following the if or elif reserved words, part
of any command executed in a && or || list except the command
following the final && or ||, any command in a pipeline but
the last, or if the command's return value is being inverted with
!
(from man bash
)
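These exemptions are easy to verify; both failures below fall under the exceptions quoted above, so the script runs to completion:

```shell
bash -c 'set -e
if false; then echo taken; fi    # condition of an if: exempt
false || true                    # non-final command in a || list: exempt
echo "script survived"'
echo "status: $?"
```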
Bash script does not quit on first exit call when calling the problematic function using $(func)
Do you see your please install some_command first message anywhere? Is it in $conf_path from the local conf_path="$(get_conf_dir)/foo.conf" line? Do you have a $conf_path value of please install some_command first/foo.conf, which then fails the -r test?
No, you don't. (But feel free to echo the value of $conf_path in that exit 200 block to confirm this fact.) (Also, error messages should in general be sent to standard error and not standard output anyway. So they should be echo "..." >&2. That way they aren't captured by the command substitution at all.)
The reason you don't is that the exit 100 block never runs.
You can see this by putting set -x at the top of your script. Go try it.
See what I mean?
The reason it isn't happening is that the failure return of some_command is being swallowed by the local path=$(some_command) assignment statement.
Try running this command:
f() { local a=$(false); echo "Returned: $?"; }; f
Do you expect to see Returned: 1? You might, but you won't. What you will see is Returned: 0.
Now try either of these versions:
f() { a=$(false); echo "Returned: $?"; }; f
f() { local a; a=$(false); echo "Returned: $?"; }; f
Did you get the output you expected this time?
Right. local, export, declare and typeset are statements of their own. They have their own return values, and they ignore (and replace) the return values of the commands that execute in their contexts.
The solution to your problem is to split the local path declaration and the path=$(some_command) assignment into two separate statements.
http://www.shellcheck.net/ catches this (and many other common errors). You should make it your friend.
In addition to the above (if you've managed to follow along this far), even with the changes mentioned so far your exit 100 won't exit the main script, since it only exits the sub-shell spawned by the command substitution in the assignment.
If you want that exit 100 to exit your script, then you either need to notice the failure and re-exit with its code (check for get_conf_dir failure after the conf_path assignment and exit with the previous exit code), or drop the get_conf_dir function entirely and do that work inline in read_conf.
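Putting the pieces together, a sketch using the names from the question (some_command stands in for whatever dependency the script probes, and the /etc/myapp directory is a hypothetical placeholder; exit 100 only leaves the command-substitution subshell, so read_conf re-raises the status):

```shell
get_conf_dir() {
    command -v some_command >/dev/null 2>&1 || {
        echo "please install some_command first" >&2  # stderr, not stdout
        exit 100   # exits only the $() subshell below
    }
    echo /etc/myapp
}

read_conf() {
    local conf_path                      # declare first, so local can't eat $?
    conf_path="$(get_conf_dir)/foo.conf" || return "$?"   # re-raise the failure
    echo "would read $conf_path"
}

read_conf
echo "read_conf status: $?"
```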