How to Update a Variable from One Shell to Another Shell

Pass all variables from one shell script to another?

You have basically two options:

  1. Make the variable an environment variable (export TESTVARIABLE) before executing the 2nd script.
  2. Source the 2nd script, i.e. . test2.sh, so that it runs in the same shell. This lets you share more complex variables like arrays easily, but it also means the other script can modify variables in the sourcing shell.

UPDATE:

To use export to set an environment variable, you can either use an existing variable:

A=10
# ...
export A

This ought to work in both bash and sh. bash also allows it to be combined like so:

export A=10

This also works in my sh (which happens to be bash; you can run echo $SHELL to check). But I don't believe that's guaranteed to work in every sh, so it's safest to play it safe and separate them.

Any variable you export in this way will be visible in scripts you execute, for example:

a.sh:

#!/bin/sh

MESSAGE="hello"
export MESSAGE
./b.sh

b.sh:

#!/bin/sh

echo "The message is: $MESSAGE"

Then:

$ ./a.sh
The message is: hello

The fact that these are both shell scripts is just incidental. Environment variables can be passed to any process you execute; for example, if we used Python instead it might look like this:

a.sh:

#!/bin/sh

MESSAGE="hello"
export MESSAGE
./b.py

b.py:

#!/usr/bin/env python3

import os

print('The message is:', os.environ['MESSAGE'])

Sourcing:

Instead we could source like this:

a.sh:

#!/bin/sh

MESSAGE="hello"

. ./b.sh

b.sh:

#!/bin/sh

echo "The message is: $MESSAGE"

Then:

$ ./a.sh
The message is: hello

This more or less "imports" the contents of b.sh directly and executes it in the same shell. Notice that we didn't have to export the variable to access it. This implicitly shares all the variables you have, and it also allows the other script to add, delete, or modify variables in the shell. Of course, in this model both of your scripts should be written for the same shell (sh or bash). To give an example of how we could pass messages back and forth:

a.sh:

#!/bin/sh

MESSAGE="hello"

. ./b.sh

echo "[A] The message is: $MESSAGE"

b.sh:

#!/bin/sh

echo "[B] The message is: $MESSAGE"

MESSAGE="goodbye"

Then:

$ ./a.sh
[B] The message is: hello
[A] The message is: goodbye

This works equally well in bash. It also makes it easy to share more complex data that you could not express as an environment variable (at least not without some heavy lifting on your part), such as arrays or associative arrays.
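For instance, here is a minimal sketch of sharing a bash array through sourcing (the file names c.sh and d.sh are hypothetical continuations of the examples above):

c.sh:

#!/bin/bash

# Arrays cannot be exported as environment variables,
# but a sourced script runs in this same shell and sees them directly.
FRUITS=("apple" "banana" "cherry")

. ./d.sh

d.sh:

#!/bin/bash

echo "There are ${#FRUITS[@]} fruits; the second is ${FRUITS[1]}"

Then:

$ ./c.sh
There are 3 fruits; the second is banana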

Modifying a variable in another shell script

Just use sed in the 2nd script (script2.sh), as in:

currVar="000.00.00.000"
sed -r -i.bak "s/var=([[:graph:]]+)/var=$currVar/" script1.sh

which rewrites the matching line in script1.sh to:

var=000.00.00.000

where [[:graph:]] is a character class equivalent to [[:alnum:]] plus [[:punct:]], so it matches a value for var made up of any printable, non-blank characters.

Since you mentioned it is a proper IP address, a more precise regex would be:

sed -r "s/(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b/$currVar/" script1.sh
var=000.00.00.000

(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b matches three groups of one to three digits, each followed by a literal dot, and then a fourth group of one to three digits; each group corresponds to one octet of the IP address. Without -i, sed prints the modified file to standard output, which is the var=... line shown above.
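As a quick end-to-end sketch (this assumes GNU sed for -r/-i.bak, and a hypothetical script1.sh containing a line such as var=10.0.0.1):

$ cat script1.sh
#!/bin/sh
var=10.0.0.1

$ currVar="192.168.1.100"
$ sed -r -i.bak "s/var=([[:graph:]]+)/var=$currVar/" script1.sh
$ cat script1.sh
#!/bin/sh
var=192.168.1.100

The -i.bak option keeps the original file as script1.sh.bak.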

Updating variables by reference in bash script?

As in C++, once you assign a value to a variable there is no way to trace where that value came from. In the shell, all variables store strings, but you can store the name of a variable as a string inside another variable, which then acts as a reference. You can use any of the following:

Bash indirect expansion:

A="say"
B=A
echo "B is ${!B}"
A="say it"
echo "B is ${!B}"

Bash namereferences:

A="say"
declare -n B=A
echo "B is $B"
A="say it"
echo "B is $B"

Evil eval:

A="say"
B=A
eval "echo \"B is \$$B\""
A="say it"
eval "echo \"B is \$$B\""

Is this possible?

Yes - store the name of the variable in B, instead of the value.

envsubst from Lazy Evaluation in Bash. Is the following the way to do it?

No, envsubst does something different.
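Note that a nameref also works in the writing direction, which is closer to what "updating by reference" usually means. A minimal sketch (this assumes bash 4.3+ for declare -n):

A="say"
declare -n B=A   # B is now a reference to the variable named A
B="say it"       # assigning through the reference updates A itself
echo "A is $A"   # prints: A is say it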

How can I reuse a R variable from one shell script to the other?

One way to attack that problem is to allow the user to specify multiple pairs of arguments at the time of the script call, so that the program can iterate over all of them at once, incurring the startup cost only once.

Here's a sample script that uses a few things:

  1. library(optparse), for ease of argument handling. There are other packages and nothing here strictly requires it; I just find it makes things easy.
  2. The ability for the script to know whether it is being sourced (and therefore not run some code, which is useful for dev/testing) or being run from the command line (which triggers that code to run). This is similar to python's if __name__ == '__main__': trick, something I answered a while ago at https://stackoverflow.com/a/47932989/3358272.

Neither of them is strictly necessary, but I find they help demonstrate how to structure the script so that you can facilitate "one or more"-type operations.

#!/usr/bin/env r
startup <- function() {
  message(Sys.time(), " Some expensive data load ...")
  Sys.sleep(3)
}

func1 <- function(x, y) {
  message(Sys.time(), " Called with (x,y): ", jsonlite::toJSON(list(x=x,y=y)))
}

if (sys.nframe() == 0L) {
  library(optparse)
  P <- OptionParser()
  P <- add_option(P, c("--param1"), dest = "p1", type = "character",
                  help = "Parameter 1", metavar = "P1")
  P <- add_option(P, c("--param2"), dest = "p2", type = "character",
                  help = "Parameter 2", metavar = "P2")
  P <- add_option(P, c("--param-csv"), dest = "pcsv", type = "character",
                  help = "CSV file with parameters in each column", metavar = "FILE")
  args <- parse_args(P, commandArgs(trailingOnly = TRUE))

  if (!is.null(args$pcsv)) {
    if (!file.exists(args$pcsv)) {
      stop("file not found: ", sQuote(args$pcsv))
    }
    params <- read.csv(args$pcsv, header = FALSE)
    if (!ncol(params) >= 2L) {
      stop("file does not have (at least) 2 columns")
    }
  } else {
    params <- data.frame(
      p1 = sapply(strsplit(args$p1, "[,[:space:]]+")[[1]], trimws),
      p2 = sapply(strsplit(args$p2, "[,[:space:]]+")[[1]], trimws)
    )
  }

  startup()

  for (rownum in seq_len(nrow(params))) {
    func1(params[[1]][rownum], params[[2]][rownum])
  }
}

For the sake of this demo, startup stands in for loading your .Rds file (which takes 3 seconds here), and func1 is the rest of whatever processing you might be doing. (As a general hint, I tend to do as little work as possible within the sys.nframe() == 0L block, so that the functions defined above it can be used interactively or via the script. It's just one way to organize code.)

This script supports three modalities:

  • your default invocation

    $ Rscript 64287443.R --param1 foo1 --param2 bar1
    2020-10-09 15:33:48 Some expensive data load ...
    2020-10-09 15:33:51 Called with (x,y): {"x":["foo1"],"y":["bar1"]}

    one "job" at a time.

  • comma-separated multiple arguments, as in

    $ Rscript 64287443.R --param1 foo1,foo2 --param2 bar1,bar2
    2020-10-09 15:33:55 Some expensive data load ...
    2020-10-09 15:33:58 Called with (x,y): {"x":["foo1"],"y":["bar1"]}
    2020-10-09 15:33:58 Called with (x,y): {"x":["foo2"],"y":["bar2"]}

    which is equivalent to running

    $ Rscript 64287443.R --param1 foo1 --param2 bar1
    $ Rscript 64287443.R --param1 foo2 --param2 bar2

    except that it is only incurring the startup cost once.

  • a CSV file of jobs, one param per column.

    $ cat params.csv
    foo1,bar1
    foo2,bar2
    foo3,bar3

    $ Rscript 64287443.R --param-csv params.csv
    2020-10-09 15:35:15 Some expensive data load ...
    2020-10-09 15:35:18 Called with (x,y): {"x":["foo1"],"y":["bar1"]}
    2020-10-09 15:35:18 Called with (x,y): {"x":["foo2"],"y":["bar2"]}
    2020-10-09 15:35:18 Called with (x,y): {"x":["foo3"],"y":["bar3"]}

TODO:

  • the logic to strsplit a comma-separated list for --param1 and --param2 is trusting; it should be hardened to check for unequal pairings and either raise an informative error or do something meaningful. As it stands, it will simply fail
  • in general, there is very little error checking here, but that's context-sensitive

Exporting variable from one shell script to another script in bash

Use source:

source ./set-vars1.sh

Or:

. ./set-vars1.sh
# The first . (dot) is intentional; it sources the script
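A minimal sketch of how this fits together (the contents of set-vars1.sh and the variable names DB_HOST/DB_PORT are assumptions for illustration):

set-vars1.sh:

#!/bin/bash
DB_HOST="localhost"
DB_PORT="5432"

main.sh:

#!/bin/bash

# Read the variable definitions into the current shell
source ./set-vars1.sh

echo "Connecting to $DB_HOST:$DB_PORT"

Then:

$ ./main.sh
Connecting to localhost:5432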

Is it possible to update a variable from one shell to another shell?

Each process gets its own copy of the environment when it starts, and changes made in one running shell are never propagated to another, so there is no way to share environment variables between them after the fact. I would suggest using a file on a shared filesystem to store the value you want, and reading that file whenever you need to know the current value.
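A minimal sketch of that workaround, assuming both shells can see a hypothetical file /tmp/shared_var:

# In shell 1: write the current value to the shared file
printf '%s\n' "192.168.1.100" > /tmp/shared_var

# In shell 2 (later, or on another host via a shared filesystem): read it back
value=$(cat /tmp/shared_var)
echo "The shared value is: $value"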


