Replace Key:Value from One File in Another File in Shellscript

How to read key=value variables from one file and replace ${key} in another file?

The simplest would be:

. .env
export key1 key2
envsubst '$key1 $key2' < config.yml
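
A quick illustration with invented file contents (note that $HOME is left alone because it is not listed in the SHELL-FORMAT argument):

$ cat .env
key1=alpha
key2=beta

$ cat config.yml
name: $key1
target: ${key2}
home: $HOME

$ . ./.env && export key1 key2
$ envsubst '$key1 $key2' < config.yml
name: alpha
target: beta
home: $HOME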

Can't figure out how to replace ${key} with awk, though.

Seems you have to escape $ and { and }, like:

gsub("\\$\\{"i"\\}", vars[i])

Replace key:value from one file in another file in shellscript?

awk '
NR==FNR {url[$1]=$2; next}
{for (i=1; i<=NF; i++) if ($i in url) $i=url[$i]; print}
' keys.txt text.txt
notrelated somewhatrelated http://thisis.example.com/moooooar.asp
http://thisis.example.com/moooooar.asp unimportant
asdf asdf asdf https://maps.google.com dadadada
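
Assuming the three lines above are the command's output, input files that would produce it could look like this (the key names are invented for illustration):

$ cat keys.txt
moar http://thisis.example.com/moooooar.asp
gmap https://maps.google.com

$ cat text.txt
notrelated somewhatrelated moar
moar unimportant
asdf asdf asdf gmap dadadada

Each whitespace-separated field of text.txt that matches a key in the first column of keys.txt is replaced by the corresponding URL.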

Replace in one file with value from another file not working properly

With GNU awk for word boundaries:

awk -F':' '
NR==FNR { map[$1] = $2; next }
{
    for (old in map) {
        new = map[old]
        gsub("\\<" old "\\>", new)
    }
    print
}
' map input

The above will fail if old contains regexp metacharacters or escape characters, or if new contains &, but as long as both use only word-constituent characters it'll be fine.
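
If that is a concern, a literal (non-regexp) variant can be sketched with index()/substr() instead of gsub(). Note it drops the word-boundary behaviour, so a key that is a substring of another word would also be replaced:

awk -F':' '
NR==FNR { map[$1] = $2; next }
{
    for (old in map) {
        new = map[old]
        out = ""
        # replace every literal occurrence of old, left to right
        while ( (pos = index($0, old)) > 0 ) {
            out = out substr($0, 1, pos-1) new
            $0  = substr($0, pos + length(old))
        }
        $0 = out $0
    }
    print
}
' map input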

How to read a file and replace a string in another file using shell script in Linux?

#!/bin/bash
source "/opt/source/source.txt"
USERN="username: '$USERNAME',"
PASSW="password: '$PASSWORD',"

echo "$USERN"
echo "$PASSW"

sed -i "s/username:.*/$USERN/" target.js
sed -i "s/password:.*/$PASSW/" target.js

There were a few issues:

  • USER and PWD are reserved shell variables, so you have to give them different names (hence USERN and PASSW)
  • your sed script needs s/.../.../ to substitute, and the regexp has to match the rest of the line (that is what the .* is for), otherwise the old credentials are left in the output
  • -i option on sed is needed for an in-place edit
  • variable substitution is not performed in a string delimited with ', so " is used here
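
For example, with invented contents and the script saved as, say, update-creds.sh:

$ cat /opt/source/source.txt
USERNAME=alice
PASSWORD=s3cret

$ cat target.js
config = {
  username: 'olduser',
  password: 'oldpass',
};

$ ./update-creds.sh
username: 'alice',
password: 's3cret',

$ cat target.js
config = {
  username: 'alice',
  password: 's3cret',
};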

Find and replace part of the text of one file with the contents of another file

In this case where you can generate the names of the files to be read as you go, I'd do the following (using any awk in any shell on every Unix box):

$ cat tst.awk
$0 == hereEnd {
    hereEnd = ""
}
hereEnd == "" {
    print
    if ( /^cat <</ ) {
        gsub(/["\047]/," ")
        hereEnd = $3
        file = "file" (++hereCnt) ".sh"
        while ( (getline line < file) > 0 ) {
            print line
        }
    }
}
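
For reference, inputs consistent with the run below could look like this (the old here-doc bodies are invented; file1.sh and file2.sh hold the replacement commands):

$ cat script.sh
#!/bin/bash
.... some commands
cat << 'EOF1' > scriptfile1.sh
old command1
old command2
EOF1
.... some commands
cat << 'EOF2' > scriptfile2.sh
old command1
old command2
EOF2
.... some commands

$ cat file1.sh
new command1 for file1
new command2 for file1
new command3 for file1

$ cat file2.sh
new command1 for file2
new command2 for file2
new command3 for file2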


$ awk -f tst.awk script.sh
#!/bin/bash
.... some commands
cat << 'EOF1' > scriptfile1.sh
new command1 for file1
new command2 for file1
new command3 for file1
EOF1
.... some commands
cat << 'EOF2' > scriptfile2.sh
new command1 for file2
new command2 for file2
new command3 for file2
EOF2
.... some commands

See http://awk.freeshell.org/AllAboutGetline for caveats of using getline.

Alternatively you could read each of the intermediate files specified on the command line into memory, e.g. with GNU awk for ARGIND:

$ cat tst.awk
ARGIND < (ARGC-1) {
    file[ARGIND] = (FNR>1 ? file[ARGIND] ORS : "") $0
    next
}
$0 == hereEnd {
    hereEnd = ""
}
hereEnd == "" {
    print
    if ( /^cat <</ ) {
        gsub(/["\047]/," ")
        hereEnd = $3
        print file[++hereCnt]
    }
}


$ awk -f tst.awk file{1..2}.sh script.sh
#!/bin/bash
.... some commands
cat << 'EOF1' > scriptfile1.sh
new command1 for file1
new command2 for file1
new command3 for file1
EOF1
.... some commands
cat << 'EOF2' > scriptfile2.sh
new command1 for file2
new command2 for file2
new command3 for file2
EOF2
.... some commands

If you don't have GNU awk you can do the same by just adding FNR==1 { ARGIND++ } at the start of the script.
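
That is, a sketch of how the portable version would start (the remaining rules are exactly as above):

FNR==1 { ARGIND++ }                 # emulate gawk's ARGIND in any awk
ARGIND < (ARGC-1) {
    file[ARGIND] = (FNR>1 ? file[ARGIND] ORS : "") $0
    next
}
# ... rest of the script unchanged ...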

The above assumes you have at least as many fileN.sh files as you have here documents in script.sh. With the above you don't need separate EOF1, EOF2, etc. delimiters for each here doc; they could all just use EOF or any other symbol.

Read and Replace key-value properties using Shell Script

envsubst, as the name implies, requires that its key/value pairs be in environment variables. Mere assignments create shell variables, not environment variables.
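
You can see the difference directly (FOO is an invented name, assumed not already exported):

$ FOO=bar                          # shell variable only, not exported
$ echo 'foo is [${FOO}]' | envsubst
foo is []

$ export FOO                       # now it's in the environment
$ echo 'foo is [${FOO}]' | envsubst
foo is [bar]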

The following is an attempt at a best-practices replacement:

set -a # turn on auto-export
. appl.properties
set +a # turn off auto-export

while IFS= read -r -d '' filename; do
    tempfile=$(mktemp "$filename.XXXXXX")
    if envsubst <"$filename" >"$tempfile"; then
        mv "$tempfile" "$filename"
    else
        rm -f "$tempfile"
    fi
done < <(find /home/uname/config -name '*.txt' -print0)

Key points:

  • Quotes are important. Try putting spaces in your filenames and seeing how the original behaves if you're not so sure.
  • Using set -a before sourcing the file ensures that the variables it sets are present in the environment.
  • Using IFS= read -r -d '' filename to iterate over names from a NUL-delimited source (such as find -print0) -- unlike for filename in $(find ...) -- ensures that your code correctly handles filenames with spaces, filenames containing glob characters, and other unusual cases.
  • Using while read ...; done < <(find ...) instead of find ... | while read avoids the bug documented in BashFAQ #24, wherein variables set in the loop do not persist. See BashFAQ #1 for more on reading streams line-by-line via this method, and/or the UsingFind page.
  • Using mktemp to generate temporary filenames prevents symlink attacks -- think of what would happen if someone else with permissions to write to /home/uname/, but without permissions to write to /etc, created a symlink to /etc/passwd (or any other file you care about) named /home/uname/temp_file.txt. Moreover, using mktemp to generate random names means that multiple concurrent instances of the same script won't stomp on the same temporary filename.
  • Writing the output of envsubst to a temporary file, and then renaming that temporary file to your destination name, ensures that you don't overwrite your output until it's finished being generated. Thus, even if the process is interrupted when partially completed, you have some level of guarantee that the output file will be left in either its original state or fully written (details dependent on your filesystem's configuration and semantics).

shell script to read the line from one file and sed it in another file

This might work for you (GNU sed):

sed -e '/[group]/!b;:a;n;R file1' -e 'ba' file2 |
sed '1,/[group]/b;N;s/\n\S\+\s*/ /'

Interleave file1 with file2 in the first sed invocation.

Pipe the result to a second sed invocation that replaces the first field with the line above for the required replacement.

replace specific column values using reference of another file

You can try this, but it's without error checking:

declare -A dict
while IFS=, read -r key value; do
    dict[$key]="$value"
done < file1

while IFS=, read -r col1 col2 col3; do
    printf "%s,%s,%s\n" "${dict[$col1]}" "$col2" "$col3"
done < file2

Explanation

# create associative array
declare -A dict

# read key and value from file1, separator is ','
while IFS=, read -r key value; do
    # write key and value into associative array
    dict[$key]="$value"
done < file1

# loop over file2, read three columns, separator is ','
while IFS=, read -r col1 col2 col3; do
    # print columns, first column replaced by the value from the associative array
    printf "%s,%s,%s\n" "${dict[$col1]}" "$col2" "$col3"
done < file2
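
For instance, with the two loops saved in a bash script (call it replace.sh) and invented inputs:

$ cat file1
id1,Alice
id2,Bob

$ cat file2
id1,2021-01-01,active
id2,2021-02-01,inactive

$ ./replace.sh
Alice,2021-01-01,active
Bob,2021-02-01,inactive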

