Use bash to read a file and then execute a command from the words extracted
You can use the "for" loop to do this. something like..
for WORD in `cat FILE`
do
echo $WORD
command $WORD > $WORD
done
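Note that the backtick-over-cat form splits on all whitespace and also glob-expands each token (a token like * would expand to filenames). A minimal, self-contained sketch of a safer variant, assuming a hypothetical input file words.txt and using echo as the stand-in command:

```shell
#!/bin/bash
# Create a sample input file (hypothetical name and contents).
printf 'alpha beta\ngamma\n' > words.txt

set -f                      # disable glob expansion of the tokens
while read -r line; do
  for word in $line; do     # unquoted on purpose: split on $IFS
    echo "$word"
  done
done < words.txt
set +f
```

Each whitespace-separated word is printed on its own line, without any risk of a stray * pulling in directory contents.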
How to execute commands read from the txt file using shell?
A file with one valid command per line is itself a shell script. Just use the .
command to execute it in the current shell.
$ echo "pwd" > temp.txt
$ echo "ls" >> temp.txt
$ . temp.txt
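Because . runs the file in the current shell (unlike "bash temp.txt", which runs a child process), state changes made by the sourced commands persist afterwards. A small sketch, assuming a scratch file temp.txt:

```shell
#!/bin/bash
# Commands that change shell state: a variable and the directory.
echo 'MSG=hello' >  temp.txt
echo 'cd /tmp'   >> temp.txt

. ./temp.txt                # run in the current shell
echo "$MSG in $PWD"         # both changes survived the sourcing
```

Had the file been run with "bash temp.txt" instead, both the variable and the directory change would have vanished when the child shell exited.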
Extract all the words from a text file in bash
Maybe, something like that:
WORDS=$(grep -o -E "(\w|')+" words.txt | sed -e "s/'.*\$//" | sort -u -f)
UPDATE
Explanations:
- var=$(...command...) : execute the command (a newer and better solution than `...command...`) and put its standard output into the variable var.
- grep -o -E "(\w|')+" words.txt : read the file words.txt and apply the grep filter.
- The grep filter prints only the matched tokens (-o) of the extended (-E) regular expression (\w|')+. This expression matches word characters (\w, a synonym of [_[:alnum:]]; alnum means alphanumeric characters like [0-9a-zA-Z] for English, extended to many other characters in other locales) or (|) a single quote ('), one or more times (+) : see man grep.
- The standard output of grep is the standard input of the next command, sed, via the pipe (|).
- sed -e "s/'.*\$//" : execute (-e) the expression s/'.*$//. This sed expression substitutes (s/) '.*$ (a single quote followed by zero or more characters up to the end of the line) with the empty string (between the last two slashes, //) : see man sed.
- The standard output of sed is the standard input of the next command, sort, via the pipe (|).
- sort sorts the result of sed, removes duplicates (-u : unique) and ignores case when comparing words (-f) : see man sort.
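Putting the pipeline together on a small sample (the file name and contents here are just for illustration):

```shell
#!/bin/bash
printf "It's a test. A TEST, isn't it?\n" > words.txt

# Tokenize, strip trailing quote-suffixes, case-insensitive dedup.
WORDS=$(grep -o -E "(\w|')+" words.txt | sed -e "s/'.*\$//" | sort -u -f)
echo "$WORDS"               # four unique words remain
```

The sed step turns It's into It and isn't into isn; sort -u -f then collapses a/A, It/it and test/TEST, leaving four unique entries.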
Shell script to execute command on each line of a file with space-delimited fields
Use read and a while loop in bash to iterate over the file line by line and call wget on each iteration (note -O, capital O, to name the saved file; lowercase -o would instead name wget's log file):
while read -r NAME URL; do wget "$URL" -O "$NAME"; done < File.txt
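This assumes File.txt holds two space-separated fields per line: an output name, then a URL. A sketch with echo standing in for wget (the file contents are made up):

```shell
#!/bin/bash
printf 'page1 http://example.com/a\npage2 http://example.com/b\n' > File.txt

# read splits each line on $IFS: the first field goes into NAME,
# the rest of the line into URL.
while read -r NAME URL; do
  echo "would fetch $URL into $NAME"
done < File.txt
```

Because the last variable in the read list receives the remainder of the line, a URL containing spaces would still end up whole in URL.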
How to read file and extract particular data in shell script?
You can use a one-liner such as awk, or simplify by breaking it into separate commands. First filter your config file for the relevant line, then extract the field after the "=":
if [ -n "$1" ]; then
line=`grep "$1" sltconfig.cfg | head -1`
param=`echo "$line" | awk -F'=' '{print $2}'`
python /medaff/Scripts/python/iMedical_Consumption_load_Procs.py "${param}"
else
echo "Pass the application name as argument"
fi
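If sltconfig.cfg holds key=value lines (an assumption about its format), the awk step can also be replaced by bash parameter expansion, which avoids one subprocess:

```shell
#!/bin/bash
# Hypothetical config contents for the demo.
printf 'app1=/data/app1\napp2=/data/app2\n' > sltconfig.cfg

line=$(grep "app1" sltconfig.cfg | head -1)
param=${line#*=}            # strip everything up to the first '='
echo "$param"               # -> /data/app1
```

The ${line#*=} form removes the shortest prefix matching *=, so values that themselves contain "=" are preserved intact.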
Is it possible to start while read a file that is currently being written? (and will the entirety of the file be read?)
read
operates incrementally, a character at a time. It won't care whether a line is available in a file until it actually tries to read that specific line.
Thus, if your writer is writing and actually flushing its buffers faster than your reader reads, and appending to the same file in-place (rather than following a write-and-rename pattern), a reader won't know or care that content wasn't available yet when it first started execution.
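This can be demonstrated with a file descriptor opened before the writer appends anything: a later read still sees the new bytes, because each read call looks at the file's current contents. A minimal sketch (log.txt is a scratch file):

```shell
#!/bin/bash
: > log.txt                 # start with an empty file
exec 3< log.txt             # "reader" opens it before any data exists

echo 'first line' >> log.txt   # "writer" appends afterwards

IFS= read -r line <&3 && echo "got: $line"   # the reader sees it
exec 3<&-
```

The read succeeds even though the file was empty when fd 3 was opened, because the descriptor's offset still points at data that exists by the time read runs.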
extract words from a file
You could use grep:
- -E '\w+' searches for words
- -o only prints the portion of the line that matches
% cat temp
Some examples use "The quick brown fox jumped over the lazy dog,"
rather than "Lorem ipsum dolor sit amet, consectetur adipiscing elit"
for example text.
# if you don't care whether words repeat
% grep -o -E '\w+' temp
Some
examples
use
The
quick
brown
fox
jumped
over
the
lazy
dog
rather
than
Lorem
ipsum
dolor
sit
amet
consectetur
adipiscing
elit
for
example
text
If you want to print each word only once, disregarding case, you can use sort:
- -u only prints each word once
- -f tells sort to ignore case when comparing words
# if you only want each word once
% grep -o -E '\w+' temp | sort -u -f
adipiscing
amet
brown
consectetur
dog
dolor
elit
example
examples
for
fox
ipsum
jumped
lazy
Lorem
over
quick
rather
sit
Some
text
than
The
use
Read a file line by line assigning the value to a variable
The following reads a file passed as an argument line by line:
while IFS= read -r line; do
echo "Text read from file: $line"
done < my_filename.txt
This is the standard form for reading lines from a file in a loop. Explanation:
- IFS= (or IFS='') prevents leading/trailing whitespace from being trimmed.
- -r prevents backslash escapes from being interpreted.
Or you can put it in a bash file helper script, example contents:
#!/bin/bash
while IFS= read -r line; do
echo "Text read from file: $line"
done < "$1"
If the above is saved to a script with filename readfile
, it can be run as follows:
chmod +x readfile
./readfile filename.txt
If the file isn’t a standard POSIX text file (= not terminated by a newline character), the loop can be modified to handle trailing partial lines:
while IFS= read -r line || [[ -n "$line" ]]; do
echo "Text read from file: $line"
done < "$1"
Here, || [[ -n $line ]]
prevents the last line from being ignored if it doesn't end with a \n
(since read
returns a non-zero exit code when it encounters EOF).
If the commands inside the loop also read from standard input, the file descriptor used by read can be changed to something else (avoiding the standard file descriptors), e.g.:
while IFS= read -r -u3 line; do
echo "Text read from file: $line"
done 3< "$1"
(Non-Bash shells might not know read -u3
; use read <&3
instead.)
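A POSIX-sh version of the same loop, using the redirection form instead of -u3 (input.txt is a placeholder file):

```shell
#!/bin/sh
printf 'a\nb\n' > input.txt

# <&3 reads from fd 3, leaving stdin free for commands in the body.
while IFS= read -r line <&3; do
  echo "Text read from file: $line"
done 3< input.txt
```

This behaves identically to the bash-only read -u3 variant above, but works in dash and other minimal /bin/sh implementations.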