Bash Script: Read All the Files in a Directory

How to get the list of files in a directory in a shell script?

search_dir=/the/path/to/base/dir/
for entry in "$search_dir"/*
do
    echo "$entry"
done
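A minimal sketch of the same loop, using a throwaway temporary directory as a stand-in for search_dir; it adds a -f test so only regular files (not subdirectories) are printed:

```shell
#!/bin/bash
# Hypothetical layout: two files and one subdirectory in a temp dir.
search_dir=$(mktemp -d)
touch "$search_dir/a.txt" "$search_dir/b.txt"
mkdir "$search_dir/sub"

found=0
for entry in "$search_dir"/*; do
    [ -f "$entry" ] || continue   # skip directories and other non-files
    echo "$entry"
    (( found++ ))
done
rm -rf "$search_dir"
```

The -f guard is optional; without it the loop visits subdirectories too, exactly as the snippet above does.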

Read all files line by line in a directory and do a command on them

If you want to save the file names in a variable, use the array=(a b c) syntax to create an array. Note there is no dollar sign on the assignment. Then loop over the array with for item in "${array[@]}".

To read from a file with a while read loop, use while read; do ...; done < "$file". It's odd-looking, but the file is redirected into the loop as a whole.

files=(/dos2unix/*.txt)

for file in "${files[@]}"; do
    while IFS=',' read -ra line; do
        ...
    done < "$file"
done

Another way is to use cat to concatenate all the files together, which lets you get rid of the outer for loop.

cat /dos2unix/*.txt | while IFS=',' read -ra line; do
    ...
done
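One caveat worth adding (my note, not from the original answer): the pipe makes the while loop run in a subshell, so variables set inside it are lost when the loop ends. A sketch with hypothetical sample files, using process substitution so a counter survives the loop:

```shell
#!/bin/bash
# Build two small CSV files in a temp dir (assumed sample data).
tmpdir=$(mktemp -d)
printf 'a,b\nc,d\n' > "$tmpdir/one.txt"
printf 'e,f\n' > "$tmpdir/two.txt"

rows=0
# Process substitution keeps the loop in the current shell,
# so $rows is still set after done.
while IFS=',' read -ra line; do
    (( rows++ ))
done < <(cat "$tmpdir"/*.txt)

echo "rows: $rows"
rm -rf "$tmpdir"
```

With the plain cat | while form, rows would still be 0 after the loop.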

Shell Script to list files in a given directory and if they are files or directories

Your line:

for file in $dir; do

will expand $dir just to a single directory string. What you need to do is expand that to a list of files in the directory. You could do this using the following:

for file in "${dir}/"* ; do

This will expand the "${dir}/"* section into a list of the names in that directory. As Biffen points out, the quoting should guarantee that the file list won't end up with split partial file names in file if any of them contain whitespace.

If you want to recurse into the directories in dir then using find might be a better approach. Simply use:

for file in $( find ${dir} ); do

Note that while simple, this will not handle files or directories with spaces in them. Because of this, I would be tempted to drop the loop and generate the output in one go. This might be slightly different from what you want, but is likely to be easier to read and a lot more efficient, especially with large numbers of files. For example, to list all the directories:

find "${dir}" -maxdepth 1 -type d

and to list the files:

find "${dir}" -maxdepth 1 -type f

If you want to iterate into the directories below, remove the -maxdepth 1.
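A quick sketch of the two find commands against a throwaway directory (the layout is assumed for illustration); note that find lists the starting directory itself, so it appears in the -type d count:

```shell
#!/bin/bash
# Hypothetical layout: two subdirectories and one file.
dir=$(mktemp -d)
mkdir "$dir/sub1" "$dir/sub2"
touch "$dir/a.txt"

# $dir itself counts as a directory, so ndirs is 3 here.
ndirs=$(find "$dir" -maxdepth 1 -type d | wc -l)
nfiles=$(find "$dir" -maxdepth 1 -type f | wc -l)
echo "dirs: $ndirs, files: $nfiles"
rm -rf "$dir"
```

To exclude the starting directory from the listing, add -mindepth 1.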

Bash check all files in a directory for their extension

"ls $1" doesn't execute ls on $1, it just a plain string. Command substitution syntax is $(ls "$1")

However there is no need to use ls, just use globbing:

count1=0
count2=0

for file in "$1"/*; do
    if [[ $file == *.sh ]]; then
        echo "is a sh file"
        (( count1++ ))
    elif [[ $file == *.mp3 ]]; then
        echo "is an mp3 file"
        (( count2++ ))
    fi
done

echo "counts: $count1 $count2"

for file in "$1"/* will iterate through all the files/directories in the directory denoted by $1


EDIT: For doing it recursively inside a directory:

count1=0
count2=0

while IFS= read -r -d '' file; do
    if [[ $file == *.sh ]]; then
        echo "is a sh file"
        (( count1++ ))
    elif [[ $file == *.mp3 ]]; then
        echo "is an mp3 file"
        (( count2++ ))
    fi
done < <(find "$1" -type f -print0)

echo "counts: $count1 $count2"

Shell script to list all files in a directory

Try this Shellcheck-clean pure Bash code for the "further plan" mentioned in a comment:

#! /bin/bash -p

# List all subdirectories of the directory given in the first positional
# parameter. Include subdirectories whose names begin with dot. Exclude
# symlinks to directories.

shopt -s dotglob
shopt -s nullglob
for d in "$1"/*/; do
dir=${d%/} # Remove trailing slash
[[ -L $dir ]] && continue # Skip symlinks
printf '%s\n' "$dir"
done
  • shopt -s dotglob causes shell glob patterns to match names that begin with a dot (.). (find does this by default.)
  • shopt -s nullglob causes shell glob patterns to expand to nothing when nothing matches, so looping over glob patterns is safe.
  • The trailing slash on the glob pattern ("$1"/*/) causes only directories (including symlinks to directories) to be matched. It's removed (dir=${d%/}) partly for cleanliness but mostly to enable the test for a symlink ([[ -L $dir ]]) to work.
  • See the accepted, and excellent, answer to Why is printf better than echo? for an explanation of why I used printf instead of echo to print the subdirectory paths.
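To illustrate those three points, here is a sketch that builds a throwaway directory (the names .hidden, real, and link are assumptions) and runs the same loop: the dot-directory is listed, the symlink is skipped:

```shell
#!/bin/bash
# Hypothetical layout: one dot-directory, one plain directory,
# and one symlink to a directory.
tmp=$(mktemp -d)
mkdir "$tmp/.hidden" "$tmp/real"
ln -s "$tmp/real" "$tmp/link"

shopt -s dotglob
shopt -s nullglob
listed=()
for d in "$tmp"/*/; do
    dir=${d%/}                # Remove trailing slash
    [[ -L $dir ]] && continue # Skip symlinks
    listed+=("${dir##*/}")    # Keep just the base name for the demo
done
echo "${listed[@]}"
rm -rf "$tmp"
```

Without dotglob, .hidden would be missed; without the -L test, link would be listed as well.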

How to loop over files in directory and change path and add suffix to filename

A couple of notes first: when you use Data/data1.txt as an argument, should it really be /Data/data1.txt (with a leading slash)? Also, should the outer loop scan only for .txt files, or all files in /Data? Here's an answer, assuming /Data/data1.txt and .txt files only:

#!/bin/bash
for filename in /Data/*.txt; do
    for ((i=0; i<=3; i++)); do
        ./MyProgram.exe "$filename" "Logs/$(basename "$filename" .txt)_Log$i.txt"
    done
done

Notes:

  • /Data/*.txt expands to the paths of the text files in /Data (including the /Data/ part)
  • $( ... ) runs a shell command and inserts its output at that point in the command line
  • basename somepath .txt outputs the base part of somepath, with .txt removed from the end (e.g. /Data/file.txt -> file)

If you needed to run MyProgram with Data/file.txt instead of /Data/file.txt, use "${filename#/}" to remove the leading slash. On the other hand, if it's really Data not /Data you want to scan, just use for filename in Data/*.txt.
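The two expansions discussed above can be checked in isolation (the path is the hypothetical one from the question):

```shell
#!/bin/bash
filename=/Data/data1.txt

base=$(basename "$filename" .txt)   # strip directory part and .txt suffix
stripped=${filename#/}              # remove one leading slash

echo "$base"       # data1
echo "$stripped"   # Data/data1.txt
```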

Execute command on all files in a directory

The following bash code runs cmd on every file in /dir, passing each path in turn as $file:

for file in /dir/*
do
    cmd [option] "$file" >> results.out
done

Example

el@defiant ~/foo $ touch foo.txt bar.txt baz.txt
el@defiant ~/foo $ for i in *.txt; do echo "hello $i"; done
hello bar.txt
hello baz.txt
hello foo.txt

