Convert ls Output into CSV

Convert ls output into CSV

It's a bit long to type at the command line, but it properly preserves spaces in the filename (and quotes it, too!)

find . -ls | python3 -c '
import sys
for line in sys.stdin:
    # find -ls prints 10 metadata fields followed by the filename,
    # so split on whitespace at most 10 times to keep spaces in the name
    r = line.rstrip("\n").split(None, 10)
    fn = r.pop()
    # quote the filename and escape embedded double quotes by doubling them
    print(",".join(r) + ",\"" + fn.replace("\"", "\"\"") + "\"")
'
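
For illustration, a hypothetical file ./my file.txt would come out roughly like the line below (the metadata values are made up, and the exact fields printed by find -ls can vary by implementation):

1234567,8,-rw-r--r--,1,user,group,2048,Jan,15,10:30,"./my file.txt"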

ls - how to output only filename, date (ISO 8601) to CSV?

Does something like ls -l --time-style=full-iso | awk '/^-/ {printf "%s,%s %s\n",$NF,$6,$7}' do what you want?
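
For context, with --time-style=full-iso the modification date is field 6 and the time is field 7, while $NF is the last field (the filename, so a name containing spaces would only show its last word). A rough sketch of one output line, assuming a hypothetical file report.txt:

ls -l --time-style=full-iso | awk '/^-/ {printf "%s,%s %s\n",$NF,$6,$7}'
# report.txt,2021-03-04 09:15:22.000000000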

Send the Unix output to a CSV file

Try this:

printf '%s\n' A B C | paste -sd ' ' >> file.csv

or, more classically for a CSV (delimited with a ,):

printf '%s\n' A B C | paste -sd ',' >> file.csv

printf '%s\n' A B C is just an example to reproduce the same sample input as yours. My solution also works when a line contains spaces.
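
For example, a value that itself contains a space survives as a single field (a minimal sketch with made-up values):

printf '%s\n' 'foo bar' B C | paste -sd ',' >> file.csv
# appends the line: foo bar,B,C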

EDIT: from your comments, you seem to need to handle this in a for loop, so:

for i in {0..5}; do printf '%s\n' {A..C} | paste -sd " " >> file.csv; done

or in pseudocode:

for ...:
unix_command | paste -sd " " >> file.csv
endfor
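
A minimal concrete version of that pseudocode in bash, where some_command is a hypothetical placeholder for whatever produces the values, one per line:

for f in *.txt; do
    # each command's output lines become one comma-separated row in the CSV
    some_command "$f" | paste -sd ',' >> file.csv
done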

Read command output into an array and parse the array to convert into .csv

First of all, I know this is not the exact solution you're looking for, and the lookup for keywords is also missing. I'm still not sure what you'd like to achieve; it might be because of my English.

I only hope this code might help you achieve your goal.

# awk is not required here if you don't use the long listing -l
file_list=$(ls -rt *.bin)

let "x=1"
for filename in ${file_list}; do

    echo '--- file #'"${x}"' ---'
    let "y=1"

    # awk prints "var1 var2", so read can split them into separate variables
    while read -r col1 col2; do

        # col1 contains the first column (ID UD Ver Time)
        # col2 contains the value
        echo "Line${y}: ${col1},${col2}"
        let y++

    # cat is used instead of script.bash, as only its output is provided;
    # awk strips all spaces and removes the : separator from each line
    done <<< "$(cat "${filename}" | awk -F ":" '{gsub(/ /,"");print $1 " " $2}')"

    let x++
done

Files:

f1.bin <-- newer

UD  :   JJ533
ID : 117
Ver : 8973
Time: 15545

f2.bin <-- older

UD  :   ZZ533
ID : 118
Ver : 9324
Time: 15548

Output:

--- file #1 ---
Line1: UD,ZZ533
Line2: ID,118
Line3: Ver,9324
Line4: Time,15548
--- file #2 ---
Line1: UD,JJ533
Line2: ID,117
Line3: Ver,8973
Line4: Time,15545

Bash script: sort the text files in a directory and export the data into CSV

There's a / in $x and thus in your sed expression. Change your sed separator to something not likely to occur in $x, like:

sed -i "1s#^#${x}\n#" "${x}"

and to change the file "in place", just enable the -i option (if it's not available on your system, write to a temp file and move it back over the original).
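
A quick sketch of why the separator matters, with a hypothetical value of x that contains a slash:

x="t1/regional_vol_GM2.txt"
sed -i "1s/^/${x}\n/" "$x"   # fails: the / inside $x ends the s command early
sed -i "1s#^#${x}\n#" "$x"   # works: # does not appear in $x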

Now for your file sort: the problem is that wildcard matching (and even ls) sorts the files, but in alphabetical order, so regional_vol_GM2.txt comes after regional_vol_GM100.txt.

So, even if it's a bit of a hack, you could replace this:

tail -q -n 1 "$dir"/t1/regional_vol*.txt

by this:

tail -q -n 1 $(cd "$dir"/t1; ls -C1 regional_vol_GM*.txt | sort -k2 -tM -n)

Why it works:

  • I'm using the numerical mode of sort on the second field, delimited by M (the digits come after _GM); a tiny demonstration follows.
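
A tiny demonstration of that field split, assuming the file names from the question:

printf '%s\n' regional_vol_GM2.txt regional_vol_GM100.txt regional_vol_GM10.txt | sort -k2 -tM -n
# regional_vol_GM2.txt
# regional_vol_GM10.txt
# regional_vol_GM100.txt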

Why it's a hack:

  • it relies on the output of ls, which is generally frowned upon. Here it's a simple ls on one column, with no spaces in your names, so it should be OK
  • it has to perform a cd just in case there's an M in the directory path, which would make sort pick the wrong field

What you should do to simply fix that:

  • you should generate your files (or ask the people who do) with zero padding: 1 becomes 001, 2 becomes 002, etc., so plain alphabetical sorting works and there's no need for the complex sort hack; a small renaming sketch follows.
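
A minimal renaming sketch to retrofit that zero padding, assuming the existing names all match regional_vol_GM<number>.txt:

for f in regional_vol_GM*.txt; do
    n=${f#regional_vol_GM}; n=${n%.txt}                    # extract the number
    new=$(printf 'regional_vol_GM%03d.txt' "$((10#$n))")   # pad to 3 digits
    [ "$f" = "$new" ] || mv -n "$f" "$new"                 # skip names already padded
done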

Converting a CSV-format column to a comma-separated list using bash

The real problem is apparently that the output has DOS line endings (carriage returns before each line feed). See Are shell scripts sensitive to encoding and line endings? for a broader discussion; for the immediate solution, try

tr -s '\015\012' , <file | sed 's/^[,]*,//;s/,$/\n/'

The arguments to for should just be a list of tokens anyway, with no commas between them. However, a better solution altogether is to use while read instead; see also Don't read lines with for.

gcloud dns managed-zones list --format='csv(name)' \
    --project 'sandbox-001' |
tr -d '\015' |
tail -n +2 |  # skip header line
while read -r i; do
    echo "$i"  # notice also quoting
done

I don't have access to gcloud, but its manual page mentions several other formats which might be more suitable for your needs. See if the json or list format is easier to manipulate. (CSV with a single column is not really CSV anyway, just a text file.)
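
For instance, a rough sketch using the json format, assuming jq is available and that each zone object in the output carries a name field:

gcloud dns managed-zones list --format=json --project 'sandbox-001' |
    jq -r '.[].name' |
while read -r zone; do
    echo "$zone"
done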


