Awk Date to Epoch

Convert a date to epoch time using AWK on Linux

TZ=PST8PDT awk -F, '{ OFS = FS;
split($1, date, "/");
$1 = mktime(date[3] " " date[1] " " date[2] " 00 00 00");
print }'
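
For example, with a made-up input line (MM/DD/YYYY in the first comma-separated field, which is what the split() above assumes) and TZ=UTC so the number is reproducible; mktime() needs gawk:

$ echo '09/16/2017,foo' | TZ=UTC awk -F, '{ OFS=FS; split($1,date,"/"); $1=mktime(date[3]" "date[1]" "date[2]" 00 00 00"); print }'
1505520000,foo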

Or, invoking date:

TZ=PST8PDT awk -F, '{ OFS = FS;
command = "date -d \"" $1 "\" +%s";
command | getline $1;
close(command);
print }'
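
The same made-up line through the date variant gives the same value. Note that this version forks one date process per input line, so the mktime() version above will be much faster on large files:

$ echo '09/16/2017,foo' | TZ=UTC awk -F, '{ OFS=FS; cmd = "date -d \"" $1 "\" +%s"; cmd | getline $1; close(cmd); print }'
1505520000,foo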

Convert a datetime into epoch using awk on piped data

This assumes gawk. It can't do any timezone translation, though; mktime() works strictly in local time.

... | gawk '
BEGIN {OFS = "|"}
{
    split($1, d, "-")
    split($2, t, ":")
    epoch = mktime(d[1] " " d[2] " " d[3] " " t[1] " " t[2] " " t[3])
    print epoch, $5, $10, $11, $12
}
'
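
A minimal check with a made-up line: only the first two whitespace-separated fields feed mktime(), the single letters just stand in for whatever the real columns 5, 10, 11 and 12 contain, and TZ=UTC keeps the number reproducible:

$ echo '2024-01-01 00:00:00 a b c d e f g h i j' | TZ=UTC gawk '
BEGIN {OFS = "|"}
{
    split($1, d, "-")
    split($2, t, ":")
    epoch = mktime(d[1] " " d[2] " " d[3] " " t[1] " " t[2] " " t[3])
    print epoch, $5, $10, $11, $12
}'
1704067200|c|h|i|j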

Converting a date to Unix epoch using awk in log files

Assuming you have gawk (a fair assumption, since you are using GNU date), you can do this all internally in gawk:

$ awk  'match($0, /\[(.*)\] (.*)/, a) && 
match(a[1], /([0-9]{2})\.([0-9]{2})\.([0-9]{4}) ([0-9:]+)(\.[0-9]+)/,b) {
gsub(/:/," ",b[4])
s=b[3] " " b[2] " " b[1] " " b[4]
print mktime(s) "|" a[2]
}' file
1377896089|Foo
1377896209|Bar
1377896389|Foo bar

Or, a Bash solution (gdate is GNU date, e.g. from coreutils on macOS; on Linux, plain date works the same way):

while IFS= read -r line; do
    if [[ "$line" =~ \[([[:digit:]]{2})\.([[:digit:]]{2})\.([[:digit:]]{4})\ +([[:digit:]:]+)\.([[:digit:]]+)\]\ +(.*) ]]
    then
        printf "%s|%s\n" "$(gdate +"%s" --date="${BASH_REMATCH[3]}${BASH_REMATCH[2]}${BASH_REMATCH[1]} ${BASH_REMATCH[4]}")" "${BASH_REMATCH[6]}"
    fi
done <file

AWK: convert timestamp to epoch; first record always returns -1

If you wanted to use FPAT to split your input into fields, then you'd be using it in the wrong place: it would have to be set in the BEGIN section. But that's not what you're trying to do here; your data is ,-separated and you're just trying to separate your timestamp into 2-digit segments. To do that you'd use patsplit() instead of FPAT (both are gawk-only, just like mktime()):

$ cat tst.awk
BEGIN { FS=OFS="," }
{
    # change timestamp to epoch (without TZ correction)
    epoch_returned = timestamp_to_epoch($1)
    print $0, epoch_returned
}

function timestamp_to_epoch(timestamp_in,    t, epoch_out) {
    patsplit(timestamp_in, t, /[0-9][0-9]/)
    epoch_out = mktime(t[1] t[2] " " t[3] " " t[4] " " t[5] " " t[6] " " t[7])
    return epoch_out
}


$ awk -f tst.awk file
2017-09-16 18:14:00,80465,1505603640
2017-09-19 18:23:00,80898,1505863380
2017-09-21 08:05:00,81253,1505999100
2017-09-27 18:20:00,82155,1506554400
2017-10-03 18:36:00,82902,1507073760
2017-10-09 18:33:00,83699,1507591980

But personally I'd use plain old split() instead of patsplit():

split(timestamp_in,t,/[- :]/)
epoch_out = mktime(t[1] " " t[2] " " t[3] " " t[4] " " t[5] " " t[6])
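
As a quick check, here is the split() variant run as a one-liner on the first data line from above. TZ=UTC is used only so the number is reproducible; the sample run above was done in the answerer's local timezone, hence the different epoch value:

$ echo '2017-09-16 18:14:00,80465' | TZ=UTC awk 'BEGIN{FS=OFS=","} {split($1,t,/[- :]/); print $0, mktime(t[1]" "t[2]" "t[3]" "t[4]" "t[5]" "t[6])}'
2017-09-16 18:14:00,80465,1505585640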

Bash: Obtain epoch time from a datetime string taken from a file using awk

This may be what you want; it needs gawk:

$ awk '{t=$1 FS $2; gsub(/[-:]/," ",t); print mktime(t)}' file

1528173351
1528191142

Or perhaps this, to restrict it to the Controller startup/shutdown lines:

$ awk '/Controller (startup|shutdown)/{t=$1 FS $2; 
gsub(/[-:]/," ",t);
print mktime(t)}' file

AWK command to change epoch time to date and list epoch on same line

You could try:

awk '{ printf "%s -- %s\n", strftime("%c",$1), $0 }' file
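
For example, with an epoch value from earlier in this page, an explicit format instead of the locale-dependent %c, and TZ=UTC so the output is reproducible (strftime() needs gawk):

$ echo 1505603640 | TZ=UTC awk '{ printf "%s -- %s\n", strftime("%Y-%m-%d %H:%M:%S",$1), $0 }'
2017-09-16 23:14:00 -- 1505603640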

AWK date to epoch

I think awk is a bit heavy for this job; cut may be a little lighter:

tail -1 MyFile | date -d "$(cut -d, -f7)" +%s

But of course you can do it with awk as well:

tail -1 MyFile | date -d "$(awk -F, '{ print $7 }')" +%s
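
If you would rather stay inside awk, mktime() avoids spawning date at all. This is only a sketch: it needs gawk and it assumes field 7 looks like "YYYY-MM-DD HH:MM:SS", which the question doesn't actually show:

# assumes gawk and a "YYYY-MM-DD HH:MM:SS" timestamp in field 7
tail -1 MyFile | awk -F, '{ gsub(/[-:]/, " ", $7); print mktime($7) }'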

Using awk mktime to transform datetime fields to epoch in CSV

Just set the timezone (TZ) variable to UTC before calling awk, or set the UTC flag for mktime() (its optional second argument, available since gawk 4.2):

$ awk 'BEGIN{print mktime("2018 11 25 23 00 26")}'
1543208426

$ TZ=UTC awk 'BEGIN{print mktime("2018 11 25 23 00 26")}'
1543186826

$ awk 'BEGIN{print mktime("2018 11 25 23 00 26",1)}'
1543186826

$ awk 'BEGIN{print mktime("2018 11 25 22 00 26",1)}'
1543183226
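
Applied to a CSV row, a minimal sketch (the input line is made up, and the utc-flag argument to mktime() needs gawk 4.2 or later):

$ echo '2018-11-25 23:00:26,some,data' | gawk 'BEGIN{FS=OFS=","} {d=$1; gsub(/[-:]/," ",d); $1=mktime(d,1); print}'
1543186826,some,data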

