How to Grep Download Speed from Wget Output

Parse download speed from wget output in terminal

I changed the URL to one that works, redirected stderr onto stdout, and used grep with --only-matching (-o) and a regex.

sudo wget -O /dev/null http://www.google.com 2>&1 | grep -o '\([0-9.]\+ [KM]B/s\)'

wget speed result as pass or fail

Add

| awk '{ if (($1 > 10) && ($2 == "MB/s")) { printf("SPEED IS TOO DAMN HIGH - %s\n", $0); } else if (($1 > 5) && ($2 == "MB/s")) { printf("PASS - %s\n", $0); } else { printf("FAIL - %s\n", $0); } }'

at the end of your command line.
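Putting the two pieces together, the full pipeline looks like this (the 10 MB/s and 5 MB/s thresholds are just the example values from above; substitute your own URL and limits):

```shell
# Extract "N.NN MB/s" (or KB/s) from wget's stderr, then classify it with awk.
wget -O /dev/null http://www.google.com 2>&1 \
  | grep -o '[0-9.]\+ [KM]B/s' \
  | awk '{
      if (($1 > 10) && ($2 == "MB/s"))     printf("SPEED IS TOO DAMN HIGH - %s\n", $0);
      else if (($1 > 5) && ($2 == "MB/s")) printf("PASS - %s\n", $0);
      else                                 printf("FAIL - %s\n", $0);
    }'
```

Note that any KB/s result falls through to FAIL, since `$2` is then not equal to "MB/s".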

Wget option to output last line to the file on openWRT

See How to grep download speed from wget output?

One of the solutions:
wget -O /dev/null your_url 2>&1 | grep -o '[0-9.]\+ [KM]B/s'

How do I Find out words within a line using grep in Linux?

You could try with this grep:

$ grep -oE "[0-9.]+ seconds" file
244.434 seconds

where file contains:

Transferred 6.7828 MB in 244.434 seconds (28.4151 KB/sec)

EDIT
(suggested by @Cyrus)

Assuming that you just want the amount of seconds:

$ grep -oP "[0-9.]+(?= seconds)" file
244.434
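The same PCRE technique also extracts the transfer rate from the parenthesized part of that line; `\K` discards everything matched so far, and the lookahead keeps the unit out of the output (a sketch, assuming the same sample line):

```shell
# \(\K  -> require a literal "(" but drop it from the match
# (?= KB/sec\)) -> require " KB/sec)" to follow, without including it
grep -oP '\(\K[0-9.]+(?= KB/sec\))' file
# → 28.4151
```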

remove duplicate lines in wget output

In some cases, tools like Beautiful Soup become more appropriate.

Trying to do this with only wget & grep becomes an interesting exercise. This is my naive try, but I am sure there are better ways of doing it:

$ wget -q "http://www.sawfirst.com/selena-gomez" -O - |
  grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" |
  grep -i "selena-gomez" |
  while read -r url; do
    if [[ $url == *jpg ]]; then
      echo "$url"
    else
      wget -q "$url" -O - |
        grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" |
        grep -i "selena-gomez" |
        grep "\.jpg$" &
    fi
  done | sort -u > selena-gomez

In the first round:

wget -q "http://www.sawfirst.com/selena-gomez" -O -|
grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" |
grep -i "selena-gomez"

URLs matching the desired name are extracted. Inside the while loop, $url may already end with .jpg, in which case it is simply printed instead of fetching the content again.

This approach only goes one level deep, and to try to speed things up it uses & at the end, with the intention of making multiple requests in parallel:

grep "\.jpg$" &

One thing still to check is whether the trailing & needs an explicit wait for all background jobs to finish before sort -u runs.

It ends with sort -u to return a unique list of items found.
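On the wait question: since the backgrounded greps inherit the write end of the pipe, sort -u already blocks until they all exit, but an explicit wait before the loop's output closes makes the intent unambiguous. A minimal stand-alone sketch of the pattern (sleep stands in for the second wget | grep round trip; the URLs are made up):

```shell
{
  for u in a.jpg b c.jpg; do
    case $u in
      *.jpg) echo "$u" ;;                       # already an image URL: print it
      *)     { sleep 0.2; echo "$u.jpg"; } & ;; # stand-in for the parallel fetch
    esac
  done
  wait  # block until every background "fetch" has written its line
} | sort -u
```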

How can I show the wget progress bar only?

You can use the following filter:

progressfilt ()
{
    local flag=false c count cr=$'\r' nl=$'\n'
    while IFS='' read -d '' -rn 1 c
    do
        if $flag
        then
            printf '%s' "$c"
        else
            if [[ $c != $cr && $c != $nl ]]
            then
                count=0
            else
                ((count++))
                if ((count > 1))
                then
                    flag=true
                fi
            fi
        fi
    done
}

Usage:

$ wget --progress=bar:force http://somesite.com/TheFile.jpeg 2>&1 | progressfilt
100%[======================================>] 15,790 48.8K/s in 0.3s

2011-01-13 22:09:59 (48.8 KB/s) - 'TheFile.jpeg' saved [15790/15790]

This function depends on a sequence of 0x0d 0x0a 0x0d 0x0a 0x0d being sent right before the progress bar is started. This behavior may be implementation-dependent.
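If your wget is new enough, no filter function is needed at all: since GNU wget 1.16, -q combined with --show-progress suppresses all output except the progress bar.

```shell
# wget >= 1.16: quiet mode, but keep the progress bar
wget -q --show-progress http://somesite.com/TheFile.jpeg
```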

wget/curl large file from google drive

WARNING: This functionality is deprecated; the public-folder hosting described below no longer works.


Have a look at this question: Direct download from Google Drive using Google Drive API

Basically you have to create a public directory and access your files by relative reference with something like

wget https://googledrive.com/host/LARGEPUBLICFOLDERID/index4phlat.tar.gz

Alternatively, you can use this script: https://github.com/circulosmeos/gdown.pl
