Wget: Unsupported Scheme on Non-HTTP URL

wget: Unsupported scheme on non-http URL

man wget shows:

It supports HTTP, HTTPS, and FTP protocols, as well as retrieval
through HTTP proxies.

Try curl; it supports file:// URLs. Also note that you probably want three slashes here: two belong to the protocol indicator (file://) and one belongs to the path (/myhost/system.log).

export URL=file:///myhost/system.log
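
With the three slashes in place, curl can read the file directly (a minimal sketch; the local output name system.log is arbitrary):

# curl understands the file:// scheme; -s silences the progress meter,
# -o writes the body to a local file
curl -s "$URL" -o system.log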

How to download a file through FILE URI scheme in bash?

Try doing this:

export URI="file:///etc/passwd"
curl -s "$URI" > /tmp/l
cat /tmp/l

If you need to download a remote file, you should use another protocol (and scheme), like:

curl ftp://user:password@host:port/path/to/file

or

scp host:/path/to/file file

etc.

NOTE

curl is able to download file:// URLs, despite what you said.

Alert!: Unsupported URL scheme! error when sending bulk sms using lynx

The problem is the single quote before http:. Quotes are not processed after variable expansion, so the quote is passed literally to lynx, and there is no 'http URL scheme (note the leading quote), hence the error message.

Remove the quotes before http: and after +update.

#!/bin/bash
# The URL is split into two halves, with no quotes inside the strings,
# so the destination number can be spliced in between.
a="lynx -dump http://localhost:13013/cgi-bin/sendsms?from=8005&to="
b="&username=tester&password=foobar&smsc=smsc1&text=Test+mt+update"
for i in $(cat numbers.txt); do
    # unquoted on purpose: word splitting separates lynx, -dump, and the URL
    $a$i$b
    echo "sent $i"
done
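
A more defensive variant quotes the whole URL at expansion time instead of relying on word splitting (a sketch; it assumes numbers.txt holds one destination number per line):

#!/bin/bash
# Quoting the full URL means & and ? are never seen by the shell,
# so no quotes are needed inside the string itself.
while read -r num; do
    lynx -dump "http://localhost:13013/cgi-bin/sendsms?from=8005&to=${num}&username=tester&password=foobar&smsc=smsc1&text=Test+mt+update"
    echo "sent $num"
done < numbers.txt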

For more information about this, see

Setting an argument with bash

Why can't I download from S3 using wget?

The root cause is a bug in S3, as described here: https://stackoverflow.com/a/38285197/4323

One workaround is to use the requests library instead:

import requests
r = requests.get('https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2015-01.csv')

This works fine. You can inspect r.text or write it to a file. For the most efficient way, see https://stackoverflow.com/a/39217788/4323

Why does curl allow use of the file URL scheme, but not wget

Because Wget has not been written to support file:// URLs. (Its front web page clearly states: "GNU Wget is a free software package for retrieving files using HTTP, HTTPS and FTP".)

Allow me to point to a little curl vs wget comparison.
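
You can see the difference side by side with a local file (a quick sketch; /etc/hosts is just a file that exists on most systems):

curl -s file:///etc/hosts | head -n 3   # curl reads the file:// URL
wget file:///etc/hosts                  # wget refuses with an unsupported-scheme error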

R produces unsupported URL scheme error when getting data from https sites

Edit (May 2016): As of R 3.3.0, download.file() should handle SSL websites automatically on all platforms, making the rest of this answer moot.

You want something like this:

library(RCurl)
data <- getURL("https://dl.dropbox.com/u/7710864/data/csv_hid/ss06hid.csv",
               ssl.verifypeer=0L, followlocation=1L)

That reads the data into memory as a single string. You'll still have to parse it into a dataset in some way. One strategy is:

writeLines(data,'temp.csv')
read.csv('temp.csv')

You can also parse the data directly without writing it to a file:

read.csv(text=data)

Edit: A much easier option is actually to use the rio package:

library("rio")
import("https://dl.dropbox.com/u/7710864/data/csv_hid/ss06hid.csv")

This will read directly from the HTTPS URL and return a data.frame.

Error in download.file unsupported URL scheme

From the Details section of ?download.file:

Note that 'https://' URLs are only supported if '--internet2' or
environment variable 'R_WIN_INTERNET2' was set or 'setInternet2(TRUE)'
was used (to make use of Internet Explorer internals), and then only
if the certificate is considered to be valid.

Windows wget & being cut off

It's not Wget that cuts off the URL, but the command interpreter, which uses & to separate two commands, akin to ; in Unix shells. This is indicated by the "The system cannot find the file specified." error on the following line, where the interpreter tried to run the leftover "=alpha,asc" fragment as a command of its own.

To prevent this from happening, quote the entire URL:

wget.exe "http://www.imdb.com/search/title?genres=action&sort=alpha,asc&start=51&title_type=feature"

