Python Requests - No Connection Adapters

You need to include the protocol scheme:

'http://192.168.1.61:8080/api/call'

Without the http:// part, requests has no idea how to connect to the remote server.

Note that the protocol scheme must be all lowercase; if your URL starts with HTTP:// for example, it won’t find the http:// connection adapter either.
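If the URLs come from user input or a file and may arrive without a scheme, one option is to normalize them before calling requests. A minimal sketch, with a hypothetical helper name:

```python
def ensure_scheme(url, default_scheme="http"):
    """Hypothetical helper: prepend a scheme when the URL lacks one."""
    # A bare host like '192.168.1.61:8080/api/call' has no '://',
    # so requests would raise MissingSchema / find no adapter for it.
    if "://" not in url:
        return f"{default_scheme}://{url}"
    return url

print(ensure_scheme("192.168.1.61:8080/api/call"))  # http://192.168.1.61:8080/api/call
print(ensure_scheme("https://example.com/api"))     # unchanged
```

You could then call requests.get(ensure_scheme(raw_url)) so the adapter lookup always sees a scheme.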

No connection adapters were found?

So the problem is that you are getting a list for each row, and you just need to pick the element at index zero:

for line in csv_urls:
    r = requests.get(line[0])  # .text

Python Requests Cannot Find Connection Adapters

change your get request to use only one type of quotes:

requests.get("http://example.com")

You are trying to use two types of quotes at the same time, so the inner quotes become part of the URL string itself, and that gives a proper error:

requests.get('"http://example.com"')
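If the stray quotes come from somewhere you can't easily edit (a file, a spreadsheet export), one option is to strip them off before making the request. A minimal sketch:

```python
url = '"http://example.com"'  # quotes accidentally baked into the string
print(repr(url))              # '"http://example.com"' -- note the inner quotes

# Strip any stray single or double quotes from both ends
cleaned = url.strip('\'"')
print(cleaned)                # http://example.com
```

After cleaning, requests.get(cleaned) can match the http:// connection adapter as usual.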

Python requests No connection adapters were found

There is a basic mistake: you need to add the http:// scheme to the URL.

Try this:

import requests
data = requests.get('http://127.0.0.1:8000/')

Hope it helps.

InvalidSchema("No connection adapters were found for {!r}".format(url)) while using a URL parsed by ConfigParser with the requests module

The main issue is that requests receives a URL like 'url.com', with the quotation marks included in the string, instead of url.com, so no adapter matches. The solution is to not put quotation marks around values in config.ini files:

[default]
root_url = https://reqres.in/api/users?page=2

Also consider configuring the query parameters in your code rather than in the config file, and use this instead:

requests.get(config['default']['root_url'], params={'page': 2})
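To see why the quotes break things: ConfigParser keeps quotation marks as literal characters in the value, so a quoted root_url is passed to requests with the quotes still attached. A small demonstration (reading the config from a string instead of config.ini for illustration):

```python
import configparser

cfg = configparser.ConfigParser()

# Quoted value: the quotes become part of the string
cfg.read_string('[default]\nroot_url = "https://reqres.in/api/users"\n')
print(repr(cfg['default']['root_url']))  # '"https://reqres.in/api/users"'

# Unquoted value: requests gets a clean URL
cfg.read_string('[default]\nroot_url = https://reqres.in/api/users\n')
print(cfg['default']['root_url'])        # https://reqres.in/api/users
```

With the unquoted form, requests.get(cfg['default']['root_url']) finds the https:// connection adapter as expected.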

InvalidSchema No connection adapters Python Requests

In the for url_row in reader loop url_row is a list of strings. This means that url in turn is a list and that is what you're passing in to requests.get.

If you know that the row you're reading will always have only one cell, or that the first cell will always be the one with the URL in it, then you can replace url = url_row with url = url_row[0]. If there's the possibility of empty rows then it may be worth adding a check like

if len(url_row) == 0:
    continue

before accessing it.
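Putting those two fixes together, a sketch of the loop (using an in-memory string in place of the real CSV file, with made-up URLs):

```python
import csv
import io

# Stand-in for p.csv: one URL per row, with a blank row in the middle
data = io.StringIO('http://example.com/a\n\nhttp://example.com/b\n')

urls = []
for url_row in csv.reader(data):
    if len(url_row) == 0:   # skip empty rows, as suggested above
        continue
    urls.append(url_row[0])  # each row is a list; take the first cell

print(urls)  # ['http://example.com/a', 'http://example.com/b']
```

Each urls entry is now a plain string, which is what requests.get expects.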

Alternatively, if the file is just a list of URLs and not actually a CSV file then you could read them in directly:

with open('p.csv', newline='', encoding='utf-8') as File:
    for line in File:
        url = line.strip()
        get_page_data(get_html(url))

