What Is the Quickest Way to HTTP GET in Python

What is the quickest way to HTTP GET in Python?

Python 3:

import urllib.request
contents = urllib.request.urlopen("http://example.com/foo/bar").read()

Python 2:

import urllib2
contents = urllib2.urlopen("http://example.com/foo/bar").read()

See the documentation for urllib.request and for read() on the response object.
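In current Python 3 code, a with statement closes the response for you, and read() returns bytes that usually need decoding. A minimal sketch of the same request:

import urllib.request

# urlopen() returns an http.client.HTTPResponse, usable as a context manager
with urllib.request.urlopen("http://example.com/foo/bar") as response:
    contents = response.read().decode("utf-8")  # read() returns bytes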

Fastest way for a Python HTTP GET request

Write a C module that does everything, or fire up a profiler to find out exactly where in the code the time is spent, and then fix that part.

Just as a guideline: Python should be faster than the network, so the HTTP request code probably isn't your problem. My guess is that you're doing something wrong, but since you don't provide us with any information (like the code you wrote), we can't help you.
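If you do reach for a profiler, cProfile from the standard library is the usual first stop. A minimal sketch, where fetch() is a hypothetical stand-in for your own request code:

import cProfile
import urllib.request

def fetch(url):
    # hypothetical stand-in for the request code you want to measure
    return urllib.request.urlopen(url).read()

# sort by cumulative time to see which calls dominate
cProfile.run('fetch("http://example.com/")', sort='cumulative')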

What is the fastest way to send 100,000 HTTP requests in Python?

Twistedless solution (Python 2):

from urlparse import urlparse
from threading import Thread
import httplib, sys
from Queue import Queue

concurrent = 200

def doWork():
    # each worker pulls URLs off the shared queue until the program exits
    while True:
        url = q.get()
        status, url = getStatus(url)
        doSomethingWithResult(status, url)
        q.task_done()

def getStatus(ourl):
    try:
        url = urlparse(ourl)
        conn = httplib.HTTPConnection(url.netloc)
        conn.request("HEAD", url.path)  # HEAD: we only want the status, not the body
        res = conn.getresponse()
        return res.status, ourl
    except:
        return "error", ourl

def doSomethingWithResult(status, url):
    print status, url

q = Queue(concurrent * 2)  # bounded queue keeps the file reader from racing ahead
for i in range(concurrent):
    t = Thread(target=doWork)
    t.daemon = True  # daemon threads die with the main thread
    t.start()
try:
    for url in open('urllist.txt'):
        q.put(url.strip())
    q.join()
except KeyboardInterrupt:
    sys.exit(1)

This one is slightly faster than the Twisted solution and uses less CPU.
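The snippet above is Python 2 (httplib, Queue, urlparse, the print statement). A minimal Python 3 port of the same idea, mostly just renaming the standard-library modules:

from urllib.parse import urlparse
from threading import Thread
from queue import Queue
import http.client
import sys

concurrent = 200

def doWork():
    while True:
        url = q.get()
        status, url = getStatus(url)
        print(status, url)
        q.task_done()

def getStatus(ourl):
    try:
        url = urlparse(ourl)
        conn = http.client.HTTPConnection(url.netloc)
        conn.request("HEAD", url.path)
        res = conn.getresponse()
        return res.status, ourl
    except Exception:
        return "error", ourl

q = Queue(concurrent * 2)
for i in range(concurrent):
    Thread(target=doWork, daemon=True).start()

try:
    for url in open('urllist.txt'):
        q.put(url.strip())
    q.join()
except KeyboardInterrupt:
    sys.exit(1)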

How do you send an HTTP GET web request in Python?

You can use urllib2 (Python 2):

import urllib2
content = urllib2.urlopen(some_url).read()
print content

You can also use httplib (Python 2):

import httplib
conn = httplib.HTTPConnection("www.python.org")
conn.request("HEAD","/index.html")
res = conn.getresponse()
print res.status, res.reason
# Result:
200 OK

Or you can use the requests library:

import requests
r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
r.status_code
# Result:
200

Fastest way to make 800+ GET requests using the same URL but passing different IDs every time

You can use an async library, but the easiest solution here would be to do something like:

import requests
from concurrent.futures import ThreadPoolExecutor

def get(device_id):
    url_get_event = 'https://some_url&source={}'.format(device_id)
    return requests.get(url_get_event)

# device_ids is assumed to be your iterable of IDs to query
with ThreadPoolExecutor() as exc:
    # map() returns a lazy iterator; list() collects all the responses
    responses = list(exc.map(get, device_ids))

If the rest of your code is small, you may want to submit the functions to the executor and use as_completed to handle the results in the main thread while the remaining requests are still running, as in the sketch below.
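A sketch of that pattern, reusing the get() function and device_ids iterable from above:

from concurrent.futures import ThreadPoolExecutor, as_completed

with ThreadPoolExecutor() as exc:
    futures = {exc.submit(get, device_id): device_id for device_id in device_ids}
    for future in as_completed(futures):
        # handle each response as soon as its request finishes
        print(futures[future], future.result().status_code)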


