How to Use Python to Login to a Webpage and Retrieve Cookies for Later Usage

How to use Python to login to a webpage and retrieve cookies for later usage?

# Python 2 (urllib2/cookielib); a Python 3 equivalent is sketched below
import urllib, urllib2, cookielib

username = 'myuser'
password = 'mypassword'

# Cookie jar that the opener will reuse for every request in this session
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
# The keys must match the input names of the site's login form
login_data = urllib.urlencode({'username': username, 'password': password})
opener.open('http://www.example.com/login.php', login_data)
resp = opener.open('http://www.example.com/hiddenpage.php')
print resp.read()

resp.read() returns the raw HTML of the page you opened, and you can keep using opener to fetch any other page with your session cookie.
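urllib2 and cookielib only exist in Python 2. A minimal Python 3 sketch of the same approach, using urllib.request and http.cookiejar (the example.com URLs and field names are placeholders):

import urllib.parse
import urllib.request
import http.cookiejar

username = 'myuser'
password = 'mypassword'

# Cookie jar shared by every request made through this opener
cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))

# POST data must be bytes in Python 3; the keys must match the login form's input names
login_data = urllib.parse.urlencode({'username': username, 'password': password}).encode('utf-8')
opener.open('http://www.example.com/login.php', login_data)

# Subsequent requests reuse the session cookie stored in cj
resp = opener.open('http://www.example.com/hiddenpage.php')
print(resp.read())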

Login to website using python

I would recommend using the wonderful requests module.

The code below will get you logged into the site and persist the cookies for the duration of the session.

import requests

EMAIL = ''
PASSWORD = ''

URL = 'http://friends.cisv.org'

def main():
    # Start a session so we can have persistent cookies
    session = requests.Session()

    # This is the form data that the page sends when logging in
    login_data = {
        'loginemail': EMAIL,
        'loginpswd': PASSWORD,
        'submit': 'login',
    }

    # Authenticate
    r = session.post(URL, data=login_data)

    # Try accessing a page that requires you to be logged in
    r = session.get('http://friends.cisv.org/index.cfm?fuseaction=user.fullprofile')

if __name__ == '__main__':
    main()
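Requests keeps the cookies only for the lifetime of the Session object. If you want them for a later run (the "later usage" in the title), you can serialize the cookie jar to disk and load it back; a minimal sketch using requests.utils, where cookies.json is just an example filename:

import json
import requests

session = requests.Session()
# ... log in with session.post(...) as above ...

# Save the session cookies to disk
with open('cookies.json', 'w') as f:
    json.dump(requests.utils.dict_from_cookiejar(session.cookies), f)

# In a later run: restore the cookies into a fresh session
session = requests.Session()
with open('cookies.json') as f:
    session.cookies = requests.utils.cookiejar_from_dict(json.load(f))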

How to use Python Requests to login to website, store cookie, then access another page on the website?

You should be posting all the required data; you can use bs4 to parse the login page and pull out the values you need:

from requests import session
from bs4 import BeautifulSoup

data = {
    'username': 'MY_USERNAME',
    'password': 'MY_PASSWORD'
}

head = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"}

with session() as s:
    # Parse the login page to pick up the hidden form fields (return URL, token, etc.)
    soup = BeautifulSoup(s.get("https://fif.com/login").content, "html.parser")
    form_data = soup.select('form[action^="/login?task"] input')
    data.update({inp["name"]: inp.get("value", "") for inp in form_data if inp.get("name") and inp["name"] not in data})
    s.post('https://fif.com/login?task=user.login', data=data, headers=head)
    resp = s.get('https://fif.com/tools/capacity')

If you make a request and look at the form data in Chrome's dev tools or Firebug, it looks like:

username:foo
password:bar
return:aW5kZXgucGhwP29wdGlvbj1jb21fdXNlcnMmdmlldz1wcm9maWxl
d68a2b40daf7b6c8eaa3a2f652f7ee62:1
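The last two entries look like a return URL and a one-time token coming from hidden inputs, which is why the login page has to be parsed first. To compare what bs4 scraped with what the browser sends, you can print the fields (a small sketch reusing soup from the code above):

# Print every input the login form would post
for inp in soup.select('form[action^="/login?task"] input'):
    print(inp.get("name"), inp.get("value"))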

Python - Retrieve and use a cookie to download a file

Have you tried simple basic authentication:

import requests
from requests.auth import HTTPBasicAuth

url2 = 'https://e4ftl01.cr.usgs.gov/MOLA/MYD14A2.006/2017.10.24/MYD14A2.A2017297.h19v01.006.2017310142443.hdf'
r = requests.get(url2, auth=HTTPBasicAuth('user', 'pass'))

or read this example
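If the server sets a session cookie after the authenticated request, a requests.Session will carry it along for the download itself; a minimal sketch, assuming the server accepts Basic auth and using an example output filename, that streams the file to disk:

import requests
from requests.auth import HTTPBasicAuth

url2 = 'https://e4ftl01.cr.usgs.gov/MOLA/MYD14A2.006/2017.10.24/MYD14A2.A2017297.h19v01.006.2017310142443.hdf'

with requests.Session() as s:
    s.auth = HTTPBasicAuth('user', 'pass')
    r = s.get(url2, stream=True)   # any cookies set here stay in the session
    r.raise_for_status()
    with open('MYD14A2.A2017297.hdf', 'wb') as f:   # example filename
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)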

How to post data to a website, store cookies, then get another page

I came up with the following solution using Mechanize. Cookies are managed by mechanize.Browser.

import time
import mechanize

br = mechanize.Browser()
resp = br.open('https://xxxxxxxxxxxxxxxxxxx')

# Fill in the first form on the login page and submit the credentials
br.select_form(nr=0)
br['username'] = username
br['password'] = password
response = br.submit()
time.sleep(1)

# The browser keeps the session cookie, so protected pages are now reachable
resp_second = br.open('https://secretwebpage')
print(resp_second.read())
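To reuse those cookies in a later run, you can attach your own cookie jar to the browser and save it to disk; a minimal sketch, assuming mechanize's LWPCookieJar (which mirrors the old cookielib API) and an example cookies.txt filename:

import mechanize

# Attach an explicit cookie jar so it can be saved and reloaded later
cj = mechanize.LWPCookieJar('cookies.txt')
br = mechanize.Browser()
br.set_cookiejar(cj)

# ... log in with br as above ...

cj.save(ignore_discard=True, ignore_expires=True)   # persist the session cookies
# In a later run: cj.load('cookies.txt', ignore_discard=True, ignore_expires=True)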

