How to Retrieve Data from Dynamic Table - Selenium Python


This error message...

TypeError: must be str, not int

...implies that, at the mentioned line, your program expects a string argument, whereas an integer argument was passed to it.
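As a minimal illustration (the table id table01 is taken from the question), concatenating an int loop index directly into an XPath string raises this TypeError, and wrapping the index in str() fixes it:

```python
row = 2

# Concatenating an int into a str raises the TypeError from the question:
try:
    xpath = ".//*[@id='table01']/tbody/tr[" + row + "]/td[1]"
except TypeError as e:
    print(type(e).__name__)  # TypeError

# Casting the index with str() builds the XPath correctly:
xpath = ".//*[@id='table01']/tbody/tr[" + str(row) + "]/td[1]"
print(xpath)  # .//*[@id='table01']/tbody/tr[2]/td[1]
```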

To retrieve the data from the dynamic web table, you need to cast the integer indices to strings (and drop the stray quotes around them) as follows:

for row in range(rows):
    for col in range(columns):
        values = self.driver.find_element_by_xpath(".//*[@id='table01']/tbody/tr[" + str(row) + "]/td[" + str(col) + "]").text

Note that XPath indices are 1-based, so depending on how rows and columns were counted you may need str(row + 1) and str(col + 1).

Getting Dynamic Table Data With Selenium Python

I modified your script as below.

You should locate each element inside the for loop, otherwise it will raise a StaleElementReferenceException.

Also, use WebDriverWait to wait for the elements to be visible before finding them.

from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from time import sleep

browser = webdriver.Chrome()
browser.get('https://www.nyse.com/listings_directory/stock')

symbol_list = []

while True:
    try:
        table_data = WebDriverWait(browser, 10).until(
            EC.visibility_of_all_elements_located((By.XPATH, "//table//td")))
        for i in range(1, len(table_data) + 1):
            td_text = browser.find_element_by_xpath("(//table//td)[" + str(i) + "]").text
            print(td_text)
            symbol_list.append(td_text)
        next_page = WebDriverWait(browser, 10).until(
            EC.element_to_be_clickable((By.XPATH, '//a[@href="#" and contains(text(),"Next")]')))
        next_clickable = next_page.find_element_by_xpath("..").get_attribute("class")  # parent <li>
        if next_clickable == 'disabled':
            break
        print("Go to next page ...")
        next_page.click()
        sleep(3)
    except Exception as e:
        print(e)
        break
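Since the loop above appends every td text to one flat symbol_list, pulling out just the ticker symbols depends on the table's column count. A hypothetical sketch, assuming two columns (symbol, company name) and made-up sample data:

```python
# Hypothetical data standing in for the flat td_text list collected above;
# the real NYSE table's column count may differ.
td_texts = ["IBM", "International Business Machines", "MSFT", "Microsoft Corp"]

# With two columns per row, the symbols are every second entry.
symbols = td_texts[0::2]
print(symbols)  # ['IBM', 'MSFT']
```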

Scrape data from dynamic table using Python & Selenium

The first problem: since the website loads dynamically, you need to give the page some time to load fully. To solve it you can use this:

time.sleep(2)
# change the number of seconds according to your need
element = driver.find_element_by_xpath('//*[@id="ht-results-table"]/tbody[1]/tr[2]/td[4]').text

The better way is to use Explicit Waits, which wait for the element to load before executing the next step.
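For intuition, an explicit wait is essentially a poll-until-truthy loop with a timeout. A rough pure-Python sketch of the idea (not Selenium's actual implementation):

```python
import time

def wait_until(condition, timeout=10, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` expires."""
    end = time.monotonic() + timeout
    while time.monotonic() < end:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %s seconds" % timeout)

# Toy condition that becomes true on the third poll:
calls = {"n": 0}
def ready():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_until(ready, timeout=5, poll=0.01))  # True
```

WebDriverWait works the same way: it keeps re-evaluating the expected condition every poll interval until it succeeds or times out, instead of sleeping for a fixed duration.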

The second problem: you shouldn't just copy the full XPath from the Chrome dev tools, as such absolute paths are brittle.

To get all the names, you can do this:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

PATH = r'C:\Program Files (x86)\chromedriver.exe'
driver = webdriver.Chrome(PATH)
driver.get('https://qmjhldraft.rinknet.com/results.htm?year=2018')

try:
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.XPATH, "//tr[@rnid]/td[3]"))
    )
finally:
    names = driver.find_elements_by_xpath('//tr[@rnid]/td[3]')

    for name in names:
        print(name.text)

    driver.quit()

Dynamic content from table - can't scrape with Selenium

To extract the data from the Transfers table of the Token Natluk Community - polygonscan webpage, you need to induce WebDriverWait for visibility_of_element_located(), then parse the table with Pandas' read_html(), using the following locator strategy:

Code Block:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

options = Options()
options.add_argument("start-maximized")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
s = Service('C:\\BrowserDrivers\\chromedriver.exe')
driver = webdriver.Chrome(service=s, options=options)
driver.get("https://polygonscan.com/token/0x64a795562b02830ea4e43992e761c96d208fc58d")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button#btnCookie"))).click()
WebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR,"iframe#tokentxnsiframe")))
data = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "table.table.table-md-text-normal"))).get_attribute("outerHTML")
df = pd.read_html(data)
print(df)

Console Output:

[                                             Txn Hash               Method  ...       Quantity Unnamed: 7
0 0x75411962e2e6527f5a032198816cafe4e1a475a4ebdf... Add Liquidity ET... ... 37929.272725 NaN
1 0x27f61026e9df4c0c14c6259f624917a12ce7f6c20eb7... Swap Exact ETH F... ... 50814.040553 NaN
2 0xd9ee0ed46ef8ce891e81787b25176530a30df6d2b98e... Add Liquidity ET... ... 55288.744543 NaN
3 0x3f3982a38ff3f5c5890eff12a9d3f7061fea88942d96... Add Liquidity ET... ... 978.219682 NaN
4 0x503fad1b044b98c58700d185eb8cb9c16a483fd748d7... Unstake ... 8884.911763 NaN
5 0x503fad1b044b98c58700d185eb8cb9c16a483fd748d7... Unstake ... 9026.302437 NaN
6 0xdc75ad4e37e232f8536305ef8c628fd9391c1f2c5d25... Transfer ... 114000.000000 NaN
7 0x218ae4183e632c47edf581705871a3f16dc32cc513ef... Add Liquidity ET... ... 45125.111655 NaN
8 0x9fbe017ebf37aea501050a68c8ab1d78734b576b5585... Swap Exact ETH F... ... 2563.443420 NaN
9 0xd30adcf551285d4b72495d55cc59ffaed82a224b138c... Claim ... 14923.359293 NaN
10 0x65c733e468df90eaed701bc4f1e21a4090924b1225c1... Swap Exact ETH F... ... 33055.752836 NaN
11 0x82c215000f9807a3a40fe3ef3e461ceac007513b49ff... Swap Exact ETH F... ... 6483.182959 NaN
12 0x6155da0b5b206a8ffffa300a5d75e23fa3833b9b079b... Swap Exact ETH F... ... 13005.174783 NaN
13 0x3435579c22e9fc42f6921229449c8cb18d133a207a66... Transfer ... 47500.000000 NaN
14 0x7a57be9b538e0c73df4b608a8323c2f678ba6136f9a9... Swap Exact ETH F... ... 19605.381370 NaN
15 0x8fe7787039c4a382f6420c78b48933dd59b0843c6ab4... Transfer ... 237500.000000 NaN
16 0x0e55aa0740f6c964db13efe52e1af58a35497f9a292d... Swap Exact ETH F... ... 6561.223602 NaN
17 0x9897d4a2f56a49a935a36183eee3dc846fc19610812c... Swap Exact ETH F... ... 19762.821100 NaN
18 0xf9c7d67bf679624640f20d69636f58f634bf66e7daed... Add Liquidity ET... ... 74224.394200 NaN
19 0x89b490947952e37e10a3619f8fbcb5a80b15f0e2f4aa... Add Liquidity ET... ... 14589.910231 NaN
20 0xc94e56bb3be04e610c6a89e934fb84bba58922f6641a... Transfer ... 142500.000000 NaN
21 0x68a5c142bbfa86b0aa4f469eb17f58e26b5251bd83e9... Swap Exact ETH F... ... 3307.607665 NaN
22 0x2597e521fd0a7e4edffe66007129c93d1dc22485b86a... Swap Exact ETH F... ... 66868.030051 NaN
23 0x14cc91039f59fd9143bc94132b9f053970947b79a16f... Swap Exact Token... ... 42683.069577 NaN
24 0xa5ab4179af827c6883e52cbc010509b701795a8136a0... Swap Exact ETH F... ... 3423.618394 NaN

[25 rows x 8 columns]]
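If pandas (or its lxml dependency) isn't available, the same step — flattening a table's outerHTML into rows of cell text — can be sketched with the standard-library parser. The sample HTML below is made up for illustration:

```python
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collect the text of <td>/<th> cells into a list of rows."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell, self._in_cell = [], [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
            self._cell = []

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._in_cell = False
            self._row.append("".join(self._cell).strip())
        elif tag == "tr":
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_cell:
            self._cell.append(data)

# Made-up table HTML standing in for the element's outerHTML:
html = ("<table><tr><th>Txn Hash</th><th>Method</th></tr>"
        "<tr><td>0xabc</td><td>Transfer</td></tr></table>")
p = TableParser()
p.feed(html)
print(p.rows)  # [['Txn Hash', 'Method'], ['0xabc', 'Transfer']]
```

pd.read_html() does the same job in one call and returns DataFrames, which is why the answer above prefers it.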

Python Selenium - Scraping a Table from a Dynamic Page

Instead of copying and pasting the content, I used the soup.find_all function from the BeautifulSoup library to find the table. Then I used Pandas to create a DataFrame from the table and write it to my Excel sheet.

Here is the code I used:

from bs4 import BeautifulSoup
import pandas as pd

html = driver.page_source
soup = BeautifulSoup(html, "html.parser")
table = soup.find_all("table")[1]
df = pd.read_html(str(table))[0]
# print(df)
df.to_excel("<PATH TO EXCEL SHEET>", sheet_name="<SHEET NAME>", index=False, header=False)

Selenium, python dynamic table

Try this XPath:

//table[@class = 'dadosAgencia']//tr

It would be like this:

elements = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located(
    (By.XPATH, "//table[@class = 'dadosAgencia']//tr")))

This gives you a list of all located elements. To print the text of each element you can use this:

for element in elements:
    print(element.text)

Note: you have to add some imports:

from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait

web scraping with selenium on a dynamic table

First, you need to wait until the data is located, using .visibility_of_all_elements_located. You can use this locator to wait:

//table[contains(@class, "table-sm")]//a

After all the data is located, you can extract the table data. Try the code below:

driver.get('https://new.cryptoxscanner.com/binance/live')

#UPDATED HERE
option = Select(WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//select[contains(., "All")]'))))
option.select_by_visible_text('All')

WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, '//table[contains(@class, "table-sm")]//a')))
data = driver.find_element_by_class_name('table-responsive')
print(data.text)

Add the following imports:

#UPDATED HERE
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.by import By

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

