C# Selenium Refresh All Stale Elements
The short answer is no, there is no way to automatically refresh stale elements. Basically, a stale element is one that is no longer attached to the DOM. That can happen for multiple reasons: the page reloads, the browser navigates away from the page and back, etc. In your case, switching tabs appears to be treated as a page change, which wipes out all of your references.
I would recommend not storing references, but instead creating methods that fetch and then use the element, e.g. click:
public void ClickDate()
{
    driver.FindElement(By.LinkText(Utility.getSheetData(path, 7, 1, 2))).Click();
}
That way you should get rid of all your stale elements because you aren't fetching them, switching tabs and then back again, and then clicking the stored reference. You always fetch and then click immediately.
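The same pattern in Python, for reference: store the locator, not the WebElement, and re-fetch on every use. This is a minimal sketch; the class and locator names are illustrative, and the driver is anything exposing Selenium's find_element(by, value):

```python
class DateLink:
    """Fetch-then-use wrapper: holds a locator, never a WebElement.

    Because a fresh lookup happens on every call, switching tabs or
    reloading the page between calls can never leave us holding a
    stale reference.
    """

    def __init__(self, driver, by, value):
        self.driver = driver
        self.by = by
        self.value = value  # only the locator is stored

    def click(self):
        # fetch and use immediately; nothing is cached across calls
        self.driver.find_element(self.by, self.value).click()
```

Each call to click() performs a fresh find_element, so the reference can never outlive the page state it was fetched from.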
Refresh page selenium
addtocart = driver.find_elements_by_xpath('somexpath')
while not addtocart:
    time.sleep(10)  # wait for 10 seconds
    driver.refresh()
    addtocart = driver.find_elements_by_xpath('somexpath')  # re-find to avoid a stale element exception
addtocart[0].click()
Just use find_elements (plural) and check whether the returned list is empty before clicking.
How to refresh Selenium Webdriver DOM data without reloading page?
Without knowing the content of the page, it's hard to craft a solution to your problem.
When your Selenium code selects elements from the webdriver, it does so on the page as it's loaded when your selector code executes, meaning that the page does not need to be reloaded in order to retrieve new elements. Instead, it seems like your problem is that the elements don't exist on the page yet, meaning it's possible that the search results hadn't loaded when your selector attempted to get a fresh copy of the elements.
A simple solution would be to increase the wait time between starting the search and selecting the search results, giving the page time to load them:
from selenium import webdriver
import time
# Load page
driver = webdriver.Firefox()
driver.get('https://www.example.com')
# Begin search
driver.find_element_by_tag_name('a').click()
# Wait for search results to load
time.sleep(5)
# Retrieve search results
results = driver.find_elements_by_class_name('result')
The downside is that this is heavily dependent on network conditions and on how long the search query takes to execute on your page.
A more complex but canonical solution would be to wait for the page to load the search results, perhaps by checking for an Ajax loading icon or watching for the results to change. A good place to start is WebDriverWait in Selenium.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
# Load page
driver = webdriver.Firefox()
driver.get('https://www.example.com')
# Begin search
driver.find_element_by_tag_name('a').click()
# Wait for search results to load
WebDriverWait(driver, 30).until(
    expected_conditions.invisibility_of_element_located((By.ID, 'ajax_loader'))
)
# Retrieve search results
results = driver.find_elements_by_class_name('result')
The downfall of this method is that it may take some time to get working, and it needs to be customized for each page whose updates you want to wait for.
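For intuition, WebDriverWait.until is essentially a poll loop. The sketch below models those semantics in plain Python (simplified: the condition here is a zero-argument callable, whereas Selenium passes the driver to it):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    A simplified model of WebDriverWait.until: returns the condition's
    truthy result, or raises TimeoutError once the deadline has passed.
    """
    deadline = time.monotonic() + timeout
    while True:
        value = condition()
        if value:
            return value  # truthy result ends the wait, as in Selenium
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(poll)
```

The key design point is the same as Selenium's: the wait returns as soon as the condition holds, so the timeout is an upper bound, not a fixed sleep.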
You mentioned that this method seems not to work for you. One suggestion would be (if it doesn't break the page) to manipulate the DOM before searching, clearing any existing results or elements matching your selector before waiting for the new results to load. This should fix problems with Selenium's WebDriverWait when waiting for the presence of elements matching your search-results selector.
driver.execute_script("var el = document.getElementById('results'); if (el) el.parentElement.removeChild(el);")
Additionally, since you mentioned that the page shouldn't reload, it may be that your page is using Ajax to load search results then modifying the DOM with JavaScript. It may be useful to inspect the network traffic (most browsers' DevTools should have a "Network" tab) and try to reverse engineer how the website is sending the search query and parsing the data.
import requests
# Search term
term = 'ja'
# Send request
request = requests.get('https://jqueryui.com/resources/demos/autocomplete/search.php?term=' + term)
# Print response
print(request.json())
This may violate certain sites' TOS or policies (in fact, any of these methods might), so watch out for that. It may also be difficult at first to work out how to send and parse requests at a lower level than the DOM the page builds after loading the results the traditional way. On the plus side, this is probably the best way (in performance and reliability) to get search results, assuming an Ajax-like search is used.
Getting Error: Message: stale element reference: element is not attached to the page document
Selenium doesn't give you real objects, only references to objects in the browser's memory. When you load a new URL (with driver.get(...) or a click()), new data is loaded into the browser's memory and the references to objects on the previous page become outdated. They are outdated even if you load the previous page again, because its objects may end up in a different place in the browser's memory.
You have to use two for-loops. In the first for-loop, collect all the "href" values (item_link) and append them to a list (instead of calling driver.get(item_link)). Once you have all the "href" values, the second for-loop can use driver.get(item_link).
I can't test it but it could be something like this:
from selenium.common import exceptions

list_url = "URL"

staleElement = True
while staleElement:
    staleElement = False
    driver.get(list_url)  # load the page instead of refreshing, because the next loop may have a different page in memory
    #driver.refresh()
    list_items = driver.find_elements_by_class_name("classname1")

    # first for-loop: collect all "href" values (as strings)
    all_hrefs = []  # list for all "href" strings
    for item in list_items:
        try:
            basket = item.find_elements_by_xpath('xpath')
        except exceptions.StaleElementReferenceException:
            basket = item.find_elements_by_xpath('xpath')  # retry once with a fresh lookup
        if basket and "text1" in basket[0].text:
            price = item.find_elements_by_xpath('xpath1')[0].text
            item_link = item.find_element_by_class_name("classname2").get_attribute("href")
            if int(price) < 101:
                all_hrefs.append(item_link)  # add the "href" string to the list

    # second for-loop: use all "href" values
    for item_link in all_hrefs:
        driver.get(item_link)
        if len(driver.find_elements_by_xpath('xpath2')) > 0:
            staleElement = True
        #driver.get(list_url)  # no need to go back to the previous page
        #else:
        #    driver.find_element_by_xpath('xpath3').click()  # no need to go back to the previous page
Python selenium how to wait until element is gone/becomes stale?
To wait until an element becomes invisible, there is this expected condition:
invisibility_of_element
In the Selenium source you can also see:
class invisibility_of_element(invisibility_of_element_located):
    """ An Expectation for checking that an element is either invisible or not
    present on the DOM.

    element is either a locator (text) or a WebElement
    """
    def __init__(self, element):
        self.target = element
If you already have a running WebDriverWait object, you can try this:
WebDriverWait(driver, 10).until(EC.invisibility_of_element((By.XPATH, "xpath here")))
This will wait until the element located by the given XPath becomes invisible.
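If the goal is specifically to wait until the element goes stale (detached from the DOM) rather than invisible, Selenium also provides EC.staleness_of, which takes the WebElement itself rather than a locator: WebDriverWait(driver, 10).until(EC.staleness_of(old_element)). Internally it just probes the element and treats a StaleElementReferenceException as success; a self-contained sketch of that idea (the exception class here is a stub standing in for selenium.common.exceptions.StaleElementReferenceException):

```python
class StaleElementReferenceException(Exception):
    """Stub standing in for Selenium's exception of the same name."""

class staleness_of:
    """Sketch of EC.staleness_of: becomes truthy once the element is stale.

    Selenium's real implementation probes the element (via is_enabled())
    and returns True when the probe raises StaleElementReferenceException.
    """
    def __init__(self, element):
        self.element = element

    def __call__(self, driver=None):
        try:
            self.element.is_enabled()  # any call on a detached element raises
            return False               # still attached: keep waiting
        except StaleElementReferenceException:
            return True                # detached: the wait is over
```

Note that the condition holds a reference to the old element on purpose: going stale is exactly the signal being waited for.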
Selenium "element is not attached to the page document" in Java
You wrote "I'm trying to get data instantly..." Well, this could be a problem.
A StaleElementReferenceException is thrown when a web element is obtained before the contents of the webpage refresh, or during the refresh process. In other words, it is obtained prematurely. The solution is to wait until the page finishes loading completely.
"Element is not attached to the page document" means that the web element is probably no longer in the HTML document.
There are two ways of obtaining a web element:
WebDriver driver = new ChromeDriver();
driver.findElement(...);
Assuming that you are on the correct page, findElement will attempt to locate the element without delay. If the page is still in the process of loading, this will most likely result in the error mentioned in the OP's post. The correct way to fix this is to add an implicit wait.
WebDriver driver = new ChromeDriver();
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); // Time amount and time units are arbitrary.
// get to the page...
driver.findElement(...);
In the above snippet, from the moment implicitlyWait is called until the end of the test session, any attempt to obtain a web element will wait up to the amount of time passed to the function before failing; in my example that's 10 seconds. A better way is to use the WebDriverWait class.
WebDriver driver = new ChromeDriver();
WebDriverWait wait = new WebDriverWait(driver, 10); // max. wait time set to 10 seconds with default 500 ms polling interval
WebDriverWait has a three-argument variant where the third argument is the polling interval in milliseconds. For example, WebDriverWait(driver, 10, 100) will wait a maximum of 10 seconds, polling every 100 ms. Once the wait object is created, it is used to obtain a WebElement by passing the appropriate expected condition from the ExpectedConditions class. For example, to wait until a button becomes "clickable" (both visible and enabled):
WebDriver driver = new ChromeDriver();
WebDriverWait wait = new WebDriverWait(driver, 10);
WebElement button = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("XPATH EXPRESSION HERE")));
button.click();
This approach, assuming you are already on the correct page, will attempt to locate the desired component for at most the amount of time passed to the WebDriverWait constructor before timing out. If the component is located and the expected condition is met before the timeout, the requested component is returned. This is a better way to avoid stale elements (although not completely).
Probably the best approach is to combine both of the approaches.
To start, set the implicit wait time as soon as the web driver instance is obtained
WebDriver driver = new ChromeDriver();
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
Then, navigate to the page
driver.navigate().to("https:.........");
Lastly, use the second approach to obtain the web element
WebDriverWait wait = new WebDriverWait(driver, 10);
WebElement button = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("...")));
button.click();
When implicit and explicit waits are used together, the chances of stale element issues are reduced to almost zero. You just have to be smart and always obtain the web element just before you are going to use it... And DEFINITELY DO NOT put code like this in an endless loop.
Basically, this code works.... sort of.
public class TickerPageTest {

    @Test
    public void printTickerValueEndlessLoop() {
        System.setProperty("webdriver.chrome.driver", "F:\\Users\\Hector\\webdriver\\chromedriver.exe");
        WebDriver driver = new ChromeDriver();
        driver.manage().window().maximize();
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
        driver.get("https://coincheck.com/exchange/tradeview");
        while (true) {
            WebDriverWait wait = new WebDriverWait(driver, 10, 10);
            WebElement element = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("/html/body/div[2]/div[1]/div/div[2]/div[2]/div[2]/div[2]/div[1]/div[1]/span[3]")));
            System.out.println(element.getTagName() + ":" + element.getText());
        }
    }
}
The above code outputs the following (most of the time)
span:5898999
...
span:5900895
...
span:5898999
The best bet is to wrap the interaction with the web element in a try/catch, catching the StaleElementReferenceException to ignore it and continue. In a real Selenium test you can use this strategy to retry obtaining the missed element, but in this case you don't need to.
try {
    System.out.println(element.getTagName() + ":" + element.getText());
} catch (StaleElementReferenceException e) {
    System.err.println("Test lost sync... this will be ignored.");
}
When I did this, after a few hundred lines (or more) I was able to catch a loss in synchronization:
span:5906025
Test lost sync... this will be ignored.
span:5906249
But, as you can see, I just ignored it and moved on to the next update.
How to avoid StaleElementReferenceException in Selenium?
This can happen if a DOM operation happening on the page is temporarily causing the element to be inaccessible. To allow for those cases, you can try to access the element several times in a loop before finally throwing an exception.
Try this excellent solution from darrelgrainger.blogspot.com:
public boolean retryingFindClick(By by) {
    boolean result = false;
    int attempts = 0;
    while (attempts < 2) {
        try {
            driver.findElement(by).click();
            result = true;
            break;
        } catch (StaleElementReferenceException e) {
            // swallow the exception and retry with a fresh lookup
        }
        attempts++;
    }
    return result;
}
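A Python version of the same retry idea, generalized so any element interaction can be retried. In real Selenium code the exception to pass would be selenium.common.exceptions.StaleElementReferenceException; the helper itself is an illustrative sketch:

```python
def retrying(action, attempts=3, swallow=(Exception,)):
    """Run `action` up to `attempts` times, retrying on the given exceptions.

    Returns True if an attempt succeeded, False if every attempt raised
    one of the `swallow` exception types.
    """
    for _ in range(attempts):
        try:
            action()
            return True
        except swallow:
            pass  # e.g. the element went stale mid-interaction; try again
    return False
```

With Selenium you would call it as retrying(lambda: driver.find_element(by, value).click(), swallow=(StaleElementReferenceException,)), so that each retry performs a fresh element lookup rather than reusing the stale reference.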