How can Selenium, when fetching web pages, handle sites whose resources load very slowly or fail to load?

Selenium fetches a web page, but some of the page's resources stay stuck for a long time and never finish loading. How can we abandon the stuck resources and continue on?

There are two common approaches. Usually you can handle this situation with one of the following methods:

Method 1, WebDriverWait

This method is better suited to pages with many or complex resources. For example, some images are hosted overseas and cannot be loaded; mail.com is like this.

If the page has not finished loading after 15 seconds, the code will continue regardless of whether the element was found:

  • If the element is not found, a timeout exception is thrown.
  • If the element is found, it is clicked.

If an exception is thrown, you can retry several times until no exception is thrown and the script can continue.

    # Requires: from selenium.webdriver.common.by import By
    #           from selenium.webdriver.support.ui import WebDriverWait
    #           from selenium.webdriver.support import expected_conditions as EC
    def ClickElementByXpath(self, browser, xPath):
        try:
            self.insert_text_to_last_line(self.log_pass_file, xPath)
            # Wait up to 15 seconds for the element to appear in the DOM
            result = WebDriverWait(browser, 15).until(
                EC.presence_of_element_located((By.XPATH, xPath)))
            result.click()
            return True
        except Exception as e:
            print('exception timeout!!!', e)
            return False
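The "retry until no exception" idea above can be sketched as a small helper that repeatedly calls any boolean-returning action, such as the `ClickElementByXpath` method. The helper name and parameters below are illustrative, not part of Selenium:

```python
import time

def retry_until_true(action, attempts=3, delay=1.0):
    """Call action() until it returns True or the attempts are exhausted."""
    for _ in range(attempts):
        if action():
            return True       # success, stop retrying
        time.sleep(delay)     # back off before the next attempt
    return False              # every attempt failed

# Usage with the method above might look like:
# ok = retry_until_true(lambda: self.ClickElementByXpath(browser, xPath))
```

Because `ClickElementByXpath` returns `False` instead of raising, the wrapper only needs to check the return value; the exception handling stays inside the method.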

Method 2, find_element

This method is suitable for websites whose resources load quickly.

find_element waits until the page's resources have finished loading (plus the implicit wait) before continuing. This is more stable, but if a site's resources are themselves stuck, it will stay stuck, the wait becomes very long, and the script ultimately fails.

    # Requires: from selenium.webdriver.common.by import By
    def ClickElementByXpath(self, browser, xPath):
        try:
            # Implicit wait: find_element polls for up to 55 seconds
            browser.implicitly_wait(55)
            self.insert_text_to_last_line(self.log_pass_file, xPath)
            # find_element_by_xpath() was removed in Selenium 4
            element_input = browser.find_element(By.XPATH, xPath)
            element_input.click()
            return True
        except Exception as e:
            print('exception timeout!!!', e)
            return False

Is it possible to set a timeout so that after an element is clicked, the script continues without waiting for the page to load?

In Selenium, you can use WebDriver's set_page_load_timeout method to set a timeout for page loading, but this setting applies to loading the whole page, not to the click on a single element. If you want to continue with subsequent operations after clicking an element, without waiting for the page to load, you can take the following approach:

Use set_page_load_timeout: set a shorter page load timeout so you wait less after clicking an element, then catch the timeout exception and continue with the subsequent operations.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException

# Create the WebDriver instance
driver = webdriver.Chrome()

# Set the page load timeout to 5 seconds
driver.set_page_load_timeout(5)

try:
    # Open the page
    driver.get("https://example.com")

    # Locate the element to click
    # (find_element_by_id was removed in Selenium 4)
    element = driver.find_element(By.ID, "my_element_id")

    # Click the element
    element.click()

except TimeoutException:
    print("Page load timed out")

# Subsequent operations can continue here without
# waiting for the page to finish loading

# Close the WebDriver
driver.quit()

Origin blog.csdn.net/huangbangqing12/article/details/133547358