[Crawler] Use Selenium to crawl dynamically loaded pages

To check your Google Chrome version, visit: chrome://version/

ChromeDriver download address: http://chromedriver.storage.googleapis.com/index.html

Note: the ChromeDriver version must match the browser version.
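If keeping the driver and browser versions in sync by hand is inconvenient, the third-party webdriver-manager package can download a matching driver automatically. A minimal sketch, assuming webdriver-manager is installed and the Selenium 3-style constructor used in this post (newer Selenium versions pass the driver path through a Service object instead); this helper is not part of the original example:

from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager

# download (and cache) a ChromeDriver build that matches the installed Chrome,
# then pass its path to the driver, just like the hard-coded path in the example below
browser = webdriver.Chrome(ChromeDriverManager().install())
browser.get("page URL")  # placeholder URL
print(browser.page_source)
browser.quit()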

from selenium import webdriver

def main():
    browser = webdriver.Chrome('./chromedriver.exe')  # instantiate a driver object with the path to the ChromeDriver executable
    browser.implicitly_wait(10)  # if WebDriver does not find an element in the DOM, it keeps waiting; once the timeout is exceeded it raises a no-such-element exception
    browser.get("page URL")  # visit the page through the browser driver
    data = browser.page_source  # get the page's source code
    print(data)
    browser.quit()  # close the browser when finished

if __name__ == "__main__":
    main()

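The implicit wait above applies globally; for pages that load content dynamically, an explicit wait on a specific element is often more precise, since it blocks only until that element appears. A minimal sketch using Selenium's WebDriverWait; the URL and the "#content" CSS selector are placeholders, not from the original post:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def crawl(url):
    # same Selenium 3-style driver setup as the example above
    browser = webdriver.Chrome('./chromedriver.exe')
    try:
        browser.get(url)
        # wait up to 10 seconds until the dynamically loaded element is present in the DOM
        WebDriverWait(browser, 10).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, "#content"))
        )
        return browser.page_source
    finally:
        browser.quit()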