Let me talk about Selenium's element-locating methods.
find_element_by_id
find_element_by_name
find_element_by_xpath
find_element_by_link_text
find_element_by_partial_link_text
find_element_by_tag_name
find_element_by_class_name
find_element_by_css_selector
These first eight are the most commonly used, so it pays to be familiar with them:
1. id locator: find_element_by_id(self, id_)
2. name locator: find_element_by_name(self, name)
3. class locator: find_element_by_class_name(self, name)
4. tag locator: find_element_by_tag_name(self, name)
5. link locator: find_element_by_link_text(self, link_text)
6. partial_link locator: find_element_by_partial_link_text(self, link_text)
7. xpath locator: find_element_by_xpath(self, xpath)
8. css locator: find_element_by_css_selector(self, css_selector)
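Of the eight, xpath is the most flexible. As a rough, browser-free illustration of what an XPath locator actually matches, here is the same single-vs-all idea using the standard library's limited XPath support (xml.etree.ElementTree, not Selenium itself; the ids and classes below are invented):

```python
import xml.etree.ElementTree as ET

# A toy fragment standing in for a real page.
page = ET.fromstring(
    '<body>'
    '<div id="login"><a href="/in">Sign in</a></div>'
    '<div class="item">first</div>'
    '<div class="item">second</div>'
    '</body>'
)

one = page.find(".//div[@id='login']/a")       # like find_element_by_xpath
print(one.text)                                # Sign in

many = page.findall(".//div[@class='item']")   # like find_elements_by_xpath
print([d.text for d in many])                  # ['first', 'second']
```

find returns only the first match, while findall returns every match as a list, which is exactly the distinction between the singular and plural Selenium methods below.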
The next eight are the plural forms of the same locators; each returns a list of all matching elements instead of just the first one:
9. id plural locator: find_elements_by_id(self, id_)
10. name plural locator: find_elements_by_name(self, name)
11. class plural locator: find_elements_by_class_name(self, name)
12. tag plural locator: find_elements_by_tag_name(self, name)
13. link plural locator: find_elements_by_link_text(self, text)
14. partial_link plural locator: find_elements_by_partial_link_text(self, link_text)
15. xpath plural locator: find_elements_by_xpath(self, xpath)
16. css plural locator: find_elements_by_css_selector(self, css_selector)
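One caveat worth knowing: Selenium 4 removed all of these find_element_by_* / find_elements_by_* helpers in favor of driver.find_element(By.ID, ...) and driver.find_elements(...). The By constants are plain strings, so the correspondence can be sketched without importing selenium at all:

```python
# Each legacy helper above maps to driver.find_element(By.<CONST>, value)
# in Selenium 4; the string on the right is the value of that By constant.
LEGACY_TO_BY = {
    "find_element_by_id": "id",                                # By.ID
    "find_element_by_name": "name",                            # By.NAME
    "find_element_by_class_name": "class name",                # By.CLASS_NAME
    "find_element_by_tag_name": "tag name",                    # By.TAG_NAME
    "find_element_by_link_text": "link text",                  # By.LINK_TEXT
    "find_element_by_partial_link_text": "partial link text",  # By.PARTIAL_LINK_TEXT
    "find_element_by_xpath": "xpath",                          # By.XPATH
    "find_element_by_css_selector": "css selector",            # By.CSS_SELECTOR
}
print(len(LEGACY_TO_BY))  # 8
```

So, for example, driver.find_element_by_id('kw') becomes driver.find_element(By.ID, 'kw') in Selenium 4.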
from bs4 import BeautifulSoup
from selenium import webdriver

target = 'URL of the page to crawl'
option = webdriver.ChromeOptions()
option.add_argument('--headless')  # set the option so Chrome runs in the background
driver = webdriver.Chrome(chrome_options=option)
driver.get(target)
If only a single button needs to be clicked, use one of locators 1-8 above to find the element, then call its click() method:
result = driver.find_element_by_class_name('class name of the button to click')
result.click()
That is all it takes to trigger the click.
If multiple buttons need to be clicked, use one of the plural forms 9-16 to find all matching elements by the corresponding class name. Note that I name the variable result_list here because the return value is a list of elements; below I select the elements at indices 4 through 7 and click each one in turn:
result_list = driver.find_elements_by_class_name('class name of the buttons to click')
for i in range(4, 8):
    result_list[i].click()
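To make clear which buttons that loop touches: range(4, 8) yields the indices 4, 5, 6, 7, i.e. the fifth through eighth elements of the zero-indexed list. A quick stand-in demo (the names are invented for illustration):

```python
# Stand-ins for the elements returned by find_elements_by_class_name.
buttons = [f"button-{i}" for i in range(10)]
clicked = [buttons[i] for i in range(4, 8)]  # same indices the loop above clicks
print(clicked)  # ['button-4', 'button-5', 'button-6', 'button-7']
```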
Then you can crawl the expanded page:
selenium_page = driver.page_source
driver.quit()
soup = BeautifulSoup(selenium_page, 'html.parser')
# one = soup.find('div', {'class': 'some class name'})  # a single element
many = soup.find_all('div', {'class': 'another class name'})  # all matching elements
for i in many:
    content = i.find_all('p')  # find the <p> elements inside
    nation = content[0].get_text()  # read the text content
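The find_all/get_text pattern above can be exercised on a small inline page without running a browser at all; the class names and contents here are invented for the demo:

```python
from bs4 import BeautifulSoup

# A tiny fragment standing in for driver.page_source.
html = """
<div class="city"><p>France</p><p>Paris</p></div>
<div class="city"><p>Japan</p><p>Tokyo</p></div>
"""
soup = BeautifulSoup(html, "html.parser")

many = soup.find_all("div", {"class": "city"})  # all matching <div> blocks
rows = []
for block in many:
    content = block.find_all("p")               # the <p> tags inside each block
    rows.append((content[0].get_text(), content[1].get_text()))

print(rows)  # [('France', 'Paris'), ('Japan', 'Tokyo')]
```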