XPath vs BeautifulSoup4: A Comparison

XPath

1. xpath() always returns a list: either a list of matches or an empty list.

2. Positional predicates in XPath are 1-based, not 0-based.

3. XPath extracts two kinds of target values:
- the text content of a given tag (e.g. extracting text)
- the value of a named attribute of a given tag (e.g. extracting a link)

All strings extracted by XPath are Unicode strings.
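The three rules above can be verified against a small inline document (the HTML snippet below is made up for illustration):

```python
from lxml import etree

# A tiny made-up document to illustrate the rules above
doc = etree.HTML("""
<div class="f18 mb20">
  <a href="/a.html">first</a>
  <a href="/b.html">second</a>
</div>
""")

# Rule 1: xpath() always returns a list -- empty when nothing matches
assert doc.xpath("//span") == []

# Rule 2: positional predicates are 1-based, so [1] selects the first <a>
first = doc.xpath("//a[1]/text()")

# Rule 3: extract either the text content or a named attribute
texts = doc.xpath("//a/text()")
links = doc.xpath("//a/@href")
```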

4. When there are many target values, first fetch the list of matching nodes, then iterate over it and extract values from each node:

node_list = html_obj.xpath("//div[@class='f18 mb20']")

for node in node_list:
    item = {}
    item['text'] = " ".join(node.xpath("./text()"))
    item['a_text'] = node.xpath("./a/text()")[0]
    item['link'] = node.xpath("./a/@href")[0]

html = response.read()     # urllib: raw bytes from urlopen()
html = response.content    # requests: raw bytes of the response

# import the etree module from the lxml library
from lxml import etree
# build an HTML DOM object from a string with the etree.HTML class
html_obj = etree.HTML(html)
# alternatively, parse a local file:
# html_obj = etree.parse("./baidu.html")
# serialize the DOM object back to a string:
# html = etree.tostring(html_obj)

node_list = html_obj.xpath("//div[@class='f18 mb20']/a/@href")

Common BeautifulSoup4 matching methods:

1. find(): returns the first result in the page that matches the rule
2. find_all(): returns a list of all results in the page that match the rule;
   find() and find_all() share the same syntax
3. select(): returns a list of all results in the page that match a CSS selector
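A minimal sketch contrasting the three methods, using a made-up snippet and the stdlib html.parser:

```python
from bs4 import BeautifulSoup

# Made-up snippet to contrast the three lookup methods
soup = BeautifulSoup(
    '<ul><li class="odd">1</li><li class="even">2</li><li class="odd">3</li></ul>',
    "html.parser")

first_li = soup.find("li")        # first match only (a Tag, not a list)
all_li = soup.find_all("li")      # every match, as a list
odd_li = soup.select("li.odd")    # same idea, but via a CSS selector
```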

# the start query parameter advances by 10 per page
url = "https://hr.tencent.com/position.php?&start=0"
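Since start advances by 10 per page, the page URLs can be generated up front; the page count of 5 used here is an assumption for illustration:

```python
# start increases by 10 per page; 5 pages is an assumed count for this sketch
base = "https://hr.tencent.com/position.php?&start={}"
urls = [base.format(start) for start in range(0, 50, 10)]
```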


item_list = []
node_list = soup.find_all("tr", {"class" : ["even", "odd"]})

for node in node_list:
    tds = node.find_all("td")
    item = {}
    item['position_name'] = tds[0].a.text
    item['position_link'] = tds[0].a.get("href")
    item['position_type'] = tds[1].text
    item['people_number'] = tds[2].text
    item['work_location'] = tds[3].text
    item['publish_times'] = tds[4].text
    item_list.append(item)

XPath vs bs4 usage comparison:

import requests
from lxml import etree
from bs4 import BeautifulSoup
url = "https://hr.tencent.com/position.php?&start=10"
headers = {"User-Agent" : "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"}
html = requests.get(url, headers=headers).content

html_obj = etree.HTML(html)
html_obj.xpath("//tr[@class='even']")
html_obj.xpath("//tr[@class='odd']")
html_obj.xpath("//tr[@class='even'] | //tr[@class='odd']")

soup = BeautifulSoup(html, "lxml")
# find all tr elements
soup.find_all("tr")
len(soup.find_all("tr"))
# find all tr elements with the given class attribute
len(soup.find_all("tr", {"class" : "even"}))
len(soup.find_all("tr", {"class" : "odd"}))
len(soup.find_all("tr", {"class" : ["even", "odd"]}))

# find all tr and tmm tags with the same attribute values
len(soup.find_all(["tr", "tmm"], {"class" : ["even", "odd"]}))
# find all tags (any name) with the given attributes
len(soup.find_all(attrs={"class" : ["even", "odd"]}))
# find all tags by their class attribute
len(soup.find_all(class_=["even", "odd"]))

# find all tags whose class is even or odd
len(soup.select(".even"))
len(soup.select(".even, .odd"))
len(soup.select("[class='even'], [class='odd']"))


Extracting text and attribute values with bs4:

import requests
from bs4 import BeautifulSoup
url = "https://hr.tencent.com/position.php?&start=10"
headers = {"User-Agent" : "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"}
html = requests.get(url, headers=headers).content
soup = BeautifulSoup(html, "lxml")

node_list = soup.find_all("tr", {"class" : ["even", "odd"]})

node_list[0].td
node_list[0].find_all("td")
node_list[0].select("td")

node_list[0].select("td")[0]
node_list[0].select("td")[0].a

node_list[0].select("td")[0].a.string
node_list[0].select("td")[0].a.text
node_list[0].select("td")[0].a.get_text()

node_list[0].select("td")[0].a.get("href")
node_list[0].select("td")[0].a.attrs
node_list[0].select("td")[0].a.attrs["href"]
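The three text accessors and two attribute accessors above differ subtly; a sketch with a made-up tag:

```python
from bs4 import BeautifulSoup

# A made-up tag to show the differences between the accessors
soup = BeautifulSoup('<td><a href="/job/1">Engineer</a></td>', "html.parser")
a = soup.a

s = a.string       # NavigableString; None if the tag contains child tags
t = a.text         # always a plain str, joining all descendant text
g = a.get_text()   # same as .text, but accepts separator/strip arguments

href1 = a.get("href")      # returns None if the attribute is missing
href2 = a.attrs["href"]    # raises KeyError if the attribute is missing
```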

Reposted from blog.csdn.net/qq_39655431/article/details/84136380