Problems with second-pass XPath matching in lxml (Python 3 crawlers)

1. Parsing the page with lxml's XPath

from lxml import etree

text = '''
<div>
    <ul id='a'>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html">third item</a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a>
     </ul>
     <ul id='b'>
         <li class="item-0"><a href="link1.html">11</a></li>
         <li class="item-1"><a href="link2.html">22</a></li>
     </ul>
 </div>
'''
html = etree.HTML(text)
nodeList = html.xpath('//ul')  # first match: grab every ul
# print('nodelist:', nodeList)

# second match: the li elements inside each ul
for i in nodeList:
    ##### Wrong approach
    # print(etree.tostring(i))  # dump the current element's markup
    # i = i.xpath('//li/a/text()')  # this does NOT search within the current ul; '//' matches from the root of the whole document
    # print(i)

    ##### Workaround: serialize the element and parse it again
    i = etree.tostring(i)  # convert the Element back to bytes
    # print('tostring: ', i)
    i = etree.HTML(i)  # re-parse the fragment as a standalone document
    content = i.xpath('//li/a/text()')
    print('result:', content)

Output:

result: ['first item', 'second item', 'third item', 'fourth item', 'fifth item']
result: ['11', '22']
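As an aside (not part of the original workaround): lxml also accepts relative XPath expressions. Prefixing the expression with `.` scopes the search to the element it is called on, so the serialize-and-reparse round trip above is unnecessary. A minimal sketch, using a hypothetical two-list snippet:

```python
from lxml import etree

# hypothetical sample document for demonstration
text = '''
<div>
    <ul id='a'><li><a href="link1.html">first item</a></li></ul>
    <ul id='b'><li><a href="link2.html">11</a></li></ul>
</div>
'''
html = etree.HTML(text)
for ul in html.xpath('//ul'):
    # './/li' searches only the current ul's descendants,
    # whereas '//li' would search the whole document again
    content = ul.xpath('.//li/a/text()')
    print('result:', content)
```

The key difference is the leading `.`: `i.xpath('//li/...')` is evaluated against the document root regardless of which element it is called on, while `i.xpath('.//li/...')` is evaluated relative to `i` itself.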

2. Parsing the page with BeautifulSoup

from bs4 import BeautifulSoup

soup = BeautifulSoup(text, 'lxml')
all_ul = soup.select('ul')
# print(all_ul)
print(type(all_ul))  # list
for i in all_ul:
    all_li = i.select('li')
    for li in all_li:
        print('li text:', li.get_text())

Output:

<class 'list'>
li text: first item
li text: second item
li text: third item
li text: fourth item
li text: fifth item

li text: 11
li text: 22
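Note the contrast with lxml: when `select` is called on a Tag, BeautifulSoup only searches that tag's own descendants, so nested selection works without any re-parsing. A minimal sketch, using a hypothetical two-list snippet (and the stdlib `html.parser` backend):

```python
from bs4 import BeautifulSoup

# hypothetical sample document for demonstration
text = "<ul id='a'><li>first</li></ul><ul id='b'><li>11</li></ul>"
soup = BeautifulSoup(text, 'html.parser')
for ul in soup.select('ul'):
    # ul.select('li') is scoped to this ul only, unlike
    # an lxml element's xpath('//li'), which hits the whole document
    print(ul['id'], [li.get_text() for li in ul.select('li')])
```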

Reposted from blog.csdn.net/llf_cloud/article/details/83687547