Python: data cleaning for HTML content

Scenario description

When crawling with Python, the crawled data often needs to be cleaned to filter out unwanted content. When the crawled result is plain text, regular expressions (re.sub()) are the usual cleaning tool. But when the crawled result is HTML, cleaning it with regular expressions alone tends to yield half the result for twice the effort. So how should HTML results be cleaned?
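For plain-text results, a single re.sub() call is usually enough. A minimal sketch (the sample string and the [ad] markers are made up for illustration):

import re

raw = 'Tel: 123-4567 [ad]Buy now![/ad]'
# Strip the unwanted block from a plain-text crawl result
cleaned = re.sub(r'\[ad\].*?\[/ad\]', '', raw)
print(cleaned)  # Tel: 123-4567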

Code example

# -*- coding: utf-8 -*-
import scrapy
import htmlmin
from lxml import etree
from lxml import html
from html import unescape


class TestSpider(scrapy.Spider):
    name = 'test'
    allowed_domains = ['www.gongkaoleida.com']
    start_urls = ['https://www.gongkaoleida.com/article/869186']
    # start_urls = ['https://www.gongkaoleida.com/article/869244']

    def parse(self, response):
        # Grab the article node, drop newlines, and decode HTML entities
        content = response.xpath('//article[@class="detail-info"]').getall()[0].replace('\n', '').replace('\r', '')
        content = unescape(content)
        tree = etree.HTML(content)
        ns = {"re": "http://exslt.org/regular-expressions"}
        # Find tags containing "公考雷达"
        str1 = tree.xpath('//p[contains(text(), "公考雷达")] | //a[contains(text(), "公考雷达")]/.. | //div[contains(text(), "公考雷达")]')
        # Find tags containing "附件:" or "附件:" (attachment, full- or half-width colon) or a common Office file suffix
        str2 = tree.xpath(r'//a[re:match(text(), "附件(\w+)?(:|:)") or re:match(text(), "(.doc|.xls|.ppt|.pdf)")]/..', namespaces=ns)
        str3 = tree.xpath(r'//p[re:match(text(), "^(附件)(\w+)?(:|:)") or re:match(text(), "(.doc|.xls|.ppt|.pdf)")]', namespaces=ns)
        str4 = tree.xpath(r'//span[re:match(text(), "附件(\w+)?(:|:)") or re:match(text(), "(.doc|.xls|.ppt|.pdf)")]/../..', namespaces=ns)
        str5 = tree.xpath(r'//em[re:match(text(), "附件(\w+)?(:|:)") or re:match(text(), "(.doc|.xls|.ppt|.pdf)")]/../..', namespaces=ns)
        # Find tags whose href contains "gongkaoleida"
        str6 = tree.xpath('//*[re:match(@href, "gongkaoleida")]', namespaces=ns)
        # Data cleaning: serialize each matched element and strip it from the page source
        for i in str1 + str2 + str3 + str4 + str5 + str6:
            p1 = html.tostring(i)
            p2 = unescape(p1.decode('utf-8'))
            content = content.replace(p2, '')
        # Minify the cleaned HTML
        content = htmlmin.minify(content, remove_empty_space=True, remove_comments=True)
        print(content)

That is the XPath + regex way to do it!
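One caveat with the string-replacement step above: it only removes a matched element if lxml happens to re-serialize it exactly as it appears in the page source. A more robust variant (a sketch of an alternative, not the original post's method) is to drop the matched nodes from the tree and re-serialize the whole document:

# Alternative sketch: drop matched nodes from an lxml.html tree instead of
# string-replacing their serialized form (shown here for the href rule only).
from lxml import html

doc = html.fromstring(content)
ns = {"re": "http://exslt.org/regular-expressions"}
for node in doc.xpath('//*[re:match(@href, "gongkaoleida")]', namespaces=ns):
    node.drop_tree()  # removes the element and its children, keeps any tail text
content = html.tostring(doc, encoding='unicode')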

Precautions

  • When using the XPath regex function re:match() in lxml, it must be paired with the EXSLT namespace (namespaces={"re": "http://exslt.org/regular-expressions"}); a runnable sketch follows this list. For example:
    str6 = tree.xpath('//*[re:match(@href, "gongkaoleida")]', namespaces={"re": "http://exslt.org/regular-expressions"})
    
  • In Scrapy there is no need to pass the namespace when using the XPath regex function re:match(), because Scrapy selectors register the EXSLT namespaces by default. For example:
    attachment_title_list = response.xpath('//a[re:match(text(), "(.doc|.xls|.ppt|.pdf)")]/text()').getall()
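To see the namespace requirement in isolation, here is a self-contained sketch (the HTML snippet is made up); omitting the namespaces argument makes lxml raise an XPathEvalError:

from lxml import etree

snippet = '<div><a href="a.doc">附件1.doc</a><a href="b.html">正文</a></div>'
tree = etree.HTML(snippet)
# The EXSLT "re" prefix must be declared, or lxml cannot resolve re:match
links = tree.xpath('//a[re:match(text(), "(.doc|.xls|.ppt|.pdf)")]',
                   namespaces={"re": "http://exslt.org/regular-expressions"})
print([a.text for a in links])  # ['附件1.doc']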
    

Origin blog.csdn.net/qq_34562959/article/details/121631832