【Scrapy Framework】"Version 2.4.0 Source" Item Pipeline Explained

Index of all source-code analysis articles in this series:

【Scrapy Framework】Version 2.4.0 Source: Full Configuration Index

Introduction

Pipelines process the data scraped by Scrapy.
Typical uses:

  1. Cleaning HTML data
  2. Validating scraped data (checking that items contain certain fields)
  3. Checking for duplicates (and dropping them)
  4. Storing scraped items in a database

Basic pipeline methods

Each item pipeline component is a Python class. Of the methods below, process_item() is required; the others are optional hooks.

process_item(self, item, spider): called for every item; this is where the pipeline does its actual processing. It must either return an item object (or a Deferred) or raise a DropItem exception to discard the item.

open_spider(self, spider): called when the spider is opened.

close_spider(self, spider): called when the spider is closed.

from_crawler(cls, crawler): called when a pipeline instance is created; it must return a new instance of the pipeline. It is typically used to read values configured in the project's settings (settings.py).
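
Putting these hooks together, a minimal skeleton pipeline might look like the following sketch (the class name, the EXAMPLE_TAG setting, and the logging calls are illustrative, not from the original post):

import logging

class ExamplePipeline:

    def __init__(self, tag):
        self.tag = tag

    @classmethod
    def from_crawler(cls, crawler):
        # EXAMPLE_TAG is a hypothetical key read from settings.py.
        return cls(tag=crawler.settings.get('EXAMPLE_TAG', 'default'))

    def open_spider(self, spider):
        # Called once when the spider starts; acquire resources here.
        logging.info('ExamplePipeline(%s) opened for %s', self.tag, spider.name)

    def close_spider(self, spider):
        # Called once when the spider finishes; release resources here.
        logging.info('ExamplePipeline closed for %s', spider.name)

    def process_item(self, item, spider):
        # Must return the item (or raise DropItem) so later components see it.
        return item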

Simple pipeline examples

An example that adjusts the price attribute of scraped items and drops items that have no price:

from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem

class PricePipeline:

    vat_factor = 1.15

    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        if adapter.get('price'):
            if adapter.get('price_excludes_vat'):
                adapter['price'] = adapter['price'] * self.vat_factor
            return item
        else:
            raise DropItem(f"Missing price in {item}")
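
Once DropItem is raised, the item is discarded and is not processed by any further pipeline components.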

Writing items to a JSON file

import json

from itemadapter import ItemAdapter

class JsonWriterPipeline:

    def open_spider(self, spider):
        self.file = open('items.jl', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        line = json.dumps(ItemAdapter(item).asdict()) + "\n"
        self.file.write(line)
        return item
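
As the official Scrapy documentation points out, JsonWriterPipeline only illustrates how to write a pipeline; to actually persist items as JSON you would normally use the built-in feed exports instead. A minimal sketch in settings.py (the FEEDS setting is available since Scrapy 2.1; the file name is illustrative):

FEEDS = {
    'items.jl': {
        'format': 'jsonlines',
        'encoding': 'utf8',
    },
}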

Writing items to MongoDB

import pymongo
from itemadapter import ItemAdapter

class MongoPipeline:

    collection_name = 'scrapy_items'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'items')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        self.db[self.collection_name].insert_one(ItemAdapter(item).asdict())
        return item
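
For from_crawler() above to find its connection parameters, the corresponding keys must be defined in settings.py. A minimal sketch (the URI shown points at a default local MongoDB; both values are illustrative):

MONGO_URI = 'mongodb://localhost:27017'
MONGO_DATABASE = 'items'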

Taking page screenshots

import hashlib
from urllib.parse import quote

import scrapy
from itemadapter import ItemAdapter

class ScreenshotPipeline:
    """
    每个Scrapy项目使用Splash渲染屏幕截图的管道
	"""

    SPLASH_URL = "http://localhost:8050/render.png?url={}"

    async def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        encoded_item_url = quote(adapter["url"])
        screenshot_url = self.SPLASH_URL.format(encoded_item_url)
        request = scrapy.Request(screenshot_url)
        response = await spider.crawler.engine.download(request, spider)

        if response.status != 200:
            # Error happened, return item.
            return item

        # Save screenshot to file, filename will be hash of url.
        url = adapter["url"]
        url_hash = hashlib.md5(url.encode("utf8")).hexdigest()
        filename = f"{url_hash}.png"
        with open(filename, "wb") as f:
            f.write(response.body)

        # Store filename in item.
        adapter["screenshot_filename"] = filename
        return item
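
Note that this example depends on two things: a Splash rendering service already listening at the SPLASH_URL above (localhost:8050), and Scrapy's coroutine support, which is why process_item() is declared with async def so that the engine's download() call can be awaited.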

Filtering out duplicate items

from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem

class DuplicatesPipeline:

    def __init__(self):
        self.ids_seen = set()

    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        if adapter['id'] in self.ids_seen:
            raise DropItem(f"Duplicate item found: {item!r}")
        else:
            self.ids_seen.add(adapter['id'])
            return item
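
This example assumes every item carries a unique id field. Also note that ids_seen is held in memory for the lifetime of the crawl, so for very long or resumable jobs you might persist the seen ids externally instead; that design choice is beyond the scope of this example.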

Activating a pipeline

Register the component in the ITEM_PIPELINES setting in settings.py, otherwise scraped items will never reach it. The integer assigned to each class determines the order in which components run: items pass through lower-valued pipelines first, and values are conventionally kept in the 0-1000 range.

ITEM_PIPELINES = {
    'myproject.pipelines.PricePipeline': 300,
    'myproject.pipelines.JsonWriterPipeline': 800,
}


Reposted from blog.csdn.net/qq_20288327/article/details/113498166