Scrapy Image Crawling Example

Target: beauty/photography images from 360 Image Search (images.so.com).

Create the Scrapy project:

scrapy startproject images360

Create the spider:

scrapy genspider images images.so.com
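
After these two commands, the generated project should look roughly like this (the exact files vary slightly by Scrapy version):

images360/
    scrapy.cfg
    images360/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            images.py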

Modify the code:

Modify the spider (images.py). The request URL and parameters below were worked out by analyzing the AJAX requests the page fires as you scroll down:

# -*- coding: utf-8 -*-
from scrapy import Spider, Request
from urllib.parse import urlencode
import json

from images360.items import ImageItem


class ImagesSpider(Spider):
    name = 'images'
    allowed_domains = ['image.so.com', 'images.so.com']  # include image.so.com so the AJAX requests are not filtered as offsite
    start_urls = ['http://images.so.com/']

    def start_requests(self):
        data = {'ch': 'beauty', 'listtype': 'new'}
        base_url = 'https://image.so.com/zj?'
        for page in range(1, self.settings.get('MAX_PAGE') + 1):
            data['sn'] = page * 30
            params = urlencode(data)
            url = base_url + params
            yield Request(url, self.parse)
    
    def parse(self, response):
        result = json.loads(response.text)
        for image in result.get('list'):
            item = ImageItem()
            item['id'] = image.get('imageid')
            item['url'] = image.get('qhimg_url')
            item['title'] = image.get('group_title')
            item['thumb'] = image.get('qhimg_thumb_url')
            yield item
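
As a quick sanity check of what start_requests builds, the first generated URL (page 1, sn=30) can be reproduced outside Scrapy; parse then expects the response to be JSON with a top-level 'list' key:

from urllib.parse import urlencode

# same parameters start_requests uses for page 1
params = urlencode({'ch': 'beauty', 'listtype': 'new', 'sn': 30})
print('https://image.so.com/zj?' + params)
# -> https://image.so.com/zj?ch=beauty&listtype=new&sn=30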

Modify items.py to declare the fields we want to extract:

from scrapy import Item, Field


class ImageItem(Item):
    collection = table = 'images'
    
    id = Field()
    url = Field()
    title = Field()
    thumb = Field()

Modify pipelines.py: subclass the built-in ImagesPipeline to save the images locally. get_media_requests yields a download Request for each item's image URL, file_path names the saved file after the last segment of that URL, and item_completed drops items whose image failed to download:

import pymongo
from scrapy import Request
from scrapy.exceptions import DropItem
from scrapy.pipelines.images import ImagesPipeline

class ImagePipeline(ImagesPipeline):
    def file_path(self, request, response=None, info=None):
        url = request.url
        file_name = url.split('/')[-1]
        return file_name
    
    def item_completed(self, results, item, info):
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem('Image Downloaded Failed')
        return item
    
    def get_media_requests(self, item, info):
        yield Request(item['url'])
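
The settings below also reference an optional images360.pipelines.MongoPipeline (commented out), which is what the import pymongo at the top of the file and the item's collection attribute are for. A minimal sketch of such a pipeline, assuming the commented-out MONGO_URI and MONGO_DB settings are enabled, could look like this:

class MongoPipeline(object):
    # Hypothetical storage pipeline: add it to ITEM_PIPELINES and
    # uncomment MONGO_URI / MONGO_DB in settings.py to use it.
    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DB'),
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def process_item(self, item, spider):
        # ImageItem.collection == 'images', so items land in the 'images' collection
        self.db[item.collection].insert_one(dict(item))
        return item

    def close_spider(self, spider):
        self.client.close()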

Finally, modify settings.py:

ROBOTSTXT_OBEY = False  # changed from the default True

ITEM_PIPELINES = {
    'images360.pipelines.ImagePipeline': 300,
    #'images360.pipelines.MongoPipeline': 301,
}

MAX_PAGE = 50

#MONGO_URI = '192.168.6.23'
#MONGO_DB = 'images360'
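
One more setting is required for image downloads: the built-in ImagesPipeline (and the ImagePipeline subclass above) is only enabled when IMAGES_STORE points at a download directory, and it needs Pillow installed. For example:

IMAGES_STORE = './images'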

Finally, run the crawler:

scrapy crawl images
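
Downloaded images end up under IMAGES_STORE, with filenames taken from the last segment of each image URL (see file_path above). For a less noisy run, the log level can be lowered from the command line:

scrapy crawl images -s LOG_LEVEL=INFO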

Reposted from blog.csdn.net/qq_40771567/article/details/83822453