Python + Scrapy: Batch-Scraping Images from 唯一图库 and Storing Them by Series

Copyright notice: this is an original post by the author; do not repost without permission. https://blog.csdn.net/baidu_39459954/article/details/80593788
Life is short, I use Python!

I taught myself Scrapy in my spare time, so my skills are limited; please point out anything I got wrong.

Development and Runtime Environment

CentOS Linux release 7.4.1708 + PyCharm 2018.1.3

Python 2.7.5 + Scrapy 1.5.0

I won't go over how to set up the development and runtime environments here. Scrapy is a very powerful framework; this example only uses a small part of what it offers.

The Good Stuff

The target site is 唯一图库, http://www.mmonly.cc/mmtp/

Source code is hosted on GitHub.

MMspider.py

The spider's main parsing logic. For details on the site's HTML structure and XPath syntax, please search online or leave a comment; there is also a scrapy shell example after the code below.

# -*- coding: utf-8 -*-
import os
import datetime

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from mmonly.items import mmonlyItem

class Myspider(CrawlSpider):
    name = 'mmspider'
    base = r'/home/yinchong/Downloads/mmtp/'  # base directory for downloaded images

    allowed_domains = ['mmonly.cc']
    start_urls = [
        'http://www.mmonly.cc/mmtp/',
    ]
    # Crawl rules for the listing pages: follow "next page" links to go deeper;
    # every other matching link is handed to parse_item to extract the
    # full-size image URL.
    rules = (
        Rule(LinkExtractor(restrict_xpaths=u"//a[contains(text(),'下一页')]"), follow=True),
        Rule(LinkExtractor(allow=r'http://www\.mmonly\.cc/.*?\.html', restrict_xpaths=u"//div[@class='ABox']"), callback="parse_item", follow=False),
    )

    def parse_item(self, response):
        item = mmonlyItem()
        item['siteURL'] = response.url
        item['title'] = response.xpath('//h1/text()').extract_first()   # series title
        item['path'] = self.base + item['title']   # one directory per series
        path = item['path']
        if not os.path.exists(path):
            os.makedirs(path)             # create the series directory if it does not exist
        item['detailURL'] = response.xpath('//a[@class="down-btn"]/@href').extract_first()   # full-size image URL
        print item['detailURL']
        num = response.xpath('//span[@class="nowpage"]/text()').extract_first()   # image number within the series
        item['fileName'] = item['path'] + '/' + str(num) + '.jpg'        # full path of the image file

        print datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'), item['fileName'], u'parsed successfully!'
        yield item
        # If the detail page has a "next page" link, keep following it with parse_item.
        next_page = response.xpath(u"//a[contains(text(),'下一页')]/@href").extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse_item)
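
As an aside, scrapy shell is handy for checking XPath expressions like the ones above against a live page before wiring them into the spider. A minimal session, assuming the page structure the spider relies on, might look like this:

$ scrapy shell http://www.mmonly.cc/mmtp/
>>> # links the "next page" rule would follow
>>> response.xpath(u"//a[contains(text(),'下一页')]/@href").extract_first()
>>> # detail-page links inside the ABox divs
>>> response.xpath("//div[@class='ABox']//a/@href").extract()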

items.py

# -*- coding: utf-8 -*-
import scrapy

class mmonlyItem(scrapy.Item):
    siteURL = scrapy.Field()    # URL of the page the image came from
    detailURL = scrapy.Field()  # URL of the full-size image
    title = scrapy.Field()      # name of the image series
    fileName = scrapy.Field()   # full path of the stored image file
    path = scrapy.Field()       # directory the series is stored in

pipelines.py

The download handler; see the note after the code for a non-blocking alternative.

# -*- coding: utf-8 -*-
import datetime

import requests

class mmonlyPipeline(object):
    def process_item(self, item, spider):
        count = 0
        detailURL = item['detailURL']
        fileName = item['fileName']
        while count < 5:  # retry a few times instead of looping forever on a dead link
            try:
                print datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'), u'saving image:', detailURL
                print u'file:', fileName
                image = requests.get(detailURL) # download the full-size image parsed into the item
                f = open(fileName, 'wb')        # open the target file for writing
                f.write(image.content)          # write the image bytes
                f.close()
            except Exception as e:
                print fileName, 'download failed:', e
                count += 1
            else:
                print datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'), fileName, u'saved!'
                break
        return item
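
A note on this design: requests.get is a blocking call, so every download stalls the item pipeline. Scrapy ships with an ImagesPipeline that routes downloads through its own asynchronous downloader instead. Below is a minimal sketch of switching over; it needs Pillow installed and IMAGES_STORE set in settings.py, and overriding get_media_requests to read detailURL is my own adaptation (by default ImagesPipeline expects an image_urls field on the item):

# pipelines.py -- sketch of using Scrapy's built-in ImagesPipeline
import scrapy
from scrapy.pipelines.images import ImagesPipeline

class MmonlyImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # hand the full-size image URL to Scrapy's asynchronous downloader
        yield scrapy.Request(item['detailURL'])

To enable it, point ITEM_PIPELINES at this class instead of mmonlyPipeline and set IMAGES_STORE to the base download directory. Note that the default file naming uses URL hashes, so reproducing the per-series directory layout would also need a file_path override.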

settings.py

Scrapy settings. Since the target site's anti-scraping measures are lax, no random User-Agent rotation or IP proxies were used; there is a middleware sketch after the settings in case you ever need one.

# -*- coding: utf-8 -*-
# Scrapy settings for mmonly project

BOT_NAME = 'mmonly'
SPIDER_MODULES = ['mmonly.spiders']
NEWSPIDER_MODULE = 'mmonly.spiders'
FEED_EXPORT_ENCODING = 'utf-8'

ROBOTSTXT_OBEY = False
# maximum number of concurrent requests (the default is 16)
CONCURRENT_REQUESTS = 32
# download delay in seconds
# DOWNLOAD_DELAY = 0.1
COOKIES_ENABLED = False
DEFAULT_REQUEST_HEADERS = {
    'Accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Encoding':'gzip, deflate, sdch',
    'Accept-Language':'zh-CN,zh;q=0.8',
    'Cache-Control':'max-age=0',
    'Connection':'keep-alive',
    'User-Agent':'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36'}

ITEM_PIPELINES = {'mmonly.pipelines.mmonlyPipeline': 100}
# log level
LOG_LEVEL = 'INFO'
LOG_FILE = '/tmp/log.txt'
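
Should the site ever tighten its anti-scraping measures, a random User-Agent is easy to bolt on as a downloader middleware. A minimal sketch follows; the class name and the list of agent strings are my own illustration:

# middlewares.py -- minimal random User-Agent middleware (sketch)
import random

class RandomUserAgentMiddleware(object):
    USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36',
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36',
    ]

    def process_request(self, request, spider):
        # overwrite the User-Agent header on every outgoing request
        request.headers['User-Agent'] = random.choice(self.USER_AGENTS)

Enable it in settings.py with DOWNLOADER_MIDDLEWARES = {'mmonly.middlewares.RandomUserAgentMiddleware': 400}.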

main.py

from scrapy.cmdline import execute
execute(['scrapy', 'crawl', 'mmspider'])
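
execute() simply hands the argument list to the scrapy command-line tool. An equivalent, slightly more explicit way to run the spider from a script is CrawlerProcess; a sketch, assuming it runs from the project directory so settings.py is picked up:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())  # load the project's settings.py
process.crawl('mmspider')  # schedule the spider by name
process.start()            # start the reactor; blocks until the crawl finishes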

Run scrapy crawl mmspider from the command line, or python main.py.

In PyCharm, simply run main.py.

Results

Run main.py to start scraping. The crawl fetches more than 200,000 images in total, so it runs for a long time and needs a lot of disk space; be prepared. If you have a good approach to parallelizing the downloads, leave a comment and let's discuss (the ImagesPipeline sketch above is one option).

