Scrapy Project (Douyu Live) --- Using a Spider to Crawl Female Host Info from the Beauty (颜值) Category

1. Create the Scrapy project

scrapy startproject douyu
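
For orientation, scrapy startproject douyu generates a project skeleton roughly like the following (the exact files can vary slightly with the Scrapy version):

douyu/
    scrapy.cfg            # deployment configuration
    douyu/
        __init__.py
        items.py          # item definitions (step 3)
        middlewares.py
        pipelines.py      # item pipelines (step 5)
        settings.py       # project settings (step 6)
        spiders/
            __init__.py   # spiders live here (step 4)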

2. Enter the project directory and create a Spider with the genspider command

scrapy genspider douyumeinv "capi.douyucdn.cn"
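
This creates spiders/douyumeinv.py with a basic skeleton roughly like the one below (shown only for orientation; it is filled in in step 4):

# -*- coding: utf-8 -*-
import scrapy

class DouyumeinvSpider(scrapy.Spider):
    name = 'douyumeinv'
    allowed_domains = ['capi.douyucdn.cn']
    start_urls = ['http://capi.douyucdn.cn/']

    def parse(self, response):
        pass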

3. Define the data to scrape (edit items.py)

# -*- coding: utf-8 -*-
import scrapy

class DouyuItem(scrapy.Item):
    name = scrapy.Field()  # host nickname, also used as the saved photo's filename
    imagesUrls = scrapy.Field()  # URL of the photo
    imagesPath = scrapy.Field()  # local path where the photo is saved
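
For intuition, a DouyuItem behaves like a dict restricted to the declared fields. A minimal sketch (the values below are made-up placeholders):

# Illustrative only -- the values are placeholders
item = DouyuItem()
item['name'] = 'some_nickname'
item['imagesUrls'] = 'http://example.com/photo.jpg'
print(dict(item))    # {'name': 'some_nickname', 'imagesUrls': 'http://example.com/photo.jpg'}
# item['age'] = 20   # would raise KeyError: the field is not declared on the item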

4. Write the Spider that extracts the item data (in the spiders folder: douyumeinv.py)

# -*- coding: utf-8 -*-
import scrapy
import json
# If PyCharm marks the import below with a red squiggle, see this setup guide: https://blog.csdn.net/z564359805/article/details/80650843
from douyu.items import DouyuItem

class DouyumeinvSpider(scrapy.Spider):
    name = 'douyumeinv'
    allowed_domains = ['capi.douyucdn.cn']
    offset = 0
    url = "http://capi.douyucdn.cn/api/v1/getVerticalRoom?limit=20&offset="
    start_urls = [url + str(offset)]

    def parse(self, response):
        data = json.loads(response.text)['data']
        # Stop paginating once the API returns no more records
        if not data:
            return
        for each in data:
            item = DouyuItem()
            item['name'] = each['nickname']            # host nickname
            item['imagesUrls'] = each['vertical_src']  # photo URL

            yield item
        # Request the next page (20 records per page)
        self.offset += 20
        yield scrapy.Request(self.url + str(self.offset), callback=self.parse)
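
The spider assumes the API returns JSON shaped roughly like the sketch below; only the fields the code actually reads are shown, and the values are placeholders (the real response contains many more fields and may have changed since this was written):

# Illustrative response shape only
sample = {
    "data": [
        {
            "nickname": "host_nickname_1",                     # -> item['name']
            "vertical_src": "https://example.com/cover1.jpg",  # -> item['imagesUrls']
        },
        # ... up to 20 entries per page, controlled by limit/offset in the URL ...
    ]
}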

5. Use the pipeline to process and save the data; here the photos are downloaded and saved to local files (pipelines.py)

# -*- coding: utf-8 -*-
import scrapy
import os
from scrapy.utils.project import get_project_settings
from scrapy.pipelines.images import ImagesPipeline

# Subclass ImagesPipeline to download and save the photos; see: https://blog.csdn.net/z564359805/article/details/80693578
class ImagePipeline(ImagesPipeline):
    # Read the image save directory IMAGES_STORE from settings.py
    IMAGES_STORE = get_project_settings().get("IMAGES_STORE")

    def get_media_requests(self, item, info):
        # Issue a download request for each item's photo URL
        image_url = item['imagesUrls']
        yield scrapy.Request(image_url)

    def item_completed(self, results, item, info):
        # Collect the storage paths of the successfully downloaded images
        image_path = [x['path'] for ok, x in results if ok]
        if not image_path:
            # Download failed; pass the item through unchanged
            return item
        # Rename the downloaded file to <nickname>.jpg and record its final path
        old_path = os.path.join(self.IMAGES_STORE, image_path[0])
        new_path = os.path.join(self.IMAGES_STORE, item['name'] + '.jpg')
        os.rename(old_path, new_path)
        item['imagesPath'] = new_path
        return item
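
item_completed receives results as a list of (success, info) tuples from ImagesPipeline; for a successful download, info is a dict whose 'path' is relative to IMAGES_STORE, which is why the code above reads x['path']. An illustrative sketch (values are placeholders):

# Illustrative structure of the "results" argument -- values are placeholders
results = [
    (True, {
        'url': 'https://example.com/cover1.jpg',             # original image URL
        'path': 'full/0a1b2c3d4e5f67890.jpg',                # storage path relative to IMAGES_STORE
        'checksum': 'b9628c4ab9b595f72f280b90c4fd093d',      # MD5 checksum of the downloaded image
    }),
]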

6. Configure the settings file (settings.py)

# Obey robots.txt rules; for what this setting means, see: https://blog.csdn.net/z564359805/article/details/80691677
ROBOTSTXT_OBEY = False
# Override the default request headers: add the User-Agent header (here the Douyu mobile app's UA)
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': 'DYZB/2.290 (iPhone; iOS 9.3.4; Scale/2.00)',
}

# Image save directory; "./images" creates an images folder under the directory the crawl is run from (an absolute path also works)
IMAGES_STORE = "./images"

# Configure item pipelines
ITEM_PIPELINES = {
    'douyu.pipelines.ImagePipeline': 300,
}
# Optionally, write the log to a local file
LOG_FILE = "douyulog.log"
LOG_LEVEL = "DEBUG"
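
Note: Scrapy's ImagesPipeline depends on Pillow for image handling, so make sure it is installed; if it is not, it can be added with:

pip install Pillow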

7. With the above in place, start the crawl: run the crawl command to launch the Spider:

scrapy crawl douyumeinv
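
Optionally, the scraped items can also be exported to a feed file in the same run (the filename here is just an example):

scrapy crawl douyumeinv -o douyu.json

When the crawl finishes, the downloaded photos should appear under the ./images directory, renamed to <nickname>.jpg by the pipeline above.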

Reposted from blog.csdn.net/z564359805/article/details/80707165