# Part 7 Sharing: Python Mobile Crawling with Fiddler - Opening a New Era of Data Collection

# Mobile crawler introduction
1. The idea behind mobile crawling, i.e. how to crawl content inside an app:
   a. The phone and the computer need to communicate; this relies on Fiddler, which acts as a data relay station between them;
   b. Once the traffic is captured, the data is crawled in the same way as an ordinary web page;

2. What needs to be configured in Fiddler and on the phone:
   a. Download and install Fiddler; the computer and the phone must be on the same network;
   b. Computer-side configuration as shown below: run cmd -> ipconfig to get the computer's IP address, which is used in the phone-side configuration later:
(screenshots of the computer-side Fiddler configuration omitted)
   c. Phone configuration (note: Douyin and Kuaishou have anti-crawling measures; after the configuration is complete, if you try to capture their traffic they will ban your network. The only workaround is to run a mobile emulator on the computer, which gets around the anti-crawling; this may improve later):
      1. Set up the network proxy: the hostname is the computer's IP address (it is not fixed and changes whenever the network changes); the port is Fiddler's listening port (8888 by default, and it can be changed). The exact setting screens differ between phones, but as long as these two values are set correctly there should be no problem. A quick way to verify the proxy from the computer side is sketched after the screenshots below.
      2. Download the certificate on the phone (to enable HTTPS capture): open the phone browser and visit http://<computer IP>:<port>; if the phone browser cannot open the page, download the certificate on the computer and copy it to the phone manually.

(screenshots of the phone-side proxy and certificate configuration omitted)
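
As a quick sanity check (not part of the original write-up), you can route a request from the computer through Fiddler's proxy and confirm that it appears in Fiddler's session list. This is a minimal sketch assuming Fiddler is listening locally on its default port 8888; verify=False is only needed because Fiddler re-signs HTTPS traffic with its own certificate:

import requests

# assume Fiddler is running on this machine on its default port 8888
proxies = {
    "http": "http://127.0.0.1:8888",
    "https": "http://127.0.0.1:8888",
}

# verify=False because Fiddler re-signs HTTPS traffic with its own root certificate
resp = requests.get("https://httpbin.org/ip", proxies=proxies, verify=False)
print(resp.status_code)  # if this prints 200 and the request shows up in Fiddler, the proxy works

If this works on the computer but the phone still cannot reach the proxy, the usual suspects are the computer's firewall or "Allow remote computers to connect" not being enabled in Fiddler's connection options.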
3. Crawler example: using Scrapy to grab anime images from the Toutiao (Today's Headlines) app:
(screenshots omitted)

Project directory structure:
(screenshot of the project directory structure omitted)
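
The original screenshot of the project layout is not reproduced here; the listing below is the standard structure that scrapy startproject images generates, inferred from the module paths used in the project (images.spiders, images.pipelines, and so on):

images/
    scrapy.cfg
    images/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            images_toutiao.py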

settings.py:

# -*- coding: utf-8 -*-

# Scrapy settings for images project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'images'

SPIDER_MODULES = ['images.spiders']
NEWSPIDER_MODULE = 'images.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'images (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'images.middlewares.ImagesSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'images.middlewares.ImagesDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'
# The line above is just a browser User-Agent header, added as a small safeguard against requests being rejected
ITEM_PIPELINES = {
    'images.pipelines.ImagesPipeline': 300,
}
IMAGES_STORE = 'D:\\python\\Scrapy\\image\\test'


#IMAGES_EXPIRES = 90
#IMAGES_MIN_HEIGHT = 100
#IMAGES_MIN_WIDTH = 100
# IMAGES_STORE sets the directory where downloaded images are saved. IMAGES_EXPIRES sets the maximum retention time (in days).
# IMAGES_MIN_HEIGHT and IMAGES_MIN_WIDTH set the minimum image dimensions to download.

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

items.py:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class ImagesItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    image_urls = scrapy.Field()
    images = scrapy.Field()
# image_urls and images are field names required by the ImagesPipeline and must not be renamed

images_toutiao.py (the spider):

# -*- coding: utf-8 -*-
import scrapy
import re
from ..items import ImagesItem

class ImagesToutiaoSpider(scrapy.Spider):
    name = 'images_toutiao'
    allowed_domains = ['a3-ipv6.pstatp.com']
    start_urls = ['https://a3-ipv6.pstatp.com/article/content/25/1/6819945008301343243/6819945008301343243/1/0/0']  # the URL to crawl, taken from the Fiddler capture

    # Content URLs (image IDs) captured with Fiddler:
    # https://a3-ipv6.pstatp.com/article/content/25/1/6819945008301343243/6819945008301343243/1/0/0
    # https://a3-ipv6.pstatp.com/article/content/25/1/6848145029051974155/6848145029051974155/1/0/0
    # https://a6-ipv6.pstatp.com/article/content/25/1/6848145029051974155/6848145029051974155/1/0/0
    # https://a3-ipv6.pstatp.com/article/content/25/1/6848145029051974155/6848145029051974155/1/0/0    # three captured links; they are essentially the same address pattern

    def parse(self, response):
        result = response.body.decode()  # decode the response returned for the start_urls request
        contents = re.findall(r'},{"url":"(.*?)"}', result)

        for i in range(0, len(contents)):
            if len(contents[i]) <= len(contents[0]):  # simple length filter to skip malformed matches

                item = ImagesItem()
                url_list = []
                url_list.append(contents[i])
                item['image_urls'] = url_list  # ImagesPipeline expects a list of image URLs
                print(url_list)
                yield item
            else:
                pass
        # Pagination: crawl images from multiple pages
        # self.page = ["6819945008301343243/6819945008301343243/1/0/0", "6819945008301343243/6819945008301343243/1/0/0"]
        # for i in self.page:  # only crawl the first 5 pages
        #     url = "https://a3-ipv6.pstatp.com/article/content/25/1/" + str(i)
        #     yield scrapy.Request(url, callback=self.parse)
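
The captured links above differ only in the host prefix and the article ID (which appears twice in the path), so the content URL can be built from an ID alone. The helper below is a hypothetical addition, not part of the original project, assuming the pattern observed in the captures holds:

def build_content_url(article_id, host="a3-ipv6.pstatp.com"):
    # assumes the pattern seen in the captured links: the article ID appears twice in the path
    return "https://{host}/article/content/25/1/{id}/{id}/1/0/0".format(host=host, id=article_id)

# example: build_content_url("6848145029051974155") reproduces one of the captured URLs

Such a helper could feed start_urls, or the commented-out pagination loop, with nothing more than a list of article IDs captured from Fiddler.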

pipelines.py:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
from scrapy.pipelines.images import ImagesPipeline as ScrapyImagesPipeline  # alias to avoid shadowing the class defined below
from scrapy.exceptions import DropItem
from scrapy.http import Request

# get_media_requests and item_completed are built-in methods of Scrapy's ImagesPipeline; if you want to rename the downloaded files, override them here
# the code below can be copied and used as-is

class ImagesPipeline(ScrapyImagesPipeline):
    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            yield Request(image_url)

    def item_completed(self, results, item, info):
        image_path = [x['path'] for ok, x in results if ok]
        if not image_path:
            raise DropItem('Item contains no images')
        #item['image_paths'] = image_path
        return item

#     def file_path(self, request, response=None, info=None):
#         name = request.meta['name']    # receive the image name passed in via the request's meta
#         name = re.sub(r'[?\\*|"<>:/]', '', name)    # strip characters Windows does not allow in file names; without this you may get garbled names or failed downloads
#         filename = name + '.jpg'       # add the image file extension
#         return filename
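
If the commented-out file_path override above is enabled, two more pieces are needed: an import re at the top of pipelines.py, and a name passed through the request's meta in get_media_requests. The sketch below shows one way to wire that together; the class name NamedImagesPipeline and the naming scheme are assumptions, not part of the original project:

import re

from scrapy.http import Request
from scrapy.pipelines.images import ImagesPipeline as ScrapyImagesPipeline


class NamedImagesPipeline(ScrapyImagesPipeline):
    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            # pass a base name along with the request so file_path can use it
            yield Request(image_url, meta={'name': 'toutiao'})

    def file_path(self, request, response=None, info=None):
        # strip characters Windows does not allow in file names
        name = re.sub(r'[?\\*|"<>:/]', '', request.meta.get('name', 'image'))
        # append the last part of the URL so different images do not overwrite each other
        suffix = re.sub(r'[?\\*|"<>:/]', '', request.url.split('/')[-1])
        return '{}_{}.jpg'.format(name, suffix)

To use it, point ITEM_PIPELINES at this class instead of images.pipelines.ImagesPipeline. With everything in place, the spider is run from the project root with scrapy crawl images_toutiao.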

We have now finished crawling the Toutiao (Today's Headlines) app. It can feel hard to get started and you will run into problems at first, but once you really understand it, you will find that mobile crawling is mostly a configuration problem on top of ordinary web crawling, and not very complicated.
Recently I have been learning web development templates and building a blog site. Progress has been slow, mainly because I could not write static web pages, but I recently solved that: I found a template online and modified it myself. This taught me something: the difficulty of a task slowly piles up in our minds, sometimes to the point where we give up, yet when you actually break through you realize that is all there was to it. That mindset also applies to the difficulties we face in life.
A simple example: most people who have learned to drive have felt this. While learning, it feels all-important, and failing a test feels like failing at life; but once the license is in hand and you look back, it was nothing much. So that's it for now; if you have any questions, feel free to reach out.
This is the seventh article in the sharing series; continuous updates to come.
I really am working hard lately.

Original article: blog.csdn.net/weixin_46008828/article/details/108690179