Crawling Baidu Images at Will with a Python Crawler

I. Introduction

I have crawled plenty of static web pages before, including novels, pictures, and so on. Today I will try a dynamic web page. As we all know, Baidu Images is a dynamic page. So, go! Charge!! Charge!!!

II. Required libraries

import requests
import json
import os

III. Implementation

1. Analyzing the download link

First, open Baidu Images and search for something. Here I searched for the male god (myself, of course): Peng Yuyan (Eddie Peng).
Then open the browser's developer tools, select the XHR tab, and press Ctrl+R. You will find that as you scroll the page, data packets appear on the right, one after another.
(I didn't dare scroll too much here; the first time, the recorded GIF exceeded 5 MB because I scrolled too far.)

Then select one of the packets and view its Headers.

Copy the Request URL and paste it into Notepad; we will use it later.
It contains many, many parameters, and I don't know which ones can be omitted, so I copied them all. See the code below for details.
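As a side note, if you would rather not copy each parameter by hand, the query string of the captured URL can be split into a dict with the standard library. Here is a small sketch; the shortened URL below is a made-up stand-in for the real capture:

```python
from urllib.parse import urlsplit, parse_qsl

# a shortened, hypothetical capture of the acjson request URL
captured = 'https://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&word=test&pn=30&rn=30'

# parse_qsl returns (key, value) pairs; dict() turns them into a
# params mapping that requests.get() can reuse directly
params = dict(parse_qsl(urlsplit(captured).query))
print(params['word'], params['pn'])
```

The resulting dict can be passed straight to `requests.get(..., params=params)` after you edit the values you care about.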

That is everything we can observe directly. Next, the code will open the door to another world for us.

Charge, and we're done!

2. Code analysis

First, gather the "other parameters" mentioned above into a dict.

If you are following along, it is best to copy your own parameters.

Then send a test request and set the response encoding to 'utf-8':

import requests

# request header (copied from the browser; also used in the full code below)
header = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36'
}
url = 'https://image.baidu.com/search/acjson?'
param = {
    'tn': 'resultjson_com',
    'logid': ' 7517080705015306512',
    'ipn': 'rj',
    'ct': '201326592',
    'is': '',
    'fp': 'result',
    'queryWord': '彭于晏',
    'cl': '2',
    'lm': '-1',
    'ie': 'utf-8',
    'oe': 'utf-8',
    'adpicid': '',
    'st': '',
    'z': '',
    'ic': '',
    'hd': '',
    'latest': '',
    'copyright': '',
    'word': '彭于晏',
    's': '',
    'se': '',
    'tab': '',
    'width': '',
    'height': '',
    'face': '',
    'istype': '',
    'qc': '',
    'nc': '1',
    'fr': '',
    'expermode': '',
    'force': '',
    'cg': 'star',
    'pn': '30',
    'rn': '30',
    'gsm': '1e',
}
# set the response encoding to utf-8
response = requests.get(url=url, headers=header, params=param)
response.encoding = 'utf-8'
response = response.text
print(response)

The output looks messy, but that's okay. Let's tidy it up!

Append the following to the code above:

# parse the JSON string into a Python object
data_s = json.loads(response)
print(data_s)

The output is now much clearer than before, but still not clear enough. Why? Because the printed format is hard to read!

There are two solutions to this.

① Import the pprint library and call pprint.pprint(data_s) to get a nicely indented printout.
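To illustrate option ① without the full response, here is pprint applied to a tiny made-up stand-in for the structure Baidu returns (the dict below is invented for demonstration only):

```python
import pprint

# hypothetical miniature of the parsed response: a 'data' list of
# dicts, each carrying a 'thumbURL', plus an empty trailing element
data_s = {'data': [{'thumbURL': 'https://example.com/1.jpg'},
                   {'thumbURL': 'https://example.com/2.jpg'},
                   {}]}

# pformat returns the pretty string; pprint.pprint would print it directly
formatted = pprint.pformat(data_s, width=50)
print(formatted)
```

With a narrow width, each entry lands on its own line, which is far easier to scan than the one-line repr.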


② Use an online JSON formatter (easy to find with a quick search).

After this step, you will find that all the data we want lives under the data key!

Then extract it!

a = data_s["data"]
for i in range(len(a) - 1):  # -1 skips the empty trailing element
    data = a[i].get("thumbURL", "not exist")
    print(data)

The printed result is a list of thumbnail URLs.

At this point we are 90% done; all that remains is saving the images and tidying up the code!


3. Complete code

This part differs slightly from the snippets above; look closely and you will spot the changes!

# -*- coding: UTF-8 -*-
"""
@Author  :远方的星
@Time   : 2021/2/27 17:49
@CSDN    :https://blog.csdn.net/qq_44921056
@Tencent Cloud: https://cloud.tencent.com/developer/user/8320044
"""
import requests
import json
import os
import pprint
# create a folder to save the images
path = 'D:/百度图片'
if not os.path.exists(path):
    os.mkdir(path)
# request header (copied from the browser)
header = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36'
}
# prompt the user for the keyword and page count
keyword = input('Enter what you want to download: ')
page = input('Enter how many pages to crawl: ')
page = int(page) + 1
n = 0
pn = 1
# pn is the offset of the first image to fetch; Baidu Images loads 30 at a time as you scroll
for m in range(1, page):
    url = 'https://image.baidu.com/search/acjson?'
    param = {
        'tn': 'resultjson_com',
        'logid': ' 7517080705015306512',
        'ipn': 'rj',
        'ct': '201326592',
        'is': '',
        'fp': 'result',
        'queryWord': keyword,
        'cl': '2',
        'lm': '-1',
        'ie': 'utf-8',
        'oe': 'utf-8',
        'adpicid': '',
        'st': '',
        'z': '',
        'ic': '',
        'hd': '',
        'latest': '',
        'copyright': '',
        'word': keyword,
        's': '',
        'se': '',
        'tab': '',
        'width': '',
        'height': '',
        'face': '',
        'istype': '',
        'qc': '',
        'nc': '1',
        'fr': '',
        'expermode': '',
        'force': '',
        'cg': 'star',
        'pn': pn,
        'rn': '30',
        'gsm': '1e',
    }
    # empty list to hold the image URLs
    image_url = list()
    # set the response encoding to utf-8
    response = requests.get(url=url, headers=header, params=param)
    response.encoding = 'utf-8'
    response = response.text
    # parse the JSON string into a Python object
    data_s = json.loads(response)
    a = data_s["data"]  # extract the 'data' list
    for i in range(len(a)-1):  # skip the empty trailing element
        data = a[i].get("thumbURL", "not exist")  # .get avoids a KeyError
        image_url.append(data)

    for image_src in image_url:
        image_data = requests.get(url=image_src, headers=header).content  # raw image bytes
        image_name = '{}'.format(n + 1) + '.jpg'  # image filename
        image_path = path + '/' + image_name  # save path
        with open(image_path, 'wb') as f:  # write the bytes to disk
            f.write(image_data)
            print(image_name, 'downloaded successfully!')
        n += 1
    pn += 30  # advance to the next page of 30 results
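A small aside on the save path: concatenating with '/' works here, but os.path.join handles separators for you and is the more portable choice. This is an illustration only, not a change to the script above:

```python
import os

path = 'D:/百度图片'   # the folder the script creates
image_name = '1.jpg'   # example filename

# os.path.join inserts the correct separator for the current platform
image_path = os.path.join(path, image_name)
print(image_path)
```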

Run it, and the images are downloaded and saved one after another.
Friendly reminders:
①: One page contains 30 images.
②: You can search for all sorts of things: bridges, the moon, the sun, Hu Ge, Zhao Liying, and so on.
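Since each request returns 30 results, the offset of the first image on each page advances in steps of 30. A quick illustration of the arithmetic (the helper name is mine, not part of the script):

```python
RN = 30  # images returned per request

def page_offset(page_number):
    """Offset of the first image on a 1-indexed page."""
    return (page_number - 1) * RN + 1

# pages 1, 2, 3 start at offsets 1, 31, 61
print([page_offset(p) for p in (1, 2, 3)])
```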

IV. A few words from the blogger

I hope everyone will like, follow, and bookmark, and give this post your triple support!

Author: 远方的星
CSDN: https://blog.csdn.net/qq_44921056
Tencent Cloud: https://cloud.tencent.com/developer/column/91164
This article is for learning and exchange only; reprinting without the author's permission is prohibited, and it must not be used for any other purpose. Violators will be held responsible.

Origin: blog.csdn.net/qq_44921056/article/details/114174916