Idea:
1. Send a request to the target website to obtain the HTML source code.
2. Extract all image links from the source code.
3. Send a request to each image link to obtain the byte data.
4. Save the image locally.
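The four steps above can be sketched as a single function. This is only a minimal sketch: the real search URL, headers, and extraction pattern are filled in step by step later in the article.

```python
import os
import re
import requests

def download_images(page_url, headers, out_dir):
    """Minimal sketch of the four-step flow described above."""
    # step 1: request the page and get its source code
    html = requests.get(page_url, headers=headers).text
    # step 2: extract all image links (the pattern is derived later)
    img_urls = re.findall('"thumbURL":"(.*?)"', html)
    # steps 3 and 4: request each image and save the bytes locally
    os.makedirs(out_dir, exist_ok=True)
    for i, img_url in enumerate(img_urls, start=1):
        data = requests.get(img_url, headers=headers).content
        with open(os.path.join(out_dir, '{}.jpg'.format(i)), mode='wb') as f:
            f.write(data)
```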
Import modules:
import requests  # request library; third-party, install with: pip install requests
import re        # regex filtering; part of the Python standard library, no install needed
import os        # folder creation; also part of the standard library
1. Find the search interface:
Press F12 to open the developer tools, click Network, then Fetch/XHR, and reload the page. Clicking the request shows two query parameters: word: landscape image and queryWord: landscape image.
We can use these two query parameters to customize the image content.
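Rather than pasting the two values into a hand-built query string, requests can assemble the query string for us via the `params` argument. A sketch that only builds the URL for inspection, without sending anything:

```python
import requests

# the two query parameters observed in the developer tools
params = {'word': 'landscape image', 'queryWord': 'landscape image'}

# prepare the request without sending it, just to inspect the final URL
prepared = requests.Request('GET', 'https://image.baidu.com/search/acjson',
                            params=params).prepare()
print(prepared.url)
```

This is handy when the parameter values contain spaces or non-ASCII characters, because requests URL-encodes them automatically.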
2. Determine the interface we want to crawl.
3. Customize the URL query parameters:
We can use str.format to fill in the parameters, and the input function to accept a search term from the user.
word = input('Enter the image to search for: ')
url = 'https://image.baidu.com/search/acjson?tn=resultjson_com&logid=5853806806594529489&ipn=rj&ct=201326592&is=&fp=result&fr=ala&word={}&queryWord={}&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=&z=&ic=&hd=&latest=&copyright=&s=&se=&tab=&width=&height=&face=&istype=&qc=&nc=&expermode=&nojc=&isAsync=&pn=30&rn=30&gsm=1e&1658411978178='.format(word, word)
print(url)  # open the printed URL in a browser to check that it returns the data we need
4. Add request headers as a disguise, so the server does not identify the request as coming from a crawler:
headers = {"User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36 Edg/99.0.1150.39'}
5. Create a folder: check whether it exists, and create it only if it does not.
Then send the request and print the source code:
files = 'D:/{}/'.format(word)  # folder path to save into
if not os.path.exists(files):  # only runs when the folder is missing
    os.makedirs(files)         # create it; an existing folder is left untouched
# send the request and get the HTML source of the response
response_html = requests.get(url=url, headers=headers)
print(response_html.text)  # print the source code
6. Extract the image links from the HTML source of the response:
result = '"thumbURL":"(.*?)"'  # regular expression
img_list = re.findall(result, response_html.text)  # findall returns every match
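This interface actually returns JSON, so when the response parses cleanly, the same links can also be pulled out with the json module instead of a regex. A sketch over a made-up sample payload (the real response has many more fields):

```python
import json
import re

# hypothetical sample mimicking the shape of the real response
sample = '{"data":[{"thumbURL":"https://example.com/a.jpg"},{"thumbURL":"https://example.com/b.jpg"}]}'

# regex approach used in this article
by_regex = re.findall('"thumbURL":"(.*?)"', sample)

# JSON approach: walk the parsed structure instead of matching text
by_json = [item['thumbURL'] for item in json.loads(sample)['data'] if 'thumbURL' in item]

print(by_regex == by_json)  # → True
```

The regex is more tolerant of malformed responses; the JSON approach is more precise when the payload is well-formed.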
7. Iterate over the image links extracted by the regular expression and send a request for each:
file_name = 1  # number the images
for img_url in img_list:  # iterate over the filtered URLs
    print(img_url)  # print the address
    # send the request and get the byte data
    response = requests.get(url=img_url, headers=headers)
8. Set the image save type and save location:
    # file name and type: created folder path + name + extension
    file = files + word + str(file_name) + '.jpg'
    # create the image file and write binary data
    with open(file, mode='wb') as f:
        # write the byte data
        f.write(response.content)
    # progress message
    print(word + str(file_name) + '.jpg saved successfully')
    # increment the counter to avoid duplicate names
    file_name += 1
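The per-image request in the loop above has no error handling, so a single failed download would crash the whole run. A hedged sketch of a more defensive version, using a timeout and requests' exception hierarchy:

```python
import requests

def save_image(img_url, file, headers, timeout=10):
    """Download one image; return True on success, False on any failure."""
    try:
        response = requests.get(img_url, headers=headers, timeout=timeout)
        response.raise_for_status()  # turn 4xx/5xx responses into exceptions
    except requests.exceptions.RequestException as e:
        print('failed:', img_url, e)
        return False
    with open(file, mode='wb') as f:
        f.write(response.content)
    return True
```

In the loop, `save_image(img_url, file, headers)` would replace the bare requests.get call, and the counter would only be incremented when it returns True.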
9. The complete source code, ready to copy and use (the only prerequisite is installing the requests library):
import re        # filter URLs
import requests  # send requests
import os        # create folders

word = input('Enter the image to search for: ')
url = 'https://image.baidu.com/search/acjson?tn=resultjson_com&logid=5853806806594529489&ipn=rj&ct=201326592&is=&fp=result&fr=ala&word={}&queryWord={}&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=&z=&ic=&hd=&latest=&copyright=&s=&se=&tab=&width=&height=&face=&istype=&qc=&nc=&expermode=&nojc=&isAsync=&pn=30&rn=30&gsm=1e&1658411978178='.format(word, word)
# disguise as a browser
headers = {
    "User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36 Edg/99.0.1150.39'
}
files = 'D:/{}/'.format(word)  # folder path to save into
if not os.path.exists(files):  # only runs when the folder is missing
    os.makedirs(files)         # create it; an existing folder is left untouched
# get the HTML source
response_html = requests.get(url=url, headers=headers)
# extract the image links with a regular expression
result = '"thumbURL":"(.*?)"'  # regular expression
img_list = re.findall(result, response_html.text)  # filter
file_name = 1  # number the images
for img_url in img_list:  # iterate over the filtered URLs
    print(img_url)  # print the address
    # send the request and get the byte data
    response = requests.get(url=img_url, headers=headers)
    # file name and type: created folder path + name + extension
    file = files + word + str(file_name) + '.jpg'
    # create the image file and write binary data
    with open(file, mode='wb') as f:
        # write the byte data
        f.write(response.content)
    # progress message
    print(word + str(file_name) + '.jpg saved successfully')
    # increment the counter to avoid duplicate names
    file_name += 1
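The query string above fixes pn=30 and rn=30. On this endpoint pn appears to be the result offset and rn the page size — an assumption worth verifying in the developer tools. If that holds, more results can be fetched by varying pn; a sketch that only builds the page URLs (using a trimmed-down query string for readability):

```python
# hypothetical trimmed-down template; the full query string from the article works the same way
base = 'https://image.baidu.com/search/acjson?tn=resultjson_com&word={word}&queryWord={word}&ie=utf-8&oe=utf-8&pn={pn}&rn=30'
word = 'landscape'

# offsets 0, 30, 60 give the first three pages of 30 results each
page_urls = [base.format(word=word, pn=pn) for pn in range(0, 90, 30)]
for u in page_urls:
    print(u)
```

Each of these URLs would then go through the same request / extract / save loop as a single page.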
Let's take a look at the running results:
You can see that I searched for "Shiba Inu", and the program requested and saved each image link found in the source code.
So are the saved pictures actually of a Shiba Inu? Let's take a look:
You can see that the saved pictures are indeed of Shiba Inus, and a folder named "Shiba Inu" was created!