[Repost] Python 3 Web Crawler Development in Practice: 6.4 Analyzing Ajax to Crawl Street-Style Photos from Toutiao (Today's Headlines)

Abstract: In this section, we take Toutiao (Today's Headlines) as an example and try out the method of crawling web data by analyzing Ajax requests. The goal is to grab Toutiao's street-style photo galleries; once the crawl completes, each gallery will be downloaded into its own local folder.

1. Preparations

Before beginning this section, make sure you have installed the requests library. If not, refer to Chapter 1.

2. Crawl Analysis

Before crawling, we must first analyze the crawling logic. Open the Toutiao home page http://www.toutiao.com/, as shown in Figure 6-15.

Figure 6-15 Home page content

There is a search box in the upper right corner. Since we want to crawl street-style photos, we enter the keyword "街拍" (street style) and search. The results are shown in Figure 6-16.

Figure 6-16 Search Results

Then open the Developer Tools and look at all the network requests. First, open the very first request; its URL is the current page link http://www.toutiao.com/search/?keyword=街拍. Switch to the Preview tab to view the response body. If the page content were rendered from the result of this first request, the source code returned by it would have to contain the text shown on the page. To verify this, we can search the response for one of the result titles, such as the word "路人" (passerby), as shown in Figure 6-17.

Figure 6-17 Search Results

We find that the page source does not contain this word: the search yields zero matches. We can therefore conclude that the initial content is loaded via Ajax and then rendered with JavaScript. Next, we can filter the requests with the XHR tab and check whether there are any Ajax requests.

Sure enough, there is a conventional-looking Ajax request here. Let's check whether it contains the result data for the page.

Click to expand the data field, and we find many entries. Expanding the first one reveals a title field whose value is exactly the title of the first result on the page. Checking the other entries, each one likewise corresponds one-to-one to a result on the page, as shown in Figure 6-18.

Figure 6-18 Comparison Results

This confirms that the data is indeed loaded via Ajax.
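As a quick programmatic check of this conclusion (a minimal sketch; the site's markup and anti-crawling measures may have changed since this article was written), we can fetch the search page's raw HTML and confirm that a result title such as "路人" does not appear in it:

    import requests

    # The raw HTML of the search page should NOT contain a result title,
    # because the results are filled in later by Ajax + JavaScript rendering.
    html = requests.get('http://www.toutiao.com/search/?keyword=街拍').text
    print('路人' in html)  # expected: False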

Our goal is to grab the photos in the results. One gallery corresponds to one entry in the data field mentioned above. Each entry also has an image_detail field, a list that contains all the images in the gallery, as shown in Figure 6-19.

Figure 6-19 Image list information

Therefore, we only need to extract the url field of each item in this list and download the images. For each gallery we create a folder named after the gallery's title.

Next, we can simulate this Ajax request directly with Python, then extract the photo links and download them. Before that, though, we still need to analyze the pattern of the URLs.

Switch back to the Headers tab and look at the request URL and header information, as shown in Figure 6-20.

Figure 6-20 Request information

As we can see, this is a GET request whose URL carries the parameters offset, format, keyword, autoload, count, and cur_tab. We need to find the pattern behind these parameters so that we can conveniently construct them in our program.

Next, scroll the page to load more results. As new results load, many more Ajax requests appear in the Network panel, as shown in Figure 6-21.

Figure 6-21 Ajax requests

Looking at the parameters of these subsequent requests, we find that only offset changes while all the other parameters stay the same: the second request has offset 20, the third 40, and the fourth 60. The pattern is clear: offset is the offset into the result list, and we can infer that count is the number of items fetched per request. We can therefore use the offset parameter to control pagination, fetch the data in batches through this interface, then parse it and download the images.
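To make the pattern concrete, here is a small sketch (the endpoint and parameter values are the ones observed above) showing that the first few request URLs differ only in offset:

    # Only offset changes from request to request; count stays at 20.
    base = ('http://www.toutiao.com/search_content/?offset={}'
            '&format=json&keyword=街拍&autoload=true&count=20&cur_tab=1')
    for offset in (0, 20, 40, 60):
        print(base.format(offset))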

3. Hands-On Practice

We have just analyzed the logic of the Ajax requests; now let's implement the photo download in code.

First, we implement a method get_page() to load the result of a single Ajax request. The only parameter that changes is offset, so we pass it in as an argument. The implementation is as follows:


  
  
    import requests
    from urllib.parse import urlencode


    def get_page(offset):
        # Query parameters observed in the Ajax request; only offset varies.
        params = {
            'offset': offset,
            'format': 'json',
            'keyword': '街拍',
            'autoload': 'true',
            'count': '20',
            'cur_tab': '1',
        }
        url = 'http://www.toutiao.com/search_content/?' + urlencode(params)
        try:
            response = requests.get(url)
            if response.status_code == 200:
                return response.json()
        except requests.ConnectionError:
            return None

Here we use the urlencode() method to construct the GET parameters of the request, then fetch the link with requests. If the returned status code is 200, we call the response's json() method to convert the result to JSON and return it.
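For reference, urlencode() converts a dict into a percent-encoded query string, which is also why the Chinese keyword can be included safely:

    from urllib.parse import urlencode

    # Non-ASCII values are percent-encoded as their UTF-8 bytes.
    print(urlencode({'offset': 0, 'keyword': '街拍'}))
    # offset=0&keyword=%E8%A1%97%E6%8B%8D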

Next, we implement a parsing method that extracts every image link from the image_detail field of each entry and returns each image link together with the title of the gallery it belongs to. A generator is a good fit here. The implementation is as follows:


  
  
    def get_images(json):
        # Yield one dict per image, pairing each image URL with its gallery title.
        if json and json.get('data'):
            for item in json.get('data'):
                title = item.get('title')
                images = item.get('image_detail')
                # Some entries may have no image_detail; skip them safely.
                for image in images or []:
                    yield {
                        'image': image.get('url'),
                        'title': title
                    }
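For orientation, get_images() assumes the Ajax response has roughly the following shape (the field names are the ones inspected in Figure 6-19; the URLs below are hypothetical placeholders and other keys are omitted):

    # Hypothetical sketch of the assumed response structure:
    response_json = {
        'data': [
            {
                'title': 'Some gallery title',
                'image_detail': [
                    {'url': 'http://.../image1.jpg'},  # placeholder URL
                    {'url': 'http://.../image2.jpg'},  # placeholder URL
                ]
            }
        ]
    }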

Next, we implement a method save_image() to save the images, where item is one of the dicts returned by get_images() above. The method first creates a folder based on the item's title, then requests the image link, fetches the binary image data, and writes it to a file in binary mode. The image's file name can be the MD5 hash of its content, which eliminates duplicates. The code is as follows:


  
  
    import os
    from hashlib import md5


    def save_image(item):
        # Create a folder named after the gallery title.
        if not os.path.exists(item.get('title')):
            os.mkdir(item.get('title'))
        try:
            response = requests.get(item.get('image'))
            if response.status_code == 200:
                # Name the file by the MD5 of its content to deduplicate images.
                file_path = '{0}/{1}.{2}'.format(item.get('title'), md5(response.content).hexdigest(), 'jpg')
                if not os.path.exists(file_path):
                    with open(file_path, 'wb') as f:
                        f.write(response.content)
                else:
                    print('Already Downloaded', file_path)
        except requests.ConnectionError:
            print('Failed to Save Image')
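As a quick illustration of the naming scheme: MD5 depends only on the file content, so two downloads of the same image map to the same file name and the duplicate is skipped:

    from hashlib import md5

    # Identical bytes always produce the identical digest, hence the same file name.
    print(md5(b'same image bytes').hexdigest())
    print(md5(b'same image bytes').hexdigest())  # same digest again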

Finally, we just need to construct an array of offsets, iterate over it, extract the image links, and download them:


  
  
    from multiprocessing.pool import Pool


    def main(offset):
        json = get_page(offset)
        for item in get_images(json):
            print(item)
            save_image(item)


    GROUP_START = 1
    GROUP_END = 20

    if __name__ == '__main__':
        pool = Pool()
        groups = ([x * 20 for x in range(GROUP_START, GROUP_END + 1)])
        pool.map(main, groups)
        pool.close()
        pool.join()

Here we define the start and end of the pagination as GROUP_START and GROUP_END, and use a process pool from the multiprocessing library, calling its map() method to run the downloads in parallel.
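Since the downloads are I/O-bound, a thread pool would also work here. The multiprocessing.pool module provides a ThreadPool with the same map()/close()/join() interface, so it is a drop-in alternative (a sketch, not part of the original code):

    from multiprocessing.pool import ThreadPool

    if __name__ == '__main__':
        # Threads are usually sufficient for I/O-bound image downloads.
        pool = ThreadPool(4)
        pool.map(main, [x * 20 for x in range(GROUP_START, GROUP_END + 1)])
        pool.close()
        pool.join()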

With this, the whole program is complete. After running it, we can see the street-style photos saved into separate folders, as shown in Figure 6-22.

Figure 6-22 Saved results

Finally, here is the code for this section: https://github.com/Python3WebSpider/Jiepai

In this section, we have walked through the workflow of Ajax analysis, how to simulate Ajax pagination, and how to download the images.

Master the content of this section well; we will use this kind of analysis and crawling many more times in the practical chapters that follow.

Source: Huawei Cloud Community. Author: 崔庆才丨静觅 (Cui Qingcai)
