How to efficiently crawl 5 million records without using a scraping framework

Combine a thread pool (for multiple tasks) with multi-task asynchronous coroutines to crawl efficiently. Below, crawling audio data serves as the demonstration.

Efficiently crawling large amounts of audio data

Main idea:
(The variable names below differ from those in the code; they are only for exposition.)
1. First, collect all the audio URLs into two url_lists to simulate multiple tasks (e.g. with 5 million tasks, split them into two task lists and execute each list separately).
2. Create a coroutine function that fetches the audio data and returns a dict of (audio name, audio bytes).
3. Create callback function callback_1, which persists the audio data to disk.
4. Put the instantiated coroutine objects into coroutine_obj_list.
5. Put the wrapped coroutine tasks (i.e. task objects) into task_list, binding callback_1 at this stage.
6. Instantiate the event loop objects, one per task list.
7. Put dicts into func_args_list; each dict holds an event loop object and its coroutine tasks (i.e. the task objects).
8. Create pool_func, which runs the multi-task asynchronous coroutines to execute the crawl.
9. Instantiate and start the thread pool, handing each event loop and its tasks to pool_func to kick off the crawl.

import requests
from multiprocessing.dummy import Pool
import asyncio
import aiohttp
import os

pool = Pool(12)
url = 'https://www.ximalaya.com/revision/play/album?albumId=20337620&pageNum=1&sort=1&pageSize=30'
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36"
}

index_dic = requests.get(url,headers=headers).json()

urls1 = []
urls2 = []

# Collect the audio URLs and split them alternately into the two task lists
for i, dic in enumerate(index_dic['data']['tracksAudioPlay']):
    my_dic = {'url': dic['src'], 'title': dic['trackName']}
    if i % 2 == 0:
        urls1.append(my_dic)
    else:
        urls2.append(my_dic)

# Coroutine function: fetch one audio file and return its raw bytes
async def download(dic):
    down_dic = {}
    async with aiohttp.ClientSession() as s:  # session object supporting async requests
        async with s.get(dic['url'], headers=headers) as response:
            bytes_music = await response.read()  # raw audio bytes
            down_dic['title'] = dic['title']
            down_dic['bytes_music'] = bytes_music
            return down_dic

# Callback: persist the downloaded audio to disk
def callback_(task):
    my_dic = task.result()
    path_ = 'xxx'  # target directory
    with open(os.path.join(path_, my_dic['title']), 'wb') as f:
        f.write(my_dic['bytes_music'])
        print("%s finished!" % my_dic['title'])

async_obj_list1 = [download(i) for i in urls1]  # instantiate coroutine objects
async_obj_list2 = [download(i) for i in urls2]

# Each worker thread needs its OWN event loop: calling asyncio.get_event_loop()
# twice in the main thread would return the same loop, and one loop cannot run
# in two threads at once.
for_loop1 = asyncio.new_event_loop()
for_loop2 = asyncio.new_event_loop()

task_list1 = []
task_list2 = []
for async_obj in async_obj_list1:
    task = asyncio.ensure_future(async_obj, loop=for_loop1)  # wrap into a task bound to loop 1
    task.add_done_callback(callback_)
    task_list1.append(task)

for async_obj in async_obj_list2:
    task = asyncio.ensure_future(async_obj, loop=for_loop2)  # wrap into a task bound to loop 2
    task.add_done_callback(callback_)
    task_list2.append(task)

func_args_list = [{'for_loop': for_loop1, 'task_list': task_list1},
                  {'for_loop': for_loop2, 'task_list': task_list2}]

def pool_func(dic):
    loop_ = dic['for_loop']
    task_list = dic['task_list']
    asyncio.set_event_loop(loop_)  # make this loop current in the worker thread
    loop_.run_until_complete(asyncio.wait(task_list))

pool.map(pool_func, func_args_list)
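For genuinely large jobs, the two-list split generalizes to N chunks, one worker thread with its own private event loop per chunk. A minimal sketch under stated assumptions: fetch() is a stand-in for the real aiohttp download, and the urls here are placeholder strings.

```python
import asyncio
from multiprocessing.dummy import Pool

N_WORKERS = 4

def chunk(items, n):
    """Split items into n roughly equal lists, round-robin."""
    buckets = [[] for _ in range(n)]
    for i, item in enumerate(items):
        buckets[i % n].append(item)
    return buckets

async def fetch(url):
    # Stand-in for the real aiohttp request in download()
    await asyncio.sleep(0)
    return url

async def run_all(url_chunk):
    # Run every download in this chunk concurrently on one event loop
    return await asyncio.gather(*(fetch(u) for u in url_chunk))

def run_chunk(url_chunk):
    # asyncio.run creates a fresh event loop for this worker thread;
    # a single loop shared across threads cannot run concurrently.
    return asyncio.run(run_all(url_chunk))

urls = ['u%d' % i for i in range(10)]
with Pool(N_WORKERS) as workers:
    results = workers.map(run_chunk, chunk(urls, N_WORKERS))
# results is a list of N_WORKERS lists, one per chunk
```

The round-robin split keeps the chunks balanced even when the total is not divisible by N_WORKERS, and each thread owning its loop avoids the shared-loop problem entirely.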

Reprinted from www.cnblogs.com/lgw1171435560/p/11106151.html