Scraping bilibili's completed-anime section data

This is bilibili's completed-anime page. It lives under bilibili → 番剧 (anime) → 完结动画 (completed anime). Today we'll scrape the completed anime in this section to look at their view counts, coin counts, and other stats.

Scraping approach:

Bilibili is quite friendly to crawlers: it exposes dedicated data APIs, documented here:

https://github.com/uupers/BiliSpider/wiki


That wiki lists the APIs for bilibili's various sections. Since we are scraping data for a second-level sub-section, the link on the right of the page, [Bilibili API 二级分区视频分页数据(投稿时间逆序)] (paginated video data for a second-level section, newest first), documents the API we need. It returns a JSON document.

That JSON contains exactly the information we want, so instead of parsing web pages we fetch the JSON from this API, extract the fields we care about, and save them to a CSV file.

The overall flow is: fetch the video-info JSON -> extract the fields from the JSON -> save the data as CSV.
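
Before wiring everything up, it helps to peek at one page of the API response. The sketch below fetches page 1 and prints the fields the scraper will later read (the live response contains more fields than shown here):

import requests

# Page 1 of the completed-anime sub-section (rid=32), 50 entries per page.
url = 'http://api.bilibili.com/x/web-interface/newlist?rid=32&pn=1&ps=50'
r = requests.get(url, timeout=5)
archives = r.json()['data']['archives']  # list of video entries

first = archives[0]
# Each entry has an id, a title, and a 'stat' dict holding the counters we want.
print(first['aid'], first['title'])
print(first['stat']['view'], first['stat']['danmaku'],
      first['stat']['reply'], first['stat']['favorite'], first['stat']['coin'])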

Implementation:

This function builds the URLs of the JSON pages we need to fetch:

def get_url():
    # rid=32 is the completed-anime sub-section; pn is the page number, ps=50 items per page.
    url = 'http://api.bilibili.com/x/web-interface/newlist?rid=32&pn='
    for i in range(1, 328):
        urls.append(url + str(i) + '&ps=50')
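
With rid=32 and ps=50, the first generated URL is http://api.bilibili.com/x/web-interface/newlist?rid=32&pn=1&ps=50, and range(1, 328) covers pages 1 through 327, i.e. up to 16,350 videos in total.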

Extract the information from the JSON:

def get_message(url):
    print(url)
    time.sleep(1)  # one request per second per thread; with 4 threads that is 4 requests per second
    try:
        r = requests.get(url, timeout=5)
        data = json.loads(r.text)['data']['archives']
        for j in range(len(data)):
            content = {}
            content['aid'] = data[j]['aid']
            content['title'] = data[j]['title']
            content['view'] = data[j]['stat']['view']
            content['danmaku'] = data[j]['stat']['danmaku']
            content['reply'] = data[j]['stat']['reply']
            content['favorite'] = data[j]['stat']['favorite']
            content['coin'] = data[j]['stat']['coin']
            comic_list.append(content)
    except Exception as e:
        print(e)
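
All four worker threads append to the shared comic_list. In CPython a single list.append is effectively atomic because of the GIL, so this works as written; if you prefer to make the synchronization explicit, a lock can be used (an optional sketch with a hypothetical save_content helper, not part of the original code):

import threading

list_lock = threading.Lock()

def save_content(content):
    # Explicitly guard the shared list; behaves the same as the plain append above.
    with list_lock:
        comic_list.append(content)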

Then write the results to a CSV file:

def write_to_file(comic_list):  # write the scraped rows to a CSV file
    with open(r'bilibili-comic.csv', 'w', newline='', encoding='utf-8') as f:
        fieldnames = ['aid', 'title', 'view', 'danmaku', 'reply', 'favorite', 'coin']
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        try:
            writer.writerows(comic_list)
        except Exception as e:
            print(e)
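
One practical note: a plain utf-8 CSV containing Chinese titles may display garbled when opened directly in Excel. If that matters, writing a byte-order mark is a one-line change (an optional tweak, not part of the original code):

    with open(r'bilibili-comic.csv', 'w', newline='', encoding='utf-8-sig') as f: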

My computer has 4 cores, so I create a thread pool with 4 workers and use map to run get_message over all the URLs:

get_url()
pool = ThreadPool(4)
pool.map(get_message, urls)
pool.close()
write_to_file(comic_list)
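
pool.map blocks until every URL has been processed, so write_to_file only runs after all the pages are done. Because the threads finish in arbitrary order, the rows in comic_list come out unsorted; if you want a stable order in the CSV, you can sort before writing (optional, not part of the original code):

comic_list.sort(key=lambda c: c['aid'])  # e.g. order rows by video id
write_to_file(comic_list)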

The full code:

import requests
import json
import csv
from multiprocessing.dummy import Pool as ThreadPool  # import the thread pool from the multithreading library
import time


comic_list = []
urls = []


def get_url():
    # rid=32 is the completed-anime sub-section; pn is the page number, ps=50 items per page.
    url = 'http://api.bilibili.com/x/web-interface/newlist?rid=32&pn='
    for i in range(1, 328):
        urls.append(url + str(i) + '&ps=50')


def get_message(url):
    print(url)
    time.sleep(1)  # one request per second per thread; with 4 threads that is 4 requests per second
    try:
        r = requests.get(url, timeout=5)
        data = json.loads(r.text)['data']['archives']
        for j in range(len(data)):
            content = {}
            content['aid'] = data[j]['aid']
            content['title'] = data[j]['title']
            content['view'] = data[j]['stat']['view']
            content['danmaku'] = data[j]['stat']['danmaku']
            content['reply'] = data[j]['stat']['reply']
            content['favorite'] = data[j]['stat']['favorite']
            content['coin'] = data[j]['stat']['coin']
            comic_list.append(content)
    except Exception as e:
        print(e)


def write_to_file(comic_list):  # write the scraped rows to a CSV file
    with open(r'bilibili-comic.csv', 'w', newline='', encoding='utf-8') as f:
        fieldnames = ['aid', 'title', 'view', 'danmaku', 'reply', 'favorite', 'coin']
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        try:
            writer.writerows(comic_list)
        except Exception as e:
            print(e)


get_url()
pool = ThreadPool(4)
pool.map(get_message, urls)
pool.close()
write_to_file(comic_list)
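
With the CSV in hand, a quick look at views and coins takes only a few lines; here is a small analysis sketch using pandas (a separate step, not part of the scraper itself):

import pandas as pd

df = pd.read_csv('bilibili-comic.csv')
# Top 10 entries in the section by view count, with coin counts alongside.
print(df.sort_values('view', ascending=False).head(10)[['title', 'view', 'coin']])
# Totals across the whole section.
print(df[['view', 'coin']].sum())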

When scraping, it is worth checking whether a site exposes its own API; finding one, as we did here, makes the whole process much simpler.

Reposted from blog.csdn.net/qq_39531895/article/details/84984806