Web Scraping in Practice, Part 9: Crawling the Entire Zhihu Hot List and Making Word Clouds

Disclaimer: this post is only a simple web-scraping demo and does not involve any commercial use.

1. Introduction

Today is the combined National Day and Mid-Autumn Festival holiday, but being the tech geek that I am, I spent it in my dorm. In the afternoon I opened Zhihu out of habit and found that Jiang Ziya (《姜子牙》) had shot to the top of the hot list; I had also been meaning to watch this domestic animated film. Since I had no idea what people thought of it, I decided to use a crawler plus word clouds to visualize the reviews of Jiang Ziya and then make up my mind about whether to see it. While I was at it, I also scraped the other hot-list questions and all of their answers. Below is a detailed record of crawling the entire hot list.

2. The Crawling Process

2.1 Getting the Answer-Page Links for All Questions

First, open the Zhihu hot list page (shown below). The hot list contains a total of 50 questions, and all the answers to these questions are the targets of our crawl.
(Figure: Zhihu hot list)

Right-click any question and choose Inspect; you can see that all of its elements are contained in a single <section>...</section> block:
(Figure: question element)
Expanding one of these elements reveals the question and the link it points to, which is exactly the link we need:
(Figure: element detail)
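As a minimal sketch of this extraction (assuming the hot-list HTML has already been fetched with a logged-in session and that, as observed above, each entry is a <section class="HotItem"> whose <a> carries the href and title):

from bs4 import BeautifulSoup

def parse_hot_list(html):
    """Return [title, url] pairs for every question on the hot-list page."""
    soup = BeautifulSoup(html, 'lxml')
    items = []
    for section in soup.find_all('section', attrs={'class': 'HotItem'}):
        link = section.find('a')  # the question title and link sit on the <a>
        items.append([link.get('title'), link.get('href')])
    return items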

2.2 Getting All Answers on a Single Question Page

With the links to all hot-list questions taken care of, the next problem is how to crawl every answer on a single question page. Opening the link for the Jiang Ziya question, we see the following page:
(Figure: question page)
Note that the page does not show all of the answers at once: new answers only appear once the scroll bar reaches the bottom, i.e. the page uses Ajax dynamic loading. So how do we handle this? In the developer tools I filtered the requests by type XHR and, sure enough, found the answer data (in JSON format):
(Figure: answer JSON data)
I scrolled down a few more times and collected the following request URLs:

https://www.zhihu.com/api/v4/questions/337873977/answers?include=data%5B%2A%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_labeled%2Cis_recognized%2Cpaid_info%2Cpaid_info_content%3Bdata%5B%2A%5D.mark_infos%5B%2A%5D.url%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%2A%5D.topics&limit=5&offset=5&platform=desktop&sort_by=default
https://www.zhihu.com/api/v4/questions/337873977/answers?include=data%5B%2A%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_labeled%2Cis_recognized%2Cpaid_info%2Cpaid_info_content%3Bdata%5B%2A%5D.mark_infos%5B%2A%5D.url%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%2A%5D.topics&limit=5&offset=10&platform=desktop&sort_by=default
https://www.zhihu.com/api/v4/questions/337873977/answers?include=data%5B%2A%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_labeled%2Cis_recognized%2Cpaid_info%2Cpaid_info_content%3Bdata%5B%2A%5D.mark_infos%5B%2A%5D.url%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%2A%5D.topics&limit=5&offset=15&platform=desktop&sort_by=default

Comparing these URLs, the only field that changes is offset, and it increases by 5 each time, so we only need to vary the offset field of this URL to obtain every answer to the question. I also opened a few other questions and got the following URLs:

https://www.zhihu.com/api/v4/questions/337873977/answers?include=data%5B%2A%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_labeled%2Cis_recognized%2Cpaid_info%2Cpaid_info_content%3Bdata%5B%2A%5D.mark_infos%5B%2A%5D.url%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%2A%5D.topics&limit=5&offset=5&platform=desktop&sort_by=default
https://www.zhihu.com/api/v4/questions/423719681/answers?include=data%5B%2A%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_labeled%2Cis_recognized%2Cpaid_info%2Cpaid_info_content%3Bdata%5B%2A%5D.mark_infos%5B%2A%5D.url%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%2A%5D.topics&limit=5&offset=5&platform=desktop&sort_by=default
https://www.zhihu.com/api/v4/questions/423737325/answers?include=data%5B%2A%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_labeled%2Cis_recognized%2Cpaid_info%2Cpaid_info_content%3Bdata%5B%2A%5D.mark_infos%5B%2A%5D.url%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%2A%5D.topics&limit=5&offset=5&platform=desktop&sort_by=default

As you can see, the answer URLs of different questions differ only in the question ID and each question's own offset. So when we visit a question's answer page, we only need to grab its question ID and its answer count, and we can then request the JSON data containing all of its answers.
Note: I won't go into the details of extracting the author and the answer text from the JSON data; a rough sketch follows below.
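As a rough sketch only (assuming the response keeps the structure seen in the captured JSON above, i.e. a top-level data list whose items carry author.name and an HTML content field; the include parameter is trimmed here to the fields actually read, and the full captured parameter can be substituted verbatim):

import requests

ANSWER_API = ('https://www.zhihu.com/api/v4/questions/{qid}/answers'
              '?include=data%5B%2A%5D.content%3Bdata%5B%2A%5D.author.follower_count'
              '&limit=5&offset={offset}&platform=desktop&sort_by=default')

def fetch_answer_page(qid, offset, headers):
    """Fetch one page of (at most 5) answers and return (author, content) pairs."""
    resp = requests.get(ANSWER_API.format(qid=qid, offset=offset), headers=headers)
    resp.raise_for_status()
    return [(a['author']['name'], a['content']) for a in resp.json()['data']]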

2.3 Saving the Crawl Results

During the crawl, since the first step is to obtain the link of every hot-list question, I saved each question together with the link to its answer page in a CSV file with the following fields:

Field 1: title — the question title
Field 2: url — the link to the question's answer page

In addition, the answers to each question are saved in their own CSV file, and each of these files contains the following fields:

Field 1: author — the answerer's name
Field 2: content — the answer text (only the Chinese characters are kept)
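For reference, a minimal sketch of writing the two kinds of files with pandas (the rows below are placeholder values; the file names follow the hot_{index}.csv scheme used by the script later on, and the ./datas/ directory is assumed to exist):

import pandas as pd

# Hot-list index file: one row per question (placeholder row)
topics = [['example question title', 'https://www.zhihu.com/question/337873977']]
pd.DataFrame(topics, columns=['title', 'url']).to_csv('./datas/hot_0.csv', index=False)

# Answers of one question: one row per answer (placeholder row)
answers = [['example author', 'example answer text']]
pd.DataFrame(answers, columns=['author', 'content']).to_csv('./datas/hot_1.csv', index=False)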

2.4 Summary of the Full Workflow

To sum up, the entire crawl is now clear: first obtain the links of all hot-list questions (grabbing the question IDs along the way), then open each question page to get its answer count, then construct the API URLs to crawl the answers, and finally save the answers in CSV format:
(Figure: full crawling workflow)

3. Example Program and Results

import requests
from bs4 import BeautifulSoup
import pandas as pd
import re
import json
import traceback

chinese = '[\u4e00-\u9fa5]+' # regex pattern for extracting Chinese characters

headers = {
    'user-agent': 'your own User-Agent',
    'cookie': 'your own Zhihu login cookie'
}

def getHots(url='https://www.zhihu.com/hot'):
    """
    功能:获取知乎热榜所有话题的id
    """
    topics = []
    response = requests.get(url=url,headers=headers)
    if response.status_code == 200:
        soup = BeautifulSoup(response.content,'lxml',from_encoding='utf-8')
        hots = soup.findAll('section',attrs={'class':'HotItem'})
        for hot in hots:
            hot_url = hot.find('a').get('href')
            hot_c = hot.find('a').get('title')
            print(hot_c,hot_url)
            topics.append([hot_c,hot_url])
    Saver(topics,0,['title','url'])
    return topics

def getNumber(topic_url):
    """
    功能:获取某个问题的回答数
    """
    response = requests.get(topic_url,headers=headers)
    if response.status_code == 200:
        soup = BeautifulSoup(response.content,'lxml',from_encoding='utf-8')
        string = soup.find('h4',attrs={'class':'List-headerText'}).get_text()
        number = ''.join([s for s in string if s.isdigit()])
        return int(number)
    
    return 0

def getAnswers(question_id,number):
    """
    功能:获取某个问题各个回答
    question_id:话题id
    number:回答数量
    """
    outcome = []
    i = 0
    while i * 5 < number:
        try:
            url = 'https://www.zhihu.com/api/v4/questions/{}/answers?include=data%5B%2A%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_labeled%2Cis_recognized%2Cpaid_info%2Cpaid_info_content%3Bdata%5B%2A%5D.mark_infos%5B%2A%5D.url%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%2A%5D.topics&limit=5&offset={}&platform=desktop&sort_by=default'.format(question_id,i*5)
            response = requests.get(url,headers=headers)
            if response.status_code == 200:
                js = json.loads(response.text)
                for answer in js['data']:
                    author = answer['author']['name']
                    content = ''.join(re.findall(chinese,answer['content']))
                    print(author,content)
                    outcome.append([author,content])
            i += 1
        except Exception:
            i += 1
            traceback.print_exc()
            print('web spider fails')
    return outcome


def Saver(datas,idx,columns):
    """
    功能:保存数据为csv格式
    index:话题索引
    """
    df = pd.DataFrame(datas,columns=columns)
    df.to_csv('./datas/hot_{}.csv'.format(idx),index=False)



def Spider():
    """
    功能:爬虫主函数
    """
    topics = getHots()
    for idx,topic in enumerate(topics):
        print('crawling: {} url: {}'.format(topic[0],topic[1]))
        # extract the question ID from the URL
        question_id = topic[1].split('/')[-1]
        # get the number of answers
        number = getNumber(topic[1])
        # crawl all answers to this question
        datas = getAnswers(question_id,number)
        # save the answers as a .csv file
        Saver(datas,idx + 1,['author','content'])

if __name__ == "__main__":
    Spider()

In the end, each of the 50 crawled questions was saved as its own CSV file. I then ran jieba word segmentation on each one, removed stop words, and used the wordcloud module to generate a word cloud for each. Below is the word cloud for one of the Jiang Ziya questions:
(Figure: word cloud for the Jiang Ziya question)
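The word-cloud step is not part of the script above; here is a minimal sketch of it, assuming a local Chinese font file and a plain-text stop-word list whose paths (simhei.ttf, stopwords.txt) are placeholders:

import jieba
import pandas as pd
from wordcloud import WordCloud

df = pd.read_csv('./datas/hot_1.csv')  # answers of one question
text = ' '.join(df['content'].dropna().astype(str))

# tokenize with jieba and drop stop words (stopwords.txt is a placeholder path)
stopwords = set(open('stopwords.txt', encoding='utf-8').read().split())
words = [w for w in jieba.cut(text) if w.strip() and w not in stopwords]

# a Chinese font is required, otherwise the characters render as empty boxes
wc = WordCloud(font_path='simhei.ttf', width=800, height=600, background_color='white')
wc.generate(' '.join(words))
wc.to_file('hot_1_wordcloud.png')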

4. Conclusion

The complete project and data are available at zhihu_answer_demo.
That's all for this post. If you found it helpful, please give it a like or a follow; your support is what keeps me writing. And of course, if anything is wrong, feel free to point it out!


Reprinted from blog.csdn.net/qq_42103091/article/details/108897911