Crawling Zhihu User Information

Required Environment

Python 3 + Scrapy + PyCharm + MongoDB

Approach

The crawl is recursive: start from a well-known Zhihu user ("big V"), fetch their followee list and follower list, and scrape the user information in both. Then, for each user in those lists, fetch their followee and follower lists and scrape those users in turn. Repeating these steps keeps the crawl expanding; a minimal sketch of the idea is shown below.
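Purely as an illustration of this traversal (the actual Scrapy spider appears in step 3), here is a minimal sketch in which fetch_followees and fetch_followers are hypothetical stand-ins for the Zhihu API calls:

from collections import deque


def fetch_followees(user):
    """Hypothetical stand-in: return the url_tokens this user follows."""
    return []


def fetch_followers(user):
    """Hypothetical stand-in: return the url_tokens of this user's followers."""
    return []


def crawl(start_user):
    seen = {start_user}
    queue = deque([start_user])
    while queue:
        user = queue.popleft()
        # ... scrape and store this user's profile here ...
        for neighbor in fetch_followees(user) + fetch_followers(user):
            if neighbor not in seen:  # avoid revisiting users
                seen.add(neighbor)
                queue.append(neighbor)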

Detailed Steps

1. Pick a lucky, well-known Zhihu user at random and start the analysis from their followee and follower lists.

By capturing the requests in the browser's developer tools, we can find the request URL.
2. Analyzing the URL reveals its pattern.
The first half of the URL is:
https://www.zhihu.com/api/v4/members/tianshansoft/followees?include=
The second half (parameters such as include, offset, and limit) can be found under Query String Parameters in the developer tools.
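As a quick check, the complete request URL can be reassembled from these pieces; the include value below is the one used later in the spider:

follows_url = ('https://www.zhihu.com/api/v4/members/{user}/followees'
               '?include={include}&offset={offset}&limit={limit}')
include = ('data[*].answer_count,articles_count,gender,follower_count,'
           'is_followed,is_following,badge[?(type=best_answerer)].topics')
print(follows_url.format(user='tianshansoft', include=include, offset=0, limit=20))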

Looking further at the first half of the URL, we can also learn the following:
followees refers to the people this user follows, followers refers to the people who follow them, and tianshansoft is actually the user's url_token.
The user information we need is returned as JSON, so we use the json library to process the response.
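A minimal sketch of how such a response is handled, using a simplified, made-up fragment of the followee-list JSON (the real response carries many more fields per user):

import json

sample = '''{
  "data": [{"url_token": "tianshansoft", "answer_count": 12, "follower_count": 3400}],
  "paging": {"is_end": false,
             "next": "https://www.zhihu.com/api/v4/members/tianshansoft/followees?offset=20&limit=20"}
}'''

result = json.loads(sample)
for user in result['data']:
    print(user['url_token'])            # each entry identifies another user to crawl
if not result['paging']['is_end']:
    print(result['paging']['next'])     # URL of the next page of the list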

3. Construct the URLs and crawl the data recursively

Part of the code is shown below:

# -*- coding: utf-8 -*-
"""
Crawl Zhihu user information.
"""
import json

from scrapy import Request, Spider

from zhihuuser.items import UserItem


class ZhihuSpider(Spider):
    name = 'zhihu'
    allowed_domains = ['www.zhihu.com']
    start_urls = ['http://www.zhihu.com/']
    follows_url = 'https://www.zhihu.com/api/v4/members/{user}/followees?include={include}&offset={offset}&limit={limit}'
    followers_url = 'https://www.zhihu.com/api/v4/members/{user}/followers?include={include}&offset={offset}&limit={limit}'
    start_user = 'tianshansoft'
    follows_query = 'data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics'
    followers_query = 'data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics'

    user_url = 'https://www.zhihu.com/api/v4/members/{user}?include={include}'
    user_query = 'allow_message,is_followed,is_following,is_org,is_blocking,employments,answer_count,follower_count,articles_count,gender,badge[?(type=best_answerer)].topics'

    def start_requests(self):
        # Seed the crawl with the starting user's profile, followee list and follower list.
        yield Request(self.user_url.format(user=self.start_user, include=self.user_query), callback=self.parse_user)
        yield Request(self.follows_url.format(user=self.start_user, include=self.follows_query, offset=0, limit=20),
                      callback=self.parse_follows)
        yield Request(self.followers_url.format(user=self.start_user, include=self.followers_query, offset=0, limit=20),
                      callback=self.parse_followers)

    def parse_user(self, response):
        # The profile endpoint returns a single JSON object describing the user.
        result = json.loads(response.text)
        item = UserItem()
        for field in item.fields:
            if field in result.keys():
                item[field] = result.get(field)
        yield item

        # Recurse: also request this user's followee and follower lists.
        yield Request(self.follows_url.format(user=result.get('url_token'), include=self.follows_query, limit=20, offset=0), self.parse_follows)
        yield Request(
            self.followers_url.format(user=result.get('url_token'), include=self.followers_query, limit=20, offset=0),
            self.parse_followers)

    def parse_follows(self, response):
        result1 = json.loads(response.text)

        # Each entry in 'data' is a followed user; request their full profile.
        if 'data' in result1.keys():
            for result in result1.get('data'):
                yield Request(self.user_url.format(user=result.get('url_token'), include=self.user_query), self.parse_user)

        # Follow the 'next' link until the API reports the last page.
        if 'paging' in result1.keys() and result1.get('paging').get('is_end') is False:
            next_page = result1.get('paging').get('next')
            yield Request(next_page, self.parse_follows)

    def parse_followers(self, response):
        result1 = json.loads(response.text)

        # Each entry in 'data' is a follower; request their full profile.
        if 'data' in result1.keys():
            for result in result1.get('data'):
                yield Request(self.user_url.format(user=result.get('url_token'), include=self.user_query), self.parse_user)

        # Follow the 'next' link until the API reports the last page.
        if 'paging' in result1.keys() and result1.get('paging').get('is_end') is False:
            next_page = result1.get('paging').get('next')
            yield Request(next_page, self.parse_followers)
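The spider above references a UserItem from zhihuuser/items.py, which the partial listing does not show; a minimal sketch, with fields guessed from the include queries, could look like this:

# -*- coding: utf-8 -*-
from scrapy import Field, Item


class UserItem(Item):
    # Fields assumed from the user_query include string; extend as needed.
    url_token = Field()
    name = Field()
    gender = Field()
    answer_count = Field()
    articles_count = Field()
    follower_count = Field()
    employments = Field()
    badge = Field()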

4. Store the crawled data in MongoDB through an item pipeline

Part of the code is shown below:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import pymongo


class MongoPipeline(object):
    collection_name = 'users'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # Upsert keyed on url_token so a user crawled twice is stored only once.
        self.db[self.collection_name].update_one({'url_token': item['url_token']}, {'$set': dict(item)}, upsert=True)
        return item
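For the pipeline to take effect, it has to be enabled in settings.py together with the MongoDB settings that from_crawler reads; a minimal sketch (the database name 'zhihu' is only an example, and the module path assumes the project is named zhihuuser):

# settings.py (relevant part only)
ITEM_PIPELINES = {
    'zhihuuser.pipelines.MongoPipeline': 300,
}
MONGO_URI = 'localhost'
MONGO_DATABASE = 'zhihu'

With this in place, the crawl is started with scrapy crawl zhihu.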

Final Results

Because the crawl is recursive (who knows how long it would take to finish), I stopped it before it was done. After roughly 5 minutes it had collected a total of 5,695 user records.


