Crawling All Zhihu Questions and Answers with Scrapy

Simulating the Login

Zhihu requires you to log in before you can browse it.
So the first step in crawling Zhihu is to simulate the login; here we use Selenium to do it.

The start_requests function is the entry point of a Scrapy spider, so the simulated login belongs there. We override start_requests:

def start_requests(self):
    import os
    import time
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    # Switch the Windows console to UTF-8 and start Chrome with remote debugging enabled
    os.system('chcp 65001')
    os.popen('chrome.exe --remote-debugging-port=9222 --user-data-dir="C:\\selenum\\AutomationProfile"')

    # Attach Selenium to the already-running Chrome instance
    chrome_options = Options()
    chrome_options.add_experimental_option("debuggerAddress", "127.0.0.1:9222")
    browser = webdriver.Chrome(
        executable_path='F:\\BaiduNetdiskDownload\\ArticleSpider\\chromedriver.exe',
        options=chrome_options)

    # Fill in the login form and submit it
    browser.get("https://www.zhihu.com/signin")
    browser.find_element_by_css_selector(".SignFlow-accountInput.Input-wrapper input").send_keys("your-username")
    time.sleep(3)
    browser.find_element_by_css_selector(".SignFlow-password input").send_keys("your-password")
    browser.find_element_by_css_selector(".Button.SignFlow-submitButton").click()
    time.sleep(10)

    # Copy the cookies out of the browser session
    Cookies = browser.get_cookies()
    print("----", Cookies)
    cookie_dict = {}
    for cookie in Cookies:
        cookie_dict[cookie['name']] = cookie['value']
    browser.close()

    # callback defaults to the parse function
    return [scrapy.Request(url=self.start_urls[0], dont_filter=True, headers=self.headers, cookies=cookie_dict)]

After the simulated login we collect the cookies and set COOKIES_ENABLED to True in settings.py. This way we only need to attach the cookies to the first request; every subsequent request will carry them automatically.

COOKIES_ENABLED = True

Also, remember to send request headers, otherwise Zhihu will identify the spider as a crawler.
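For reference, the headers attribute used in the requests above might look something like the sketch below; the exact User-Agent string is only an example, any real browser UA will do:

headers = {
    "HOST": "www.zhihu.com",
    "Referer": "https://www.zhihu.com",
    # Example UA only; copy the one from your own browser
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36",
}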

Crawling All Questions

Since Zhihu does not provide an entry point that lists every question, we use a depth-first crawling strategy.

After logging in, we parse out all the question URLs in the parse function.
Inspection shows that question URLs have the form: https://www.zhihu.com/question/<question id>

We use the filter function to drop URLs that do not start with https. If a remaining URL matches the /question/*** pattern, it is handed to the downloader; otherwise it is followed and parsed further.

def parse(self, response):
    '''
    Extract every URL from the HTML page and follow it.
    URLs that match the /question/*** pattern are handed to the downloader.
    '''
    # assumes `import re` and `from urllib import parse` at module level
    all_urls = response.xpath('//a/@href').extract()
    all_urls = [parse.urljoin(response.url, url) for url in all_urls]
    all_urls = filter(lambda x: True if x.startswith('https') else False, all_urls)  # drop URLs that do not start with https
    for url in all_urls:
        match_obj = re.match(r'(.*zhihu.com/question/(\d+))(/|$).*', url)
        if match_obj:
            # A question URL: download it and parse the question page
            request_url = match_obj.group(1)
            request_id = match_obj.group(2)
            yield scrapy.Request(request_url, headers=self.headers, meta={"zhihuid": request_id}, callback=self.parse_question)
        else:
            # Not a question URL: follow the link and keep crawling
            yield scrapy.Request(url, headers=self.headers, callback=self.parse)

Once we have a question URL, we parse out the fields we need:

def parse_question(self, response):
    zhihu_id = response.meta.get("zhihuid", "")

    item_loader = ArticleItemLoader(item=ZhihuQuestionItem(), response=response)
    item_loader.add_xpath('title', '//h1[@class="QuestionHeader-title"]/text()')
    item_loader.add_xpath('content', '//div[@class="QuestionRichText QuestionRichText--expandable QuestionRichText--collapsed"]//span[@class="RichText ztext"]/text()')
    item_loader.add_value('url', response.url)
    item_loader.add_value('zhihu_id', zhihu_id)
    item_loader.add_xpath('answer_num', '//h4[@class="List-headerText"]/span/text()')
    item_loader.add_xpath('comments_num', '//div[@class="QuestionHeader-Comment"]/button/text()')
    item_loader.add_xpath('watch_user_num', '//div[@class="NumberBoard-item"]//strong[@class="NumberBoard-itemValue"]/text()')
    item_loader.add_xpath('topics', '//div[@class="Tag QuestionTopic"]//div[@class="Popover"]//text()')  # class names in the XPath must be written out in full
    question_item = item_loader.load_item()
    yield question_item

At the same time, define the related items in items.py:

import re

import scrapy
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst, MapCompose, Join


class ArticleItemLoader(ItemLoader):
    # Custom ItemLoader subclass
    default_output_processor = TakeFirst()  # default output processor: take the first element of the extracted list


def get_nums(value):
    # Extract the first number found in the string; return 0 if there is none
    match_re = re.match(r".*?(\d+).*", value)
    if match_re:
        nums = int(match_re.group(1))
    else:
        nums = 0
    return nums


class ZhihuQuestionItem(scrapy.Item):
    # Item for a Zhihu question
    zhihu_id = scrapy.Field()
    topics = scrapy.Field(
        output_processor=Join(",")
    )
    url = scrapy.Field()
    title = scrapy.Field()
    content = scrapy.Field()
    answer_num = scrapy.Field()
    comments_num = scrapy.Field(
        input_processor=MapCompose(get_nums)
    )
    watch_user_num = scrapy.Field()
    crawl_time = scrapy.Field()

Crawling All Answers to a Question

First, let's analyze the API.
Open the browser's developer tools, then click "View all answers" on the page or keep scrolling down.
A promising endpoint shows up in the Network panel.
Looking at its response, we see it returns a block of JSON.
The payload has two parts: data and paging.
Inside paging there are is_end and next: is_end tells whether the current request is the last one, and next is the URL of the next request.
With these two fields we can easily collect every answer.
The crawl logic is simply: check is_end, and as long as it is False, keep requesting the URL in next.

Next, let's look at the parameter structure of the answers URL:
include is fixed, limit caps how many answers are returned per request, and offset is the offset into the result set.
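For illustration, an abbreviated response body looks roughly like this (the field values are made up, and only a few of the fields are shown):

{
    "data": [
        {
            "id": 123456789,
            "url": "https://www.zhihu.com/api/v4/answers/123456789",
            "content": "...",
            "voteup_count": 10,
            "comment_count": 2,
            "created_time": 1548000000,
            "updated_time": 1548000000
        }
    ],
    "paging": {
        "is_end": false,
        "next": "https://www.zhihu.com/api/v4/questions/<question id>/answers?include=...&limit=5&offset=5"
    }
}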

With the analysis done, we can write the code:

start_answer_url = 'https://www.zhihu.com/api/v4/questions/{0}/answers?include=data%5B%2A%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_labeled%3Bdata%5B%2A%5D.mark_infos%5B%2A%5D.url%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%2A%5D.topics&limit={1}&offset={2}&platform=desktop&sort_by=default'

def parse_question(self, response):
    zhihu_id = response.meta.get("zhihuid", "")
    # ... the question-parsing code shown above is omitted here ...
    yield scrapy.Request(self.start_answer_url.format(zhihu_id, 5, 0), headers=self.headers, callback=self.parse_answer)

def parse_answer(self, response):
    # assumes `import json` and `import datetime` at module level
    ans_json = json.loads(response.text)
    is_end = ans_json['paging']['is_end']
    next_answer_url = ans_json['paging']['next']

    for answer in ans_json['data']:
        answer_item = ZhihuAnswerItem()
        answer_item['zhihu_id'] = answer['id']
        answer_item['url'] = answer['url']
        answer_item['question_id'] = answer['question']['id']  # also record the question id, needed later when writing to MySQL
        answer_item['question'] = answer['question']['title']
        answer_item['author_id'] = answer['author']['id'] if 'id' in answer['author'] else None
        answer_item['content'] = answer['content'] if 'content' in answer else None
        answer_item['praise_num'] = answer['voteup_count']
        answer_item['comments_num'] = answer['comment_count']
        answer_item['create_time'] = answer['created_time']
        answer_item['update_time'] = answer['updated_time']
        answer_item['crawl_time'] = datetime.datetime.now()

        yield answer_item

    if not is_end:
        yield scrapy.Request(next_answer_url, headers=self.headers, callback=self.parse_answer)

After parsing a question, we request the initial answers URL, start_answer_url, and then parse the answers in parse_answer.

And again, remember to define the corresponding item in items.py:

class ZhihuAnswerItem(scrapy.Item):
    # Item for a Zhihu answer
    zhihu_id = scrapy.Field()
    url = scrapy.Field()
    question_id = scrapy.Field()
    question = scrapy.Field()
    author_id = scrapy.Field()
    content = scrapy.Field()
    praise_num = scrapy.Field()
    comments_num = scrapy.Field()
    create_time = scrapy.Field()
    update_time = scrapy.Field()
    crawl_time = scrapy.Field()

Saving the Data to MySQL

First, design two database tables, zhihu_question and zhihu_answer.
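As a rough sketch, the two schemas might look like the following; the column names mirror the insert statements used later, while the exact types, lengths, and the choice of zhihu_id as the primary key (needed for ON DUPLICATE KEY UPDATE) are assumptions:

CREATE TABLE zhihu_question (
    zhihu_id BIGINT NOT NULL,   -- question id, assumed to be the primary key
    topics VARCHAR(255),
    url VARCHAR(300),
    title VARCHAR(200),
    content LONGTEXT,
    answer_num INT,
    comments_num INT,
    watch_user_num INT,
    crawl_time DATETIME,
    PRIMARY KEY (zhihu_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE zhihu_answer (
    zhihu_id BIGINT NOT NULL,   -- answer id, assumed to be the primary key
    url VARCHAR(300),
    question_id BIGINT,
    question VARCHAR(200),
    author_id VARCHAR(100),
    content LONGTEXT,
    praise_num INT,
    comments_num INT,
    create_time DATETIME,
    update_time DATETIME,
    crawl_time DATETIME,
    PRIMARY KEY (zhihu_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;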
With write performance in mind, we again use an asynchronous approach to insert into MySQL:

import MySQLdb
import MySQLdb.cursors
from twisted.enterprise import adbapi


class MysqlTwistedPipeline(object):
    # Write to MySQL asynchronously via Twisted's adbapi
    def __init__(self, dbpool):
        self.dbpool = dbpool

    @classmethod
    def from_settings(cls, settings):
        # Called by Scrapy at startup with the settings object; very handy when writing custom components.
        # Note: parameter names such as host and db are fixed by MySQLdb.connect.
        dbparms = dict(
            host=settings["MYSQL_HOST"],
            db=settings["MYSQL_DBNAME"],
            user=settings["MYSQL_USER"],
            passwd=settings["MYSQL_PASSWORD"],
            charset='utf8',
            cursorclass=MySQLdb.cursors.DictCursor,
            use_unicode=True
        )
        dbpool = adbapi.ConnectionPool("MySQLdb", **dbparms)
        return cls(dbpool)

    def process_item(self, item, spider):
        # Turn the MySQL insert into an asynchronous operation via Twisted
        query = self.dbpool.runInteraction(self.do_insert, item)
        query.addErrback(self.handle_error, item, spider)  # passing item and spider along makes errors easier to trace
        return item

    def handle_error(self, failure, item, spider):
        # Handle exceptions raised by the asynchronous insert
        print(failure)

    def do_insert(self, cursor, item):
        # Perform the actual insert; each item builds its own SQL statement
        insert_sql, params = item.get_sql()
        cursor.execute(insert_sql, params)

At the same time, configure the MySQL parameters in settings.py:

MYSQL_HOST = "127.0.0.1"
MYSQL_DBNAME = "article_spider"
MYSQL_USER = "root"
MYSQL_PASSWORD = "admin"
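Remember to register the pipeline in ITEM_PIPELINES as well; the module path below assumes the Scrapy project is named ArticleSpider:

ITEM_PIPELINES = {
    'ArticleSpider.pipelines.MysqlTwistedPipeline': 1,
}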

To make it possible for each item to build its own SQL statement for insertion into MySQL, we put the SQL and its parameters into the item classes themselves:

def replace_dou(value):
    # Strip the commas out of numbers such as "1,234"
    return value.replace(',', '')


class ZhihuQuestionItem(scrapy.Item):
    # Item for a Zhihu question
    zhihu_id = scrapy.Field()
    topics = scrapy.Field(
        output_processor=Join(",")
    )
    url = scrapy.Field()
    title = scrapy.Field()
    content = scrapy.Field()
    answer_num = scrapy.Field(
        input_processor=MapCompose(replace_dou)
    )
    comments_num = scrapy.Field(
        input_processor=MapCompose(get_nums)
    )
    watch_user_num = scrapy.Field(
        input_processor=MapCompose(get_nums)
    )
    crawl_time = scrapy.Field()

    def get_sql(self):
        # Each item builds its own SQL statement, so the pipeline stays generic.
        # On a primary-key conflict the existing row is updated; otherwise a new row is inserted.
        insert_sql = """
               insert into zhihu_question(zhihu_id,topics,url,title,content,answer_num,comments_num,watch_user_num,crawl_time)
               VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s)
               ON DUPLICATE KEY UPDATE content=VALUES(content), answer_num=VALUES(answer_num), comments_num=VALUES(comments_num),
               watch_user_num=VALUES(watch_user_num), crawl_time=VALUES(crawl_time)
           """

        # SQL_DATETIME_FORMAT is imported from the project's settings.py (defined below)
        params = (self["zhihu_id"], self["topics"], self["url"], self["title"], self["content"],
                  int(self["answer_num"]), int(self["comments_num"]), int(self["watch_user_num"]),
                  datetime.datetime.now().strftime(SQL_DATETIME_FORMAT))
        return insert_sql, params

class ZhihuAnswerItem(scrapy.Item):
    # Item for a Zhihu answer
    zhihu_id = scrapy.Field()
    url = scrapy.Field()
    question_id = scrapy.Field()
    question = scrapy.Field()
    author_id = scrapy.Field()
    content = scrapy.Field()
    praise_num = scrapy.Field()
    comments_num = scrapy.Field()
    create_time = scrapy.Field()
    update_time = scrapy.Field()
    crawl_time = scrapy.Field()

    def get_sql(self):
        # Each item builds its own SQL statement, so the pipeline stays generic.
        # On a primary-key conflict the existing row is updated; otherwise a new row is inserted.
        insert_sql = """
                  insert into zhihu_answer(zhihu_id,url,question_id,question,author_id,content,praise_num,comments_num,create_time,update_time,crawl_time)
                  VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)
                  ON DUPLICATE KEY UPDATE content=VALUES(content), praise_num=VALUES(praise_num), comments_num=VALUES(comments_num),
                  update_time=VALUES(update_time), crawl_time=VALUES(crawl_time)
              """

        # Convert the Unix timestamps returned by the API into formatted datetimes
        create_time = datetime.datetime.fromtimestamp(self["create_time"]).strftime(SQL_DATETIME_FORMAT)
        update_time = datetime.datetime.fromtimestamp(self["update_time"]).strftime(SQL_DATETIME_FORMAT)
        crawl_time = self["crawl_time"].strftime(SQL_DATETIME_FORMAT)

        params = (
            self["zhihu_id"], self["url"], self["question_id"], self["question"], self["author_id"], self["content"],
            int(self["praise_num"]), int(self["comments_num"]), create_time, update_time, crawl_time)
        return insert_sql, params

We also need to define the date conversion formats in settings.py:

SQL_DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S"
SQL_DATE_FORMAT = "%Y-%m-%d"

After running the spider, the data is written into the MySQL database.
Here is one more small tip for debugging in Scrapy.
Because Scrapy runs asynchronously, debugging is not very convenient.
So we can break right after yielding a Request, which stops any further Requests from being sent to the engine.
That guarantees there is only one Request in flight, which makes debugging much easier:

yield scrapy.Request(request_url,headers=self.headers,meta={"zhihuid":request_id},callback=self.parse_question)
break

Summary of Zhihu's Anti-Crawling Measures

To be continued...


Reposted from blog.csdn.net/qq_42206477/article/details/86557950