Web Crawlers Explained: The Scrapy Framework - Simulating a Browser Login with Scrapy - Getting Cookies in Scrapy

Simulating a Browser Login

The start_requests() method returns the requests that start the crawl. It serves the same purpose as start_urls: when start_requests() is defined, the requests it returns are used instead of start_urls.

Request() lets you configure each request: the url, cookie handling (via meta), and the callback function.

FormRequest.from_response() submits a form via POST. Its first argument must be the response object; other keyword arguments include the real post url, the cookie jar (via meta), headers, the form contents (formdata), and the callback.

yield Request() returns a new request so the crawler keeps running.

Cookie handling when sending requests:
meta={'cookiejar': 1} turns on cookie recording; write it in the Request() of the first request.
meta={'cookiejar': response.meta['cookiejar']} reuses the cookies from a previous response; write it in the FormRequest.from_response() that posts the login form.
meta={'cookiejar': True} sends the authorized cookies when requesting pages that require login.
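The three meta values above can be summarized in a minimal sketch. The 'cookiejar' key is standard Scrapy meta; the surrounding flow here is only illustrative, not the article's spider:

```python
# A minimal sketch of the three 'cookiejar' meta values used in this article.
first_meta = {'cookiejar': 1}      # first request: start recording cookies in jar 1
# inside the parse() callback, the login POST reuses the same jar:
#   meta={'cookiejar': response.meta['cookiejar']}
later_meta = {'cookiejar': True}   # later requests keep sending the stored cookies
```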

Getting Cookies in Scrapy

Request cookies:
Cookie = response.request.headers.getlist('Cookie')
print(Cookie)

Response cookies:
Cookie2 = response.headers.getlist('Set-Cookie')
print(Cookie2)
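getlist() returns a list of byte strings, one entry per header occurrence. A stdlib-only sketch of pulling the name/value pairs out of such a list (the cookie values below are hypothetical, standing in for what the print above would show):

```python
# Hypothetical values mimicking what response.headers.getlist('Set-Cookie')
# returns: byte strings, one per Set-Cookie header.
raw_cookies = [b'PHPSESSID=abc123; path=/', b'lang=en; path=/']

# Keep only the leading name=value pair of each entry, dropping attributes
# such as path and expires.
pairs = dict(c.split(b';', 1)[0].split(b'=', 1) for c in raw_cookies)
```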

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request, FormRequest

class PachSpider(scrapy.Spider):                            # spider class, must subclass scrapy.Spider
    name = 'pach'                                           # spider name
    allowed_domains = ['edu.iqianyue.com']                  # domains allowed to crawl
    # start_urls = ['http://edu.iqianyue.com/index_user_login.html']  # only suitable for requests that need no login, since cookies etc. cannot be set

    header = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:54.0) Gecko/20100101 Firefox/54.0'}  # browser user agent

    def start_requests(self):       # use start_requests() instead of start_urls
        """First request the login page, enabling the cookie jar so we receive cookies, and set the callback."""
        return [Request('http://edu.iqianyue.com/index_user_login.html',meta={'cookiejar':1},callback=self.parse)]

    def parse(self, response):     # parse callback

        data = {                    # user login fields, matching what packet capture shows
            'number':'adc8868',
            'passwd':'279819',
            'submit':''
            }

        # response cookies
        Cookie1 = response.headers.getlist('Set-Cookie')   # the cookies the backend writes to the browser on the first visit to the login page
        print(Cookie1)

        print('Logging in')
        """Second request: POST the form, carrying the cookies, browser user agent, and login fields, to log in and get the cookies authorized."""
        return [FormRequest.from_response(response,
                                          url='http://edu.iqianyue.com/index_user_login',   # real post address
                                          meta={'cookiejar':response.meta['cookiejar']},
                                          headers=self.header,
                                          formdata=data,
                                          callback=self.next,
                                          )]
    def next(self,response):
        a = response.body.decode("utf-8")   # after logging in, inspect the login response
        # print(a)
        """After logging in, request a page that requires login, e.g. the user center, carrying the authorized cookies."""
        yield Request('http://edu.iqianyue.com/index_user_index.html',meta={'cookiejar':True},callback=self.next2)
    def next2(self,response):
        # request cookies
        Cookie2 = response.request.headers.getlist('Cookie')
        print(Cookie2)

        body = response.body  # page content as bytes
        unicode_body = response.body_as_unicode()  # page content as a string (response.text in newer Scrapy)

        a = response.xpath('/html/head/title/text()').extract()  # title of the user center page
        print(a)

Simulating a Browser Login, Example 2

Step 1

On the first visit, as with a normal user login, the backend of the login page automatically writes cookies to the browser. So the main thing to capture on the crawler's first request is the response cookies.

If the site's login page is a separate page, the crawler's first request should target that login page. If the login form is not a separate page, e.g. a JS pop-up, the crawler can start from the home page.
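The capture-then-resend flow described above is what Scrapy's cookie jar automates. A stdlib-only sketch of the round trip, using a hypothetical Set-Cookie value:

```python
from http.cookies import SimpleCookie

# Hypothetical Set-Cookie header from the first visit to the login page
set_cookie = 'JSESSIONID=abc123; Path=/; HttpOnly'
jar = SimpleCookie()
jar.load(set_cookie)

# Scrapy's cookiejar does this automatically: the stored name=value pair
# is echoed back in the Cookie header of every later request in the jar.
cookie_header = '; '.join(f'{k}={m.value}' for k, m in jar.items())
```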

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request, FormRequest
import re

class PachSpider(scrapy.Spider):                            # spider class, must subclass scrapy.Spider
    name = 'pach'                                           # spider name
    allowed_domains = ['dig.chouti.com']                    # domains allowed to crawl
    # start_urls = ['']                                     # only suitable for requests that need no login, since cookies etc. cannot be set

    header = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:54.0) Gecko/20100101 Firefox/54.0'}  # browser user agent

    def start_requests(self):
        """First request the login page, enabling the cookie jar so we receive cookies, and set the callback."""
        return [Request('http://dig.chouti.com/',meta={'cookiejar':1},callback=self.parse)]

    def parse(self, response):
        # response cookies
        Cookie1 = response.headers.getlist('Set-Cookie')                            # the cookies the backend writes to the browser on the first visit
        print('Response cookies first written by the backend:', Cookie1)

        data = {                                                                    # user login fields, matching what packet capture shows
            'phone': '8615284816568',
            'password': '279819',
            'oneMonth': '1'
        }

        print('Logging in....!')
        """Second request: POST the form, carrying the cookies, browser user agent, and login fields, to log in and get the cookies authorized."""
        return [FormRequest.from_response(response,
                                          url='http://dig.chouti.com/login',                        # real post address
                                          meta={'cookiejar':response.meta['cookiejar']},
                                          headers=self.header,
                                          formdata=data,
                                          callback=self.next,
                                          )]

    def next(self,response):
        # request cookies
        Cookie2 = response.request.headers.getlist('Cookie')
        print('Cookies carried by the login request:', Cookie2)

        jieg = response.body.decode("utf-8")   # after logging in, inspect the login response
        print('Login response:', jieg)

        print('Requesting a page that requires login....!')

        """After logging in, request a page that requires login, e.g. the user center, carrying the authorized cookies."""
        yield Request('http://dig.chouti.com/user/link/saved/1',meta={'cookiejar':True},callback=self.next2)

    def next2(self,response):
        # request cookies
        Cookie3 = response.request.headers.getlist('Cookie')
        print('Cookies carried when viewing a login-only page:', Cookie3)

        leir = response.xpath('//div[@class="tu"]/a/text()').extract()  # content from the user center page
        print('Final content', leir)
        leir2 = response.xpath('//div[@class="set-tags"]/a/text()').extract()  # content from the user center page
        print(leir2)


Origin blog.51cto.com/14510224/2434871