(Latest) Using a Crawler to Boost CSDN Blog View Counts: Personally Tested and Working

Copyright notice: this is an original article by the author and may not be reproduced without permission. https://blog.csdn.net/qq_41782425/article/details/84993073

Note: the author wrote this post word by word; it took real effort, so please respect the original. Thank you!

1. Overview

Preface: A couple of days ago I published my first post, https://blog.csdn.net/qq_41782425/article/details/84934224, and noticed it was getting very few views. Annoyed, I immediately thought of using a crawler to drive the view count up, and started hammering out code on the spot.

Analysis: At first I assumed CSDN had no anti-crawling measures, so I simply hit my first article in a while True loop with urllib2. But printing response.url showed that the response had been redirected to https://passport.csdn.net/login?xxxxxxx and never reached the URL I wanted. Hitting the blog repeatedly from a browser also triggers a login prompt, so a crawler gets bounced to the login page the same way. The fix: attach the cookie from a logged-in session to the request headers, and the requests stop being redirected from the target page to the login page.
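
In miniature, the cookie trick looks like this (a sketch in the same Python 2 style as the code below; "your cookie" is a placeholder for the Cookie header value copied out of a logged-in browser session):

import urllib2

url = "https://blog.csdn.net/qq_41782425/article/details/84934224"
# "your cookie" is a placeholder -- copy the real Cookie header value
# from a logged-in browser session (browser dev tools show it)
headers = {"Cookie": "your cookie"}
request = urllib2.Request(url, headers=headers)
response = urllib2.urlopen(request)
# With a valid cookie this prints the article URL; without one it
# prints the https://passport.csdn.net/login redirect target instead
print response.url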

Note: I originally planned to use the Scrapy framework, but the job is too small to justify crawling a whole site, so a single crawler .py file is enough. To avoid getting my IP banned for hitting the site too densely, I use free proxy IPs from https://www.kuaidaili.com/free/inha/ and a big list of User-Agent strings to hide my real UA.

Note: CSDN is not in Taobao's league as a website, so crawling it is straightforward.

2. Enough chit-chat, straight to the code

1. USER_AGENTS code:

Dodging anti-crawler checks by hiding the local UA is an indispensable step (a quick search turns up plenty of User-Agent lists).

USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36 OPR/26.0.1656.60',
        'Opera/8.0 (Windows NT 5.1; U; en)',
        'Mozilla/5.0 (Windows NT 5.1; U; en; rv:1.8.1) Gecko/20061208 Firefox/2.0.0 Opera 9.50',
        'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; en) Opera 9.50',
        'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0',
        'Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.57.2 (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36',
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
        'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.133 Safari/534.16',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER',
        'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)',
        'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 SE 2.X MetaSr 1.0',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; SE 2.X MetaSr 1.0)',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.122 UBrowser/4.0.3214.0 Safari/537.36'
    ]

2. URL code:

Prepare the URLs to be visited (stored in url_list; these are pages from my own profile, including downloads, forum posts, and so on).

Note: this spreads the traffic out, so no single page gets detected being hit over and over within a very short window (a short sketch of how the list is consumed follows the list below).

url_list = [
        "https://blog.csdn.net/qq_41782425/article/details/84934224",
        "https://blog.csdn.net/qq_41782425/article/category/8519763",
        "https://me.csdn.net/qq_41782425",
        "https://me.csdn.net/download/qq_41782425",
        "https://me.csdn.net/bbs/qq_41782425"
    ]
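
For reference, both pools are consumed the same way inside spider(), one random pick per request; a minimal sketch, assuming the two lists above are in scope:

import random

# One random pick per request, so consecutive hits vary in UA and target URL
user_agent = random.choice(USER_AGENTS)
referer = random.choice(url_list)
print user_agent
print referer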

3. Proxy code:

Note: I handle proxies differently from most people. I wrote a get_proxy method that scrapes the IPs straight off the proxy site and stores them in a class attribute for easy use throughout the code, with none of the showy open-then-read file handling.

Note: the code is short and simple (ask in the comments below if anything is unclear; this method fetches the latest free proxies live, and it works brilliantly).

    def get_proxy(self):
        self.page += 1
        headers = {"User-Agent" : "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"}
        request = urllib2.Request("https://www.kuaidaili.com/free/inha/"+str(self.page), headers=headers)
        html = urllib2.urlopen(request).read()
        content = etree.HTML(html)
        # The proxy table marks its columns with data-title attributes
        ip = content.xpath('//td[@data-title="IP"]/text()')
        port = content.xpath('//td[@data-title="PORT"]/text()')
        # Pair each IP with its port and de-duplicate into the pool
        for addr, p in zip(ip, port):
            if addr + ':' + p not in self.proxy:
                self.proxy.append(addr + ':' + p)
        if self.proxy:
            print "Now using page " + str(self.page) + " of proxy IPs"
            self.spider()
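
These free proxies die quickly, which is exactly what the error counter in spider() below is for. If you would rather filter dead proxies up front, a liveness probe along these lines could be added to the class (a hypothetical helper, not in the original code; the probe URL and the 3-second timeout are arbitrary choices):

    def check_proxy(self, proxy):
        # Hypothetical helper, not in the original code: returns True
        # if the proxy answers a cheap probe request within 3 seconds
        try:
            handler = urllib2.ProxyHandler({"http": proxy})
            opener = urllib2.build_opener(handler)
            opener.open("http://www.baidu.com", timeout=3)
            return True
        except Exception:
            return False

Running self.proxy = [p for p in self.proxy if self.check_proxy(p)] at the end of get_proxy would trade a slower start for fewer mid-run errors.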

4. spider code:

Note: build a Handler object with urllib2's ProxyHandler class, turn it into an opener with build_opener, and install it globally with install_opener; after that, every request sent via urlopen() goes out through the proxy.

Note: to make everything clearer, this version is commented (if you think it is well written, click like, thank you).

    def spider(self):
        num = 0      # visit counter
        err_num = 0  # error counter
        while True:
            # Pick a random UA, proxy and target URL for this request
            user_agent = random.choice(self.USER_AGENTS)
            proxy = random.choice(self.proxy)
            referer = random.choice(self.url_list)
            headers = {
                # "Host": "blog.csdn.net",
                "Connection": "keep-alive",
                "Cache-Control": "max-age=0",
                "Upgrade-Insecure-Requests": "1",
                "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
                # "Referer": "https://blog.csdn.net/qq_41782425/article/details/84934224",
                # "Accept-Encoding" is deliberately omitted: with gzip enabled,
                # response.read() would return compressed bytes that lxml cannot parse
                "Accept-Language": "zh-CN,zh;q=0.9",
                "Cookie": "your cookie"  # paste the cookie from a logged-in session
            }
            try:
                # Build a Handler object; the argument is a dict of
                # proxy type and proxy server IP + PORT
                httpproxy_handler = urllib2.ProxyHandler({"http": proxy})
                opener = urllib2.build_opener(httpproxy_handler)
                urllib2.install_opener(opener)
                request = urllib2.Request(referer, headers=headers)
                request.add_header("User-Agent", user_agent)  # attach the rotating UA
                response = urllib2.urlopen(request)
                html = response.read()
                # Parse the response string into an HTML document with etree.HTML
                content = etree.HTML(html)
                # Match the read count with xpath; only the article page
                # blog.csdn.net/qq_41782425/article/details/84934224 carries the
                # read-count span, so the other pages yield an empty list
                read_num = content.xpath('//span[@class="read-count"]/text()')
                # Join the list into a string (the page's own label, e.g. "阅读数:419")
                new_read_num = ''.join(read_num)
                if len(new_read_num) != 0:
                    print new_read_num

                num += 1
                print 'Visit ' + str(num)
                print response.url + " proxy ip: " + str(proxy)
                time.sleep(1)
                # Once the visit count passes 100, leave the loop
                # and fetch the next page of proxies
                if num > 100:
                    break
            except Exception as result:
                err_num += 1
                print "Error (%d): %s" % (err_num, result)
                # After 30 errors, reset page and proxy pool via __init__
                # and leave the loop to start again from page 1
                if err_num >= 30:
                    self.__init__()
                    break
        # On leaving the loop, fetch a fresh batch of proxies (note that spider()
        # and get_proxy() call each other, so a very long run will eventually
        # hit Python's recursion limit)
        print "Fetching a new batch of proxy IPs"
        self.get_proxy()
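
A design note on the proxy handling: install_opener swaps the process-wide opener on every loop iteration, which works here but hides the current proxy in global state. Since a fresh proxy is chosen per request anyway, the same effect can be had by calling the opener directly and skipping the global install; a sketch of the equivalent request (my variant, not the original code):

httpproxy_handler = urllib2.ProxyHandler({"http": proxy})
opener = urllib2.build_opener(httpproxy_handler)
request = urllib2.Request(referer, headers=headers)
request.add_header("User-Agent", user_agent)
response = opener.open(request)  # per-request opener, no install_opener needed

Also note that all of the code in this post is Python 2 (urllib2 and print statements); under Python 3 the same pieces live in urllib.request and the prints need parentheses.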

5. Startup code:

if __name__ == "__main__":
    CsdnSpider().get_proxy()

6. Sample output:

D:\PycharmProjects\Web_Crawler\venv\Scripts\python.exe D:/PycharmProjects/Web_Crawler/practice/csdnSpider.py
Now using page 1 of proxy IPs
Visit 1
https://blog.csdn.net/qq_41782425/article/category/8519763 proxy ip: 123.162.168.192:8088
Visit 2
https://me.csdn.net/bbs/qq_41782425 proxy ip: 60.182.22.244:8118
Visit 3
https://me.csdn.net/qq_41782425 proxy ip: 222.186.45.144:9000
Visit 4
https://me.csdn.net/qq_41782425 proxy ip: 27.24.215.49:37644
Visit 5
https://me.csdn.net/qq_41782425 proxy ip: 101.76.209.69:9000
Visit 6
https://me.csdn.net/bbs/qq_41782425 proxy ip: 175.11.194.73:80
Visit 7
https://me.csdn.net/download/qq_41782425 proxy ip: 111.230.254.195:47891
Visit 8
https://blog.csdn.net/qq_41782425/article/category/8519763 proxy ip: 123.162.168.192:53281
Visit 9
https://me.csdn.net/bbs/qq_41782425 proxy ip: 180.118.134.103:8118
阅读数:419
Visit 10
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 111.198.77.169:47891
Visit 11
https://me.csdn.net/qq_41782425 proxy ip: 27.24.215.49:8118
阅读数:419
Visit 12
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 221.224.136.211:9000
Visit 13
https://me.csdn.net/qq_41782425 proxy ip: 221.224.136.211:8118
Visit 14
https://me.csdn.net/download/qq_41782425 proxy ip: 222.171.251.43:35101
阅读数:419
Visit 15
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 111.230.254.195:8118
阅读数:419
Visit 16
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 180.118.86.75:1080
Visit 17
https://me.csdn.net/qq_41782425 proxy ip: 101.76.209.69:47891
Visit 18
https://me.csdn.net/bbs/qq_41782425 proxy ip: 222.186.45.144:37644
Visit 19
https://me.csdn.net/qq_41782425 proxy ip: 123.162.168.192:42164
Visit 20
https://blog.csdn.net/qq_41782425/article/category/8519763 proxy ip: 180.118.134.103:47891

Note: because the visits come in quickly, the read count lags a little; just refresh the page in the browser a while later. Also, the read count printed ("阅读数:N") is that of the first article only, and since the visits are spread at random across five pages, the count climbs slowly, which is also the safer way to do it. Once the visit count passes 100, the spider switches to the second page of proxies; everything tested fine.

Note: when the printed response.url is https://passport.csdn.net/login, it is time to replace your cookie.
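
That check can be automated: a small guard inside the spider() loop, right after the response comes back, would stop the run instead of burning visits on the login page (my sketch, not in the original code):

# Hypothetical guard for the spider() loop: stop once the cookie has
# expired and requests start getting bounced to the login page
if response.url.startswith("https://passport.csdn.net/login"):
    print "Cookie expired -- redirected to login, stopping"
    break

The excerpt below picks up around visit 98 and shows the rollover to the second page of proxies once the visit count passes 100.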

阅读数:420
Visit 98
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 47.95.9.128:8118
Visit 99
https://blog.csdn.net/qq_41782425/article/category/8519763 proxy ip: 111.198.77.169:80
Visit 100
https://me.csdn.net/qq_41782425 proxy ip: 111.198.77.169:8088
Visit 101
https://me.csdn.net/qq_41782425 proxy ip: 124.235.135.210:8118
Fetching a new batch of proxy IPs
Now using page 2 of proxy IPs
阅读数:420
Visit 1
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 221.224.136.211:35101
阅读数:420
Visit 2
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 115.223.243.136:13289
Visit 3
https://me.csdn.net/download/qq_41782425 proxy ip: 180.104.107.46:9000
Visit 4
https://me.csdn.net/qq_41782425 proxy ip: 139.196.125.96:9000
Visit 5
https://me.csdn.net/download/qq_41782425 proxy ip: 180.118.128.250:8118
Visit 6
https://me.csdn.net/download/qq_41782425 proxy ip: 180.118.134.103:53281
Visit 7
https://me.csdn.net/bbs/qq_41782425 proxy ip: 182.107.13.217:9000
Visit 8
https://blog.csdn.net/qq_41782425/article/category/8519763 proxy ip: 115.223.243.136:9000
阅读数:421
Visit 9
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 115.223.243.136:9000
Visit 10
https://me.csdn.net/qq_41782425 proxy ip: 121.232.194.69:9000
Visit 11
https://blog.csdn.net/qq_41782425/article/category/8519763 proxy ip: 139.196.125.96:9000
Visit 12
https://me.csdn.net/qq_41782425 proxy ip: 175.11.194.73:9000

3. Complete code

Note: try writing it yourself first; the logic is simple and so is the implementation.

# coding:utf-8

import urllib2
from lxml import etree
import random
import time

class CsdnSpider():
    USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36 OPR/26.0.1656.60',
        'Opera/8.0 (Windows NT 5.1; U; en)',
        'Mozilla/5.0 (Windows NT 5.1; U; en; rv:1.8.1) Gecko/20061208 Firefox/2.0.0 Opera 9.50',
        'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; en) Opera 9.50',
        'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0',
        'Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.57.2 (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36',
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
        'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.133 Safari/534.16',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER',
        'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)',
        'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 SE 2.X MetaSr 1.0',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; SE 2.X MetaSr 1.0)',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.122 UBrowser/4.0.3214.0 Safari/537.36'
    ]
    url_list = [
        "https://blog.csdn.net/qq_41782425/article/details/84934224",
        "https://blog.csdn.net/qq_41782425/article/category/8519763",
        "https://blog.csdn.net/qq_41782425/article/details/84993073",
        "https://me.csdn.net/qq_41782425",
        "https://me.csdn.net/download/qq_41782425",
        "https://me.csdn.net/bbs/qq_41782425"
    ]
    def __init__(self):
        self.page = 0    # current page number on the proxy site
        self.proxy = []  # pool of "ip:port" strings scraped so far
    def get_proxy(self):
        self.page += 1
        headers = {"User-Agent" : "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"}
        request = urllib2.Request("https://www.kuaidaili.com/free/inha/"+str(self.page), headers=headers)
        html = urllib2.urlopen(request).read()
        content = etree.HTML(html)
        # The proxy table marks its columns with data-title attributes
        ip = content.xpath('//td[@data-title="IP"]/text()')
        port = content.xpath('//td[@data-title="PORT"]/text()')
        # Pair each IP with its port and de-duplicate into the pool
        for addr, p in zip(ip, port):
            if addr + ':' + p not in self.proxy:
                self.proxy.append(addr + ':' + p)
        if self.proxy:
            print "Now using page " + str(self.page) + " of proxy IPs"
            self.spider()

    def spider(self):
        num = 0      # visit counter
        err_num = 0  # error counter
        while True:
            # Pick a random UA, proxy and target URL for this request
            user_agent = random.choice(self.USER_AGENTS)
            proxy = random.choice(self.proxy)
            referer = random.choice(self.url_list)
            headers = {
                # "Host": "blog.csdn.net",
                "Connection": "keep-alive",
                "Cache-Control": "max-age=0",
                "Upgrade-Insecure-Requests": "1",
                "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
                # "Referer": "https://blog.csdn.net/qq_41782425/article/details/84934224",
                # "Accept-Encoding" is deliberately omitted: with gzip enabled,
                # response.read() would return compressed bytes that lxml cannot parse
                "Accept-Language": "zh-CN,zh;q=0.9",
                "Cookie": "your cookie"  # paste the cookie from a logged-in session
            }
            try:
                # Build a Handler object; the argument is a dict of
                # proxy type and proxy server IP + PORT
                httpproxy_handler = urllib2.ProxyHandler({"http": proxy})
                opener = urllib2.build_opener(httpproxy_handler)
                urllib2.install_opener(opener)
                request = urllib2.Request(referer, headers=headers)
                request.add_header("User-Agent", user_agent)  # attach the rotating UA
                response = urllib2.urlopen(request)
                html = response.read()
                # Parse the response string into an HTML document with etree.HTML
                content = etree.HTML(html)
                # Match the read count with xpath; only the article page
                # blog.csdn.net/qq_41782425/article/details/84934224 carries the
                # read-count span, so the other pages yield an empty list
                read_num = content.xpath('//span[@class="read-count"]/text()')
                # Join the list into a string (the page's own label, e.g. "阅读数:419")
                new_read_num = ''.join(read_num)
                if len(new_read_num) != 0:
                    print new_read_num

                num += 1
                print 'Visit ' + str(num)
                print response.url + " proxy ip: " + str(proxy)
                time.sleep(1)
                # Once the visit count passes 100, leave the loop
                # and fetch the next page of proxies
                if num > 100:
                    break
            except Exception as result:
                err_num += 1
                print "Error (%d): %s" % (err_num, result)
                # After 30 errors, reset page and proxy pool via __init__
                # and leave the loop to start again from page 1
                if err_num >= 30:
                    self.__init__()
                    break
        # On leaving the loop, fetch a fresh batch of proxies (note that spider()
        # and get_proxy() call each other, so a very long run will eventually
        # hit Python's recursion limit)
        print "Fetching a new batch of proxy IPs"
        self.get_proxy()


if __name__ == "__main__":
    CsdnSpider().get_proxy()




