Scraping WeChat Official Account articles and titles with Python and Fiddler (simple and easy)

1. Install Fiddler and configure its HTTPS certificate. This step is just installation; plenty of guides cover it, so search for one if you get stuck.

2. Log in to the WeChat desktop client, open an official account, and scroll through its article history while watching the responses in Fiddler. Filter the captured traffic to cut out the noise, keeping only the host:

mp.weixin.qq.com
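As a sanity check, the same host filter can be expressed in code. A tiny sketch, assuming you have exported a list of captured URLs from Fiddler (the sample URLs below are made up for illustration):

```python
# Sketch: keep only the official-account traffic from a Fiddler capture by
# filtering on the host, exactly as the Fiddler filter above does.
captured = [
    "https://mp.weixin.qq.com/mp/profile_ext?action=getmsg&offset=10",
    "https://res.wx.qq.com/some/static/asset.js",
    "https://mp.weixin.qq.com/mp/profile_ext?action=home",
]

article_requests = [u for u in captured if "mp.weixin.qq.com" in u]
print(article_requests)  # the two profile_ext requests survive
```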

3. Click a captured session to inspect the data it returned.

These sessions are what we want; open them to see the request parameters that were sent.
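The `profile_ext?action=getmsg` responses are JSON, with one quirk worth knowing before building the crawler: the `general_msg_list` field is itself a JSON string and has to be decoded a second time. A hand-made miniature of that shape (the field values are invented, only the structure matters):

```python
import json

# Hypothetical miniature of a getmsg response as seen in Fiddler.
# general_msg_list is a JSON *string* embedded inside the outer JSON.
sample = {
    "ret": 0,
    "can_msg_continue": 1,
    "general_msg_list": json.dumps({
        "list": [{
            "comm_msg_info": {"datetime": 1556668800},
            "app_msg_ext_info": {
                "title": "Example article",
                "content_url": "http://mp.weixin.qq.com/s?__biz=...",
                "multi_app_msg_item_list": [{"title": "Second article"}],
            },
        }]
    }),
}

inner = json.loads(sample["general_msg_list"])   # second decode
titles = [m["app_msg_ext_info"]["title"] for m in inner["list"]]
print(titles)  # ['Example article']
```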

4. Build the crawler

import requests
import time
import json
import random

import urllib3
urllib3.disable_warnings()  # silence the warning triggered by verify=False

def weixin_spider():
    headers = {
        'Host':'mp.weixin.qq.com',
        'Connection':'keep-alive',
        'Accept': '*/*',
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36 MicroMessenger/6.5.2.501 NetType/WIFI WindowsWechat QBCore/3.43.1021.400 QQBrowser/9.0.2524.400',
        'X-Requested-With':'XMLHttpRequest',
        'Referer':'https://mp.weixin.qq.com/mp/profile_ext?action=home&__biz=MzA5NTU0NzY2Mg==&scene=124&uin=MjE2MjM1OTUzMw%3D%3D&key=596bb344bde38dd20f59a2fa5330dac3a1acca3e1871b06cf0e54106eaad9f2e2a23f10625dfa5181e1dd5cd4034650e08b869b60d32f1560f51adfe0d0f679879b9fce799a5bf8c8d8ec75b2d45c768&devicetype=Windows+7&version=62060739&lang=zh_CN&a8scene=7&pass_ticket=9CFImLn6k6xDjN338MB1sxqSgPq1I6xxk1LXh8nBfsGJh0dj%2BUO%2FkQBsvf46AyoC&winzoom=1',
        'Accept-Encoding':'gzip, deflate',
        'Accept-Language':'zh-CN,zh;q=0.8,en-US;q=0.6,en;q=0.4',
        'Cookie': 'add your own here; it is time-limited and expires',

    }
    for x in range(0, 100):
        offset_number = 10 * x
        print(offset_number)
        # The key and appmsg_token parameters expire and keep changing, so a
        # fresh URL has to be captured from Fiddler for each session
        url1 = "https://mp.weixin.qq.com/mp/profile_ext?action=getmsg&__biz=MzA5NTU0NzY2Mg==&f=json&offset=%s" % offset_number + "&count=10&is_ok=1&scene=124&uin=MjE2MjM1OTUzMw%3D%3D&key=af1c8633b1a3851810bc25d5ced85c1909ef044494f4db4631e2ff9c642c6f8a337899708bdca25b822b00b114e52ecf298811547febcfdd9036e1a042634c2c49fd667685a4cbd4205949aa172ded2e&pass_ticket=9CFImLn6k6xDjN338MB1sxqSgPq1I6xxk1LXh8nBfsGJh0dj%2BUO%2FkQBsvf46AyoC&wxtoken=&appmsg_token=1006_S6a%252FULLhOv%252FIJCE3rgA1NLtksui59VgArLsFDg~~&x5=0&f=json"
        
        result = requests.get(url=url1, headers=headers, verify=False)
        html = json.loads(result.text)
        print(html)
        title_list = []
        # general_msg_list is itself a JSON string, so decode it a second time
        for item in json.loads(html['general_msg_list'])['list']:
            datetime_ = item['comm_msg_info']['datetime']  # publish timestamp
            title_ = item['app_msg_ext_info']['title']      # headline article
            title_list.append(title_)
            content_url_ = item['app_msg_ext_info']['content_url']  # link to the headline article
            # follow-up articles bundled in the same push
            for i in item['app_msg_ext_info']['multi_app_msg_item_list']:
                title_list.append(i['title'])
        print(title_list)
        time.sleep(random.uniform(1, 3))  # pause between pages to avoid hammering the endpoint


weixin_spider()

And that's it. When there is no more data the script dies with a KeyError, which just means every article has been fetched. I scraped the 《书单》 (Book List) official account, about 690 articles, which is a fairly small crawl; try it yourself, it is easy to reproduce.
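Rather than letting the loop die with a KeyError, the end condition can be checked first. A minimal sketch, under the assumption that the endpoint sets `can_msg_continue` to 0 on the last page and may omit `general_msg_list` once the history is exhausted:

```python
import json

def extract_titles(payload):
    """Return the titles on one page, or None when there is nothing left."""
    # Assumption: general_msg_list disappears (or the list empties) at the end
    if 'general_msg_list' not in payload:
        return None
    inner = json.loads(payload['general_msg_list'])
    titles = []
    for item in inner.get('list', []):
        info = item.get('app_msg_ext_info', {})
        if 'title' in info:
            titles.append(info['title'])
        for sub in info.get('multi_app_msg_item_list', []):
            titles.append(sub['title'])
    return titles

# Hypothetical mid-crawl page and final page:
page = {'can_msg_continue': 1,
        'general_msg_list': json.dumps({'list': [
            {'app_msg_ext_info': {'title': 'A',
                                  'multi_app_msg_item_list': [{'title': 'B'}]}}]})}
end = {'can_msg_continue': 0}

print(extract_titles(page))  # ['A', 'B']
print(extract_titles(end))   # None -> break out of the paging loop here
```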

A frank disclaimer from a beginner: this approach cannot scrape official accounts at scale. The key parameter in the URL and the Cookie are both time-limited and keep changing, and the key is encrypted in a way I have not been able to reverse, so this is only good for playing around. I used it to grab the articles and titles of a few accounts for fun, and it can at least serve as a starting point for scraping official accounts. A better route is to use an intercepting proxy as middleware; there is open-source code on GitHub for reference.
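For the intercepting-proxy route mentioned above, mitmproxy is one open-source option: a small addon can watch the WeChat traffic as you scroll and pull the fresh key out of each request automatically, instead of copying it from Fiddler by hand. A hypothetical sketch (the `extract_key` helper is mine, not from the original post):

```python
from urllib.parse import urlparse, parse_qs

def extract_key(url):
    """Pull the short-lived `key` parameter out of a captured profile_ext URL."""
    parts = urlparse(url)
    if parts.netloc != "mp.weixin.qq.com" or "profile_ext" not in parts.path:
        return None
    return parse_qs(parts.query).get("key", [None])[0]

# mitmproxy hook: run with `mitmdump -s this_file.py`, point the WeChat client
# at the proxy, and scroll the account; each fresh key is printed as it appears.
def request(flow):
    key = extract_key(flow.request.pretty_url)
    if key:
        print("fresh key:", key)

print(extract_key("https://mp.weixin.qq.com/mp/profile_ext?action=getmsg&key=abc123"))  # abc123
```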

Reposted from blog.csdn.net/lzz781699880/article/details/89669138