Python crawler + data analysis + data visualization (analyzing the bullet comments of "Sword Snow Stride")
A little rambling first
Have any of you watched Sword Snow Stride? I felt a little empty after finishing it, though overall it was okay. Forgive me for not having read the original novel~
The Douban rating is 5.8, which suggests my judgment of it was about right.
Of course, that hasn't stopped its view count from climbing: 2.5 billion views in half a month, an average of 100 million per episode. The one-episode-per-day release schedule is a bit painful, though.
Today let's collect its bullet comments (danmu), visualize the data, and see what bullet-comment culture has produced~
Crawler section
First let's collect the bullet comments and save them to a CSV file~
First, install these two modules:
requests  # sends the network requests
pandas  # saves the data
If you don't know how to install modules, head to my homepage and check the pinned post, which explains module installation in detail.
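For a standard Python setup, both modules can be installed in one go with pip from the command line (assuming `pip` points at the Python you'll be running):

```shell
pip install requests pandas
```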
code section
import requests  # send network requests
import pandas as pd  # save the data

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36'
}
# Build a list to store the data
data_list = []
for page in range(15, 1500, 30):
    try:
        url = f'https://mfm.video.qq.com/danmu?otype=json&target_id=7626435152%26vid%3Dp0041oidttf&session_key=0%2C174%2C1642248894&timestamp={page}'
        # 1. Send the request
        response = requests.get(url=url, headers=headers)
        # 2. Get the data (the danmu content); <Response [200]> means the request succeeded
        json_data = response.json()
        # print(json_data)
        # 3. Parse the data: keep the fields we want, ignore the rest
        comments = json_data['comments']
        for comment in comments:
            data_dict = {}
            data_dict['commentid'] = comment['commentid']
            data_dict['content'] = comment['content']
            data_dict['opername'] = comment['opername']
            print(data_dict)
            data_list.append(data_dict)
    except Exception:
        pass
# 4. Save the data. WPS opens files as gbk by default, so to avoid mojibake
# specify the encoding explicitly: utf-8, gbk, or utf-8-sig all work
df = pd.DataFrame(data_list)
df.to_csv('data.csv', encoding='utf-8-sig', index=False)
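A note on the loop bounds: the `timestamp` parameter appears to select a 30-second window of comments, so `range(15, 1500, 30)` walks through roughly the first 25 minutes of the episode, one window per request. A quick sanity check of that arithmetic:

```python
# Timestamps the crawler will request: one per 30-second window
pages = list(range(15, 1500, 30))

print(len(pages))   # 50 requests per episode
print(pages[:3])    # [15, 45, 75] — the first few window timestamps
print(pages[-1])    # 1485 — the last timestamp, still under 1500 seconds
```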
Results
data visualization
With the data in hand, let's start the word cloud analysis.
These two modules need to be installed
jieba
pyecharts
Code display
import jieba
import pandas as pd
from pyecharts.charts import WordCloud
from pyecharts import options as opts

wordlist = []
data = pd.read_csv('data.csv')['content']
data_list = data.values.tolist()
data_str = ' '.join(data_list)
# Segment the comment text into words
words = jieba.lcut(data_str)
for word in words:
    if len(word) > 1:  # skip single-character tokens
        wordlist.append({'word': word, 'count': 1})

df = pd.DataFrame(wordlist)
# Count each word's occurrences and sort in descending order
dfword = df.groupby('word')['count'].sum()
dfword2 = dfword.sort_values(ascending=False)
dfword3 = dfword2.to_frame()
dfword3['word'] = dfword3.index

word = dfword3['word'].tolist()
count = dfword3['count'].tolist()
a = [list(z) for z in zip(word, count)]
c = (
    WordCloud()
    .add('', a, word_size_range=[10, 50], shape='circle')
    .set_global_opts(title_opts=opts.TitleOpts(title="Word cloud"))
)
c.render_notebook()
Results
From the word cloud you can see that the words "this", "the rich", and "the last" appear most often in the comment data. Let's look at the statistics.
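The groupby/sum step above is just a word-frequency tally. The same logic can be sketched with the standard library's `collections.Counter` (toy words here, standing in for jieba's output, since the real comments live in `data.csv`):

```python
from collections import Counter

# Toy stand-in for the segmented danmu words (jieba's output)
words = ['this', 'drama', 'this', 'the rich', 'this', 'drama', 'ok']

# Keep only tokens longer than one character, then tally them,
# mirroring the len(word) > 1 filter and groupby('word').sum() above
counts = Counter(w for w in words if len(w) > 1)
top = counts.most_common(2)
print(top)  # [('this', 3), ('drama', 2)]
```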
Video explanation
All steps are explained in detail in the video
Python crawler + data analysis + data visualization (analyzing the bullet comments of "Sword Snow Stride")
Bonus
Besides the bullet comments and the word cloud, there's one more thing that's hard to explain without the video. I've tidied up the code below so you can try it yourself; I won't show the result here (even if I did, you wouldn't be able to see it).
import requests
import re
from tqdm import tqdm
url = 'https://vd.l.qq.com/proxyhttp'
data = {
'adparam': "pf=in&ad_type=LD%7CKB%7CPVL&pf_ex=pc&url=https%3A%2F%2Fv.qq.com%2Fx%2Fcover%2Fmzc0020036ro0ux%2Fc004159c18o.html&refer=https%3A%2F%2Fv.qq.com%2Fx%2Fsearch%2F&ty=web&plugin=1.0.0&v=3.5.57&coverid=mzc0020036ro0ux&vid=c004159c18o&pt=&flowid=55e20b5f153b460e8de68e7a25ede1bc_10201&vptag=www_baidu_com%7Cx&pu=-1&chid=0&adaptor=2&dtype=1&live=0&resp_type=json&guid=58c04061fed6ba662bd7d4c4a7babf4f&req_type=1&from=0&appversion=1.0.171&uid=115600983&tkn=3ICG94Dn33DKf8LgTEl_Qw..<=qq&platform=10201&opid=03A0BB50713BC1C977C0F256056D2E36&atkn=75C3D1F2FFB4B3897DF78DB2CF27A207&appid=101483052&tpid=3&rfid=f4e2ed2359bc13aa3d87abb6912642cf_1642247026",
'buid': "vinfoad",
'vinfoparam': "spsrt=1&charge=1&defaultfmt=auto&otype=ojson&guid=58c04061fed6ba662bd7d4c4a7babf4f&flowid=55e20b5f153b460e8de68e7a25ede1bc_10201&platform=10201&sdtfrom=v1010&defnpayver=1&appVer=3.5.57&host=v.qq.com&ehost=https%3A%2F%2Fv.qq.com%2Fx%2Fcover%2Fmzc0020036ro0ux%2Fc004159c18o.html&refer=v.qq.com&sphttps=1&tm=1642255530&spwm=4&logintoken=%7B%22main_login%22%3A%22qq%22%2C%22openid%22%3A%2203A0BB50713BC1C977C0F256056D2E36%22%2C%22appid%22%3A%22101483052%22%2C%22access_token%22%3A%2275C3D1F2FFB4B3897DF78DB2CF27A207%22%2C%22vuserid%22%3A%22115600983%22%2C%22vusession%22%3A%223ICG94Dn33DKf8LgTEl_Qw..%22%7D&vid=c004159c18o&defn=&fhdswitch=0&show1080p=1&isHLS=1&dtype=3&sphls=2&spgzip=1&dlver=2&drm=32&hdcp=0&spau=1&spaudio=15&defsrc=1&encryptVer=9.1&cKey=1WuhcCc07Wp6JZEItZs_lpJX5WB4a2CdS8kEoQvxVaqtHEZQ1c_W6myJ8hQOnmDFHMUnGJTDNTvp2vPBr-xE-uhvZyEMY131vUh1H4pgCXe2Op8F_DerfPItmE508flzsHwnEERQEN_AluNDEH6IC8EOljLQ2VfW2sTdospNPlD9535CNT9iSo3cLRH93ogtX_OJeYNVWrDYS8b5t1pjAAuGkoYGNScB_8lMah6WVCJtO-Ygxs9f-BtA8o_vOrSIjG_VH7z0wWI3--x_AUNIsHEG9zgzglpES47qAUrvH-0706f5Jz35DBkQKl4XAh32cbzm4aSDFig3gLiesH-TyztJ3B01YYG7cwclU8WtX7G2Y6UGD4Z1z5rYoM5NpAQ7Yr8GBgYGBgZKAPma&fp2p=1&spadseg=3"
}
headers = {
'cookie': 'pgv_pvid=7300130020; tvfe_boss_uuid=242c5295a1cb156d; appuser=BF299CB445E3A324; RK=6izJ0rkfNn; ptcz=622f5bd082de70e3e6e9a077923b48f72600cafd5e4b1e585e5f418570fa30fe; ptui_loginuin=1321228067; luin=o3452264669; lskey=000100003e4c51dfb8abf410ca319e572ee445f5a77020ba69d109f47c2ab3d67e58bd099a40c2294c41dbd6; o_cookie=3452264669; uid=169583373; fqm_pvqid=89ea2cc7-6806-4091-989f-5bc2f2cdea5c; fqm_sessionid=7fccc616-7386-4dd4-bba5-26396082df8d; pgv_info=ssid=s2517394073; _qpsvr_localtk=0.13663981383113954; login_type=2; vversion_name=8.2.95; video_omgid=d91995430fa12ed8; LCZCturn=798; lv_play_index=39; o_minduid=9ViQems9p2CBCM5AfqLWT4aEa-btvy40; LPSJturn=643; LVINturn=328; LPHLSturn=389; LZTturn=218; ufc=r24_7_1642333009_1642255508; LPPBturn=919; LPDFturn=470',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36'
}
# Request the video info and pull out the m3u8 playlist address
response = requests.post(url=url, json=data, headers=headers)
html_data = response.json()['vinfo']
print(html_data)
m3u8_url = re.findall('url":"(.*?)",', html_data)[3]
# Download the playlist and strip the #EXT metadata lines,
# leaving only the .ts segment names
m3u8_data = requests.get(url=m3u8_url).text
m3u8_data = re.sub('#E.*', '', m3u8_data).split()
# Download each segment and append it to the output file
for ts in tqdm(m3u8_data):
    ts_1 = 'https://apd-327e87624fa9c6fc7e4593b5030502b1.v.smtcdns.com/vipts.tc.qq.com/AaFUPCn0gS17yiKCHnFtZa7vI5SOO0s7QXr0_3AkkLrQ/uwMROfz2r55goaQXGdGnC2de645-3UDsSmF-Av4cmvPV0YOx/svp_50112/vaemO__lrQCQrrgtQzL5v1kmLVKQZEaG2UBQO4eMRu4BAw6vBUoD1HAf7yUD8BtrL3NLr7bf9yrfSaqK5ufP8vmfEejwt0tuD8aNhyny1M-GJ8T1L1qi0R47t-v8KxV0ha-jJhALtc2N3tgRaTSfRwXwJ_vQObnhIdbyaVlJ2DzvMKoIlKYb_g/'
    ts_url = ts_1 + ts
    ts_content = requests.get(url=ts_url).content
    with open('斗破12.mp4', mode='ab') as f:
        f.write(ts_content)
print('Download finished')
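The key trick in the download code is the `re.sub('#E.*', '', ...)` line: an m3u8 playlist interleaves `#EXT` metadata lines with `.ts` segment names, and removing everything that starts with `#E` leaves just the ordered segment list. A self-contained sketch on a made-up playlist (the segment names are illustrative, not real):

```python
import re

# A minimal, made-up HLS playlist: #EXT lines are metadata,
# the remaining lines are the .ts media segments
m3u8_data = """#EXTM3U
#EXT-X-VERSION:3
#EXTINF:10.0,
segment_001.ts
#EXTINF:10.0,
segment_002.ts
#EXT-X-ENDLIST"""

# Remove every line starting with #E, then split on whitespace,
# leaving only the segment names to download in order
segments = re.sub('#E.*', '', m3u8_data).split()
print(segments)  # ['segment_001.ts', 'segment_002.ts']
```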