I finally couldn't hold back: scraping a wave of goddesses with Python

After all, isn't learning to write crawlers ultimately about scraping pictures of pretty ladies?
Enough said, on to the goodies ~
The Goddess Conference
I don't know how many people are familiar with the Dongqiudi ("understand ball emperor") app and website, or how many follow one of its columns, the "Goddess Conference", where there is no football, only goddesses.
The column looks like this:
Each goddess's score is decided entirely by the fans' votes, so let's take a look at the goddess rankings in the eyes of the fans.
Getting started
Obtain ID information
First, by capturing the Dongqiudi app's network requests, we can find an API:
http://api.dongqiudi.com/search?keywords=女神大会&type=all&page=
From this API, we can get the following information.
We focus on two fields: id and thumb. The id is used later to build each goddess's HTML page address, and the thumb is the picture we want to collect.
So we can write a simple parsing function:
import json
import time

import requests


def get_list(page):
    # Walk the search API page by page, collecting each goddess's id and thumb URL
    nvshen_id_list = []
    nvshen_id_picture = []
    for i in range(1, page):
        print("获取第" + str(i) + "页数据")  # fetching page i
        url = 'http://api.dongqiudi.com/search?keywords=%E5%A5%B3%E7%A5%9E%E5%A4%A7%E4%BC%9A&type=all&page=' + str(i)
        html = requests.get(url=url).text
        news = json.loads(html)['news']
        if len(news) == 0:
            print("没有更多啦")  # no more data
            break
        nvshen_id = [k['id'] for k in news]
        nvshen_id_list = nvshen_id_list + nvshen_id
        nvshen_id_picture = nvshen_id_picture + [{k['id']: k['thumb']} for k in news]
        time.sleep(1)
    return nvshen_id_list, nvshen_id_picture
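A quick way to drive it, just as a sketch (the page count of 30 here is an arbitrary assumption; the loop stops by itself once the API has no more results):

nvshen_id_list, nvshen_id_picture = get_list(30)   # 30 is an arbitrary upper bound
print('collected %d ids' % len(nvshen_id_list))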
Download the HTML pages
Next, by observation, we find that every goddess's page address has the same form:
https://www.dongqiudi.com/archive/**.html
where ** is the ID value we obtained above, so the code for fetching the HTML pages follows naturally:
def download_page(nvshen_id_list):
    for i in nvshen_id_list:
        print("正在下载ID为" + i + "的HTML网页")  # downloading the HTML page for this ID
        url = 'https://www.dongqiudi.com/archive/%s.html' % i
        download = DownloadPage()
        html = download.getHtml(url)
        download.saveHtml(i, html)
        time.sleep(2)


class DownloadPage(object):
    def getHtml(self, url):
        html = requests.get(url=url).content
        return html

    def saveHtml(self, file_name, file_content):
        # save the raw bytes under html_page/<id>.html
        with open('html_page/' + file_name + '.html', 'wb') as f:
            f.write(file_content)
To avoid being rate limited, there is a 2-second wait between requests.
But then a problem appeared.
When I requested one of these pages directly, this is what I got:
Rejected. How tragic.
No way around it, keep fighting. Re-analyzing the request, I found that it carries a cookie. Ha, we already know this routine well.
Add the cookie to the requests call, add a User-Agent to the headers as well, and try again.
It works!
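Just as a sketch, the patched getHtml could look like the following; the cookie name/value and the User-Agent string are placeholders, not the real ones from my capture:

    def getHtml(self, url):
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}   # placeholder UA string
        cookies = {'dqd_cookie': 'xxxxxx'}   # placeholder name and value; copy the real cookie from your own capture
        return requests.get(url=url, headers=headers, cookies=cookies).content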
Parse the local HTML
Finally, we parse the HTML pages we downloaded. The rule on these pages is that each issue's goddess introduction publishes the overall score of the previous issue's goddess, and our main task is to extract each goddess's score.
import os
import re

from bs4 import BeautifulSoup


def deal_loaclfile(nvshen_id_picture):
    # Parse every downloaded page and pull out the previous issue's goddess name, vote count and score
    files = os.listdir('html_page/')
    nvshen_list = []
    special_page = []
    for f in files:
        if f[-4:] == 'html' and not f.startswith('~'):
            htmlfile = open('html_page/' + f, 'r', encoding='utf-8').read()
            content = BeautifulSoup(htmlfile, 'html.parser')
            try:
                tmp_list = []
                nvshen_name = content.find(text=re.compile("上一期女神"))
                if nvshen_name is None:
                    continue
                nvshen_name_new = re.findall(r"女神(.+?),", nvshen_name)
                nvshen_count = re.findall(r"超过(.+?)人", nvshen_name)
                tmp_list.append(''.join(nvshen_name_new))
                tmp_list.append(''.join(nvshen_count))
                tmp_list.append(f[:-4])
                # the score is rendered in a red span; sometimes it is the first span, sometimes the second
                tmp_score = content.find_all('span', attrs={'style': "color:#ff0000"})
                tmp_score = list(filter(None, [k.string for k in tmp_score]))
                if '.' in tmp_score[0]:
                    if len(tmp_score[0]) > 3:
                        tmp_list.append(''.join(list(filter(str.isdigit, tmp_score[0].strip()))))
                        nvshen_list = nvshen_list + get_picture(content, tmp_list, nvshen_id_picture)
                    else:
                        tmp_list.append(tmp_score[0])
                        nvshen_list = nvshen_list + get_picture(content, tmp_list, nvshen_id_picture)
                elif len(tmp_score) > 1:
                    if '.' in tmp_score[1]:
                        if len(tmp_score[1]) > 3:
                            tmp_list.append(''.join(list(filter(str.isdigit, tmp_score[1].strip()))))
                            nvshen_list = nvshen_list + get_picture(content, tmp_list, nvshen_id_picture)
                        else:
                            tmp_list.append(tmp_score[1])
                            nvshen_list = nvshen_list + get_picture(content, tmp_list, nvshen_id_picture)
                    else:
                        special_page.append(f)
                        print("拿不到score的HTML:", f)  # pages where no score could be found
                else:
                    special_page.append(f)
                    print("拿不到score的HTML:", f)
            except:
                print("解析出错的HTML:", f)  # pages that failed to parse
                raise
    return nvshen_list, special_page


def get_picture(c, t_list, n_id_p):
    # Find the link to the previous issue, take its id, and look up that goddess's thumb URL
    print("进入get_picture函数:")
    nvshen_l = []
    tmp_prev_id = c.find_all('a', attrs={"target": "_self"})
    for j in tmp_prev_id:
        if '期' in j.string:
            href_list = j['href'].split('/')
            tmp_id = re.findall(r"\d+\.?\d*", href_list[-1])
            if len(tmp_id) == 1:
                prev_nvshen_id = tmp_id[0]
                t_list.append(prev_nvshen_id)
                for n in n_id_p:
                    for k, v in n.items():
                        if k == prev_nvshen_id:
                            t_list.append(v)
    print("t_list", t_list)
    nvshen_l.append(t_list)
    print("get_picture函数结束")
    return nvshen_l
Save the data
We save the parsed data directly to a CSV file; if the amount of data were larger, we could also consider storing it in MongoDB.
def save_to_file(nvshen_list, filename):
    with open(filename + '.csv', 'w', encoding='utf-8') as output:
        output.write('name,count,score,weight_score,page_id,picture\n')
        for row in nvshen_list:
            try:
                # weight the score by the number of voters: count / 1000 is added on top of the raw score
                weight = int(''.join(list(filter(str.isdigit, row[1])))) / 1000
                weight_2 = float(row[2]) + float('%.2f' % weight)
                weight_score = float('%.2f' % weight_2)
                rowcsv = '{},{},{},{},{},{}'.format(row[0], row[1], row[3], weight_score, row[4], row[5])
                output.write(rowcsv)
                output.write('\n')
            except:
                raise
On top of each goddess's raw score, a weighted score is also computed based on the number of people who voted.
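As mentioned above, for a bigger dataset the rows could go into MongoDB instead of a CSV file. A rough sketch with pymongo, where the connection string and the database/collection names are all made-up assumptions:

from pymongo import MongoClient


def save_to_mongo(nvshen_list):
    client = MongoClient('mongodb://localhost:27017/')    # assumed local MongoDB instance
    collection = client['dongqiudi']['nvshen']            # assumed database / collection names
    keys = ['name', 'count', 'page_id', 'score', 'prev_id', 'picture']   # assumed field order, as built above
    collection.insert_many([dict(zip(keys, row)) for row in nvshen_list if len(row) == len(keys)])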
Save the pictures
def save_pic(url, nick_name):
    # Download one picture and save it as picture/<nick_name>.jpg
    resp = requests.get(url)
    if not os.path.exists('picture'):
        os.mkdir('picture')
    if resp.status_code == 200:
        with open('picture' + f'/{nick_name}.jpg', 'wb') as f:
            f.write(resp.content)
The pictures are downloaded directly from the thumb URLs obtained earlier and saved locally.
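A tiny driver to feed save_pic from the parsed rows might look like this, assuming the field order built in deal_loaclfile above (name first, thumb URL last):

for row in nvshen_list:
    if len(row) == 6:                 # only rows where a picture URL was found
        save_pic(row[5], row[0])      # thumb URL, goddess name
        time.sleep(1)                 # be gentle with the image server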
Make some charts
First, let's make a bar chart to look at the top 10 and the bottom 10.
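The original charts aren't reproduced here, but a rough sketch of the top-10 bar chart could be done with pandas and matplotlib (the post doesn't say which plotting library it actually used, and the CSV filename is an assumption):

import matplotlib.pyplot as plt
import pandas as pd

plt.rcParams['font.sans-serif'] = ['SimHei']          # so the Chinese names render on the axis
df = pd.read_csv('nvshen.csv')                        # assumed filename passed to save_to_file
top10 = df.sort_values('weight_score', ascending=False).head(10)

plt.figure(figsize=(10, 5))
plt.bar(top10['name'], top10['weight_score'], color='#ff7f50')
plt.title('Top 10 goddesses by weighted score')
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()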
As you can see, 朱茵 (Athena Chu), 石川恋 (Ren Ishikawa) and 高圆圆 (Gao Yuanyuan) take the top three spots, and no fewer than 7 goddesses score above 95. As for the bottom 10, see for yourself; anyone feeling a little stung? You can also tell from the vote counts that the more popular goddesses generally score well too.
That said, this ranking only reflects the fans' view; who knows what the programmers' ranking would look like?
Word cloud
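(The word-cloud image itself isn't reproduced here.) One way it could be generated with the wordcloud package, using the names weighted by score; the font path and column names are assumptions:

import pandas as pd
from wordcloud import WordCloud

df = pd.read_csv('nvshen.csv')
frequencies = dict(zip(df['name'], df['weight_score']))

wc = WordCloud(font_path='simhei.ttf',        # a font with Chinese glyphs is required; the path is an assumption
               background_color='white',
               width=800, height=600)
wc.generate_from_frequencies(frequencies)
wc.to_file('nvshen_wordcloud.png')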
Picture wall
Drool-worthy.
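The post doesn't show how the wall was stitched together; one way to tile the downloaded pictures with Pillow (grid width and tile size are arbitrary assumptions):

import os
from PIL import Image


def picture_wall(folder='picture', cols=8, size=200):
    files = [f for f in os.listdir(folder) if f.endswith('.jpg')]
    rows = (len(files) + cols - 1) // cols
    wall = Image.new('RGB', (cols * size, rows * size), 'white')
    for idx, name in enumerate(files):
        img = Image.open(os.path.join(folder, name)).resize((size, size))
        wall.paste(img, ((idx % cols) * size, (idx // cols) * size))
    wall.save('nvshen_wall.jpg')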
Scoring with the Baidu API
Baidu offers a free face detection API: give it a picture and it returns a face score, which is very convenient. Interested readers can check the official site.
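A rough sketch of scoring one picture with the official baidu-aip SDK; the credentials are placeholders you get from the Baidu AI console, and the response fields should be double-checked against the current docs:

import base64

from aip import AipFace   # pip install baidu-aip

client = AipFace('your-app-id', 'your-api-key', 'your-secret-key')   # placeholder credentials


def baidu_beauty_score(pic_path):
    with open(pic_path, 'rb') as f:
        image = base64.b64encode(f.read()).decode('utf-8')
    result = client.detect(image, 'BASE64', {'face_field': 'beauty'})
    # when a face is found, the beauty score sits in result['result']['face_list'][0]['beauty']
    if result.get('error_code') == 0:
        return result['result']['face_list'][0]['beauty']
    return None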
Here are the goddesses' new scores as computed through the Baidu API; let's take a look.
Hahaha, the AI's scores depend far too much on the particular picture, so this is purely for entertainment.

Originally published at www.cnblogs.com/chengxyuan/p/11929574.html