2019-05-30: Total read count passes one million

My first article on Jianshu went up on September 6, 2017. 630 days and 186 articles later, with no cross-promotion of any kind, the total read count passed 1,000,000+ two days before Children's Day! (If a few of my articles hadn't been blocked, it would have happened a few days earlier. T_T)

Since Jianshu doesn't display a total read count, I wrote a small Python crawler to add it up (swap in any user_id to compute the total for any author):

# Scrape the total read count of a Jianshu author's posts
# My homepage: https://www.jianshu.com/u/130f76596b02

import re
import requests
from lxml import etree


header = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
}

def get_all_article_links(user_id):
    """Collect the links to all of the user's articles by paging through
    the profile. Jianshu keeps serving the last page for out-of-range page
    numbers, so we stop as soon as we see a link we already collected."""
    links_list = []
    page = 1
    while True:
        url = 'https://www.jianshu.com/u/{}?order_by=shared_at&page={}'.format(user_id, page)
        response = requests.get(url, headers=header, timeout=10)
        tree = etree.HTML(response.text)
        article_links = tree.xpath('//div[@class="content"]/a[@class="title"]/@href')
        duplicate_found = False
        for item in article_links:
            article_link = 'https://www.jianshu.com' + item
            print(article_link)
            if article_link not in links_list:
                links_list.append(article_link)
            else:
                duplicate_found = True
                break
        if duplicate_found or not article_links:
            break
        page += 1
    return links_list

def get_read_num(user_id):
    """Fetch every article page and pull its view count out of the
    embedded JSON with a regex. Pages that lack a views_count field
    (e.g. blocked articles) are skipped instead of crashing."""
    num_list = []
    for url in get_all_article_links(user_id):
        response = requests.get(url, headers=header, timeout=30)
        match = re.search(r'"views_count":(\d+)', response.text)
        if match:
            read_num = int(match.group(1))
            print(read_num)
            num_list.append(read_num)
    return num_list


if __name__ == '__main__':
    read_num_list = get_read_num(user_id='130f76596b02')
    print(read_num_list)
    print(sorted(read_num_list))
    print('Total reads =', sum(read_num_list))
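The per-article extraction above hinges on a single regex run against the raw page source. A minimal offline check of that pattern (the HTML fragment below is fabricated for illustration; the real page embeds much more around it):

```python
import re

# Fabricated fragment mimicking the JSON that Jianshu embeds in an article page.
sample_html = '<script>{"id":123,"views_count":3901,"likes_count":42}</script>'

match = re.search(r'"views_count":(\d+)', sample_html)
views = int(match.group(1)) if match else None
print(views)  # 3901
```

Capturing the digits in a group avoids the extra split-on-colon step, and checking for a missing match keeps a blocked or reformatted page from raising an exception.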

The read count of each article:

[98, 308, 244, 205, 334, 528, 743, 131, 191, 438, 368, 754, 3901, 144, 234, 280, 468, 424, 1156, 549, 3043, 260, 464, 146, 135, 2960, 904, 3346, 85, 255, 2647, 1035, 875, 1119, 863, 469, 156, 1238, 637, 1329, 636, 1826, 1078, 362, 598, 1754, 1632, 761, 1011, 1640, 1591, 317, 1540, 689, 1116, 1062, 1791, 2176, 10573, 1774, 2340, 1197, 1606, 2806, 2168, 1680, 1896, 247, 3454, 571, 104, 147, 220, 1166, 180, 306, 1797, 829, 120, 333, 400, 2151, 96, 186, 232, 1425, 7985, 837, 201, 897, 584, 2584, 3940, 348, 8300, 16597, 229, 10810, 4055, 9930, 21782, 1367, 13142, 15105, 302, 18381, 647, 376, 137, 21397, 25279, 27036, 33929, 1133, 1266, 282, 1129, 17469, 34754, 64309, 149, 305, 1078, 672, 65754, 47316, 404, 72523, 208904, 231, 790, 55, 1377, 50161, 684, 166, 27, 771, 741, 1371, 435, 542, 1498, 1106, 4375, 3104, 182, 1961, 3416, 871, 1575, 343, 479, 333, 489, 204, 120, 370, 582, 1759, 38, 392, 798, 502, 410, 185, 271, 128, 228, 653, 447, 20, 47, 3051, 5275, 2105, 5201, 2795, 2515, 111, 2688, 3257, 11373, 2667, 9269, 6795]


Plotted, it looks like this (sorting the read counts from low to high, the result follows a power-law distribution, a pattern very characteristic of SEO-driven traffic):
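That heavy skew can also be quantified without a plot. A sketch (pure Python; `top_share` is a hypothetical helper name) of the share of total reads held by the top fraction of articles, which for a power-law-like distribution is far above that fraction:

```python
def top_share(counts, fraction=0.1):
    """Share of the total held by the top `fraction` of items."""
    ranked = sorted(counts, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)

# Hand-checkable toy example: the top 25% (one item of four) holds 7 of 10 reads.
print(top_share([1, 1, 1, 7], fraction=0.25))  # 0.7
```

Running it over the read-count list above puts a number on how much the few viral posts dominate the total.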

[Figure: per-article read counts; sorted ascending, they trace a power-law curve]

I had also planned to build a profile of my followers, but Jianshu has no ready-made tags, and extracting labels from user information (gender, number of published articles, article topics) would cost too much effort, so I dropped the idea. Some subjective impressions instead:

  1. More female followers than I expected (insert doge meme here);
  2. Quite a few followers write poetry, paint, or write essays (Jianshu is not just a programmer platform, which is great);
  3. Many are in medicine or bioinformatics (I'm quite interested in biology and medicine myself, having almost applied to Peking University Health Science Center, and would love to exchange ideas with them);
  4. Some follow and like posts in the small hours Beijing time, so they are probably overseas (knowledge knows no borders; incidentally, shame on IEEE for recently barring Huawei);
  5. For a surprising number of followers, I am the very first account they followed, which means they registered on Jianshu specifically to read my articles (at a customer-acquisition cost of 100 RMB per user, that saved Jianshu tens of thousands, maybe over a hundred thousand, in user-acquisition spend).

Thank you all for reading!

(Sometimes I'm simply too busy, so some comments have gone unanswered; please forgive me.)


Reposted from blog.csdn.net/weixin_34062469/article/details/90773838