Python crawler in practice: using the requests and time modules to scrape data from a recruitment website and save it to a CSV file (with source code)

Foreword

Today I will show you how to use Python to scrape data from a recruitment website and save it locally. The full code is provided below for anyone who needs it, along with a few tips.

First of all, before crawling, you should disguise your requests as a normal browser as much as possible so they are not recognized as coming from a crawler. The bare minimum is adding a request header, but because many people scrape this kind of plain-text data, we should also consider rotating proxy IPs and randomizing the request headers when scraping the recruitment site, as sketched below.
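Here is a minimal sketch of what rotating the User-Agent and proxy can look like. The proxy address below is a placeholder; substitute working proxies of your own.

import random

import requests

# A small pool of User-Agent strings; picking one at random makes the
# traffic look less uniform
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
]
# Placeholder proxy pool -- replace with proxies you actually control
PROXIES = [
    {'http': 'http://127.0.0.1:8888', 'https': 'http://127.0.0.1:8888'},
]

def fetch(url):
    """Send a request with a random User-Agent and a random proxy."""
    headers = {'user-agent': random.choice(USER_AGENTS)}
    return requests.get(url, headers=headers,
                        proxies=random.choice(PROXIES), timeout=10)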

Before writing any crawler code, the first and most important step is always to analyze the target web page.

Through analysis we also found that crawling is relatively slow, so if you drive a real Chrome browser you can improve the crawler's speed by disabling image loading, JavaScript, and so on; see the sketch below.
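This tip mainly applies if you drive a real Chrome browser, for example with Selenium; the requests-based code in this article never downloads images or runs JavaScript in the first place. A minimal sketch, assuming Selenium and ChromeDriver are installed (note that newer Chrome versions may ignore the JavaScript preference):

from selenium import webdriver

# Block images and JavaScript so pages load faster while crawling
options = webdriver.ChromeOptions()
options.add_experimental_option('prefs', {
    'profile.managed_default_content_settings.images': 2,
    'profile.managed_default_content_settings.javascript': 2,
})
driver = webdriver.Chrome(options=options)
driver.get('https://www.lagou.com/')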

[Figure: the recruitment website]

Development tools

Python version: 3.8

Related modules:

requests module

csv module

time module


Environment setup

Install Python, add it to the PATH environment variable, and install the one third-party module with pip (pip install requests); the csv and time modules are part of the standard library.

Idea analysis

Open the page we want to crawl in the browser.
Press F12 to open the developer tools and look for where the recruitment data we want actually lives.
The listings turn out not to be in the page HTML itself: they come back as JSON from an Ajax request (positionAjax.json), and that packet holds the page data we need, as the sanity check below shows.
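Before writing the full loop, it is worth posting a single request to that endpoint and inspecting the JSON. A minimal sketch (the cookie is a placeholder you must copy from your own browser session; the form fields mirror the ones used in the full code below):

import pprint

import requests

url = 'https://www.lagou.com/jobs/positionAjax.json?needAddtionalResult=false'
headers = {
    'cookie': 'your cookie here',  # placeholder: paste the cookie from your browser
    'referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput=',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
}
data = {'first': 'true', 'pn': 1, 'kd': 'python'}

response = requests.post(url, data=data, headers=headers)
pprint.pprint(response.json())  # the listings live under content -> positionResult -> result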

[Figure: source code structure]

Code

import csv
import time

import requests

# Open the output file (the filename means "recruitment data") in append mode;
# newline='' prevents blank rows on Windows
f = open('招聘数据.csv', mode='a', encoding='utf-8', newline='')
# The column headers are Chinese: title, region, company name, salary,
# education, experience, company tags, detail page
csv_writer = csv.DictWriter(f, fieldnames=[
    '标题',
    '地区',
    '公司名字',
    '薪资',
    '学历',
    '经验',
    '公司标签',
    '详情页',
])

csv_writer.writeheader()  # write the header row
for page in range(1, 31):
    print(f'------------------------ crawling page {page} -------------------------')
    time.sleep(1)
    # 1. Send the request
    url = 'https://www.lagou.com/jobs/positionAjax.json?needAddtionalResult=false'
    # The headers disguise the Python code so it is not identified as a crawler
    # and blocked. (Tip: in PyCharm, select the copied headers and press Ctrl+R
    # to batch-convert them into a dict with a regex replace.)
    # cookie: user information, often used to check whether an account is logged in
    # referer: anti-hotlink field telling the server which page this request
    #          was sent from (dynamic pages tend to involve many data packets)
    # user-agent: the browser's basic identifier
    headers = {
        'cookie': 'your cookie here',
        'referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput=',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
    }
    data = {
        'first': 'false',
        'pn': page,
        'kd': 'python',
        'sid': 'bf8ed05047294473875b2c8373df0357'
    }
    # response holds the server's reply, e.g. <Response [200]>;
    # status code 200 means the request succeeded
    response = requests.post(url=url, data=data, headers=headers)
    # response.text   -> the response body as a string
    # response.json() -> the response body parsed into a Python dict
    # 2. Get the data / 3. Parse the data: JSON is very easy to parse --
    # just take values by key, i.e. the name left of the colon selects
    # the value right of the colon
    result = response.json()['content']['positionResult']['result']
    # result is a list whose elements are dicts, one per job listing
    for index in result:
        # build the detail-page URL from the position ID
        href = f'https://www.lagou.com/jobs/{index["positionId"]}.html'
        dit = {
            '标题': index['positionName'],                    # title
            '地区': index['city'],                            # region
            '公司名字': index['companyFullName'],             # company name
            '薪资': index['salary'],                          # salary
            '学历': index['education'],                       # education
            '经验': index['workYear'],                        # experience
            '公司标签': ','.join(index['companyLabelList']),  # tags list joined into one string
            '详情页': href,                                   # detail page
        }
        csv_writer.writerow(dit)
        print(dit)

f.close()
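Once the script finishes, you can quickly verify the output by reading the file back with csv.DictReader (a minimal sketch):

import csv

# Read the saved file back and spot-check a few columns
with open('招聘数据.csv', encoding='utf-8') as f:
    for row in csv.DictReader(f):
        print(row['标题'], row['薪资'], row['详情页'])

Note that because the crawler opens the file in append mode and writes the header on every run, re-running it adds a second header row to the same file; change mode='a' to mode='w' if you want a fresh file each time.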

How to obtain the cookie is shown in the figure below.

[Figure: obtaining the Cookie]

Result display

[Figure: the scraped data saved in 招聘数据.csv]

Finally

To thank my readers, I would like to share some of my favorite recent programming resources as a way of giving back, and I hope they help you.

They include practical Python tutorials suitable for beginners~

Come and grow together with Xiaoyu!

① More than 100 Python PDFs (covering the mainstream and classic books)

② The Python standard library reference (the most complete Chinese edition)

③ Crawler project source code (forty or fifty interesting, classic hands-on projects with their source code)

④ Videos on Python basics, crawlers, web development, and big data analysis (suitable for beginners)

⑤ A Python learning roadmap (say goodbye to unfocused learning)


Origin blog.csdn.net/Modeler_xiaoyu/article/details/128249283