Python Crawler Series 01: Getting to Know urllib

All articles in this series are based on Python 3.5.2.

urllib is a commonly used built-in Python module that provides a set of functions for working with URLs.

GET

urllib's request module makes it very easy to fetch the contents of a URL: it sends a GET request to the given page and returns the server's HTTP response.

from urllib import request

# urlopen returns an http.client.HTTPResponse; the with-block closes it for us
with request.urlopen('https://api.douban.com/v2/book/2129650') as f:
    data = f.read()
    print('Status: ', f.status, f.reason)
    for k, v in f.getheaders():
        print('(%s: %s)' % (k, v))
    print('Data', data.decode('utf-8'))

You can see the HTTP response headers followed by the JSON data:

Status:  200 OK
(Date: Mon, 04 Sep 2017 03:07:24 GMT)
(Content-Type: application/json; charset=utf-8)
(Content-Length: 2058)
(Connection: close)
(Vary: Accept-Encoding)
(X-Ratelimit-Remaining2: 99)
(X-Ratelimit-Limit2: 100)
(Expires: Sun, 1 Jan 2006 01:00:00 GMT)
(Pragma: no-cache)
(Cache-Control: must-revalidate, no-cache, private)
(Set-Cookie: bid=0jmmjXZzj9A; Expires=Tue, 04-Sep-18 03:07:24 GMT; Domain=.douban.com; Path=/)
(X-DOUBAN-NEWBID: 0jmmjXZzj9A)
(X-DAE-Node: sindar17d)
(X-DAE-App: book)
(Server: dae)
Data {"rating":{"max":10,"numRaters":16,"average":"7.4","min":0},"subtitle":"","author":["廖雪峰"],"pubdate":"2007","tags":[{"count":21,"name":"spring","title".....
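The body returned above is JSON, so it can be turned into Python objects with the standard-library json module. A minimal sketch, using a trimmed copy of the response body above instead of a live request:

```python
import json

# Trimmed sample of the JSON body shown above (no network call)
body = '{"rating":{"max":10,"numRaters":16,"average":"7.4","min":0},"author":["廖雪峰"],"pubdate":"2007"}'

book = json.loads(body)
print(book['rating']['average'])  # → 7.4
print(book['author'][0])          # → 廖雪峰
```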

If we want to simulate a browser sending a GET request, we need a Request object; by adding HTTP headers to the Request object, we can disguise the request as coming from a browser. For example, to simulate an iPhone 6 requesting the Douban homepage:

# Adding a User-Agent header disguises the request as a mobile browser
req = request.Request('http://www.douban.com/')
req.add_header('User-Agent',
               'Mozilla/6.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/8.0 Mobile/10A5376e Safari/8536.25')
with request.urlopen(req) as f:
    print('Status:', f.status, f.reason)
    for k, v in f.getheaders():
        print('%s: %s' % (k, v))
    print('Data:', f.read().decode('utf-8'))

This returns the mobile version of the page, tailored for the iPhone:

<!DOCTYPE html>
<html itemscope itemtype="http://schema.org/WebPage">
    <head>
        <meta charset="UTF-8">
        <title>豆瓣(手机版)</title>
        <meta name="viewport" content="width=device-width, height=device-height, user-scalable=no, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0">
        <meta name="format-detection" content="telephone=no">
        <link rel="canonical" href="https://m.douban.com/">
        <link href="https://img3.doubanio.com/f/talion/3c45a4b3705e30953879f6078082cbd1b9f88858/css/card/base.css" rel="stylesheet">

    <meta name="description" content="读书、看电影、涨知识、学穿搭...,加入兴趣小组,获得达人们的高质量生活经验,找到有相同爱好的小伙伴。">
    <meta name="keywords" content="豆瓣,手机豆瓣,豆瓣手机版,豆瓣电影,豆瓣读书,豆瓣同城">
......
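Real requests can fail, and urllib.error distinguishes a response with an error status from a failure to get any response at all. A minimal error-handling sketch (the URL is illustrative):

```python
from urllib import request, error

try:
    with request.urlopen('https://www.douban.com/no-such-page') as f:
        print('Status:', f.status, f.reason)
except error.HTTPError as e:
    # The server answered, but with an error status such as 404
    print('HTTPError:', e.code, e.reason)
except error.URLError as e:
    # No response at all: DNS failure, refused connection, ...
    print('URLError:', e.reason)
```

HTTPError is a subclass of URLError, so it must be caught first.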

POST

To send a POST request, simply pass the data parameter in bytes form.

We simulate a Weibo login: first read the login email and password, then encode them as username=xxx&password=xxx according to the format of weibo.cn's login page:

from urllib import request, parse

print('Login to weibo.cn...')
email = input('Email: ')
passwd = input('Password: ')
# Encode the form fields in the order the login page expects
login_data = parse.urlencode([
    ('username', email),
    ('password', passwd),
    ('entry', 'mweibo'),
    ('client_id', ''),
    ('savestate', '1'),
    ('ec', ''),
    ('pagerefer', 'https://passport.weibo.cn/signin/welcome?entry=mweibo&r=http%3A%2F%2Fm.weibo.cn%2F')
])

req = request.Request('https://passport.weibo.cn/sso/login')
req.add_header('Origin', 'https://passport.weibo.cn')
req.add_header('User-Agent', 'Mozilla/6.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/8.0 Mobile/10A5376e Safari/8536.25')
req.add_header('Referer', 'https://passport.weibo.cn/signin/login?entry=mweibo&res=wel&wm=3349&r=http%3A%2F%2Fm.weibo.cn%2F')

with request.urlopen(req, data=login_data.encode('utf-8')) as f:
    print('Status:', f.status, f.reason)
    for k, v in f.getheaders():
        print('%s: %s' % (k, v))
    print('Data:', f.read().decode('utf-8'))

If the login succeeds, the response looks like this:

Status: 200 OK
Server: nginx/1.2.0
...
Set-Cookie: SSOLoginState=1432620126; path=/; domain=weibo.cn
...
Data: {"retcode":20000000,"msg":"","data":{...,"uid":"1658384301"}}

If the login fails, the response looks like this:

...
Data: {"retcode":50011015,"msg":"\u7528\u6237\u540d\u6216\u5bc6\u7801\u9519\u8bef","data":{"username":"[email protected]","errline":536}}
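The msg field in the failure response is \u-escaped Unicode; json.loads decodes it back into readable text:

```python
import json

# The escaped msg field from the failure response above
resp = '{"retcode":50011015,"msg":"\\u7528\\u6237\\u540d\\u6216\\u5bc6\\u7801\\u9519\\u8bef"}'
data = json.loads(resp)
print(data['msg'])  # → 用户名或密码错误 ("incorrect username or password")
```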

Note: this article is a record of studying Liao Xuefeng's (廖雪峰) Python tutorial, and serves as the foundation piece of this Python crawler series.


Reposted from blog.csdn.net/hepann44/article/details/77834690