urllib.parse: low-level, but a handy module for processing URL paths

Introduction

urllib.parse is a module under the urllib package. The other urllib modules (such as urllib.request) can largely be replaced by requests, but urllib.parse is still worth knowing, because it provides many methods for working with URL paths.

urlparse: split a URL

from urllib import parse
url = "https://www.baidu.com/s?wd=python"
print(parse.urlparse(url))  # ParseResult(scheme='https', netloc='www.baidu.com', path='/s', params='', query='wd=python', fragment='')
"""
scheme: the protocol, e.g. http, https, and so on
netloc: the domain, here www.baidu.com
path: the path that follows the domain
params: parameters (the part after a ";" in the last path segment)
query: the query string
fragment: the anchor, used to jump straight to a specific position on the page
"""
scheme, netloc, path, params, query, fragment = parse.urlparse(url)
print(f"协议:{scheme}")
print(f"域名:{netloc}")
print(f"路径:{path}")
print(f"参数:{params}")
print(f"查询参数:{query}")
print(f"锚点:{fragment}")
"""
协议:https
域名:www.baidu.com
路径:/s
参数:
查询参数:wd=python
锚点:
"""


# urlparse also accepts a scheme argument
# it only takes effect when the url being parsed does not carry a scheme of its own
url = "www.baidu.com/s?wd=python"
print(parse.urlparse(url))  # ParseResult(scheme='', netloc='', path='www.baidu.com/s', params='', query='wd=python', fragment='')
print(parse.urlparse(url, scheme="https"))  # ParseResult(scheme='https', netloc='', path='www.baidu.com/s', params='', query='wd=python', fragment='')
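

# ParseResult is a named tuple, so besides unpacking it we can also read the
# pieces as attributes — a quick sketch using the same URL as above:
result = parse.urlparse("https://www.baidu.com/s?wd=python")
print(result.scheme)  # https
print(result.netloc)  # www.baidu.com
print(result.query)   # wd=python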

urlunparse: build a URL

# urlparse splits a url apart
# urlunparse puts one back together; its argument is a tuple holding the same parts that urlparse produces
url_params = ("https", "www.abc.com", "/info/ad2sads", "",  "name=saya&age=16", "splendid")
print(parse.urlunparse(url_params))  # https://www.abc.com/info/ad2sads?name=saya&age=16#splendid
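

# since urlunparse is the inverse of urlparse, feeding the ParseResult straight
# back in should reproduce the original URL — a minimal round-trip sketch:
url = "https://www.abc.com/info/ad2sads?name=saya&age=16#splendid"
print(parse.urlunparse(parse.urlparse(url)) == url)  # True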

urljoin: join URLs

# sometimes the url we get does not contain the domain
# e.g. when scraping images, the real path is http://www.xxx.com/picture/aaa.jpg
# but the page only gives us /picture/aaa.jpg, so we have to join the two ourselves
netloc = "http://www.xxx.com"
path = "/picture/aaa.jpg"  # 开头的/无论有没有,都能组合成功
print(parse.urljoin(netloc, path))  # http://www.xxx.com/picture/aaa.jpg


# what if the second argument is already a full url?
netloc = "http://www.xxx.com"
path = "http://www.xxx.com/picture/aaa.jpg"
print(parse.urljoin(netloc, path))  # http://www.xxx.com/picture/aaa.jpg
# if it is not a full url it gets joined onto the base; if it already is one, the result is simply that url itself


netloc = "http://www.kkk.com"
path = "http://www.xxx.com/picture/aaa.jpg"
print(parse.urljoin(netloc, path))  # http://www.xxx.com/picture/aaa.jpg
# when the two domains differ, the full url in path takes priority
# netloc is only used when path does not contain a domain of its own
netloc = "http://www.kkk.com"
path = "/picture/aaa.jpg"
print(parse.urljoin(netloc, path))  # http://www.kkk.com/picture/aaa.jpg
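

# urljoin also resolves relative paths against the base URL's path, the way a
# browser resolves links on a page — a small sketch reusing the made-up URLs above:
base = "http://www.xxx.com/picture/aaa.jpg"
print(parse.urljoin(base, "bbb.jpg"))   # http://www.xxx.com/picture/bbb.jpg
print(parse.urljoin(base, "/ccc.jpg"))  # http://www.xxx.com/ccc.jpg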

urlencode: convert parameters into a query string

# when we pass parameters to requests' get method, we can just hand it a dict
# which means requests converts it for us; we can also call urlencode to do that conversion by hand
netloc = "http://www.query.com"
path = "/search"
params = {"name": "mashiro", "age": 16}
print(parse.urlencode(params))  # name=mashiro&age=16
print(parse.urljoin(netloc, path) + "?" + parse.urlencode(params))  # http://www.query.com/search?name=mashiro&age=16

quote: percent-encode Chinese characters in a URL

# when a url contains Chinese characters, they are transmitted in encoded form
url = "https://www.baidu.com/s?wd=古明地觉"
print(parse.quote(url))  # https%3A//www.baidu.com/s%3Fwd%3D%E5%8F%A4%E6%98%8E%E5%9C%B0%E8%A7%89
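

# quote leaves letters, digits and a few safe characters alone (by default only
# "/" among the URL delimiters) and percent-encodes the rest, which is why the
# ":" and "?" above were encoded too — a small sketch: quote just the value, or
# widen safe to keep the delimiters intact:
print(parse.quote("古明地觉"))        # %E5%8F%A4%E6%98%8E%E5%9C%B0%E8%A7%89
print(parse.quote(url, safe=":/?="))  # https://www.baidu.com/s?wd=%E5%8F%A4%E6%98%8E%E5%9C%B0%E8%A7%89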

unquote: decode a percent-encoded URL back to Chinese

print(parse.unquote("https%3A//www.baidu.com/s%3Fwd%3D%E5%8F%A4%E6%98%8E%E5%9C%B0%E8%A7%89"))  # https://www.baidu.com/s?wd=古明地觉
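

# unquote only decodes percent escapes; for form-encoded data where spaces were
# sent as "+", unquote_plus is the matching helper — a quick sketch:
print(parse.unquote_plus("wd=hello+world"))  # wd=hello world
print(parse.unquote("wd=hello+world"))       # wd=hello+world (the "+" is kept)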


Origin www.cnblogs.com/traditional/p/11571693.html