requests + lxml + XPath

The requests library

Fetching an HTML page

import requests

r = requests.get(url)
html = r.content.decode()
# or
html = r.text
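The difference, in a minimal sketch (using httpbin.org as a test URL): r.content holds the raw response bytes, which you decode with an encoding you choose, while r.text decodes them with the encoding requests guessed from the response headers (exposed as r.encoding).

import requests

r = requests.get("http://httpbin.org/html")
print(r.encoding)                   # the encoding requests guessed from the headers
page_a = r.text                     # bytes decoded with the guessed encoding
page_b = r.content.decode("utf-8")  # bytes decoded with an explicit encoding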

Sending a request with headers

headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36"}
r = requests.get(url, headers=headers)
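To confirm the header was actually sent, the response object keeps a reference to the prepared request (a quick check on the request above):

print(r.request.headers["User-Agent"])  # the User-Agent the server actually received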

Sending a request with query params

p = {"wd": "python"}
r = requests.get(url,params=kw)
或者
url = “www.baidu.com/s?wd={}”.format(“python”)
r = requests.get(url)
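Whichever form you use, requests URL-encodes the parameters for you; r.url shows the final URL that was actually requested (a quick check, using the params form above):

r = requests.get("https://www.baidu.com/s", params={"wd": "python"})
print(r.url)  # https://www.baidu.com/s?wd=python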

Sending a POST request

url = ""  # URL of the POST endpoint
data = {
    "query": "人生",
    "from": "zh",
    "to": "en",
    "token": "3382b43f5bd30a8207f823d122f13b36",
    "sign": "548627.834594"
}
headers = {
    "user-agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1"
}
r = requests.post(url, data=data, headers=headers)
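If the endpoint responds with JSON, as translation APIs typically do, r.json() parses the body into a dict; a sketch assuming a JSON response:

result = r.json()  # raises ValueError if the body is not valid JSON
print(result)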

Using a proxy

Why do crawlers need proxies?
  • to make the server believe the requests are not all coming from the same client
  • to keep our real IP address from being leaked
import requests
url = "http://httpbin.org/ip"
headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36"}
proxies = {"http": "http://119.3.37.101:8058"}
r = requests.get(url, headers=headers, proxies=proxies)
print(r.text)
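Note that the proxies dict is keyed by URL scheme, so the entry above only covers http:// URLs. To route HTTPS traffic through the proxy as well, add an "https" key (the proxy address here is just a placeholder):

proxies = {
    "http": "http://119.3.37.101:8058",   # used for http:// requests
    "https": "http://119.3.37.101:8058",  # used for https:// requests
}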

Using a session to keep the login state

import requests

url = "登陆界面的url"
data = {"username": "", "password": ""}
headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36"}
session = requests.session()
session.post(url, headers=headers, data=data)
r = session.get('登陆后能访问的页面')
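This works because the session stores the cookies set by the login response and resends them on every later request; you can inspect what it saved:

print(session.cookies.get_dict())  # cookies kept from the login response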

Handling untrusted SSL certificates

r = requests.get(url, verify=False)
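With verify=False, urllib3 emits an InsecureRequestWarning on every request; if that noise is unwanted it can be silenced (a sketch calling urllib3 directly):

import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
r = requests.get(url, verify=False)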

XPath

  1. XPath syntax
  • /: selects the direct children of the current node, excluding grandchildren.
  • //: selects descendants at any depth, children and grandchildren alike
  • @: //div[@id] selects every div element that has an id attribute;
       //div[@id='first'] selects every div element whose id attribute equals first;
  • Fuzzy matching: //div[contains(@class, 'f1')] selects every div whose class attribute contains f1; the class attribute may contain other classes as well (the sketch after this list exercises these expressions).
  2. Predicates
  • //div[1]: selects the first of all div tags
  • //div[last()]: selects the last of all div tags
  • //div[position()<3]: selects the first two of all div tags
  • Note: indices start at 1.
  3. Wildcards
  • *: matches any node
  4. Selecting multiple paths
  • //div[@class='a'] | //div[@class='b'] selects every div whose class attribute is a or b
  5. Operators
  • and: //div[@class='a' and @id='b'] selects every div whose class attribute is a and whose id attribute is b
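A minimal runnable sketch that exercises the expressions above with lxml (the HTML snippet is invented for illustration):

from lxml import etree

text = """
<html><body>
  <div id="first" class="f1 box">a</div>
  <div class="b">b</div>
  <div class="a">c</div>
</body></html>
"""
html = etree.HTML(text)
print(html.xpath("//div[@id]"))                             # divs that have an id attribute
print(html.xpath("//div[@id='first']/text()"))              # ['a']
print(html.xpath("//div[contains(@class, 'f1')]/text()"))   # ['a'] (class also contains 'box')
print(html.xpath("//div[1]/text()"))                        # first div: ['a']
print(html.xpath("//div[last()]/text()"))                   # last div: ['c']
print(html.xpath("//div[position()<3]/text()"))             # first two divs: ['a', 'b']
print(html.xpath("//div[@class='a'] | //div[@class='b']"))  # union of both selections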

The lxml library

  1. Parsing an HTML string
from lxml import etree

text = """
<div>test</div>
"""
htmlElement = etree.HTML(text)
print(etree.tostring(htmlElement, encoding='utf-8').decode('utf-8'))
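Note that etree.HTML treats the input as a fragment and normalizes it, wrapping it in <html> and <body> tags, which is why the printed output shows tags that were not in the input.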
  2. Parsing an HTML file
parser = etree.HTMLParser(encoding='utf-8')  # specify a parser; it tolerates somewhat malformed HTML
htmlElement = etree.parse(filepath, parser=parser)
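A self-contained sketch of the file variant, writing a throwaway file first (page.html is a made-up name):

from lxml import etree

with open("page.html", "w", encoding="utf-8") as f:
    f.write("<div>test</div>")

parser = etree.HTMLParser(encoding="utf-8")
tree = etree.parse("page.html", parser=parser)
print(etree.tostring(tree, encoding="utf-8").decode("utf-8"))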
  3. Combining lxml with XPath
import requests
from lxml import etree

url = ""
html = etree.HTML(url)
# xpath()返回的是列表
# 获取所有tr标签
trs = html.xpath("//tr")
# 获取第2个tr标签
tr = html.xpath("//tr[2]")[0]
# 获取所有class等于even的tr标签
trs = html.xpath("//tr[@class='even']")
# 获取所有a标签的href的属性
aList = html.xpath("//a/@href")
# 获取所有职位纯文本信息
trs = html.xpath('//tr[position()>1]')
for tr in trs:
    # .代表从当前tr元素下继续提取元素,不加默认从html下提取元素
    href = tr.xpath(".//a//@href")[0]
    full_url = 'http://tecent.com' + href
    title = tr.xpath("./td[1]/text()")[0]
    category = tr.xpath("./td[2]/text()")[0]
    nums = tr.xpath("./td[3]/text()")[0]
    city = tr.xpath("./td[4]/text()")[0]
    publish_time = tr.xpath("./td[5]/text()")[0]
    kw = {
        'url': full_url,
        'title': title,
        'category': category,
        'nums': nums,
        'city': city,
        'publish_time': publish_time
    }
    # db.insert(kw)
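One caveat with the [0] indexing above: xpath() returns an empty list when nothing matches, so [0] raises IndexError on a malformed row. A small guard helps (the name first_or_default is made up for this sketch):

def first_or_default(element, path, default=""):
    """Return the first xpath match, or a default when nothing matches."""
    results = element.xpath(path)
    return results[0] if results else default

title = first_or_default(tr, "./td[1]/text()")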


Reposted from blog.csdn.net/qq_39249347/article/details/104192029