The Web Crawler Road: Getting Started

The weekly crawler learning plan:

[Figure: weekly learning plan]

[Figure: crawler preparation and route]

Example: crawling the Kuaidaili free proxy site.

Target URL: https://www.kuaidaili.com/free/

For this crawl we use the third-party requests library.

Requests is a Python HTTP client library; we can use it to fetch a page's HTML source.

import requests

url = "https://www.kuaidaili.com/free/"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36"
}
# Disguise the request as coming from a normal browser by spoofing the headers
res = requests.get(url, headers=headers)
res.encoding = "utf-8"
html = res.text
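The snippet above assumes the request always succeeds. As a slightly more defensive variant (not from the original article), one can pass a timeout and check the HTTP status before parsing; the 10-second value is an arbitrary choice:

import requests

url = "https://www.kuaidaili.com/free/"
headers = {"User-Agent": "Mozilla/5.0"}  # User-Agent shortened here for brevity

# Give up if the server does not respond within 10 seconds
res = requests.get(url, headers=headers, timeout=10)
# Raise requests.HTTPError on a 4xx/5xx response instead of parsing an error page
res.raise_for_status()
res.encoding = "utf-8"
html = res.text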

Then we use XPath (via lxml's etree module) to traverse the tags and extract the content we need:

from lxml import etree

e = etree.HTML(html)
ip_list = e.xpath("//tr/td[1]/text()")
port_list = e.xpath("//tr/td[2]/text()")
# Iterate over both lists in parallel with zip and print each ip/port pair
for ip, port in zip(ip_list, port_list):
    print("ip:" + ip + "\tport:" + port)
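Scraped proxies are only useful if they actually relay traffic, so a natural follow-up is to test each one by routing a request through it with the proxies parameter of requests. A minimal sketch, assuming http://httpbin.org/ip as a test endpoint (my choice, not from the original article):

import requests

def check_proxy(ip, port, timeout=5):
    # Build the proxy URL that requests expects, e.g. "http://1.2.3.4:8080"
    proxy = "http://{}:{}".format(ip, port)
    proxies = {"http": proxy, "https": proxy}
    try:
        # httpbin.org/ip echoes the caller's IP; a 200 means the proxy relayed the request
        r = requests.get("http://httpbin.org/ip", proxies=proxies, timeout=timeout)
        return r.status_code == 200
    except requests.RequestException:
        return False

for ip, port in zip(ip_list, port_list):
    print(ip, port, "OK" if check_proxy(ip, port) else "failed")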

Summary

This article mainly explained the structure and applications of web crawlers, together with a worked example of implementing a crawler in Python. I hope you will pay particular attention to the web crawler workflow and to the way Requests issues HTTP requests.
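For convenience, here are the two snippets above assembled into one self-contained script covering the whole workflow (request, parse, output); nothing here goes beyond the article's own code:

import requests
from lxml import etree

url = "https://www.kuaidaili.com/free/"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36"
}

# Step 1: fetch the HTML source with a browser-like User-Agent
res = requests.get(url, headers=headers)
res.encoding = "utf-8"

# Step 2: parse the page and pull out the first two columns of each table row
e = etree.HTML(res.text)
ip_list = e.xpath("//tr/td[1]/text()")
port_list = e.xpath("//tr/td[2]/text()")

# Step 3: print each ip/port pair
for ip, port in zip(ip_list, port_list):
    print("ip:" + ip + "\tport:" + port)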

Origin blog.csdn.net/IT6848/article/details/108733841