How does Python build its own proxy IP pool? This article shows you~

Foreword

Hello everyone, this is the Demon King~

Development environment:

  • Python 3.8
  • Pycharm

Modules used:

  • requests >>> pip install requests
  • parsel >>> pip install parsel

How to install Python third-party modules:

  1. Press win + R, type cmd and click OK, then enter the install command pip install module-name (e.g. pip install requests) and press Enter
  2. Or click Terminal in PyCharm and enter the same install command

How to configure the Python interpreter in PyCharm?

  1. Select File >>> Settings >>> Project >>> Python Interpreter
  2. Click the gear icon and select Add
  3. Add your Python installation path

How to install plugins in PyCharm?

  1. Select File >>> Settings >>> Plugins
  2. Click Marketplace and enter the name of the plugin you want to install. For example: for a translation plugin search Translation, for a Chinese UI search Chinese
  3. Select the matching plugin and click Install
  4. After installation succeeds, a prompt to restart PyCharm pops up; click OK, the plugin takes effect after the restart

Proxy IP structure

proxies_dict = {
    "http": "http://" + ip + ":" + port,
    "https": "http://" + ip + ":" + port,
}

Ideas:

1. Data source analysis

Figure out where the data we want actually comes from.

2. Code implementation steps:

  1. Send a request to the target url
  2. Get the data: the response the server returns (web page source code)
  3. Parse the data and extract the content we want
  4. Save the data / IP detection: check whether each proxy IP is usable and keep the ones that pass

import module              # import an entire module
from module import method  # import a specific method from a module
from xxx import *         # import all methods from a module
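For example, with the built-in re module the three forms look like this:

```python
import re                 # import the whole module; call methods as re.findall(...)
from re import findall    # import one method; call it directly as findall(...)
# from re import *        # import all public names (works, but pollutes the namespace)

print(re.findall(r'\d+', 'port 8080'))   # ['8080']
print(findall(r'\d+', 'port 8080'))      # ['8080']
```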


code

# Import the data-request module
import requests  # data-request module, third-party: pip install requests
# Import the regular-expression module
import re  # built-in module
# Import the data-parsing module
import parsel  # data-parsing module, third-party: pip install parsel  >>> also a core component of the scrapy framework


lis = []    # all collected proxies
lis_1 = []  # proxies that passed the check

# 1. Send a request to the target url https://www.kuaidaili.com/free/
for page in range(11, 21):
    url = f'https://www.kuaidaili.com/free/inha/{page}/'  # build the request url
    """
    headers: request headers, used to disguise the python code as a browser
    """
    # Use the get method of the requests module to send a request to the url,
    # and receive the returned data in the response variable
    response = requests.get(url)
    # <Response [200]>  the request returns a response object; status code 200 means success
    # 2. Get the data: the response the server returns (web page source code); response.text is the response body text
    # print(response.text)
    # 3. Parse the data and extract the content we want
    """
    Ways to parse the data:
        regex: extracts content directly from the string
    (for the other two, the downloaded html string must be converted first)
        xpath: extracts content by tag nodes
        css selectors: extract content by tag attributes

        whichever works for the job, and whichever you prefer
    """
    # Extracting content with regular expressions
    """
    # regex extraction: re.findall() calls the method from the re module
    # regex rule of thumb: .*? matches any character (except the newline \n); re.S lets it match newlines too

    ip_list = re.findall('<td data-title="IP">(.*?)</td>', response.text, re.S)
    port_list = re.findall('<td data-title="PORT">(.*?)</td>', response.text, re.S)
    print(ip_list)
    print(port_list)
    """
    # css selectors:
    """
    # css extraction needs the downloaded html string (response.text) converted first
    # don't know css or xpath? copy the selector from the browser's dev tools:
    # #list > table > tbody > tr > td:nth-child(1)
    # //*[@id="list"]/table/tbody/tr/td[1]
    selector = parsel.Selector(response.text)  # turn the html string into a Selector object
    ip_list = selector.css('#list tbody tr td:nth-child(1)::text').getall()
    port_list = selector.css('#list tbody tr td:nth-child(2)::text').getall()
    print(ip_list)
    print(port_list)
    """
    # xpath extraction
    selector = parsel.Selector(response.text)  # turn the html string into a Selector object
    ip_list = selector.xpath('//*[@id="list"]/table/tbody/tr/td[1]/text()').getall()
    port_list = selector.xpath('//*[@id="list"]/table/tbody/tr/td[2]/text()').getall()
    # print(ip_list)
    # print(port_list)
    for ip, port in zip(ip_list, port_list):
        # print(ip, port)
        proxy = ip + ':' + port
        proxies_dict = {
            "http": "http://" + proxy,
            "https": "http://" + proxy,
        }
        # print(proxies_dict)
        lis.append(proxies_dict)
        # 4. Check the proxy IP quality
        try:
            response = requests.get(url=url, proxies=proxies_dict, timeout=1)
            if response.status_code == 200:
                print('Current proxy IP:', proxies_dict, 'is usable')
                lis_1.append(proxies_dict)
        except requests.RequestException:
            print('Current proxy IP:', proxies_dict, 'timed out, failed the check')



print('Number of proxy IPs collected:', len(lis))
print('Number of usable proxy IPs:', len(lis_1))
print('Usable proxy IPs:', lis_1)
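The detection step above can also be wrapped in a small reusable function. This is a sketch, not the article's original code: check_proxy is a hypothetical helper name, and the test url and timeout are assumptions you can tune:

```python
import requests

def check_proxy(proxies_dict, test_url='https://www.kuaidaili.com/free/', timeout=1):
    # Return True only if the request goes through the proxy and gets a 200
    try:
        response = requests.get(test_url, proxies=proxies_dict, timeout=timeout)
        return response.status_code == 200
    except requests.RequestException:
        # connection refused / timed out / proxy error -> proxy is not usable
        return False

# an obviously dead proxy fails the check within the timeout
dead = {'http': 'http://10.255.255.1:40698', 'https': 'http://10.255.255.1:40698'}
print(check_proxy(dead))
```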

dit = {
    'http': 'http://110.189.152.86:40698',
    'https': 'http://110.189.152.86:40698'
}


video tutorial

IP blocked?! Still looking for a paid proxy? Learn how to collect free proxy IPs and test whether they work~

epilogue

Well, that's the end of this article!

If you have more suggestions or questions, feel free to comment or message me privately! Let's keep at it together (ง •_•)ง

If you liked this, follow the blogger, or like and comment on my article!!!


Origin blog.csdn.net/python56123/article/details/124171676