When downloading a model from Hugging Face, the error HTTPSConnectionPool(host='huggingface.co', port=443) often occurs even when your Internet connection is working.
For example, it can appear when downloading a tokenizer:
tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/mT5_multilingual_XLSum")
1. Download the model directly
One workaround is to open the model page on Hugging Face, go to its Files and versions tab, and download the files directly. After the download completes, point from_pretrained at the local directory:
tokenizer = AutoTokenizer.from_pretrained("local/path/to/model")
This way, nothing is fetched from the Internet; the local copy of the model is used directly.
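Before loading from the local directory, it can help to confirm that every file you downloaded from the Files and versions tab actually arrived. A minimal sketch using only the standard library; the directory name and the file list are assumptions for illustration, not something the error message prescribes:

```python
from pathlib import Path


def find_missing_files(model_dir, required):
    """Return the names in `required` that are absent from model_dir."""
    model_dir = Path(model_dir)
    return [name for name in required if not (model_dir / name).exists()]


# Hypothetical local directory and the files typically needed by a tokenizer.
REQUIRED = ["config.json", "tokenizer_config.json", "spiece.model",
            "special_tokens_map.json"]
missing = find_missing_files("./mT5_multilingual_XLSum", REQUIRED)
if missing:
    print("Still missing:", missing)
```

If the list is empty, from_pretrained on that directory should work fully offline.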
2. Downgrade requests and urllib3
If you don't want to download all of those files manually, you can instead downgrade requests to 2.27.1 and urllib3 to 1.25.11:
pip install requests==2.27.1
pip install urllib3==1.25.11
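After running the two pip commands, you can verify that the downgrade actually took effect in the environment you are using. A small sketch using only the standard library:

```python
# Check which versions of requests and urllib3 are actually installed,
# so you can confirm the downgrade took effect.
from importlib.metadata import PackageNotFoundError, version


def installed_version(package):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None


print("requests:", installed_version("requests"))  # should show 2.27.1 after the downgrade
print("urllib3:", installed_version("urllib3"))    # should show 1.25.11
```

If the printed versions differ, pip may have installed into a different environment (e.g. a virtualenv vs. the system Python).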
After that, you should see the model files downloading again:
Downloading (…)okenizer_config.json: 100%|██████████| 375/375 [00:00<?, ?B/s]
Downloading (…)lve/main/config.json: 100%|██████████| 730/730 [00:00<?, ?B/s]
Downloading spiece.model: 100%|██████████| 4.31M/4.31M [00:03<00:00, 1.08MB/s]
Downloading (…)cial_tokens_map.json: 100%|██████████| 65.0/65.0 [00:00<00:00, 65.1kB/s]