Web Crawling in Practice with the Requests Library
Example 1: Crawling a JD product page

import requests

url = "https://item.jd.com/100007136939.html"
try:
    kv = {'user-agent': 'Mozilla/5.0'}
    r = requests.get(url, headers=kv)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    print(r.text[:1000])
except:
    print("Crawl failed")
JD also refuses requests from crawlers, so we need to disguise the User-Agent sent to the server as a browser's.
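The disguise can be checked without actually hitting the server: a minimal sketch that prepares the request (but never sends it) and inspects the headers that would go out. The JD URL is the one from the example above.

```python
import requests

# The library's default User-Agent identifies us as a script
# (something like "python-requests/2.x"), which some sites block.
print(requests.utils.default_headers()['User-Agent'])

# Preparing a request without sending it shows the headers that
# would actually reach the server once the agent is disguised.
req = requests.Request('GET', 'https://item.jd.com/100007136939.html',
                       headers={'user-agent': 'Mozilla/5.0'})
prepared = req.prepare()
print(prepared.headers['user-agent'])  # Mozilla/5.0
```

Inspecting `r.request.headers` after a real `requests.get` call shows the same thing for a request that was actually sent.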
Example 2: Submitting search keywords to Baidu/360

# Baidu
import requests

keyword = "Python"
try:
    kv = {'wd': keyword}
    r = requests.get("http://www.baidu.com/s", params=kv)
    print(r.request.url)
    r.raise_for_status()
    print(len(r.text))
except:
    print("Crawl failed")

# 360
import requests

keyword = "Python"
try:
    kv = {'q': keyword}
    r = requests.get("http://www.so.com/s", params=kv)
    print(r.request.url)
    r.raise_for_status()
    print(len(r.text))
except:
    print("Crawl failed")
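How `params` turns into the final URL can also be seen offline, by preparing the request instead of sending it. This is a sketch using the same Baidu endpoint and `wd` parameter as above.

```python
import requests

# params are URL-encoded and appended to the query string; preparing
# the request (without sending it) exposes the final URL.
req = requests.Request('GET', 'http://www.baidu.com/s',
                       params={'wd': 'Python'})
print(req.prepare().url)  # http://www.baidu.com/s?wd=Python
```

This is exactly what `r.request.url` prints in the example: the keyword dictionary has been merged into the query string.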
Example 3: Crawling and saving a web image

import requests
import os

url = "http://file06.16sucai.com/2018/0330/61064182a59d797418c44af840cc1f23.jpg"
root = "D://pics//"
path = root + url.split('/')[-1]
try:
    if not os.path.exists(root):
        os.mkdir(root)
    if not os.path.exists(path):
        r = requests.get(url)
        with open(path, 'wb') as f:
            f.write(r.content)
        print("File saved successfully")
    else:
        print("File already exists")
except:
    print("Crawl failed")
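The saving pattern itself can be exercised without the network. In this sketch a temporary directory stands in for `D://pics//` and a byte string stands in for `r.content`; `os.makedirs(..., exist_ok=True)` is an assumed, more compact alternative to the `exists()`/`mkdir()` pair above.

```python
import os
import tempfile

url = "http://file06.16sucai.com/2018/0330/61064182a59d797418c44af840cc1f23.jpg"
root = tempfile.mkdtemp()                 # stand-in for D://pics// in this sketch
path = os.path.join(root, url.split('/')[-1])  # filename = last URL path segment

# makedirs with exist_ok=True does nothing if the directory is already
# there, and also creates intermediate directories if needed.
os.makedirs(root, exist_ok=True)

fake_image_bytes = b'\xff\xd8\xff\xe0fake-jpeg'  # stand-in for r.content
with open(path, 'wb') as f:               # 'wb' because image data is binary
    f.write(fake_image_bytes)             # the with block closes f automatically

print(os.path.exists(path))  # True
```

Note that `r.content` (raw bytes) is used for binary data, not `r.text` (decoded text), and the file is opened in `'wb'` mode accordingly.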
Gripe
Nowadays, how is anyone supposed to find a URL ending in .jpg on a website? They are all hidden.
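One way around this: image links usually still sit in `src` or `data-src` attributes of the page's HTML, so a regular expression can dig them out. A sketch over a hypothetical HTML snippet (the URLs below are made up for illustration):

```python
import re

html = '<img src="https://example.com/a/photo.jpg"> <img data-src="https://example.com/b/pic.png">'

# Match src= or data-src= attributes whose value ends in .jpg or .png;
# the capture group keeps only the URL itself.
urls = re.findall(r'(?:src|data-src)="([^"]+\.(?:jpg|png))"', html)
print(urls)
```

In a real crawl, `html` would be `r.text` from a `requests.get` of the page, and each extracted URL could then be fed to the image-saving code of Example 3.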
Example 4: Automatic lookup of an IP address's location

import requests

url = 'https://m.ip138.com/iplookup.asp?ip='
try:
    kv = {'user-agent': 'chrome/10'}
    r = requests.get(url + '202.204.80.112', headers=kv)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    print(r.text[-500:])
except:
    print("Crawl failed")
Note: this version fails, and I have not figured out why. I record the fix here for reference, along with the article it came from: https://blog.csdn.net/weixin_44578172/article/details/109376326
import requests

def getHTMLText(url):
    try:
        kv = {'user-agent': 'Mozilla/5.0'}
        r = requests.get(url, headers=kv)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        print(r.text[2000:3000])  # slice to inspect the relevant part of the response
    except:
        print("Crawl failed")

def main():
    a = input("Enter the IP address to query: ")
    url = 'https://ipchaxun.com/' + a
    getHTMLText(url)

main()
Running results
Summary:
Everyone is taking the same course, hahaha. Keep it up!