Web Crawlers: "Honor Among Thieves" and Hands-On Web Scraping with the Requests Library
Study Notes and Unit Summary
Web Crawlers: "Honor Among Thieves"
Restrictions on web crawlers
1. Source review: restrict by User-Agent
Check the User-Agent field of the incoming HTTP request headers and respond only to browsers or known friendly crawlers.
2. Public notice: the Robots protocol
Tell all crawlers the site's crawling policy and ask them to comply.
The Robots Protocol
1. Purpose:
A website tells web crawlers which pages may be crawled and which may not.
2. Form:
A robots.txt file in the website's root directory.
Example: JD.com's Robots protocol
https://www.jd.com/robots.txt
Basic robots.txt syntax:
# comment; * matches everything, / denotes the root directory
User-agent: *
Disallow: /?*
Disallow: /pop/*.html
Disallow: /pinpai/*.html?*
User-agent: EtaoSpider
Disallow: /
User-agent: HuihuiSpider
Disallow: /
User-agent: GwdangSpider
Disallow: /
User-agent: WochachaSpider
Disallow: /
https://news.sina.com.cn/robots.txt
User-agent: *
Disallow: /wap/
Disallow: /iframe/
Disallow: /temp/
https://www.qq.com/robots.txt
User-agent: *
Disallow:
Sitemap: http://www.qq.com/sitemap_index.xml
https://news.qq.com/robots.txt
User-agent: *
Disallow:
Sitemap: http://www.qq.com/sitemap_index.xml
Sitemap: http://news.qq.com/topic_sitemap.xml
https://www.moe.edu.cn/robots.txt
No robots.txt (the site publishes no Robots protocol)
How to Comply with the Robots Protocol
Using the Robots protocol
1. Web crawlers:
Identify robots.txt automatically (or manually) first, then crawl the content.
2. Binding force:
The Robots protocol is advisory rather than binding; a crawler may ignore it, but doing so carries legal risk.
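The automatic identification step above can be done with Python's standard-library `urllib.robotparser`. A minimal sketch, where the rules fed to the parser are a shortened copy of the JD.com rules quoted earlier (the stdlib parser does not support `*` wildcards inside paths, so `/pop/*.html` is simplified to the `/pop/` prefix):

```python
from urllib.robotparser import RobotFileParser

# Feed the parser a robots.txt body directly; against a live site you would
# instead call rp.set_url("https://www.jd.com/robots.txt") and rp.read().
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /pop/",
    "User-agent: EtaoSpider",
    "Disallow: /",
])

# A generic crawler may fetch the front page but not /pop/ pages.
print(rp.can_fetch("*", "https://www.jd.com/"))            # True
print(rp.can_fetch("*", "https://www.jd.com/pop/a.html"))  # False
# EtaoSpider is banned from the entire site.
print(rp.can_fetch("EtaoSpider", "https://www.jd.com/"))   # False
```

Calling `can_fetch()` before every request is exactly the "identify robots.txt, then crawl" discipline described above.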
Hands-On Web Scraping with the Requests Library
Example 1: Scraping a JD.com product page
Full code:
import requests

url = "https://item.jd.com/2967929.html"
try:
    r = requests.get(url)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    print(r.text[:1000])
except Exception:
    print("Scraping failed")
Example 2: Scraping an Amazon product page
>>> import requests
>>> r = requests.get("https://www.amazon.cn/gp/product/B01M8L5Z3Y")
>>> r.status_code
200
>>> r.encoding
'UTF-8'
>>> r.encoding = r.apparent_encoding
>>> r.text
>>> r.request.headers
{'User-Agent': 'python-requests/2.22.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
>>> kv ={'user-agent':'Mozilla/5.0'}
>>> url = "https://www.amazon.cn/gp/product/B01M8L5Z3Y"
>>> r = requests.get(url, headers=kv)  # set the User-Agent header to 'Mozilla/5.0' so the request looks like a browser
>>> r,status_code
Traceback (most recent call last):
File "<pyshell#22>", line 1, in <module>
r,status_code
NameError: name 'status_code' is not defined  # this line errors because a comma was typed instead of a dot; watch for typos like this
>>> r.status_code
200
>>> r.request.headers
{'user-agent': 'Mozilla/5.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
>>> r.text[:1000]
'\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n <!doctype html><html class="a-no-js" data-19ax5a9jf="dingo">\n <head>\n<script type="text/javascript">var ue_t0=ue_t0||+new Date();</script>\n<script type="text/javascript">\nwindow.ue_ihb = (window.ue_ihb || window.ueinit || 0) + 1;\nif (window.ue_ihb === 1) {\nvar ue_hob=+new Date();\nvar ue_id=\'E06HEJQW1W99HDZ0Z5H0\',\nue_csm = window,\nue_err_chan = \'jserr-rw\',\nue = {};\n(function(d){var e=d.ue=d.ue||{},f=Date.now||function(){return+new Date};e.d=function(b){return f()-(b?0:d.ue_t0)};e.stub=function(b,a){if(!b[a]){var c=[];b[a]=function(){c.push([c.slice.call(arguments),e.d(),d.ue_id])};b[a].replay=function(b){for(var a;a=c.shift();)b(a[0],a[1],a[2])};b[a].isStub=1}};e.exec=function(b,a){return function(){try{return b.apply(this,arguments)}catch(c){ueLogError(c,{attribution:a||"undefined",logLevel:"WARN"})}}}})(ue_csm);\n\nue.stub(ue,"log");ue.stub(ue,"onunload");ue.stu'
>>>
Full code:
import requests

url = "https://www.amazon.cn/gp/product/B01M8L5Z3Y"
try:
    kv = {'user-agent': 'Mozilla/5.0'}
    r = requests.get(url, headers=kv)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    print(r.text[1000:2000])
except Exception:
    print("Scraping failed")
Example 3: Submitting keywords to Baidu/360 search
Search-engine keyword submission interfaces
Baidu's keyword interface:
http://www.baidu.com/s?wd=keyword
>>> import requests
>>> kv = {'wd':'Python'}
>>> r = requests.get("http://www.baidu.com/s",params=kv)
>>> r.status_code
200
>>> r.request.url
'http://www.baidu.com/s?wd=Python'
>>> len(r.text)
524221
Full code:
import requests

keyword = "Python"
try:
    kv = {'wd': keyword}
    r = requests.get("http://www.baidu.com/s", params=kv)
    print(r.request.url)
    r.raise_for_status()
    print(len(r.text))
except Exception:
    print("Scraping failed")
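The `params` mechanism is equivalent to appending a URL-encoded query string by hand; a stdlib-only sketch of what Requests does with the keyword (the non-ASCII keyword '爬虫' is just an illustration, not from the transcript above):

```python
from urllib.parse import urlencode

# Requests percent-encodes each key/value pair before appending it to the URL.
kv = {'wd': 'Python'}
print("http://www.baidu.com/s?" + urlencode(kv))
# A non-ASCII keyword is percent-encoded automatically, so it is URL-safe.
print("http://www.baidu.com/s?" + urlencode({'wd': '爬虫'}))
```

This is why `r.request.url` in the transcript shows `?wd=Python` even though the keyword was passed as a dict rather than spliced into the URL string.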
360's keyword interface:
http://www.so.com/s?q=keyword
>>> import requests
>>> kv ={'q':'Python'}
>>> r = requests.get('http://www.so.com/s',params=kv)
>>> r.status_code
200
>>> r.request.url
'https://www.so.com/s?q=Python'
>>> len(r.text)
407595
Full code:
import requests

keyword = "Python"
try:
    kv = {'q': keyword}
    r = requests.get("http://www.so.com/s", params=kv)
    print(r.request.url)
    r.raise_for_status()
    print(len(r.text))
except Exception:
    print("Scraping failed")
Example 4: Scraping and saving an image from the web
Format of a web image link:
http://www.example.com/picture.jpg
Full code for image scraping:
import requests
import os

url = "http://image.nationalgeographic.com.cn/2017/0211/20170211061910157.jpg"
root = "D://pics//"
path = root + url.split('/')[-1]
try:
    if not os.path.exists(root):
        os.mkdir(root)
    if not os.path.exists(path):
        r = requests.get(url)
        with open(path, 'wb') as f:  # the with block closes the file automatically
            f.write(r.content)       # r.content is the binary body of the response
        print("File saved")
    else:
        print("File already exists")
except Exception:
    print("Scraping failed")
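The `url.split('/')[-1]` trick above breaks if the image URL carries a query string; a stdlib-only sketch of a more robust filename extraction (the `?v=2` URL and the `image.jpg` fallback are hypothetical examples, not part of the original code):

```python
import os
from urllib.parse import urlsplit

def filename_from_url(url: str, default: str = "image.jpg") -> str:
    """Take the last path segment of the URL, ignoring any query string."""
    name = os.path.basename(urlsplit(url).path)
    return name or default  # fall back when the path ends in '/'

print(filename_from_url("http://www.example.com/picture.jpg"))      # picture.jpg
print(filename_from_url("http://www.example.com/picture.jpg?v=2"))  # picture.jpg
print(filename_from_url("http://www.example.com/"))                 # image.jpg
```

With plain splitting, the second URL would produce a file named `picture.jpg?v=2`, which is an invalid filename on Windows.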
Example 5: Automatic lookup of an IP address's location
import requests

url = "http://m.ip138.com/ip.asp?ip="
try:
    r = requests.get(url + '202.204.80.112')
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    print(r.text[-500:])
except Exception:
    print("Scraping failed")