Python crawler (24): asynchronous I/O crawling with aiohttp + asyncio

# More efficient than using multiple threads directly
# The point of asyncio: a coroutine gives up control at a yield/await, and when the event loop comes back it resumes right after that point
# If you don't want a full framework and just need simple async, use asyncio on its own
# Compared with this, C++'s async machinery is downright user-hostile
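
# A minimal sketch of that suspend/resume behaviour (the worker names are made up for
# illustration): both coroutines run on a single thread, and await hands control back to
# the event loop; execution resumes right after the await once the sleep finishes.
import asyncio

async def worker(name, delay):
    print(f'{name}: start')
    await asyncio.sleep(delay)   # suspend here and let the loop run the other worker
    print(f'{name}: resumed after {delay}s')

async def main():
    await asyncio.gather(worker('a', 1), worker('b', 1))

asyncio.run(main())  # Python 3.7+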


'''

C:\Users\ASUS>pip install aiohttp
Collecting aiohttp
  Using cached https://files.pythonhosted.org/packages/bc/bd/08f0900d62b4ea1ca10bb2e2a1596ac3b04024c7daf7350debee0bd022fb/aiohttp-3.5.4-cp37-cp37m-win_amd64.whl
Requirement already satisfied: chardet<4.0,>=2.0 in c:\python37\lib\site-packages (from aiohttp) (3.0.4)
Collecting async-timeout<4.0,>=3.0 (from aiohttp)
  Using cached https://files.pythonhosted.org/packages/e1/1e/5a4441be21b0726c4464f3f23c8b19628372f606755a9d2e46c187e65ec4/async_timeout-3.0.1-py3-none-any.whl
Requirement already satisfied: yarl<2.0,>=1.0 in c:\python37\lib\site-packages (from aiohttp) (1.3.0)
Requirement already satisfied: attrs>=17.3.0 in c:\python37\lib\site-packages (from aiohttp) (18.2.0)
Requirement already satisfied: multidict<5.0,>=4.0 in c:\python37\lib\site-packages (from aiohttp) (4.5.2)
Requirement already satisfied: idna>=2.0 in c:\python37\lib\site-packages (from yarl<2.0,>=1.0->aiohttp) (2.7)
Installing collected packages: async-timeout, aiohttp
Successfully installed aiohttp-3.5.4 async-timeout-3.0.1

C:\Users\ASUS>
'''
# Replaces requests for making non-blocking HTTP calls inside coroutines
import aiohttp
import asyncio

async def get_stock(code):
    """Fetch and print the raw quote for one stock code from the Sina quote API."""
    url = 'http://hq.sinajs.cn/list=' + code
    # In aiohttp 3.x the module-level request() must be used as an async context manager;
    # the old @asyncio.coroutine / yield from style from earlier aiohttp versions no longer works
    async with aiohttp.request('GET', url) as resp:
        body = await resp.read()
    # Sina returns GB2312/GBK-encoded text
    print(body.decode('gb2312'))

codes = ['sz000878', 'sh600993', 'sz000002', 'sh600153', 'sz002230', 'sh600658']
tasks = [get_stock(code) for code in codes]
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(tasks))  # run all coroutines concurrently on one event loop
loop.close()
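
# For crawling many URLs it is usually better to share one aiohttp.ClientSession (one
# connection pool) than to let aiohttp.request() open a fresh session per call. A sketch of
# the same crawler written that way with asyncio.run (Python 3.7+); fetch_quote is just an
# illustrative name, and the codes list is shortened:
import aiohttp
import asyncio

async def fetch_quote(session, code):
    async with session.get('http://hq.sinajs.cn/list=' + code) as resp:
        body = await resp.read()
    print(body.decode('gb2312'))

async def main(codes):
    async with aiohttp.ClientSession() as session:
        # gather schedules all requests concurrently on the shared session
        await asyncio.gather(*(fetch_quote(session, c) for c in codes))

asyncio.run(main(['sz000878', 'sh600993', 'sz000002']))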

Reposted from blog.csdn.net/qq_41228218/article/details/89024866