asyncio learning diary

Today I started learning to use asyncio. If anything here is not right, I hope more experienced readers will point it out.

Synchronous and asynchronous: what is synchronous, and what is asynchronous?

Synchronous means the code executes in order, one statement after the next. If it hits an I/O operation or some other time-consuming operation, the whole program blocks there, which is a tremendous waste of the computer's CPU.

Asynchronous is exactly the opposite: we do not wait for the time-consuming operation to return a result. The program carries on with the next task, and once the operation has finished we are notified and the result comes back. This way we make much better use of the CPU, and the code runs far more efficiently.

asyncio is different from multithreading: although the operations are asynchronous, the code itself is essentially synchronous-style code running in a single thread.
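
For contrast, here is a minimal synchronous sketch of the same task (not from the original post; time.sleep simply stands in for the I/O wait). The ten calls run one after another, so the whole thing takes roughly 10 * 2 = 20 seconds, while the asyncio version below finishes in about 2 seconds.

import time

# Synchronous version for comparison: each call blocks for 2 seconds,
# so the loop takes about 20 seconds in total
def get_html_sync(delay):
    time.sleep(delay)
    print("hello word: {}".format(delay))

if __name__ == "__main__":
    for i in range(10):
        get_html_sync(2)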

# The following code simulates asynchronous page requests. The desired effect is that all 10 "hello word" lines print together almost instantly, rather than waiting for one simulated page to finish before starting the next: the pages are handled concurrently.

import asyncio

# Simulate requesting a web page; asyncio.sleep stands in for the time-consuming network wait
async def get_html(url):
    await asyncio.sleep(url)
    print("hello word: {}".format(url))


if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    c = [get_html(2) for i in range(10)]
    a = loop.run_until_complete(asyncio.wait(c))

    # Results:
    # hello word: 2
    # hello word: 2
    # hello word: 2
    # hello word: 2
    # hello word: 2
    # hello word: 2
    # hello word: 2
    # hello word: 2
    # hello word: 2
    # hello word: 2

asyncio itself does not provide asynchronous HTTP requests, so you cannot simply call the requests library (or another synchronous library) inside a coroutine to fetch a web page, because requests is a synchronous library and would block the event loop.
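
If a synchronous library such as requests really has to be used together with asyncio, one common workaround (just a sketch, not part of the original post) is to push the blocking call into a thread pool with loop.run_in_executor, so the event loop itself is not blocked:

import asyncio
import requests

# Sketch: run the blocking requests.get call in the default thread pool executor,
# so the event loop can keep running other coroutines in the meantime
async def fetch(url):
    loop = asyncio.get_event_loop()
    response = await loop.run_in_executor(None, requests.get, url)
    return response.text

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    tasks = [fetch("http://www.guqin.cc/portal.php") for _ in range(3)]
    loop.run_until_complete(asyncio.wait(tasks))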

One point worth writing down here: asyncio does support asynchronous sockets, so we can make the HTTP request at the socket level. The output is the expected result, and the responses for the same URL are printed essentially at the same time.

import asyncio
from urllib.parse import urlparse


# Simulate requesting a web page, this time over a raw socket
async def get_html(url):
    iurl = urlparse(url)
    host = iurl.netloc
    path = iurl.path
    if path == "":
        path = "/"
    request_html = "GET {} HTTP/1.1\r\nHOST: {}\r\nConnection: close\r\n\r\n".format(path, host)
    # Opening the connection is a time-consuming operation, so it needs await to get the asynchronous effect
    reader, writer = await asyncio.open_connection(host=host, port=80)
    # writer sends the request to the server, the equivalent of socket's send
    # the data must be sent as bytes, so encode it rather than passing a unicode str
    writer.write(request_html.encode("utf-8"))
    data = b""
    async for d in reader:
        data += d
    # Print the raw response directly
    print(data)
    return data

if __name__ == "__main__":
    url_list = ["http://www.guqin.cc/portal.php", 
                "http://www.guqin.cc/thread-2564-1-1.html", 
                "http://www.guqin.cc/thread-2564-1-1.html", 
                "http://www.guqin.cc/thread-2564-1-1.html", 
                "http://www.guqin.cc/thread-2564-1-1.html"]
    loop = asyncio.get_event_loop()
    c = [get_html(url) for url in url_list]
    a = loop.run_until_complete(asyncio.wait(c))
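
asyncio.wait returns two sets of finished and pending tasks, so the order of the results is not guaranteed. If the return values are needed in the same order as url_list, asyncio.gather can be used instead; here is a small sketch of the same main block (reusing get_html and url_list from above):

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    c = [get_html(url) for url in url_list]
    # gather returns the results in the same order the coroutines were passed in
    results = loop.run_until_complete(asyncio.gather(*c))
    for page in results:
        print(len(page))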

 
