day_3 crawlers


One: Crawler principles
1. What is the Internet?
The Internet is a collection of network devices that connect individual computers together; this interconnected whole is what we call the Internet.

2. Why was the Internet built?
It was built so that data can be transferred and shared between computers.

3. What is "data"?
For example:
    product information on Taobao and JD (Jingdong) ...
    securities and investment information on East Money and Xueqiu ...
    housing listings on Lianjia ...
    train ticket information on 12306 ...

4. The whole process of using the Internet:
- ordinary user:
    open a browser -> send a request to the target site -> fetch the response data -> render it in the browser

- crawler:
    simulate a browser -> send a request to the target site -> fetch the response data -> extract the valuable data -> persist the data


5. What does the browser send when it makes a request?
An HTTP-protocol request.

- Client:
    the browser is just a piece of software -> client IP and port


- Server:
    https://www.jd.com/
    www.jd.com (JD's domain name) -> DNS resolution -> JD server IP and port

The client's IP and port send a request to the server's IP and port; once a connection is established, the corresponding data can be obtained.
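A minimal sketch (not part of the original notes) of the DNS-resolution step, using Python's standard socket module and the JD domain above:

import socket

# Resolve the JD domain name to a server IP, the same step the browser
# performs via DNS before it can connect and send a request
ip = socket.gethostbyname('www.jd.com')
print(ip)  # the resolved server IP (varies by region / CDN node)
# an https request then goes to port 443 on that IP (http uses port 80)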


6. The whole crawler workflow (a minimal sketch follows this list):
- send the request (request libraries: requests, selenium)
- fetch the response data (as long as a request reaches the server, it returns response data)
- parse and extract the data (parsing libraries: re, BeautifulSoup4, XPath ...)
- save the data locally (plain files, or databases such as MongoDB)
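A minimal sketch of these four steps with the libraries named above (requests for the request, re for parsing, a plain file for persistence); the target url is only a placeholder:

import re
import requests

# 1. Send the request
response = requests.get('https://example.com/')  # placeholder target site

# 2. Fetch the response data
html = response.text

# 3. Parse and extract the valuable data (here just the page title)
titles = re.findall(r'<title>(.*?)</title>', html, re.S)

# 4. Persist the data locally
with open('result.txt', 'w', encoding='utf-8') as f:
    f.write('\n'.join(titles))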


Two: The requests library

1. Installation and use
- open cmd
- enter: pip3 install requests
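A quick way to confirm the installation worked (assuming a standard Python 3 setup):

import requests

# if the import succeeds and a version prints, requests is installed
print(requests.__version__)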

2. Crawling videos


3. Packet-capture analysis
Open the browser's developer tools (Inspect) ----> select the Network tab,
then find and open the xxx.html page that was requested (its response is the page text).

1) Request url (the address being visited)
2) Request method (a combined requests example follows this list):
    GET:
        send the request and receive the data directly
        https://www.cnblogs.com/kermitjam/articles/9692597.html

    POST:
        user information must be carried in the request body sent to the target address
        https://www.cnblogs.com/login

3) Response status codes:
    2xx: success
    3xx: redirection
    4xx: resource not found
    5xx: server error

4) Request headers:
    User-Agent: the user agent (proves the request was sent from a real device and browser)
    Cookies: the real user's login information (proves you are a real user of the target site)
    Referer: the url of the previous page (proves you navigated there from within the target site)

5) Request body:
    only POST requests carry a request body.
    Form Data
    {
        'user': 'tank',
        'pwd': '123'
    }
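A combined sketch of the pieces above: a GET request for the article url, and a POST to the login url carrying the request headers from 4) and the Form Data from 5). The header and form values are placeholders, not working credentials:

import requests

# GET: ask for the data directly
res = requests.get('https://www.cnblogs.com/kermitjam/articles/9692597.html')
print(res.status_code)  # 2xx success, 3xx redirect, 4xx not found, 5xx server error

# POST: carry user information in the request body (Form Data)
headers = {
    'User-Agent': 'Mozilla/5.0',            # pretend to be a real browser
    'Referer': 'https://www.cnblogs.com/',  # the page we supposedly came from
    'Cookie': 'session=xxx',                # a logged-in user's cookie would go here
}
form_data = {'user': 'tank', 'pwd': '123'}
res = requests.post('https://www.cnblogs.com/login', headers=headers, data=form_data)
print(res.status_code)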


Four: Crawling Douban Top 250 movies
Regex syntax used below:
.      match any character, starting from the current position
*      match all (zero or more of the previous token)
?      stop at the first match (don't keep looking)

.*?    non-greedy match
.*     greedy match

(.*?)  extract the data matched inside the parentheses
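A small demo (not from the original notes) of greedy vs. non-greedy matching and group extraction:

import re

text = '<span class="title">Green Mile</span><span class="title">Other</span>'

# greedy: .* keeps going until the LAST </span>, swallowing the tags in between
print(re.findall('<span class="title">(.*)</span>', text))
# ['Green Mile</span><span class="title">Other']

# non-greedy: .*? stops at the FIRST possible </span>, one result per span
print(re.findall('<span class="title">(.*?)</span>', text))
# ['Green Mile', 'Other']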

Fields to extract: movie ranking, movie url, movie name, director / starring / type, movie rating, number of reviews, movie synopsis

<div class="item">.*?<em class="">(.*?)</em>
.*?<a href="(.*?)">.*?<span class="title">(.*?)</span>
.*?导演:(.*?)</p>.*?<span class="rating_num".*?>(.*?)</span>
.*?<span>(.*?)人评价</span>.*?<span class="inq">(.*?)</span>



<div class="item">
    <div class="pic">
        <em class="">226</em>
        <a href="https://movie.douban.com/subject/1300374/">
            <img width="100" alt="The Green Mile" src="https://img3.doubanio.com/view/photo/s_ratio_poster/public/p767586451.webp" class="">
        </a>
    </div>
    <div class="info">
        <div class="hd">
            <a href="https://movie.douban.com/subject/1300374/" class="">
                <span class="title">Green Mile</span>
                <span class="title">&nbsp;/&nbsp;The Green Mile</span>
                <span class="other">&nbsp;/&nbsp;The Green Mile / Green Mile</span>
            </a>

            <span class="playable">[playable]</span>
        </div>
        <div class="bd">
            <p class="">
                导演: Frank Darabont&nbsp;&nbsp;&nbsp;主演: Tom Hanks / David M...<br>
                1999&nbsp;/&nbsp;USA&nbsp;/&nbsp;Fantasy Crime Mystery Drama
            </p>

            <div class="star">
                <span class="rating45-t"></span>
                <span class="rating_num" property="v:average">8.7</span>
                <span property="v:best" content="10.0"></span>
                <span>141370人评价</span>
            </div>

            <p class="quote">
                <span class="inq">天使暂时离开。</span>
            </p>
        </div>
    </div>
</div>
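As a quick check, the first part of the pattern can be run against a trimmed-down copy of the block above (only the tags needed by the first three capture groups are kept):

import re

# a trimmed copy of the <div class="item"> block above
sample = '''
<div class="item">
  <em class="">226</em>
  <a href="https://movie.douban.com/subject/1300374/">
  <span class="title">Green Mile</span>
'''

pattern = '<div class="item">.*?<em class="">(.*?)</em>.*?<a href="(.*?)">.*?<span class="title">(.*?)</span>'
print(re.findall(pattern, sample, re.S))
# [('226', 'https://movie.douban.com/subject/1300374/', 'Green Mile')]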



Basic usage of requests
import requests  # import the requests library


# Send a request to the Baidu home page and get a response object
response = requests.get(url='https://www.baidu.com/')

# Set the character encoding to utf-8
response.encoding = 'utf-8'

# Print the response text
print(response.text)

# Write the response text to a local file
with open('baidu.html', 'w', encoding='utf-8') as f:
    f.write(response.text)

Crawling videos

''''''
'''
Video source:
    1. Pear Video
'''
# import requests
#
# # Send a request to the video's source address
# response = requests.get(
#     'https://video.pearvideo.com/mp4/adshort/20190625/cont-1570302-14057031_adpkg-ad_hd.mp4')
#
# # Print the binary stream (data such as images and video)
# print(response.content)
#
# # Save the video locally
# with open('视频.mp4', 'wb') as f:
#     f.write(response.content)

'''
1. First send a request to the Pear Video home page:
    https://www.pearvideo.com/

    Parse out the id of every video on the page, e.g.:
        video_1570302

        re.findall()


2. Build the url of each video's detail page:
    "Thrilling! Man slips rushing onto the subway, then walks away"
    https://www.pearvideo.com/video_1570302
    "Unveiling the karez"
    https://www.pearvideo.com/video_1570107
'''
import requests
import re  # regular expressions, used to parse the text data
# 1. First send a request to the Pear Video home page
response = requests.get('https://www.pearvideo.com/')
# print(response.text)

# Use a regex to match all video ids
# Parameter 1: the regex pattern
# Parameter 2: the text to parse
# Parameter 3: the matching mode
res_list = re.findall('<a href="video_(.*?)"', response.text, re.S)
# print(res_list)

# Build each video's detail-page url
for v_id in res_list:
    detail_url = 'https://www.pearvideo.com/video_' + v_id
    # print(detail_url)

    # Send a request to each video's detail page to get the video's source url
    response = requests.get(url=detail_url)
    # print(response.text)

    # Parse the detail page and extract the video's source url
    video_url = re.findall('srcUrl="(.*?)"', response.text, re.S)[0]
    print(video_url)

    # Video name
    video_name = re.findall(
        '<h1 class="video-tt">(.*?)</h1>', response.text, re.S)[0]

    print(video_name)

    # Send a request to the video url to get the binary video stream
    v_response = requests.get(video_url)

    with open('%s.mp4' % video_name, 'wb') as f:
        f.write(v_response.content)
        print(video_name, 'crawled successfully')

Crawling Douban Top 250 movies

''''''
'''
https://movie.douban.com/top250?start=0&filter=
https://movie.douban.com/top250?start=25&filter=
https://movie.douban.com/top250?start=50&filter=

1. The transmission request
2. Parse the data
3. Save data
'''
import requests
import re

# The three steps of every crawler
# 1. Send the request
def get_page(base_url):
    response = requests.get(base_url)
    return response

# 2. parse text
def parse_index(text):

    res = re.findall('<div class="item">.*?<em class="">(.*?)</em>.*?<a href="(.*?)">.*?<span class="title">(.*?)</span>.*?导演:(.*?)</p>.*?<span class="rating_num".*?>(.*?)</span>.*?<span>(.*?)人评价</span>.*?<span class="inq">(.*?)</span>', text, re.S)
    # print(res)
    return res

# 3. Save data
def save_data(data):
    with open('douban.txt', 'a', encoding='utf-8') as f:
        f.write(data)

# Program entry point
if __name__ == '__main__':
    # num = 10
    # base_url = 'https://movie.douban.com/top250?start={}&filter='.format(num)

    num = 0
    for line in range(10):
        base_url = f'https://movie.douban.com/top250?start={num}&filter='
        num += 25
        print(base_url)

        # 1. Send the request by calling the function
        response = get_page(base_url)

        # 2. parse text
        movie_list = parse_index(response.text)

        # 3. Save data
        # Format the data
        for movie in movie_list:
            # print(movie)

            # Unpacking assignment
            # movie ranking, movie url, movie name, director/starring/type, rating, review count, synopsis
            v_top, v_url, v_name, v_daoyan, v_point, v_num, v_desc = movie
            # v_top = movie[0]
            # v_url = movie[1]
            movie_content = f'''
            Movie Ranking: {v_top}
            Film url: {v_url}
            Movie Name: {v_name}
            Director / Starring: {v_daoyan}
            Movie rating: {v_point}
            Number of Evaluation: {v_num}
            Movie Synopsis: {v_desc}
            \n
            '''

            print(movie_content)

            # save data
            save_data(movie_content)

  


