Using bs4 (BeautifulSoup) to crawl the chapter content of Romance of the Three Kingdoms

Today I used BeautifulSoup to crawl the chapters of the novel Romance of the Three Kingdoms from http://www.shicimingju.com/book/sanguoyanyi.html.

-bs4 parsing workflow:
   -1. Instantiate a BeautifulSoup object and load the page source data into it
   -2. Locate tags and extract data by calling the relevant attributes and methods of the BeautifulSoup object
-Environment installation:
   -pip install bs4
   -pip install lxml (the parser; it is used by both bs4 and xpath)
-How to instantiate the BeautifulSoup object:
   from bs4 import BeautifulSoup
   -1. Load the data of a local html document into the object
          fp = open('./text.html','r',encoding='utf-8')
          soup = BeautifulSoup(fp,'lxml')
   -2. Load page source obtained from the Internet into the object
          page_text = response.text
          soup = BeautifulSoup(page_text,'lxml')
-Methods and attributes provided for data extraction:
   -soup.tagName: returns the tag corresponding to the first occurrence of tagName (div, a, etc.) in the document
   -soup.find():
          -find('tagName'): equivalent to soup.tagName
          -attribute positioning: soup.find('div',class_='song') (note the underscore after class)
   -soup.find_all('tagName'): returns all tags that match, as a list
   -select:
          -soup.select('some selector (id, class, tag ... selector)'): returns a list
          -level selectors:
                 -soup.select('.tang > ul > li > a')[0]: '>' indicates one level
                 -soup.select('.tang > ul a'): a space indicates multiple levels (the li tag between ul and a is covered by the space)
   -Getting the text data inside a tag:
          -once a tag is located, use soup.a.text / soup.a.string / soup.a.get_text(), e.g.
           soup.select('.tang > ul a')[0].string
          -text and get_text() return all text content inside the tag, including nested tags
          -string returns only the text directly under the tag
   -Getting an attribute value from a tag:
          -soup.a['href']
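To tie the notes above together, here is a minimal sketch; the HTML snippet, the class name tang, and the example URLs are made up for illustration:

from bs4 import BeautifulSoup

html = """
<div class="tang">
  <ul>
    <li><a href="http://example.com/1">First <span>poem</span></a></li>
    <li><a href="http://example.com/2">Second poem</a></li>
  </ul>
</div>
"""
soup = BeautifulSoup(html, 'lxml')

print(soup.a)                                  # first <a> tag in the document
print(soup.find('a'))                          # same as soup.a
print(soup.find('div', class_='tang'))         # attribute positioning
print(len(soup.find_all('a')))                 # 2 -> all matching tags, as a list
print(soup.select('.tang > ul > li > a')[0])   # level selector with '>'
print(soup.select('.tang > ul a')[1].string)   # 'Second poem'
print(soup.select('.tang > ul a')[0].string)   # None -> the nested <span> means there is no single direct string
print(soup.select('.tang > ul a')[0].text)     # 'First poem' -> text collects nested text too
print(soup.a['href'])                          # 'http://example.com/1' -> attribute value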

Another important point: the tag name must be written correctly!!!
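For example (reusing the soup object from the sketch above, with a hypothetical typo), a misspelled tag name does not raise an error right away; it simply returns None, and the failure only shows up when you try to use the result:

tag = soup.divv    # misspelled tag name: no exception, just None
print(tag)         # None
# tag.text         # would raise AttributeError: 'NoneType' object has no attribute 'text'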

Not much to say about the code below.

import requests
from bs4 import BeautifulSoup

if __name__ == '__main__':
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3947.100 Safari/537.36'
    }
    url = 'http://www.shicimingju.com/book/sanguoyanyi.html'
    page_text = requests.get(url=url, headers=headers).text
    print(page_text)
    # Parse the chapter titles and detail-page URLs from the home page
    # 1. Instantiate the object and load the page source data into it
    soup = BeautifulSoup(page_text, 'lxml')
    # Parse out the chapter titles and the detail-page URLs
    li_list = soup.select('.book-mulu > ul > li')
    fp = open('./sanguo.text', 'w', encoding='utf-8')
    for li in li_list:
        title = li.a.string
        detail_url = 'http://www.shicimingju.com' + li.a['href']
        # Request the detail page and parse out the chapter content
        detail_page_text = requests.get(url=detail_url, headers=headers).text
        # Parse the chapter content from the detail page
        detail_soup = BeautifulSoup(detail_page_text, 'lxml')
        div_tag = detail_soup.find('div', class_='chapter_content')
        content = div_tag.text
        fp.write(title + ':' + content + '\n')
        print(title, 'crawled successfully!!!')
    fp.close()

The result: each chapter title followed by its content is written to ./sanguo.text, and a success message is printed for every chapter.

 
