A basic routine for crawling with Python and BeautifulSoup

Using Python 3, for example, to crawl the Kugou music chart:

import requests
from bs4 import BeautifulSoup
import time

headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'
}

def get_info(url):
    wb_data = requests.get(url,headers=headers)
    soup = BeautifulSoup(wb_data.text,'lxml')
    ranks = soup.select('span.pc_temp_num')
    titles = soup.select('div.pc_temp_songlist > ul > li > a')
    times = soup.select('span.pc_temp_tips_r > span')
    # rename the loop variable so it does not shadow the imported time module
    for rank, title, song_time in zip(ranks, titles, times):
        data = {
            'rank': rank.get_text().strip(),
            # the link text is "Singer - Song": index 0 is the singer, index 1 the song
            'singer': title.get_text().split('-')[0].strip(),
            'song': title.get_text().split('-')[1].strip(),
            'time': song_time.get_text().strip()
        }
        print(data)

if __name__ == '__main__':
    urls = ['http://www.kugou.com/yy/rank/home/{}-8888.html'.format(str(i)) for i in range(1,2)]
    for url in urls:
        get_info(url)
        time.sleep(5)



  In the code above, BeautifulSoup is first imported with from bs4 import BeautifulSoup;
then the request headers are set (the User-Agent makes the request look like a normal browser);
then soup = BeautifulSoup(wb_data.text,'lxml') builds a BeautifulSoup object,
using the lxml parser;
then the selectors in
ranks = soup.select('span.pc_temp_num')
    titles = soup.select('div.pc_temp_songlist > ul > li > a')
are CSS selectors (not XPath); you can obtain them with Chrome's inspect feature: right-click the element, choose Inspect, then Copy > Copy selector;
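To see how select() with a CSS selector behaves in isolation, here is a minimal sketch on a made-up HTML fragment; the class names mirror the Kugou page, but the markup itself is invented for illustration, and the stdlib html.parser is used so lxml is not required:

```python
from bs4 import BeautifulSoup

# Invented fragment imitating the chart markup
html = '''
<div class="pc_temp_songlist">
  <ul>
    <li><span class="pc_temp_num">1</span><a>Singer A - Song A</a></li>
    <li><span class="pc_temp_num">2</span><a>Singer B - Song B</a></li>
  </ul>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')  # built-in parser; no lxml needed
ranks = soup.select('span.pc_temp_num')                     # class selector
titles = soup.select('div.pc_temp_songlist > ul > li > a')  # child-combinator chain

for rank, title in zip(ranks, titles):
    print(rank.get_text().strip(), title.get_text())
```

Each select() call returns a list of matching tags, which is why the main script can zip the three lists together row by row.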
then a loop prints out each record; note the use of strip() to remove the surrounding whitespace;
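A quick illustration of what strip() removes from scraped text (the sample values here are invented):

```python
# get_text() often keeps the page's own indentation and newlines
raw_rank = '\n        1\n    '
print(repr(raw_rank.strip()))  # strip() removes leading/trailing whitespace

# splitting "Singer - Song" leaves spaces around each part, so strip() both halves
raw_title = ' Singer A - Song A '
singer, song = [part.strip() for part in raw_title.split('-')]
print(singer, '/', song)
```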
finally,
urls = ['http://www.kugou.com/yy/rank/home/{}-8888.html'.format(str(i)) for i in range(1,2)]
is a very characteristic piece of Python syntax: a URL template inside a list comprehension, where the {} placeholder is replaced by format() with each value of i.
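The template idiom sketched on its own; note that range(1, 2) yields only the number 1, so the list in the script holds a single URL, and widening the range is what paginates the crawl:

```python
template = 'http://www.kugou.com/yy/rank/home/{}-8888.html'

# format() converts the integer itself, so the str(i) in the original is redundant
urls = [template.format(i) for i in range(1, 4)]

print(len(urls))  # 3
print(urls[0])    # http://www.kugou.com/yy/rank/home/1-8888.html
```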
