Python Web Scraper: Scraping Korean TV Drama Information

Copyright notice: please credit the source when reposting! https://blog.csdn.net/qq_42952437/article/details/88718974

I recently wanted to rewatch some Korean dramas I had seen before, but I could only remember the plots. Browsing the site by hand was too tedious, asking online got me nowhere, and the few titles I did find were wrong. So I decided to write a scraper to collect information on all the Korean dramas on a certain site, making it easy to look up the shows I want to watch.

The scraper collects the following:

# Scrape the site's Korean TV drama information
# Fields: title, air date, region, cast, synopsis

1. Import modules

import requests
import time
from lxml import etree
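Before wiring everything into a class, a quick sanity check shows how `etree.HTML` plus an XPath expression of the kind used below extracts links. The HTML fragment here is made up to mimic the list-page structure; it is an illustration, not the real page:

```python
from lxml import etree

# Hypothetical fragment mimicking the list-page markup scraped below
html = '<ul class="list g-clear"><li class="item"><a href="/dianshi/abc.html">Demo</a></li></ul>'
selector = etree.HTML(html)

# @href selects the attribute value directly, returning a list of strings
hrefs = selector.xpath('//li[@class="item"]/a/@href')
print(hrefs)  # ['/dianshi/abc.html']
```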

2. Scraping the site pages

Create the class HanJuInfo:

class HanJuInfo():
    def __init__(self, url):
        self.headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3641.400 QQBrowser/10.4.3284.400'}
        self.url = url

    # Fetch the HTML of a list page
    def Get_Html(self):
        response = requests.get(url=self.url, headers=self.headers)
        if response.status_code == 200:
            html = response.text
            return html

    # Parse the list page and extract each detail-page URL
    def Paser_Html(self):
        content = self.Get_Html()
        selector = etree.HTML(content)
        items = selector.xpath('//div[@class="s-tab-main"]/ul[@class="list g-clear"]/li[@class="item"]')
        for item in items:
            self.info_url = item.xpath('./a/@href')[0]
            self.info_urls = 'https://www.360kan.com'+self.info_url

            # Parse the detail-page information
            response = requests.get(url=self.info_urls, headers=self.headers)
            if response.status_code == 200:
                selector = etree.HTML(response.text)
                name = selector.xpath('//div[@class="title-left g-clear"]/h1/text()')[0]
                # Use air_time rather than time, to avoid shadowing the time module
                air_time = selector.xpath('//*[@id="js-desc-switch"]/div[1]/p[2]/text()')[0]
                place = selector.xpath('//*[@id="js-desc-switch"]/div[1]/p[3]/text()')[0]
                actors = ''.join(selector.xpath('//*[@id="js-desc-switch"]/div[1]/p[6]//a//text()'))
                details = selector.xpath('//*[@id="js-desc-switch"]/div[3]/p/text()')[0]

                # Build the record once, save it, then yield it to the caller
                info = {
                    'title': name,
                    'time': air_time,
                    'region': place,
                    'cast': actors,
                    'synopsis': details
                }
                self.save_info(str(info))
                yield info

    def save_info(self, content):
        with open('info.txt', 'a', encoding='utf-8')as f:
            f.write(content+'\n')
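One possible improvement, not in the original: `save_info` writes `str(info)`, which is awkward to parse back later. A sketch of a JSON Lines variant follows; the name `save_info_jsonl` and the demo file `demo_info.txt` are hypothetical:

```python
import json

# Hypothetical alternative to save_info: one JSON object per line
# (JSON Lines), so the file can be re-parsed reliably later
def save_info_jsonl(info, path='info.txt'):
    with open(path, 'a', encoding='utf-8') as f:
        f.write(json.dumps(info, ensure_ascii=False) + '\n')

# Demo with a made-up record and a throwaway file
open('demo_info.txt', 'w').close()  # start fresh
save_info_jsonl({'title': 'Demo Drama', 'region': 'Korea'}, path='demo_info.txt')
with open('demo_info.txt', encoding='utf-8') as f:
    print(json.loads(f.readline())['title'])  # Demo Drama
```

`ensure_ascii=False` keeps Chinese and Korean characters readable in the file instead of escaping them.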

3. Calling it from the main block

Loop over 25 pages to scrape nearly seven hundred Korean TV dramas:

if __name__ == '__main__':
    for x in range(1, 26):
        url = 'https://www.360kan.com/dianshi/list.php?area=12&pageno={}'.format(str(x))
        han = HanJuInfo(url)
        time.sleep(1)
        print('Page %s' % x)
        # Use a distinct name so the page counter x is not shadowed
        for i, info in enumerate(han.Paser_Html()):
            print(i, info)
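Since the whole point was to look dramas up later, a search over the saved file can be sketched as follows. It assumes the `str(dict)`-per-line format that `save_info` produces; the helper `find_dramas`, the `'title'` key, and the demo file are illustrative only:

```python
import ast

# Write a demo line in the same format save_info produces: str(dict) per line
demo = {'title': 'Reply 1988', 'region': 'Korea'}
with open('demo_search.txt', 'w', encoding='utf-8') as f:
    f.write(str(demo) + '\n')

def find_dramas(keyword, path):
    # Each line is a Python dict literal, so ast.literal_eval reads it back safely
    results = []
    with open(path, encoding='utf-8') as f:
        for line in f:
            info = ast.literal_eval(line.strip())
            if keyword in info.get('title', ''):
                results.append(info)
    return results

print(find_dramas('1988', 'demo_search.txt'))  # [{'title': 'Reply 1988', 'region': 'Korea'}]
```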

4. Results

The scraped records are appended to info.txt (the result screenshot was too large to include here).

