The spider below crawls the text posts from the source pages and follows pagination up through page 10:

```python
import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'
    end = 1
    url = 'https://www.qiushibaike.com/text/page/'
    start_urls = [url + str(end)]

    def parse(self, response):
        # print(response.body)
        # Each post's text lives under //div[@class="content"]
        # (roughly /html/body/div[3]/div[3]/div[1]/div/h5 in the page layout).
        mt = response.xpath('//div[@class="content"]')
        for ms in mt:
            # Join the span text nodes of one post into a single string.
            name = ms.xpath('.//span/text()').extract()
            content = "\n".join(name)
            print(content)
        # Follow pagination until page 10.
        if self.end <= 10:
            self.end += 1
            yield scrapy.Request(self.url + str(self.end), callback=self.parse)
```
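The extraction step above — selecting every `div` with class `content` and joining the text of its `span` children — can be sketched offline with the standard library's `xml.etree.ElementTree` as a simplified stand-in for Scrapy's selectors (the sample HTML below is hypothetical, not the real page markup):

```python
import xml.etree.ElementTree as ET

# Hypothetical snippet resembling one post block on the target page.
SAMPLE = """
<html><body>
  <div class="content">
    <span>First line of a post.</span>
    <span>Second line of the same post.</span>
  </div>
  <div class="other"><span>Unrelated text.</span></div>
</body></html>
"""


def extract_posts(html: str) -> list[str]:
    """Mimic ms.xpath('.//span/text()') for each div[@class="content"]."""
    root = ET.fromstring(html)
    posts = []
    for div in root.iter("div"):
        if div.get("class") == "content":
            # Collect the text of every span under this div, join with newlines.
            lines = [s.text for s in div.iter("span") if s.text]
            posts.append("\n".join(lines))
    return posts


print(extract_posts(SAMPLE))
```

In the real spider, Scrapy's `response.xpath(...)` does this work against the live response, with full XPath support rather than ElementTree's limited subset.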
Python Scrapy demo: crawling Qiushibaike (the "Embarrassments Encyclopedia")
Origin: blog.csdn.net/a1033479126/article/details/92083371