Scrapy, a Powerful Crawling Framework, Part 2: Running a Spider with runspider

In the previous article we used scrapy shell to fetch a web page's title interactively. This article continues with the same simple example and shows how to run a spider application under the Scrapy framework.

Spider example code

liumiaocn:scrapy liumiao$ ls
myspider.py
liumiaocn:scrapy liumiao$ cat myspider.py 
import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['https://scrapy.org/']

    def parse(self, response):
        for title in response.css('title'):
            yield {'title': title.get()}
liumiaocn:scrapy liumiao$ 

The code is very simple: it fetches the title of https://scrapy.org/. The yield keyword is ordinary Python, and CSS extraction is basic HTML knowledge (a small variation is sketched right after the list below). Only two points deserve attention:

  • after import scrapy, the spider class must inherit from scrapy.Spider
  • the data-extraction method is named parse, which is the default convention
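
As mentioned above, here is a minimal variation of the parse method for the case where you want only the title text rather than the whole <title> tag; the ::text pseudo-element is standard Scrapy selector syntax, and the rest is unchanged from the example above:

def parse(self, response):
    # '::text' selects the text inside the tag instead of the tag markup
    for title in response.css('title::text'):
        yield {'title': title.get()}

With this change the scraped item becomes {'title': 'Scrapy | A Fast and Powerful Scraping and Web Crawling Framework'}, without the surrounding tag.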

Creating a project vs. the self-contained approach

Normally, using the Scrapy framework means creating a project, then creating and running spider instances inside it; later articles in this series will explain that workflow in more detail. However, Scrapy also provides a simplified (self-contained) way to run a spider that requires no project at all; the two workflows are contrasted below.
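
For comparison, the project-based workflow looks roughly like this; the project and spider names below are placeholders, while the commands themselves are standard Scrapy CLI:

scrapy startproject myproject         # generate a project skeleton
cd myproject
scrapy genspider myspider scrapy.org  # generate a spider template in the project
scrapy crawl myspider                 # run the spider by its name

With runspider, by contrast, a single .py file like myspider.py above is enough.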

Running the spider

The following command runs a spider in the self-contained way:

Command: scrapy runspider <spider file>
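
runspider also accepts the usual output options of the crawl command; for example, -o exports the scraped items to a feed file (the file name titles.json here is arbitrary):

scrapy runspider myspider.py -o titles.json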

Example run

liumiaocn:scrapy liumiao$ scrapy runspider myspider.py 
2020-03-28 06:53:16 [scrapy.utils.log] INFO: Scrapy 2.0.1 started (bot: scrapybot)
2020-03-28 06:53:16 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 20.3.0, Python 3.7.5 (default, Nov  1 2019, 02:16:32) - [Clang 11.0.0 (clang-1100.0.33.8)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.8, Platform Darwin-19.2.0-x86_64-i386-64bit
2020-03-28 06:53:16 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2020-03-28 06:53:16 [scrapy.crawler] INFO: Overridden settings:
{'SPIDER_LOADER_WARN_ONLY': True}
2020-03-28 06:53:16 [scrapy.extensions.telnet] INFO: Telnet Password: aeb340a45dd4aacb
2020-03-28 06:53:16 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2020-03-28 06:53:16 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-03-28 06:53:16 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-03-28 06:53:16 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-03-28 06:53:16 [scrapy.core.engine] INFO: Spider opened
2020-03-28 06:53:16 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-03-28 06:53:16 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2020-03-28 06:53:17 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://scrapy.org/> (referer: None)
2020-03-28 06:53:17 [scrapy.core.scraper] DEBUG: Scraped from <200 https://scrapy.org/>
{'title': '<title>Scrapy | A Fast and Powerful Scraping and Web Crawling Framework</title>'}
2020-03-28 06:53:17 [scrapy.core.engine] INFO: Closing spider (finished)
2020-03-28 06:53:17 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 210,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 15374,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'elapsed_time_seconds': 0.853152,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 3, 27, 22, 53, 17, 680433),
 'item_scraped_count': 1,
 'log_count/DEBUG': 2,
 'log_count/INFO': 10,
 'memusage/max': 50212864,
 'memusage/startup': 50212864,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2020, 3, 27, 22, 53, 16, 827281)}
2020-03-28 06:53:17 [scrapy.core.engine] INFO: Spider closed (finished)
liumiaocn:scrapy liumiao$ 

In the output we can see the following line:

{'title': '<title>Scrapy | A Fast and Powerful Scraping and Web Crawling Framework</title>'}

which shows that fetching the page title in this way succeeded.
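
As a closing note, the same spider file can also be run from plain Python without the scrapy command at all, using Scrapy's CrawlerProcess API. A minimal sketch, assuming myspider.py sits in the current directory and the default settings are acceptable:

# run.py: run MySpider programmatically instead of via the CLI
from scrapy.crawler import CrawlerProcess

from myspider import MySpider

process = CrawlerProcess()  # default, project-less settings
process.crawl(MySpider)     # schedule the spider
process.start()             # start the reactor; blocks until the crawl finishes

Running python run.py should produce essentially the same log output as the scrapy runspider transcript above.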

Reposted from blog.csdn.net/liumiaocn/article/details/105155080