Analyzing the Scrapy Source Code from Scratch (Part 1)

Starting the spider from a script:

main.py
from scrapy.cmdline import execute  # runs a Scrapy command as if typed on the command line
import sys
import os

# Add this script's directory (the project root) to sys.path so that the
# Scrapy project and its settings can be found.
main_path = os.path.abspath(__file__)
sys.path.append(os.path.dirname(main_path))
execute(["scrapy", "crawl", "jobbole"])  # equivalent to running: scrapy crawl jobbole

Because the command name passed in is crawl, execute() dispatches to the run() method of the Command class in scrapy.commands.crawl.
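
Boiled down, the dispatch looks roughly like the self-contained toy below. This is an illustrative sketch, not the actual scrapy.cmdline source; in reality, execute() builds its command registry by scanning the scrapy.commands package (plus any COMMANDS_MODULE the project configures) and parses options with each command's own parser:

class CrawlCommand:
    """Toy stand-in for scrapy.commands.crawl.Command."""
    def run(self, args, opts):
        print("would crawl spider:", args[0])

COMMANDS = {"crawl": CrawlCommand()}  # the real registry is built by scanning scrapy.commands

def execute(argv):
    cmdname, cmdargs = argv[1], argv[2:]  # ["scrapy", "crawl", "jobbole"] -> "crawl", ["jobbole"]
    COMMANDS[cmdname].run(cmdargs, opts=None)

execute(["scrapy", "crawl", "jobbole"])  # prints: would crawl spider: jobbole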

scrapy/commands/crawl.py:

    def run(self, args, opts):
        if len(args) < 1:
            raise UsageError()  # UsageError comes from scrapy.exceptions
        elif len(args) > 1:
            raise UsageError("running 'scrapy crawl' with more than one spider is no longer supported")
        spname = args[0]  # the spider name, e.g. "jobbole"

        self.crawler_process.crawl(spname, **opts.spargs)  # create a Crawler for the spider and schedule the crawl
        self.crawler_process.start()  # start the reactor and run all scheduled crawls
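
Here self.crawler_process is a CrawlerProcess instance that scrapy.cmdline.execute attaches to the command before run() is called. The self-contained toy below (illustrative only, not Scrapy source) models the pattern run() relies on: crawl() only schedules work, and start() runs the loop (in real Scrapy, the Twisted reactor) that actually executes everything scheduled so far:

class ToyCrawlerProcess:
    """Toy model of scrapy.crawler.CrawlerProcess (illustrative, not real source)."""
    def __init__(self):
        self._pending = []

    def crawl(self, spname, **kwargs):
        # Real Scrapy resolves spname to a Spider class via the spider loader,
        # wraps it in a Crawler, and returns a Twisted Deferred.
        self._pending.append((spname, kwargs))

    def start(self):
        # Real Scrapy runs the Twisted reactor until all scheduled crawls complete.
        for spname, kwargs in self._pending:
            print("crawling", spname, "with args", kwargs)

process = ToyCrawlerProcess()
process.crawl("jobbole")
process.start()  # prints: crawling jobbole with args {}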





Reposted from blog.csdn.net/whueratsjtuer/article/details/79201413