By default we usually run just a single spider, so we create a .py file in the project and execute the following code:
from scrapy import cmdline

# 'spider_name' is the name attribute of the spider you want to run
cmdline.execute('scrapy crawl spider_name'.split())
But running multiple spiders in one project needs a different approach; here I'm just taking a note to reinforce my memory.
Original blog post: https://www.cnblogs.com/lei0213/p/7900340.html
The steps are as follows:
1. Create a directory at the same level as the spiders directory, with any name, e.g. commands
2. Inside it, create a crawlall.py file (the filename is the name of the custom command); the resulting layout is sketched below
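A sketch of the resulting project layout (assuming the project is named zhihuuser, as in the example later in this post; the commands directory also needs an empty __init__.py so it can be imported as a module):

zhihuuser/
    scrapy.cfg
    zhihuuser/
        __init__.py
        settings.py
        commands/
            __init__.py
            crawlall.py
        spiders/
            __init__.py
            ...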
crawlall.py
from scrapy.commands import ScrapyCommand
from scrapy.utils.project import get_project_settings


class Command(ScrapyCommand):
    requires_project = True

    def syntax(self):
        return '[options]'

    def short_desc(self):
        return 'Runs all of the spiders'

    def run(self, args, opts):
        # Collect every spider registered in the project and schedule them all
        spider_list = self.crawler_process.spiders.list()
        for name in spider_list:
            self.crawler_process.crawl(name, **opts.__dict__)
        # Start the reactor; this blocks until every spider has finished
        self.crawler_process.start()
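One caveat not in the original post: depending on your Scrapy version, the spiders attribute on the crawler process may be deprecated or removed in favor of spider_loader, so on newer releases run() may need to look like this (a hedged, version-dependent variant):

    def run(self, args, opts):
        # On newer Scrapy versions, spider_loader replaces the deprecated .spiders
        spider_list = self.crawler_process.spider_loader.list()
        for name in spider_list:
            self.crawler_process.crawl(name, **opts.__dict__)
        self.crawler_process.start()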
That's not the end; you also need to add a setting to settings.py:
COMMANDS_MODULE = 'project_name.directory_name'

For example, with the project named zhihuuser and the directory named commands:

COMMANDS_MODULE = 'zhihuuser.commands'
With that, everything is nearly done. To run it, cd into the project in cmd and run scrapy crawlall; alternatively, create a .py file in the project and run the command via scrapy.cmdline, or call os.system('scrapy crawlall').
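For the script option, a minimal runner follows the same pattern as the single-spider file at the top of this post, just with the custom command instead (a sketch; it assumes the file lives inside the project so Scrapy can locate scrapy.cfg):

from scrapy import cmdline

# 'crawlall' is the custom command defined in commands/crawlall.py
cmdline.execute('scrapy crawlall'.split())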