Django + Scrapy: crawling and storing data together

 

1. Create a Django project, prepare models.py, and start the Django project.
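For example, models.py might define something like the sketch below; the Talk model and its fields are placeholders for whatever data the spider will collect:

from django.db import models

class Talk(models.Model):
    # Placeholder fields; adjust them to the data your spider collects
    title = models.CharField(max_length=200)
    content = models.TextField(blank=True)
    created = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.title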

 

2. Embed Django in the Scrapy project
  Create the Scrapy project under the Django project root (this layout is required for the scrapy-djangoitem setup).
  Then embed Django by adding the following code to the Scrapy project's settings.py:

import os
import sys
sys.path.append(os.path.dirname(os.path.abspath('.')))
os.environ['DJANGO_SETTINGS_MODULE'] = 'your_django_project.settings'  # placeholder: your Django project's settings module
# Manually initialize Django:
import django
django.setup()

 

3. Write the spider.
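A minimal spider might look like the sketch below; the start URL and CSS selectors are placeholders, and TalksItem is the DjangoItem defined in the next step:

import scrapy

from ..items import TalksItem  # the DjangoItem defined in step 4


class TalksSpider(scrapy.Spider):
    name = 'talks'  # used later by `scrapy crawl talks`
    start_urls = ['http://example.com/talks/']  # placeholder URL

    def parse(self, response):
        # Placeholder selectors; adjust them to the target page
        for node in response.css('div.talk'):
            item = TalksItem()
            item['title'] = node.css('h2::text').get()
            item['content'] = node.css('p::text').get()
            yield item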

 

4. In items.py, import the Django model class.

import scrapy

from scrapy_djangoitem import DjangoItem
from your_django_app import models  # placeholder: your Django app

class TalksItem(DjangoItem):
    django_model = models.Talk  # placeholder: the model class from step 1

 

5. In pipelines.py, call save().

class TalksPipeline(object):
    def process_item(self, item, spider):
        print('opening database')
        item.save()  # the data is added to the corresponding table automatically
        print('closing database')
        return item
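Scrapy only runs this pipeline if it is enabled in the Scrapy settings.py; a typical entry looks like the following, where the module path is a placeholder for your own Scrapy project name:

ITEM_PIPELINES = {
    'your_scrapy_project.pipelines.TalksPipeline': 300,
}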

 

6. Start the spider: scrapy crawl <spider name>

 

7. Refresh the Django admin; by this point the crawled data has been stored!
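Seeing the rows in the admin assumes the model has been registered there; a minimal admin.py sketch, reusing the placeholder Talk model from step 1:

from django.contrib import admin

from .models import Talk  # placeholder model from step 1

admin.site.register(Talk)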

 

Isn't that nice!


Origin www.cnblogs.com/kitshenqing/p/11059636.html