Using the FastDFS distributed file system with Docker and Python

One, what is FastDFS:

FastDFS is an open-source distributed file system written in C. It is tailor-made for the Internet: it takes full account of redundancy, load balancing, linear scaling, and other mechanisms, and focuses on high availability and high performance. With FastDFS it is easy to build a high-performance file server cluster that provides file upload, download, and related services.

The FastDFS architecture consists of Tracker servers and Storage servers. The Tracker server receives client requests for file upload and download and schedules them; the actual upload and download are performed by the Storage server that the Tracker server assigns.

1. File upload interaction:

  1. The Storage server periodically reports its status to the Tracker server
  2. The Client sends a connection request to the Tracker server
  3. The Tracker server queries for an available Storage server
  4. The Tracker server returns the Storage server's IP and port to the Client
  5. The Client uploads the file to the Storage server
  6. The Storage server writes the file to disk and generates a file id
  7. The Storage server returns the file id (file name and path information) to the Client
  8. The Client stores the file information

2. Download interaction:

  1. The Storage server periodically reports its status to the Tracker server
  2. The Client sends a connection request to the Tracker server
  3. The Tracker server queries for an available Storage server
  4. The Tracker server returns the Storage server's IP and port to the Client
  5. The Client sends the file id (file name and path information) to the Storage server
  6. The Storage server locates the file based on this information
  7. The Storage server returns the file to the Client
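
From the client's point of view, both interactions are wrapped in single calls to a FastDFS client library. Below is a minimal sketch of the two flows, assuming the fdfs_client-py package that is installed in section Three, its download_to_file method as the counterpart of upload_by_filename, and placeholder file paths:

from fdfs_client.client import Fdfs_client

# the client asks the tracker listed in client.conf for an available storage server
client = Fdfs_client('fastdfs/client.conf')

# upload: the storage server writes the file to disk and returns a file id
ret = client.upload_by_filename('test.jpg')
file_id = ret.get('Remote file_id')

# download: send the file id to the storage server and save the returned file locally
client.download_to_file('downloaded.jpg', file_id)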

Two, installing FastDFS with Docker

1. Pull the image

sudo docker image pull delron/fastdfs

2. Map the container's data directories to local paths and start the tracker and storage servers

sudo docker run -dit --network=host --name=tracker -v /var/fdfs/tracker:/var/fdfs delron/fastdfs tracker
sudo docker run -dit --network=host --name=storage -e TRACKER_SERVER=192.168.149.129:22122 -v /var/fdfs/storage:/var/fdfs delron/fastdfs storage

Note: the storage server must be given the tracker server's address and port; the default tracker port is 22122.

3. Check that the tracker and storage containers are running

sudo docker ps


If both containers appear in the output, they are already running. If not, start them with the following command:

sudo docker container start <container-name>

If you run the start command above but the container still does not start, perform the following operations:

cd /var/fdfs/storage/data/
sudo rm -rf fdfs_storaged.pid

Then restart the container with the start command above.

Three, the FastDFS Python client

1. Install the client package

First download the package from GitHub (https://github.com/JaceHo/fdfs_client-py) and then install it in your environment:

pip install fdfs_client-py-master.zip
pip install mutagen
pip install requests

2. Define your own configuration file

To use the FastDFS client you need a configuration file. Create a fastdfs folder under the project directory and create a client.conf file inside it; the main settings to change are tracker_server and base_path:

# connection timeout, default 30 seconds
connect_timeout=30

# network timeout
# default value is 30s
network_timeout=60

# working directory; logs are stored here
base_path=/home/hadoop/桌面/shanghui/shanghuishop/shanghuiproject/logs
# tracker server list; if there are multiple tracker servers, list one per line
tracker_server=192.168.149.129:22122

# log level
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

# whether to use a connection pool
use_connection_pool = false

# idle timeout for a connection, in seconds; a connection idle longer than this is closed
connection_pool_max_idle_time = 3600

# whether to load the fastdfs parameters from the tracker server, default false
load_fdfs_parameters_from_tracker=false

# whether to use storage id instead of ip, default false
# same meaning as the parameter of the same name in tracker.conf
# only takes effect when load_fdfs_parameters_from_tracker=false
# default is false
use_storage_id = false

# file that lists the storage ids; an absolute path is allowed
# same meaning as the parameter of the same name in tracker.conf
# only takes effect when load_fdfs_parameters_from_tracker=false
storage_ids_filename = storage_ids.conf

#HTTP settings
#http.tracker_server_port=8080


# include HTTP-related configuration
##include http.conf

3. File upload example

from fdfs_client.client import Fdfs_client

# the argument below is the path to the client.conf file
client = Fdfs_client('fastdfs/client.conf')

# upload a file through the client object:
client.upload_by_filename('<file name>')
# or
client.upload_by_buffer(<file bytes data>)

Testing from Python: first work out the path to the client.conf file, then upload a file. The returned dictionary contains the remote file id, for example:

'Remote file_id': 'group1/M00/00/00/wKiVgV0UKeGAeXeKAABPHvQkMfU978.jpg'

Explanation:
group1: the name of the storage group the file was uploaded to
M00: the virtual path configured for the storage server
/00/00/: the two-level data directory used to store the data
wKiVgV0UKeGAeXeKAABPHvQkMfU978.jpg: the file name after upload. It is no longer the original name; the server generates it from specific information such as the source storage server's IP address, the file creation timestamp, the file size, a random number, and the file extension.
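
To fetch the uploaded file over HTTP, the Remote file_id is simply appended to the base URL served by the storage server's web front end (nginx on port 8888 in this article's setup). A minimal sketch, reusing the tracker host's IP from section Two as an assumed storage address:

import requests

# assumption: nginx inside the storage container listens on port 8888
FDFS_BASE_URL = 'http://192.168.149.129:8888/'
file_id = 'group1/M00/00/00/wKiVgV0UKeGAeXeKAABPHvQkMfU978.jpg'

# the full download URL is just base URL + Remote file_id
resp = requests.get(FDFS_BASE_URL + file_id)
print(resp.status_code, len(resp.content))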

Four, custom Django file storage that saves files to the FDFS server

Django comes with its own file storage system, but by default files are stored locally. In this project the files need to be saved to the FastDFS server, so a custom file storage backend is required.

1. Create an fdfs_client.py file in the fastdfs directory from earlier to hold the custom file storage class

  • It must inherit from django.core.files.storage.Storage

  • Django must be able to instantiate the storage class without arguments, so all settings should be read from django.conf.settings

  • The storage class must implement the _open() and _save() methods, plus any other methods that may be needed later

  • The class needs the django.utils.deconstruct.deconstructible decorator so that it can be serialized when it is used on a field in migrations, as long as the field's own arguments are serializable

The code is as follows:

from fdfs_client.client import Fdfs_client
from django.core.files.storage import Storage, FileSystemStorage
from django.conf import settings
from django.utils.deconstruct import deconstructible

# the decorator makes the class serializable for use in migrations
@deconstructible
class FastDfsStorage(Storage):
    '''Custom FastDFS storage backend'''
    def __init__(self, base_url=None, client_conf=None):
        """
        Initialize the storage object
        :param base_url: base URL used later to build the full path of images, files, etc.
        :param client_conf: full path of the fdfs client configuration file
        """
        if base_url is None:
            base_url = settings.FDFS_URL
        self.base_url = base_url

        if client_conf is None:
            client_conf = settings.FDFS_CLIENT_CONF
        self.client_conf = client_conf

    def _open(self, name, mode='rb'):
        """
        Open a file

        Called by storage.open() when a file is opened
        :param name:
        :param mode:
        :return:
        """
        pass

    def _save(self, name=None, content=None, max_length=None):
        """
        Save a file; only a name or a content object needs to be passed in

        Called by storage.save() to store the data in fdfs
        :param name: file name
        :param content: file object
        :return: the FastDFS file name to be stored in the database
        """
        client = Fdfs_client(self.client_conf)
        if name is None:
            ret = client.upload_by_buffer(content.read())
        else:
            ret = client.upload_by_filename(name)
        if ret.get("Status") != "Upload successed.":
            raise Exception("upload file failed")
        file_name = ret.get("Remote file_id")
        return file_name

    def exists(self, name):
        """
        Check whether a file already exists; FastDFS handles duplicate files itself
        :param name:
        :return:
        """
        return False

    def url(self, name):
        """
        Return the full url of the file named name
        :param name:
        :return:
        """
        return self.base_url + name

    def delete(self, name):
        '''
        Delete a file
        :param name: Remote file_id
        :return:
        '''
        client = Fdfs_client(self.client_conf)
        client.delete_file(name)

Note: not all of these methods have to be implemented; methods that are not used can be omitted.

2. Configure the custom file storage class in the Django settings

Add the following settings in settings/dev.py:

# django file storage
DEFAULT_FILE_STORAGE = 'shanghuiproject.fastdfs.fdfs_client.FastDfsStorage'

# FastDFS
FDFS_URL = 'http://image.shanghui.site:8888/'
LAST_BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
FDFS_CLIENT_CONF = os.path.join(LAST_BASE_DIR, 'fastdfs/client.conf')
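
With DEFAULT_FILE_STORAGE pointing at FastDfsStorage, any FileField or ImageField saves its content through FastDFS and keeps only the returned Remote file_id in the database. A minimal sketch with a hypothetical model (the model and field names are illustrative, not from this project):

from django.db import models

class GoodsImage(models.Model):
    # the file is uploaded via FastDfsStorage; the database column
    # stores only the Remote file_id returned by the storage class
    image = models.ImageField(verbose_name='goods image')

# obj.image.url then returns FDFS_URL + Remote file_id, e.g.
# http://image.shanghui.site:8888/group1/M00/00/00/wKiVgV0UKeGAeXeKAABPHvQkMfU978.jpg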

3. Configure the hosts file

Add the domain name for accessing the FastDFS storage server to /etc/hosts:

127.0.0.1   image.shanghuiproject.site

4. Test the upload and the file server domain name

Test the upload in the Django shell. After a successful upload, open the returned Remote file_id in a browser with the value image.shanghui.site:8888/ prepended to it to confirm that the file is served.
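
A minimal sketch of such a shell test, calling the FastDfsStorage class from above directly; the image path is a placeholder:

# run inside: python manage.py shell
from shanghuiproject.fastdfs.fdfs_client import FastDfsStorage

storage = FastDfsStorage()

# upload a local file by name and get back the Remote file_id
file_id = storage._save(name='/home/hadoop/test.jpg')
print(file_id)               # e.g. group1/M00/00/00/xxxx.jpg
print(storage.url(file_id))  # http://image.shanghui.site:8888/group1/M00/...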


Origin blog.csdn.net/dakengbi/article/details/93765831