21. FastDFS 5.11: Introduction and Usage

I. Introduction to FastDFS

FastDFS open-source repository: https://github.com/happyfish100

Reference: "Design Principles of the FastDFS Distributed File System"

1. Overview

FastDFS is an open-source, high-performance distributed file system (DFS). Its main features are file storage, file synchronization, and file access, together with high capacity and load balancing. It is designed for storing massive amounts of data and is particularly well suited to online services built around small and medium-sized files (recommended range: 4KB < file_size < 500MB).

A FastDFS deployment has three roles: the Tracker Server, the Storage Server, and the Client.

Tracker Server: handles scheduling and load balancing. It manages all storage servers and groups; each storage server connects to a tracker at startup, reports which group it belongs to, and maintains a periodic heartbeat.

Storage Server: provides capacity and replication. Storage is organized by group; each group can contain multiple storage servers, whose data mirrors one another.

Client: the machine that uploads and downloads data, i.e. the server where your own application is deployed.

 

2. FastDFS Storage Strategy

To support large capacity, storage nodes are organized into volumes (also called groups). A storage system consists of one or more volumes; files in different volumes are independent of each other, and the total file capacity of the system is the sum of all volumes. A volume consists of one or more storage servers, all of which hold the same files; the servers within a volume thus provide both redundancy and load balancing.

When a server is added to a volume, the system synchronizes the existing files automatically; once synchronization completes, the new server is brought online automatically. When storage space is running low or about to be exhausted, volumes can be added dynamically: add one or more servers and configure them as a new volume, and the capacity of the storage system is expanded.

3. FastDFS Upload Process

FastDFS exposes basic file operations such as upload, download, append, and delete to its users through a client library.

Each Storage Server periodically reports its storage information to the Tracker Server. When the tracker cluster contains more than one tracker, the trackers are peers of one another, so the client may pick any tracker for an upload.

When a tracker receives an upload request from a client, it first assigns a group that can store the file, then decides which storage server within that group to hand to the client. Once a storage server has been assigned, the client sends the write request to it; the storage server picks a data directory for the file, allocates a file id, and finally generates the file name from this information and stores the file.

4. FastDFS File Synchronization

When writing a file, the client considers the write successful as soon as the file has been written to one storage server in the group; a background thread on that storage server then synchronizes the file to the other storage servers in the group.

After writing a file, each storage server also writes a binlog entry. The binlog contains no file data, only metadata such as the file name, and it drives the background synchronization. Each storage server records its synchronization progress toward every other storage server in the group, so that after a restart it can resume where it left off. Progress is recorded as a timestamp, so the clocks of all servers in the cluster should be kept synchronized.

A storage server's synchronization progress is reported to the tracker as part of its metadata, and the tracker takes this progress into account when selecting a storage server for reads.

5. FastDFS File Download

After a successful upload, the client receives a file name generated by the storage server, and it can subsequently access the file using that name.

 

As with uploads, the client can choose any tracker server for a download. The client sends the download request, which must carry the file name, to a tracker; the tracker parses the group, file size, creation time, and other information out of the file name, then selects a storage server to serve the read request.

 

 

 

 

 

II. Setting Up the FastDFS Environment

1. Prepare the environment

 

 

System          IP address        Role
LVS-master      192.168.10.101    Active/standby load balancers (schedule both web and DNS)
LVS-backup      192.168.10.102
DNS-master      192.168.10.103    VIP 192.168.10.66 (active node handles DNS round-robin requests)
DNS-backup      192.168.10.104
Nginx+fastdfs   192.168.10.105    VIP 192.168.10.88 (active node handles web round-robin); web, tracker, and storage servers
Nginx+fastdfs   192.168.10.106

 

2. Disable the firewall, disable SELinux, and set up time synchronization (on node5 and node6):

systemctl stop firewalld.service && systemctl disable firewalld.service

sed -i "s/SELINUX=enforcing/SELINUX=disabled/"   /etc/selinux/config

setenforce 0

yum -y install wget net-tools ntp ntpdate lrzsz

systemctl restart ntpdate.service ntpd.service && systemctl enable ntpd.service ntpdate.service
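
FastDFS records synchronization progress as timestamps (see section I.4), so it is worth confirming that NTP is actually keeping the clocks in step; two standard checks for the ntpd setup above:

ntpq -p              # list NTP peers; the offset column should be small
timedatectl status   # should report "NTP synchronized: yes"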

 

3. Configure host mappings in /etc/hosts (on node5 and node6):

echo 192.168.10.101  linux-node1.server.com  >> /etc/hosts

echo 192.168.10.102  linux-node2.server.com  >> /etc/hosts

echo 192.168.10.103  linux-node3.server.com  >> /etc/hosts

echo 192.168.10.104  linux-node4.server.com  >> /etc/hosts

echo 192.168.10.105  linux-node5.server.com  >> /etc/hosts

echo 192.168.10.106  linux-node6.server.com  >> /etc/hosts
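
A quick sanity check that the new mappings resolve (getent reads /etc/hosts through NSS):

getent hosts linux-node5.server.com linux-node6.server.com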

 

 

# on node5:
hostnamectl --static set-hostname linux-node5.server.com

bash

# on node6:
hostnamectl --static set-hostname linux-node6.server.com

bash

 

 

4. Download the packages:

Source code: https://github.com/happyfish100/
Downloads: http://sourceforge.net/projects/fastdfs/file
Official forum: http://bbs.chinaunix.net/forum-240-1.html

 

5. Topology:

 

 

 

 

III. Installing FastDFS (perform the following on all tracker and storage servers)

 

1. Install the required build dependencies (on node5 and node6):

 

yum -y install make cmake gcc gcc-c++ perl

 

 

2. Install libfastcommon (on node5 and node6): https://github.com/happyfish100/libfastcommon/releases

 

(1) Download V1.0.39.tar.gz and extract it to /usr/local/src:

wget https://github.com/happyfish100/libfastcommon/archive/V1.0.39.tar.gz

tar -zxvf V1.0.39.tar.gz -C /usr/local/src/

cd /usr/local/src/libfastcommon-1.0.39/

 

 

(2) Compile and install:

./make.sh

./make.sh install

 

 

By default, libfastcommon installs the following libraries:

/usr/lib64/libfastcommon.so

/usr/lib64/libfdfsclient.so

(3) Create a soft link
Because the FastDFS main program expects its libraries under /usr/local/lib, a soft link is needed:

ln -s /usr/lib64/libfastcommon.so  /usr/local/lib/libfastcommon.so

 

 

 

3. Install FastDFS (on node5 and node6): https://github.com/happyfish100/fastdfs/releases

(1) Download the FastDFS source package (V5.11.tar.gz) and extract it to /usr/local/src:

 

wget https://github.com/happyfish100/fastdfs/archive/V5.11.tar.gz

tar -zxvf V5.11.tar.gz -C /usr/local/src/

cd /usr/local/src/fastdfs-5.11/

 

(2) Compile and install (make sure libfastcommon has been installed successfully before compiling):

./make.sh

./make.sh install

 

Possible error 1 from ./make.sh:

./make.sh: line 178: perl: command not found

./make.sh: line 179: perl: command not found

./make.sh: line 180: perl: command not found

Fix:

 yum -y install perl

Possible error 2 from ./make.sh:

make: *** [fdfs_storaged] Error 1

make: Nothing to be done for `all'.

Fix (the previous compile left directories and files behind that need cleaning; simply deleting the tree and re-extracting the source works):

rm -rf /usr/local/src/fastdfs-5.11/

 

(3) Files generated by the installation:

A. Init scripts:

[root@linux-node5 ~]# ll /etc/init.d/fdfs_*

-rwxr-xr-x 1 root root 961 Sep 20 10:09 /etc/init.d/fdfs_storaged

-rwxr-xr-x 1 root root 963 Sep 20 10:09 /etc/init.d/fdfs_trackerd

 

B. Configuration files:

[root@linux-node5 ~]# ll /etc/fdfs/

-rw-r--r-- 1 root root 1461 Sep 20 10:09 client.conf.sample

-rw-r--r-- 1 root root 7927 Sep 20 10:09 storage.conf.sample

-rw-r--r-- 1 root root  105 Sep 20 10:09 storage_ids.conf.sample

-rw-r--r-- 1 root root 7389 Sep 20 10:09 tracker.conf.sample

 

C. Command-line tools under /usr/bin/:

[root@linux-node5 ~]# ll /usr/bin/fdfs*

-rwxr-xr-x 1 root root  317626 Sep 20 10:09 /usr/bin/fdfs_appender_test

-rwxr-xr-x 1 root root  317403 Sep 20 10:09 /usr/bin/fdfs_appender_test1

-rwxr-xr-x 1 root root  304251 Sep 20 10:09 /usr/bin/fdfs_append_file

-rwxr-xr-x 1 root root  303985 Sep 20 10:09 /usr/bin/fdfs_crc32

-rwxr-xr-x 1 root root  304310 Sep 20 10:09 /usr/bin/fdfs_delete_file

-rwxr-xr-x 1 root root  305045 Sep 20 10:09 /usr/bin/fdfs_download_file

-rwxr-xr-x 1 root root  304635 Sep 20 10:09 /usr/bin/fdfs_file_info

-rwxr-xr-x 1 root root  322560 Sep 20 10:09 /usr/bin/fdfs_monitor

-rwxr-xr-x 1 root root 1112082 Sep 20 10:09 /usr/bin/fdfs_storaged

-rwxr-xr-x 1 root root  327562 Sep 20 10:09 /usr/bin/fdfs_test

-rwxr-xr-x 1 root root  326779 Sep 20 10:09 /usr/bin/fdfs_test1

-rwxr-xr-x 1 root root  454116 Sep 20 10:09 /usr/bin/fdfs_trackerd

-rwxr-xr-x 1 root root  305237 Sep 20 10:09 /usr/bin/fdfs_upload_appender

-rwxr-xr-x 1 root root  306257 Sep 20 10:09 /usr/bin/fdfs_upload_file

[root@linux-node5 ~]# ll /usr/bin/stop.sh /usr/bin/restart.sh

-rwxr-xr-x 1 root root 1768 Sep 20 10:09 /usr/bin/restart.sh

-rwxr-xr-x 1 root root 1680 Sep 20 10:09 /usr/bin/stop.sh

 

 

 

 

IV. Configuring the FastDFS Tracker (on node5 and node6)

1. Copy the sample tracker configuration file and rename it (on node5 and node6):

cd /etc/fdfs/

cp tracker.conf.sample tracker.conf

 

2. Edit the tracker configuration file (on node5 and node6):

vi /etc/fdfs/tracker.conf    # all tracker servers use the same configuration

disabled=false               # enable this configuration file

port=22122                   # tracker port; the default 22122 is usually kept

base_path=/fastdfs/tracker   # tracker data and log directory
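
The same edits can be applied non-interactively; a sketch using sed against the stock tracker.conf (check the resulting file afterwards, since sample defaults can differ between releases):

sed -i 's@^disabled=.*@disabled=false@' /etc/fdfs/tracker.conf
sed -i 's@^base_path=.*@base_path=/fastdfs/tracker@' /etc/fdfs/tracker.conf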

 

 

3. Create the base data directory (on node5 and node6):

mkdir -p /fastdfs/tracker

 

 

4. Start the tracker (on node5 and node6):

/etc/init.d/fdfs_trackerd start

 

 

On the first successful start, the data and logs directories are created under /fastdfs/tracker. You can verify that the tracker started successfully in two ways:
(1) Check whether port 22122 is listening:

[root@linux-node5 ~]# netstat -tunlp | grep fdfs

tcp        0      0 0.0.0.0:22122           0.0.0.0:*               LISTEN      12397/fdfs_trackerd

[root@linux-node5 ~]# ll /fastdfs/tracker/

total 0

drwxr-xr-x 2 root root 58 Sep 20 10:30 data

drwxr-xr-x 2 root root 25 Sep 20 10:30 logs

 

(2) Tail the tracker startup log and check for errors:

[root@linux-node5 ~]# tail -f /fastdfs/tracker/logs/trackerd.log

[2018-09-20 10:30:52] INFO - FastDFS v5.11, base_path=/fastdfs/tracker, run_by_group=, run_by_user=, connect_timeout=30s, network_timeout=60s, port=22122, bind_addr=, max_connections=256, accept_threads=1, work_threads=4, min_buff_size=8192, max_buff_size=131072, store_lookup=2, store_group=, store_server=0, store_path=0, reserved_storage_space=10.00%, download_server=0, allow_ip_count=-1, sync_log_buff_interval=10s, check_active_interval=120s, thread_stack_size=64 KB, storage_ip_changed_auto_adjust=1, storage_sync_file_max_delay=86400s, storage_sync_file_max_time=300s, use_trunk_file=0, slot_min_size=256, slot_max_size=16 MB, trunk_file_size=64 MB, trunk_create_file_advance=0, trunk_create_file_time_base=02:00, trunk_create_file_interval=86400, trunk_create_file_space_threshold=20 GB, trunk_init_check_occupying=0, trunk_init_reload_from_binlog=0, trunk_compress_binlog_min_interval=0, use_storage_id=0, id_type_in_filename=ip, storage_id_count=0, rotate_error_log=0, error_log_rotate_time=00:00, rotate_error_log_size=0, log_file_keep_days=0, store_slave_file_use_link=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s

 

 

 

 

5. Stop the tracker (on node5 and node6):

/etc/init.d/fdfs_trackerd stop

 

 

 

6. Configure the tracker to start on boot (on node5 and node6):

vi /etc/rc.local

/etc/init.d/fdfs_trackerd start
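
Note: on CentOS 7, /etc/rc.local is only executed at boot when it is executable, so also run (a common gotcha; this step is not in the original):

chmod +x /etc/rc.d/rc.local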

 

 

 

 

 

V. Configuring FastDFS Storage

1. Copy the sample storage configuration file and rename it (on node5 and node6):

cd /etc/fdfs/

cp storage.conf.sample storage.conf

 

 

2. Edit the storage configuration file (on node5 and node6):

Using the storage.conf of a storage node in group1 as the example:

vi /etc/fdfs/storage.conf   # all storage servers use the same configuration; change only the following, leave the rest at defaults

disabled=false                        # enable this configuration file

group_name=group1                     # group name (group1 for the first group, group2 for the second)

port=23000                            # storage port; must be the same for every storage server in a group

base_path=/fastdfs/storage            # storage data and log directory

store_path0=/fastdfs/storage          # storage path

store_path_count=1                    # number of storage paths; must match the number of store_pathN entries

tracker_server=192.168.10.105:22122   # tracker server IP address and port

tracker_server=192.168.10.106:22122   # add one line per additional tracker

http.server_port=8888                 # HTTP port
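
After editing, a quick grep makes it easy to confirm the effective values (a convenience check, not part of the original procedure):

grep -E '^(disabled|group_name|port|base_path|store_path|tracker_server|http.server_port)' /etc/fdfs/storage.conf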

 

 

3. Create the base data directory (on node5 and node6):

mkdir -p /fastdfs/storage

 

 

 

4. Start the tracker and storage daemons (on node5 and node6):

/etc/init.d/fdfs_trackerd start

/etc/init.d/fdfs_storaged start

(On the first successful start, the data directory data and the log directory logs are created under /fastdfs/storage.)
After each node starts, watch the storage log with tail -f /fastdfs/storage/logs/storaged.log; you will see the storage node connect to the trackers, a message indicating which tracker is the leader, and entries for the other nodes of the group joining.
Check that port 23000 is listening:

 

netstat -unltp|grep fdfs

tcp        0      0 0.0.0.0:22122           0.0.0.0:*               LISTEN      12493/fdfs_trackerd

tcp        0      0 0.0.0.0:23000           0.0.0.0:*               LISTEN      12456/fdfs_storaged

 

 

5. Once all storage nodes are started, run the following on any storage node to view cluster information (on node5 and node6):

[root@linux-node5 ~]# /usr/bin/fdfs_monitor /etc/fdfs/storage.conf

[2018-09-20 10:56:45] DEBUG - base_path=/fastdfs/storage, connect_timeout=30, network_timeout=60, tracker_server_count=2, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0

 

server_count=2, server_index=0

 

tracker server is 192.168.10.105:22122

 

group count: 1

 

Group 1:

group name = group1

disk total space = 17878 MB

disk free space = 15807 MB

trunk free space = 0 MB

storage server count = 2

active server count = 2

storage server port = 23000

storage HTTP port = 8888

store path count = 1

subdir count per path = 256

current write server index = 0

current trunk file id = 0

 

        Storage 1:

                id = 192.168.10.105

                ip_addr = 192.168.10.105  ACTIVE

                http domain =

                version = 5.11

                join time = 2018-09-20 10:48:53

                up time = 2018-09-20 10:48:53

                total storage = 17878 MB

                free storage = 16625 MB

                upload priority = 10

                store_path_count = 1

                subdir_count_per_path = 256

                storage_port = 23000

                storage_http_port = 8888

                current_write_path = 0

                source storage id =

                if_trunk_server = 0

                connection.alloc_count = 256

                connection.current_count = 1

                connection.max_count = 1

                total_upload_count = 0

                success_upload_count = 0

                total_append_count = 0

                success_append_count = 0

                total_modify_count = 0

                success_modify_count = 0

                total_truncate_count = 0

                success_truncate_count = 0

                total_set_meta_count = 0

                success_set_meta_count = 0

                total_delete_count = 0

                success_delete_count = 0

                total_download_count = 0

                success_download_count = 0

                total_get_meta_count = 0

                success_get_meta_count = 0

                total_create_link_count = 0

                success_create_link_count = 0

                total_delete_link_count = 0

                success_delete_link_count = 0

                total_upload_bytes = 0

                success_upload_bytes = 0

                total_append_bytes = 0

                success_append_bytes = 0

                total_modify_bytes = 0

                success_modify_bytes = 0

                stotal_download_bytes = 0

                success_download_bytes = 0

                total_sync_in_bytes = 0

                success_sync_in_bytes = 0

                total_sync_out_bytes = 0

                success_sync_out_bytes = 0

                total_file_open_count = 0

                success_file_open_count = 0

                total_file_read_count = 0

                success_file_read_count = 0

                total_file_write_count = 0

                success_file_write_count = 0

                last_heart_beat_time = 2018-09-20 10:56:17

                last_source_update = 1969-12-31 19:00:00

                last_sync_update = 1969-12-31 19:00:00

                last_synced_timestamp = 1969-12-31 19:00:00

        Storage 2:

                id = 192.168.10.106

                ip_addr = 192.168.10.106  ACTIVE

                http domain =

                version = 5.11

                join time = 2018-09-20 10:54:01

                up time = 2018-09-20 10:54:01

                total storage = 17878 MB

                free storage = 15807 MB

                upload priority = 10

                store_path_count = 1

                subdir_count_per_path = 256

                storage_port = 23000

                storage_http_port = 8888

                current_write_path = 0

                source storage id = 192.168.10.105

                if_trunk_server = 0

                connection.alloc_count = 256

                connection.current_count = 1

                connection.max_count = 1

                total_upload_count = 0

                success_upload_count = 0

                total_append_count = 0

                success_append_count = 0

                total_modify_count = 0

                success_modify_count = 0

                total_truncate_count = 0

                success_truncate_count = 0

                total_set_meta_count = 0

                success_set_meta_count = 0

                total_delete_count = 0

                success_delete_count = 0

                total_download_count = 0

                success_download_count = 0

                total_get_meta_count = 0

                success_get_meta_count = 0

                total_create_link_count = 0

                success_create_link_count = 0

                total_delete_link_count = 0

                success_delete_link_count = 0

                total_upload_bytes = 0

                success_upload_bytes = 0

                total_append_bytes = 0

                success_append_bytes = 0

                total_modify_bytes = 0

                success_modify_bytes = 0

                stotal_download_bytes = 0

                success_download_bytes = 0

                total_sync_in_bytes = 0

                success_sync_in_bytes = 0

                total_sync_out_bytes = 0

                success_sync_out_bytes = 0

                total_file_open_count = 0

                success_file_open_count = 0

                total_file_read_count = 0

                success_file_read_count = 0

                total_file_write_count = 0

                success_file_write_count = 0

                last_heart_beat_time = 2018-09-20 10:56:34

                last_source_update = 1969-12-31 19:00:00

                last_sync_update = 1969-12-31 19:00:00

                last_synced_timestamp = 1969-12-31 19:00:00

 

As long as the storage node states show ACTIVE, the cluster is working.

 

 

 

 

 

6. Configure the storage daemon to start on boot (on node5 and node6):

vi /etc/rc.local

/etc/init.d/fdfs_storaged start

 

 

 

VI. File Upload Test

1. Edit the client configuration file on the tracker servers (on node5 and node6):

vi /etc/fdfs/client.conf

base_path=/fastdfs/tracker

tracker_server=192.168.10.105:22122

tracker_server=192.168.10.106:22122
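
Before uploading, you can confirm that this client configuration reaches the trackers; fdfs_monitor also accepts client.conf, since it only needs the tracker_server entries (a quick check, not part of the original steps):

/usr/bin/fdfs_monitor /etc/fdfs/client.conf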

 

 

2. Run the following upload commands (on node5 and node6):

[root@linux-node5 ~]# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /root/V5.11.tar.gz

group1/M00/00/00/wKgKaVujttqAdPZqAAUkK6yqBFI.tar.gz   # returned file ID

[root@linux-node6 ~]#/usr/bin/fdfs_upload_file /etc/fdfs/client.conf anaconda-ks.cfg

group1/M00/00/00/wKgKaVumUVKAJXOqAAADvg1G5kU514.cfg   # returned file ID
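
The returned ID encodes the group name, the store path index (M00), a two-level directory, and an encoded file name. A simple round-trip with the bundled tools verifies the cluster end to end; a sketch assuming the install paths above and that the original file still sits at /root/anaconda-ks.cfg:

/usr/bin/fdfs_file_info /etc/fdfs/client.conf group1/M00/00/00/wKgKaVumUVKAJXOqAAADvg1G5kU514.cfg
/usr/bin/fdfs_download_file /etc/fdfs/client.conf group1/M00/00/00/wKgKaVumUVKAJXOqAAADvg1G5kU514.cfg /tmp/anaconda-ks.cfg
diff /tmp/anaconda-ks.cfg /root/anaconda-ks.cfg    # no output means the files are identical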

 

 

The uploaded files are visible on every storage node:

[root@linux-node5 ~]# find / -name wKgKaVumUVKAJXOqAAADvg1G5kU514.cfg

/fastdfs/storage/data/00/00/wKgKaVumUVKAJXOqAAADvg1G5kU514.cfg

[root@linux-node5 ~]# find / -name wKgKaVujttqAdPZqAAUkK6yqBFI.tar.gz

/fastdfs/storage/data/00/00/wKgKaVujttqAdPZqAAUkK6yqBFI.tar.gz

 

[root@linux-node6 ~]# find / -name wKgKaVumUVKAJXOqAAADvg1G5kU514.cfg

/fastdfs/storage/data/00/00/wKgKaVumUVKAJXOqAAADvg1G5kU514.cfg

[root@linux-node6 ~]# find / -name wKgKaVujttqAdPZqAAUkK6yqBFI.tar.gz

/fastdfs/storage/data/00/00/wKgKaVujttqAdPZqAAUkK6yqBFI.tar.gz

 

VII. Installing Nginx on Each Storage Node (on node5 and node6)

 

1. What fastdfs-nginx-module does (on node5 and node6):

FastDFS places files on storage servers via the tracker, but storage servers within the same group must replicate files to one another, which introduces a synchronization delay. Suppose the tracker uploads a file to 192.168.10.105; the file ID is returned to the client as soon as the upload succeeds. The FastDFS replication mechanism then copies the file to the group peer 192.168.10.106, and if a client uses the file ID to fetch the file from 192.168.10.106 before the copy completes, the access fails. fastdfs-nginx-module redirects such requests to the source server, avoiding access errors caused by replication lag. (The extracted fastdfs-nginx-module is used when nginx is built.)

 

 

 

2. Download fastdfs-nginx-module V1.20.tar.gz and extract it to /usr/local/src (on node5 and node6):

 

wget https://github.com/happyfish100/fastdfs-nginx-module/archive/V1.20.tar.gz

 

tar -zxvf V1.20.tar.gz -C /usr/local/src/

cd /usr/local/src/fastdfs-nginx-module-1.20/

 

3. Edit the fastdfs-nginx-module config file (on node5 and node6):

vi /usr/local/src/fastdfs-nginx-module-1.20/src/config 

ngx_module_incs="/usr/local/include"

CORE_INCS="$CORE_INCS /usr/local/include"

Change to:

ngx_module_incs="/usr/include/fastdfs /usr/include/fastcommon/"

CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"

 

Or make the changes with sed:

sed -i 's@ngx_module_incs="/usr/local/include"@ngx_module_incs="/usr/include/fastdfs /usr/include/fastcommon/"@g' /usr/local/src/fastdfs-nginx-module-1.20/src/config

sed -i 's@CORE_INCS="$CORE_INCS /usr/local/include"@CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"@g' /usr/local/src/fastdfs-nginx-module-1.20/src/config

(Note: this path change is essential; without it, the nginx build will fail.)

 

 

4. Install nginx, or refer to post 8 in this series, "High-Performance Web Architecture: Nginx Reverse Proxy".

 

 

5. Install the packages needed to build nginx (on node5 and node6):

yum -y install gcc gcc-c++ automake autoconf libtool make wget lrzsz net-tools zlib zlib-devel pcre pcre-devel openssl openssl-devel

 

 

6. Download, extract, compile, and install (on node5 and node6):

wget http://nginx.org/download/nginx-1.15.3.tar.gz

tar -zxvf nginx-1.15.3.tar.gz -C /usr/local/src/

cd /usr/local/src/nginx-1.15.3/

./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_ssl_module --with-http_stub_status_module --with-file-aio --add-module=/usr/local/src/fastdfs-nginx-module-1.20/src

make && make install
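
Note: the --user=www and --group=www flags only set nginx's default runtime user. If that user does not exist yet, create it first (an assumption based on the flags above; the original does not show this step). After installation, nginx -V prints the configure arguments, which confirms the module was compiled in:

useradd -M -s /sbin/nologin www     # create a no-login system user for nginx (skip if it already exists)
/usr/local/nginx/sbin/nginx -V      # should list --add-module=/usr/local/src/fastdfs-nginx-module-1.20/src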

 

 

7. Copy the configuration file from the fastdfs-nginx-module source into /etc/fdfs and edit it (on node5 and node6):

cp /usr/local/src/fastdfs-nginx-module-1.20/src/mod_fastdfs.conf /etc/fdfs/

 

 

vi /etc/fdfs/mod_fastdfs.conf

connect_timeout=10                    # connection timeout in seconds

base_path=/tmp                        # log directory for the module

tracker_server=192.168.10.105:22122

tracker_server=192.168.10.106:22122

storage_server_port=23000

group_name=group1                     # group this storage node belongs to

url_have_group_name = true            # URLs carry the group name, e.g. /group1/M00/...

store_path0=/fastdfs/storage/         # must match store_path0 in storage.conf

group_count = 2                       # number of [groupN] sections below

[group1]

group_name=group1

storage_server_port=23000

store_path_count=1

store_path0=/fastdfs/storage/

[group2]

group_name=group2

storage_server_port=23000

store_path_count=1

store_path0=/fastdfs/storage/

The mod_fastdfs.conf of the first storage group differs from the second group's only in group_name: the second group uses group_name=group2.

 

 

8. Copy additional FastDFS configuration files into /etc/fdfs (on node5 and node6):

 

cd /usr/local/src/fastdfs-5.11/conf/

cp http.conf mime.types /etc/fdfs/

 

 

9. Create a soft link under the /fastdfs/storage data directory, mapping M00 to the directory where the data is actually stored (on node5 and node6):

ln -s /fastdfs/storage/data/ /fastdfs/storage/data/M00

 

 

 

10. Configure nginx (on node5 and node6):

grep -v "#" /usr/local/nginx/conf/nginx.conf | grep -v "^$"

user  root;

worker_processes  1;

events {

    worker_connections  1024;

}

http {

    include       mime.types;

    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;

    server {

        listen       8888;

        server_name  localhost;

        location ~/group([0-9])/M00 {

            ngx_fastdfs_module;

        }

        error_page   500 502 503 504  /50x.html;

        location = /50x.html {

            root   html;

        }

    }

}

 

Notes:

  1. Port 8888 must match http.server_port=8888 in /etc/fdfs/storage.conf; http.server_port defaults to 8888, and if you want to serve on port 80 instead, change it in both places.
  2. When a storage node serves multiple groups, the access path carries the group name, e.g. /group1/M00/00/00/xxx, and the matching nginx configuration is:

        location ~/group([0-9])/M00 {

            ngx_fastdfs_module;

        }

  3. If downloads keep returning 404, change user nobody on the first line of nginx.conf to user root and restart nginx.

 

 

 

 

11. Start nginx (on node5 and node6):

/usr/local/nginx/sbin/nginx -t   // check the configuration for errors

/usr/local/nginx/sbin/nginx      // start nginx

netstat -tunlp | grep nginx      // check the listening ports

 

 

 

12. Access the uploaded files through a browser (on node5 and node6):

http://192.168.10.105:8888/group1/M00/00/00/wKgKaVujttqAdPZqAAUkK6yqBFI.tar.gz

http://192.168.10.106:8888/group1/M00/00/00/wKgKaVujttqAdPZqAAUkK6yqBFI.tar.gz
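
The same check can be scripted; curl -I fetches only the response headers, and a healthy node answers HTTP/1.1 200 OK:

curl -I http://192.168.10.105:8888/group1/M00/00/00/wKgKaVujttqAdPZqAAUkK6yqBFI.tar.gz
curl -I http://192.168.10.106:8888/group1/M00/00/00/wKgKaVujttqAdPZqAAUkK6yqBFI.tar.gz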

 

 

 

 

VIII. Installing Nginx on the Tracker Nodes (192.168.10.105, 192.168.10.106)

 

1. Nginx on the tracker nodes mainly provides a reverse proxy, load balancing, and caching for HTTP access (on node5 and node6).

 

2. Install the nginx build dependencies (because my trackers and storage share the same two machines, I only need to recompile nginx) (on node5 and node6).

 

 

3. Download ngx_cache_purge-2.3.tar.gz and extract it to /usr/local/src (on node5 and node6):

wget http://labs.frickle.com/files/ngx_cache_purge-2.3.tar.gz

tar -zxvf ngx_cache_purge-2.3.tar.gz -C /usr/local/src/

 

 

4. Recompile and install nginx, adding the ngx_cache_purge module (on node5 and node6):

cd /usr/local/src/nginx-1.15.3/

./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_ssl_module --with-http_stub_status_module --with-file-aio --add-module=/usr/local/src/fastdfs-nginx-module-1.20/src --add-module=/usr/local/src/ngx_cache_purge-2.3/

make && make install

 

 

5. Configure nginx for load balancing and caching (on node5 and node6):

vi /usr/local/nginx/conf/nginx.conf

user root;

worker_processes 1;

#error_log logs/error.log;

#error_log logs/error.log notice;

#error_log logs/error.log info;

#pid logs/nginx.pid;

events {

worker_connections 1024;

use epoll;

}

http {

include mime.types;

default_type application/octet-stream;

#log_format main '$remote_addr - $remote_user [$time_local] "$request" '

# '$status $body_bytes_sent "$http_referer" '

# '"$http_user_agent" "$http_x_forwarded_for"';

#access_log logs/access.log main;

sendfile on;

tcp_nopush on;

#keepalive_timeout 0;

keepalive_timeout 65;

#gzip on;

# cache-related settings

server_names_hash_bucket_size 128;

client_header_buffer_size 32k;

large_client_header_buffers 4 32k;

client_max_body_size 300m;

proxy_redirect off;

proxy_set_header Host $http_host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_connect_timeout 90;

proxy_send_timeout 90;

proxy_read_timeout 90;

proxy_buffer_size 16k;

proxy_buffers 4 64k;

proxy_busy_buffers_size 128k;

proxy_temp_file_write_size 128k;

# cache path, directory layout, shared-memory zone size, max disk usage, and inactivity expiry

proxy_cache_path /fastdfs/cache/nginx/proxy_cache levels=1:2

keys_zone=http-cache:200m max_size=1g inactive=30d;

proxy_temp_path /fastdfs/cache/nginx/proxy_cache/tmp;

    server {

        listen       8888;

        server_name  localhost;

        location ~/group([0-9])/M00 {

            ngx_fastdfs_module;

        }

        error_page   500 502 503 504  /50x.html;

        location = /50x.html {

            root   html;

        }

    }

# upstream servers for group1

upstream fdfs_group1 {

server 192.168.10.105:8888 weight=1 max_fails=2 fail_timeout=30s;

server 192.168.10.106:8888 weight=1 max_fails=2 fail_timeout=30s;

}

# upstream servers for group2 (enable when a second group exists)

#upstream fdfs_group2 {

# server 192.168.10.107:8888 weight=1 max_fails=2 fail_timeout=30s;

# server 192.168.10.108:8888 weight=1 max_fails=2 fail_timeout=30s;

#}

server {

listen 8000;

server_name localhost;

#charset koi8-r;

#access_log logs/host.access.log main;

# load-balancing parameters for the group

location /group1/M00 {

proxy_next_upstream http_502 http_504 error timeout invalid_header;

proxy_cache http-cache;

proxy_cache_valid 200 304 12h;

proxy_cache_key $uri$is_args$args;

proxy_pass http://fdfs_group1;

expires 30d;

}

#location /group2/M00 {

# proxy_next_upstream http_502 http_504 error timeout invalid_header;

# proxy_cache http-cache;

# proxy_cache_valid 200 304 12h;

# proxy_cache_key $uri$is_args$args;

# proxy_pass http://fdfs_group2;

# expires 30d;

#}

# access control for cache purge requests

location ~/purge(/.*) {

allow 127.0.0.1;

allow 192.168.10.0/24;

deny all;

proxy_cache_purge http-cache $1$is_args$args;

}

#error_page 404 /404.html;

# redirect server error pages to the static page /50x.html

#

error_page 500 502 503 504 /50x.html;

location = /50x.html {

root html;

}

}

}

 

 

Create the cache directories required by the nginx configuration above:

 mkdir -p /fastdfs/cache/nginx/proxy_cache

 mkdir -p /fastdfs/cache/nginx/proxy_cache/tmp

 

 

6. Start nginx (on node5 and node6):

/usr/local/nginx/sbin/nginx -t

/usr/local/nginx/sbin/nginx

 

 

 

 

7. File access test. Previously, files were accessed directly through nginx on the storage nodes:

http://192.168.10.105:8888/group1/M00/00/00/wKgKaVujttqAdPZqAAUkK6yqBFI.tar.gz

http://192.168.10.106:8888/group1/M00/00/00/wKgKaVujttqAdPZqAAUkK6yqBFI.tar.gz 

Now the files can also be accessed through nginx on the tracker nodes.

(1) Access through nginx on the tracker nodes (port 8000):

http://192.168.10.105:8000/group1/M00/00/00/wKgKaVumUVKAJXOqAAADvg1G5kU514.cfg

http://192.168.10.106:8000/group1/M00/00/00/wKgKaVumUVKAJXOqAAADvg1G5kU514.cfg

 

As the tests above show, nginx on each tracker independently load-balances across the backend storage group. For the FastDFS cluster to expose a single, unified file access address, the nginx instances on the two trackers still need to be placed behind an HA cluster.

 

 

 

 

 

 

IX. Using a Keepalived + Nginx High-Availability Load-Balancing Cluster to Balance Nginx on the Two Tracker Nodes

 

 

 

1. See post 3 in this series, "High-Performance Web Architecture: LVS-NAT + Keepalived for Web Round-Robin".

 

2. In the Keepalived + Nginx HA load-balancing cluster, configure nginx on the tracker nodes as a load-balancing reverse proxy (on node5 and node6):

(nginx on 192.168.10.105 and 192.168.10.106 gets identical configuration)

cp /usr/local/nginx/conf/nginx.conf /usr/local/nginx/conf/nginx.conf.bak2   // back up the nginx configuration

cat /usr/local/nginx/conf/nginx.conf

user root;

worker_processes 1;

events {

        worker_connections 1024;

        use epoll;

}

http {

        include mime.types;

        default_type application/octet-stream;

        #log_format main '$remote_addr - $remote_user [$time_local] "$request" '

        # '$status $body_bytes_sent "$http_referer" '

        # '"$http_user_agent" "$http_x_forwarded_for"';

        #access_log logs/access.log main;

        sendfile on;

        tcp_nopush on;

        keepalive_timeout 0;

        #keepalive_timeout 65;

        #gzip on;

# cache-related settings

        server_names_hash_bucket_size 128;

        client_header_buffer_size 32k;

        large_client_header_buffers 4 32k;

        client_max_body_size 300m;

        proxy_redirect off;

        proxy_set_header Host $http_host;

        proxy_set_header X-Real-IP $remote_addr;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_connect_timeout 90;

        proxy_send_timeout 90;

        proxy_read_timeout 90;

        proxy_buffer_size 16k;

        proxy_buffers 4 64k;

        proxy_busy_buffers_size 128k;

        proxy_temp_file_write_size 128k;

# cache path, directory layout, shared-memory zone size, max disk usage, and inactivity expiry

        proxy_cache_path /fastdfs/cache/nginx/proxy_cache levels=1:2

        keys_zone=http-cache:200m max_size=1g inactive=30d;

        proxy_temp_path /fastdfs/cache/nginx/proxy_cache/tmp;

# storage server block (8888) and group1 upstream

        server {

            listen       8888;

            server_name  localhost;

            location ~/group([0-9])/M00 {

                ngx_fastdfs_module;

            }

            error_page   500 502 503 504  /50x.html;

            location = /50x.html {

                root   html;

            }

        }

        upstream fdfs_group1 {

                server 192.168.10.105:8888 weight=1 max_fails=2 fail_timeout=30s;

                server 192.168.10.106:8888 weight=1 max_fails=2 fail_timeout=30s;

        }

        server {

                listen 8000;

                server_name localhost;

                #charset koi8-r;

                #access_log logs/host.access.log main;

# load-balancing parameters for the group

                location /group1/M00 {

                        proxy_next_upstream http_502 http_504 error timeout invalid_header;

                        proxy_cache http-cache;

                        proxy_cache_valid 200 304 12h;

                        proxy_cache_key $uri$is_args$args;

                        proxy_pass http://fdfs_group1;

                        expires 30d;

                }

# access control for cache purge requests

                location ~/purge(/.*) {

                        allow 127.0.0.1;

                        allow 192.168.10.0/24;

                        deny all;

                        proxy_cache_purge http-cache $1$is_args$args;

                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html

                #

                error_page 500 502 503 504 /50x.html;

                        location = /50x.html {

                        root html;

                }

        }

        upstream fastdfs_tracker {

                server 192.168.10.105:8000 weight=1 max_fails=2 fail_timeout=30s;

                server 192.168.10.106:8000 weight=1 max_fails=2 fail_timeout=30s;

        }

        server {

                listen 80;

                server_name localhost;

                #charset koi8-r;

                #access_log logs/host.access.log main;

                #location / {

                #          root html;

                #          index index.html index.htm;

                #}

                location ~/group([0-9])/M00 {

                        proxy_next_upstream http_502 http_504 error timeout invalid_header;

                        proxy_cache http-cache;

                        proxy_cache_valid 200 304 12h;

                        proxy_cache_key $uri$is_args$args;

                        proxy_pass http://fdfs_group1;

                        expires 30d;

                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html

                error_page 500 502 503 504 /50x.html;

                location = /50x.html {

                           root html;

                }

                ## FastDFS Proxy

                location /dfs {

                           root html;

                           index index.html index.htm;

                           proxy_pass http://fastdfs_tracker/;

                           proxy_set_header Host $http_host;

                           proxy_set_header Cookie $http_cookie;

                           proxy_set_header X-Real-IP $remote_addr;

                           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

                           proxy_set_header X-Forwarded-Proto $scheme;

                           client_max_body_size 300m;

        }

    }

}

 

 

 

 

 

 

After editing, test whether the configuration is valid:

/usr/local/nginx/sbin/nginx -t
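
If nginx is already running with the previous configuration, reload it so the new configuration takes effect:

/usr/local/nginx/sbin/nginx -s reload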

 

 

 

 

 

 

3. Modify the default index page (on node5 and node6):

 

[root@linux-node5 ~]# vi /usr/local/nginx/html/index.html

<h1>Welcome to nginx!  10.105</h1>

 

[root@linux-node6 ~]# vi /usr/local/nginx/html/index.html

<h1>Welcome to nginx!  10.106</h1>

 

 

 

4. Configure the VIP (on node5 and node6):

cat /etc/init.d/web_vip.sh

#!/bin/bash

#description:config lvs-vip

    vip=192.168.10.88

    mask='255.255.255.255'

    

    case $1 in

    start)

    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore

    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore

    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce

    sysctl -p >/dev/null 2>&1

 

    /usr/sbin/ifconfig lo:1 $vip netmask $mask broadcast $vip up

    /usr/sbin/route add -host $vip dev lo:1

    echo "start VIP OK!!"

    ;;

    stop)

    /usr/sbin/ifconfig lo:1 down

    /usr/sbin/route del $vip >/dev/null 2>&1

 

    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore

    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore

    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce

    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce

    

    echo "stop VIP !!"

    ;;

    *)

    echo "Usage $(basename $0) start|stop"

    exit 1

    ;;

    esac

 

 

 

 

5. Start the VIP (on node5 and node6):

 

chmod +x /etc/init.d/web_vip.sh

/etc/init.d/web_vip.sh start

 

 

 

6. Check the VIP (on node5 and node6):

ifconfig lo:1

lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536

        inet 192.168.10.88  netmask 255.255.255.255

        loop  txqueuelen 0  (Local Loopback)

 

 

 

7. Access test: http://192.168.10.88/
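
With LVS round-robin in front, repeated requests to the VIP should alternate between the two index pages modified in step 3. A quick way to observe this, run from a client outside the cluster (LVS-DR realservers will not see their own VIP balanced):

for i in 1 2 3 4; do curl -s http://192.168.10.88/; done
# expect "Welcome to nginx!  10.105" and "Welcome to nginx!  10.106" to alternate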

 

 

 

 

 

 

8. Access files in the FastDFS cluster through the VIP (192.168.10.88) of the Keepalived + Nginx high-availability load-balancing cluster:

http://192.168.10.88/group1/M00/00/00/wKgKaVujttqAdPZqAAUkK6yqBFI.tar.gz

 

http://192.168.10.88/group1/M00/00/00/wKgKaVumUVKAJXOqAAADvg1G5kU514.cfg

 

 

 

 

That completes the FastDFS setup. Test it thoroughly before putting it to real use.


Reposted from blog.csdn.net/weixin_41515615/article/details/82818646