Distributed Storage

RHEL 6.5 environment:
server1    master
server2    chunkserver
server3    chunkserver
server4    backup master (for high availability later)

First, download the software packages on server1:
libpcap-1.4.0-4.20130826git2dbcaa1.el6.x86_64.rpm          dependency
libpcap-devel-1.4.0-4.20130826git2dbcaa1.el6.x86_64.rpm    dependency
moosefs-3.0.80-1.tar.gz
yum install -y rpm-build          install the RPM build environment
rpmbuild -tb moosefs-3.0.80-1.tar.gz          build the binary RPM packages from the source tarball
gcc may be required for the build:
yum install gcc -y
Create a symlink so the tarball name matches the name the spec file expects: ln -s moosefs-3.0.80-1.tar.gz moosefs-3.0.80.tar.gz
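With the symlink in place, the build can be re-run against the expected name (assuming the first attempt complained about the tarball name):
rpmbuild -tb moosefs-3.0.80.tar.gz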
Change into the directory with the built RPMs: cd /root/rpmbuild/RPMS/x86_64
[root@server1 x86_64]# yum install -y moosefs-master-3.0.80-1.x86_64.rpm moosefs-cgi-3.0.80-1.x86_64.rpm moosefs-cgiserv-3.0.80-1.x86_64.rpm
Copy the chunkserver package to server2 and server3:
scp moosefs-chunkserver-3.0.80-1.x86_64.rpm server2:/root/
scp moosefs-chunkserver-3.0.80-1.x86_64.rpm server3:/root/
[root@server1 x86_64]# cd /etc/mfs/
[root@server1 mfs]# vim mfsmaster.cfg          the defaults are fine; no changes are made here
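For reference, the commented-out defaults in mfsmaster.cfg include entries roughly like the following (stock 3.0 defaults; nothing is changed in this walkthrough):
# WORKING_USER = mfs
# WORKING_GROUP = mfs
# DATA_PATH = /var/lib/mfs
# EXPORTS_FILENAME = /etc/mfs/mfsexports.cfg
# MATOML_LISTEN_PORT = 9419
# MATOCS_LISTEN_PORT = 9420
# MATOCL_LISTEN_PORT = 9421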
[root@server1 mfs]# vim /etc/hosts          make sure mfsmaster resolves
172.25.35.1 server1 mfsmaster
172.25.35.2 server2
172.25.35.3 server3
[root@server1 mfs]# cd /var/lib/mfs/
[root@server1 mfs]# ls          check the files and their permissions
changelog.2.mfs changelog.5.mfs metadata.mfs.empty
changelog.3.mfs metadata.mfs stats.mfs
changelog.4.mfs metadata.mfs.back.1
[root@server1 mfs]# ll
total 3620
-rw-r----- 1 mfs mfs      33 Jun 16 15:09 changelog.0.mfs
-rw-r----- 1 mfs mfs      67 Jun 10 10:08 changelog.2.mfs
-rw-r----- 1 mfs mfs    1924 Jun 10 09:58 changelog.3.mfs
-rw-r----- 1 mfs mfs    1712 Jun 10 08:58 changelog.4.mfs
-rw-r----- 1 mfs mfs     213 Jun  9 17:52 changelog.5.mfs
-rw-r----- 1 mfs mfs    3799 Jun 10 11:08 metadata.mfs.back
-rw-r----- 1 mfs mfs    3799 Jun 10 11:00 metadata.mfs.back.1
-rw-r--r-- 1 mfs mfs       8 Jun  9 17:28 metadata.mfs.empty
-rw-r----- 1 mfs mfs 3672832 Jun 10 11:08 stats.mfs
[root@server1 mfs]# mfsmaster          start the mfsmaster
open files limit has been set to: 16384
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
loading sessions data ... ok (0.0000)
loading storage classes data ... ok (0.0000)
loading objects (files,directories,etc.) ... ok (0.1752)
loading names ... ok (0.3000)
loading deletion timestamps ... ok (0.0000)
loading quota definitions ... ok (0.0000)
loading xattr data ... ok (0.0000)
loading posix_acl data ... ok (0.0000)
loading open files data ... ok (0.0000)
loading flock_locks data ... ok (0.0000)
loading posix_locks data ... ok (0.0000)
loading chunkservers data ... ok (0.0000)
loading chunks data ... ok (0.4275)
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 6
directory inodes: 3
file inodes: 3
chunks: 6
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
[root@server1 mfs]# mfscgiserv          start the CGI web server (port 9425)
lockfile created and locked
starting simple cgi server (host: any , port: 9425 , rootpath: /usr/share/mfscgi)
Open in a browser: http://172.25.35.1:9425/mfs.cgi
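A quick command-line check that the CGI server answers (optional; assumes curl is installed on server1):
[root@server1 mfs]# curl -s -o /dev/null -w "%{http_code}\n" http://172.25.35.1:9425/mfs.cgi
It should print 200.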
[root@server1 x86_64]# pwd
/root/rpmbuild/RPMS/x86_64
[root@server1 x86_64]# scp moosefs-client-3.0.80-1.x86_64.rpm root@<physical-host>:/root/desktop          send the client package to the test machine (the physical host, used for testing later)
[root@server2 ~]# rpm -ivh moosefs-chunkserver-3.0.80-1.x86_64.rpm
[root@server3 ~]# rpm -ivh moosefs-chunkserver-3.0.80-1.x86_64.rpm
Install the chunkserver package on server2 and server3; the hostname resolution must be the same as on server1.
[root@server2 ~]# cd /etc/mfs/
[root@server2 mfs]# vim mfshdd.cfg
/mnt/chunk1          append the storage path at the end of the file
[root@server2 mfs]# mkdir /mnt/chunk1/
[root@server2 mfs]# chown mfs.mfs /mnt/chunk1/
[root@server2 mfs]# mfschunkserver          start the chunkserver; server3 is set up exactly like server2, except the directory is /mnt/chunk2 instead of /mnt/chunk1 (see the sketch below)
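For completeness, the corresponding steps on server3 (identical to server2 apart from the directory name):
[root@server3 ~]# cd /etc/mfs/
[root@server3 mfs]# vim mfshdd.cfg          append /mnt/chunk2 at the end of the file
[root@server3 mfs]# mkdir /mnt/chunk2/
[root@server3 mfs]# chown mfs.mfs /mnt/chunk2/
[root@server3 mfs]# mfschunkserver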
Add a disk to server2 and install the SCSI target service:
[root@server2 mfs]# yum install -y scsi-*          install all the scsi packages (this provides the tgtd iSCSI target daemon)
[root@server2 mfs]# vim /etc/tgt/targets.conf
<target iqn.2018-06.com.example:server.target1>
backing-store /dev/vdb
</target>
[root@server2 mfs]# /etc/init.d/tgtd start
Starting SCSI target daemon:                               [  OK  ]
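Optionally, the exported target can be verified on server2 (tgt-admin ships with the scsi target packages):
[root@server2 mfs]# tgt-admin --show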
On server1: attach the newly exported disk over iSCSI.
[root@server1 x86_64]# iscsiadm -m discovery -t st -p 172.25.35.2
172.25.35.2:3260,1 iqn.2018-06.com.example:server.target1
[root@server1 x86_64]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2018-06.com.example:server.target1, portal: 172.25.35.2,3260] (multiple)
Login to [iface: default, target: iqn.2018-06.com.example:server.target1, portal: 172.25.35.2,3260] successful.
[root@server1 x86_64]# fdisk -l          check that the iSCSI disk shows up
Device Boot Start End Blocks Id System
/dev/sda1 2 8192 8387584 83 Linux
[root@server1 x86_64]# fdisk -cu /dev/sda          create a single partition on the new disk
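Inside fdisk, one primary partition covering the whole disk is created; the interactive dialog is roughly (keystrokes only, defaults accepted for the sector range):
n          new partition
p          primary
1          partition number 1
<Enter>    accept the default first sector
<Enter>    accept the default last sector
w          write the partition table and exit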
[root@server1 x86_64]# mkfs.ext4 /dev/sda1          format the partition
[root@server1 x86_64]# mount /dev/sda1 /mnt/
[root@server1 x86_64]# df          check the temporary mount
[root@server1 x86_64]# cd /var/lib/mfs/
[root@server1 mfs]# mfsmaster stop          stop the master before moving its data
[root@server1 mfs]# cp -p * /mnt/          copy all the metadata files to /mnt
[root@server1 mfs]# chown mfs.mfs /mnt/
[root@server1 mfs]# umount /mnt/
[root@server1 mfs]# mount /dev/sda1 /var/lib/mfs
[root@server1 mfs]# mfsmaster
[root@server1 mfs]# df
/dev/sda1 8255928 153132 7683420 2% /var/lib/mfs
For high availability, install the master on server4 as well; its hostname resolution is the same as on server1.
[root@server4 ~]# yum install -y moosefs-master-3.0.80-1.x86_64.rpm
[root@server4 ~]# yum install -y iscsi-*          install the iSCSI initiator tools (for iscsiadm)

[root@server4 ~]# iscsiadm -m discovery -t st -p 172.25.35.2
172.25.35.2:3260,1 iqn.2018-06.com.example:server.target1
[root@server4 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2018-06.com.example:server.target1, portal: 172.25.35.2,3260] (multiple)
Login to [iface: default, target: iqn.2018-06.com.example:server.target1, portal: 172.25.35.2,3260] successful.
[root@server4 ~]# fdisk -l          the shared iSCSI disk should now be visible on server4 as well
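With the shared disk visible, a manual takeover on server4 would look roughly like this (a sketch only; it assumes the master on server1 has been stopped, the shared partition appears as /dev/sda1 on server4 as well, and nothing else has it mounted):
[root@server4 ~]# mount /dev/sda1 /var/lib/mfs
[root@server4 ~]# mfsmaster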
Back on the physical machine mentioned earlier (it needs the same hostname resolution):
[root@localhost ~]# rpm -ivh moosefs-client-3.0.80-1.x86_64.rpm
[root@localhost ~]# rpm -qa |grep moosefs
moosefs-client-3.0.80-1.x86_64          verify the package is installed
[root@localhost ~]# cd /etc/mfs/
[root@localhost mfs]# vim mfsmount.cfg
/mnt/mfs          set the default mount point
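The mount point itself has to exist before mounting; if it does not, create it first (a step not shown in the original log):
[root@localhost mfs]# mkdir -p /mnt/mfs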
[root@localhost ~]# mfsmount
[root@localhost mfs]# df
mfsmaster:9421 34365120 4873600 29491520 15% /mnt/mfs
File deletion and recovery
[root@localhost mfs]# mkdir dir{1..2}
[root@localhost mfs]# mfsgetgoal dir1/
dir1/: 2
[root@localhost mfs]# mfsgetgoal dir2/
dir2/: 2
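The goal (number of chunk copies) can be changed per directory with mfssetgoal; for example, keeping only a single copy of everything under dir1 would look like this (shown for illustration only, not done in this walkthrough):
[root@localhost mfs]# mfssetgoal -r 1 dir1/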
[root@localhost mfs]# cd dir1
[root@localhost dir1]# cp /etc/passwd .          copy a few files for testing
[root@localhost dir1]# cd ../dir2
[root@localhost dir2]# cp /etc/fstab .
[root@localhost dir2]# cd ../dir1
[root@localhost dir1]# dd if=/dev/zero of=bigfile bs=1M count=200          write a large file
[root@localhost dir1]# mfsfileinfo bigfile          check which chunkservers hold copies of the file's chunks
[root@localhost dir1]# rm -f passwd
[root@localhost dir1]# mfsgettrashtime .
.: 86400
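86400 seconds means deleted files stay in the trash for 24 hours. The retention can be adjusted with mfssettrashtime, e.g. lowering it to 10 minutes for this directory (illustration only, not done here):
[root@localhost dir1]# mfssettrashtime 600 .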
[root@localhost dir1]# cd /etc/mfs/
[root@localhost mfs]# cat /etc/mfs/mfsmount.cfg
[root@localhost mnt]# mkdir mfsmeta
[root@localhost mnt]# mfsmount -m /mnt/mfsmeta/          mount the MFS metadata filesystem, which exposes the trash
[root@localhost mnt]# cd mfsmeta/
[root@localhost mfsmeta]# ls
sustained trash
[root@localhost mfsmeta]# cd trash/
[root@localhost trash]# find -name passwd
./004/00000004|dir1|passwd          locate the deleted file
[root@localhost trash]# mv "./004/00000004|dir1|passwd" undel/          moving it into undel/ restores it (the name must be quoted because it contains '|')
[root@localhost dir1]# pwd
/mnt/mfs/dir1
[root@localhost dir1]# ls
bigfile passwd          the file has been recovered

Reposted from blog.51cto.com/13810716/2130066