Note: We install from source mainly because installing glusterfs from the yum repository ran into problems with some of its dependent libraries.
- Prepare three glusterfs servers (the official documentation also recommends at least three, to prevent split-brain), and add the following entries to /etc/hosts on each server (if you use a DNS server, add the records there instead):
10.85.3.113 glusterfs-1.example.com
10.85.3.114 glusterfs-2.example.com
10.85.3.115 glusterfs-3.example.com
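The hosts entries above can be added idempotently with a small loop. A minimal sketch — it writes to a demo file (./hosts.demo) for illustration; point HOSTS_FILE at /etc/hosts (and run as root) for real use:

```shell
# Append each host entry only if the hostname is not already present.
# HOSTS_FILE is ./hosts.demo here for illustration; use /etc/hosts in practice.
HOSTS_FILE=./hosts.demo
touch "$HOSTS_FILE"
while read -r ip name; do
    grep -q "$name" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
done <<'EOF'
10.85.3.113 glusterfs-1.example.com
10.85.3.114 glusterfs-2.example.com
10.85.3.115 glusterfs-3.example.com
EOF
cat "$HOSTS_FILE"
```

Running it a second time adds nothing, so it is safe to re-run on a host that already has some of the entries.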
- Install the libraries GlusterFS depends on, following the official "Build and Install GlusterFS" document:
# yum install autoconf automake bison cmockery2-devel dos2unix flex fuse-devel glib2-devel libacl-devel libaio-devel libattr-devel libcurl-devel libibverbs-devel librdmacm-devel libtirpc-devel libtool libxml2-devel lvm2-devel make openssl-devel pkgconfig pyliblzma python-devel python-eventlet python-netifaces python-paste-deploy python-simplejson python-sphinx python-webob pyxattr readline-devel rpm-build sqlite-devel systemtap-sdt-devel tar userspace-rcu-devel
- Download userspace-rcu-master and unpack it to /home/userspace-rcu-master
- Download the master branch of glusterfs-xlators and unpack it to /home/glusterfs-xlators-master
- Compile and install userspace-rcu:
# cd /home/userspace-rcu-master
# ./bootstrap
# ./configure
# make && make install
# ldconfig
- Download the glusterfs source code and unpack it to /home/glusterfs-5.7 (version 5.7 is used here)
- Copy glusterfs's headers into the system include directory (they are needed when compiling gluster):
# cd /home/glusterfs-5.7
# cp -r libglusterfs/src /usr/local/include/glusterfs
- Install the uuid dependency: yum install -y libuuid-devel
- Copy glupy into the source tree (also needed when compiling gluster):
# cp -r /home/glusterfs-xlators-master/xlators/glupy/ /home/glusterfs-5.7/xlators/features/
- Build and install glusterfs as described in the official documentation:
# cd /home/glusterfs-5.7
# ./autogen.sh
# ./configure --without-libtirpc --enable-gnfs
# make
# make install
During make you may encounter the following error:
../../../../contrib/userspace-rcu/rculist-extra.h:33:6: error: redefinition of 'cds_list_add_tail_rcu'
 void cds_list_add_tail_rcu(struct cds_list_head *newp,
      ^
In file included from glusterd-rcu.h:15:0,
                 from glusterd-sm.h:26,
                 from glusterd.h:28,
                 from glusterd.c:19:
/usr/local/include/urcu/rculist.h:44:6: note: previous definition of 'cds_list_add_tail_rcu' was here
 void cds_list_add_tail_rcu(struct cds_list_head *newp,
To fix it, wrap the cds_list_add_tail_rcu function in conditional-compilation guards in both of the following files:
/usr/local/include/urcu/rculist.h
/home/glusterfs-5.7/contrib/userspace-rcu/rculist-extra.h
#ifndef CDS_LIST_ADD_TAIL_CRU
#define CDS_LIST_ADD_TAIL_CRU
static inline
void cds_list_add_tail_rcu(struct cds_list_head *newp,
                           struct cds_list_head *head)
{
    newp->next = head;
    newp->prev = head->prev;
    rcu_assign_pointer(head->prev->next, newp);
    head->prev = newp;
}
#endif
- Run the following commands on all three servers to format the disk and mount the brick directory:
mkfs.xfs -i size=512 /dev/vdb
mkdir -p /data/brick
echo '/dev/vdb /data/brick xfs defaults 1 2' >> /etc/fstab
mount -a && mount
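For reference, the fstab line added above has six fields; this small sketch just splits them out to show what each one means (the trailing "1 2" are the dump flag and the fsck pass order):

```shell
# The fstab entry used above, split into its six standard fields:
# device, mount point, filesystem type, mount options, dump flag, fsck pass.
fstab_line='/dev/vdb /data/brick xfs defaults 1 2'
set -- $fstab_line   # rely on word splitting to separate the fields
dev=$1; mnt=$2; fstype=$3; opts=$4; dump=$5; pass=$6
echo "device=$dev mountpoint=$mnt fstype=$fstype options=$opts dump=$dump pass=$pass"
```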
- Run the following commands on all three servers to enable glusterd at boot and start it:
# chkconfig glusterd on
# glusterd
- Run the following commands on master1 to add the peers:
# gluster peer probe glusterfs-1.example.com
# gluster peer probe glusterfs-2.example.com
# gluster peer probe glusterfs-3.example.com
# gluster peer status
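Once the probes succeed, each peer should be reported as "State: Peer in Cluster (Connected)". A minimal sketch for counting connected peers from that output — the sample text below is illustrative, not captured from a real cluster; in practice pipe `gluster peer status` into the function:

```shell
# Count how many peers report the connected state in `gluster peer status` output.
count_connected() { grep -c 'State: Peer in Cluster (Connected)'; }

# Illustrative sample of the command's output (two peers, both connected):
sample='Hostname: glusterfs-2.example.com
State: Peer in Cluster (Connected)

Hostname: glusterfs-3.example.com
State: Peer in Cluster (Connected)'

connected=$(printf '%s\n' "$sample" | count_connected)
echo "connected peers: $connected"
```

Real usage would be `gluster peer status | count_connected`, e.g. in a readiness check that expects the count to equal the number of probed peers.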
- Create and start a volume named volume-5G on master1, replicated across the three bricks:
gluster volume create volume-5G replica 3 glusterfs-1.example.com:/data/brick/glusterfs1 glusterfs-2.example.com:/data/brick/glusterfs2 glusterfs-3.example.com:/data/brick/glusterfs3
gluster volume start volume-5G
Use the following commands to inspect the volume:
gluster volume status
gluster volume info
The Replicated mode is recommended for container environments; see the official documentation for details, and note that some volume types have been deprecated.
- Install the glusterfs client on the client machines:
# yum install glusterfs-client -y
- Mount the server volume on the client. There are two equivalent ways: the first uses mount with the glusterfs (FUSE) filesystem type, the second invokes the glusterfs client directly on the command line:
mount -t glusterfs glusterfs-1.example.com:/volume-5G /data/mounttest/
glusterfs --volfile-id=volume-5G --volfile-server=glusterfs-1.example.com /data/glusterfsclient
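Before writing data it can be worth confirming that the target path is really an active mount point, otherwise files silently land on the local disk. A minimal sketch, assuming Linux's /proc/self/mounts is available; /data/mounttest is the mount target used above:

```shell
# Check whether a path is an active mount point by scanning /proc/self/mounts
# (field 2 of each line is the mount point). Exits 0 if mounted, 1 otherwise.
is_mounted() {
    awk -v target="$1" '$2 == target { found = 1 } END { exit !found }' /proc/self/mounts
}

is_mounted / && echo "/ is mounted"
is_mounted /data/mounttest || echo "/data/mounttest is not mounted yet"
```

The same check works for any filesystem type; `mountpoint -q` from util-linux is an alternative where it is installed.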
PS:
- The gluster command-line reference can be found in the official documentation
- ": / Data / brick / glusterfs1 is already part Error" error, use the following way to clean up the environment, such as might occur when deleting a volume add
# rm -rf /data/brick/glusterfs1/.glusterfs/
# setfattr -x trusted.glusterfs.volume-id /data/brick/glusterfs1/
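When the same cleanup has to run on all three nodes, a loop can generate the per-node commands. This is a dry-run sketch that only prints the ssh commands; the root@ user and the per-node brick paths are assumptions based on this article's layout, and you would run the printed commands (or drop the command substitution and execute directly) to actually clean up:

```shell
# Build the per-node cleanup commands for bricks glusterfs1..glusterfs3
# (dry run: commands are printed, not executed).
cleanup_cmds=$(for i in 1 2 3; do
    printf 'ssh root@glusterfs-%s.example.com "rm -rf /data/brick/glusterfs%s/.glusterfs/ && setfattr -x trusted.glusterfs.volume-id /data/brick/glusterfs%s/"\n' "$i" "$i" "$i"
done)
echo "$cleanup_cmds"
```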
reference:
https://www.cnblogs.com/jicki/p/5801712.html
https://www.ibm.com/developerworks/cn/opensource/os-cn-glusterfs-docker-volume/index.html
https://jimmysong.io/kubernetes-handbook/practice/using-glusterfs-for-persistent-storage.html