Problems Encountered in GlusterFS Operations

Author: 石文
Date: 2018-11-10


1. After a volume is deleted, the directory it resided in can no longer be used to create a new volume. The error messages and log output are as follows:

[root@10-211-105-75 glusterfs]# gluster volume start nfs
volume start: nfs: failed: Commit failed on localhost. Please check log file for details.
[root@10-211-105-75 glusterfs]# gluster volume start nfs
volume start: nfs: failed: Commit failed on 10.211.105.74. Please check log file for details.
Commit failed on 10.211.105.78. Please check log file for details.
Commit failed on 10.211.105.73. Please check log file for details.
Commit failed on 10.211.105.77. Please check log file for details.
Commit failed on 10.211.105.76. Please check log file for details.
[root@10-211-105-75 data0]# gluster volume create nfs3 replica 3 {10.211.105.73,10.211.105.75,10.211.105.77,10.211.105.78,10.211.105.76,10.211.105.74}:/data0/nfs3
volume create: nfs3: success: please start the volume to access data
[root@10-211-105-75 data0]# gluster volume delete nfs3
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: nfs3: success
[root@10-211-105-75 data0]# gluster volume create nfs3 replica 3 {10.211.105.73,10.211.105.75,10.211.105.77,10.211.105.78,10.211.105.76,10.211.105.74}:/data0/nfs3
volume create: nfs3: failed: /data0/nfs3 is already part of a volume
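The create fails because the brick directory still carries GlusterFS extended attributes and a .glusterfs metadata directory left over from the deleted volume. A common cleanup, assuming the old data under /data0/nfs3 can be discarded, is to run the following on every brick host before recreating the volume:

setfattr -x trusted.glusterfs.volume-id /data0/nfs3
setfattr -x trusted.gfid /data0/nfs3        # this xattr may be absent on the brick root; the error can be ignored
rm -rf /data0/nfs3/.glusterfs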

2. To make monitoring easier, the GlusterFS brick port range was restricted so that only a single port was available for bricks. This caused volumes to fail to start:

[2018-11-10 09:13:32.766334] I [glusterd-utils.c:6089:glusterd_brick_start] 0-management: starting a fresh brick process for brick /data0/nfs
[2018-11-10 09:13:32.766377] E [MSGID: 106611] [glusterd-utils.c:2036:glusterd_volume_start_glusterfs] 0-management: All the ports in the range are exhausted, can't start brick /data0/nfs for volume nfs
[2018-11-10 09:13:32.766393] E [MSGID: 106005] [glusterd-utils.c:6095:glusterd_brick_start] 0-management: Unable to start brick 10.211.105.75:/data0/nfs
[2018-11-10 09:13:32.766430] E [MSGID: 106122] [glusterd-mgmt.c:333:gd_mgmt_v3_commit_fn] 0-management: Volume start commit failed.
[2018-11-10 09:13:32.766441] E [MSGID: 106122] [glusterd-mgmt.c:1637:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Start on local node
[2018-11-10 09:13:32.766452] E [MSGID: 106122] [glusterd-mgmt.c:2251:glusterd_mgmt_v3_initiate_all_phases] 0-management: Commit Op Failed
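glusterd assigns brick ports from a configurable range, so restricting it to a single port means the second brick to start (or a stale process still bound to that port) exhausts the range immediately. A sketch of restoring a wider range, with example values, in /etc/glusterfs/glusterd.vol inside the existing volume management block:

option base-port 49152
option max-port 49251

glusterd must be restarted afterwards (systemctl restart glusterd) for the change to take effect.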

3. a. Once a volume has been created and started, its brick processes run. Even after the volume is deleted, the processes do not exit (their ports stay occupied); they exit only after the volume's brick directory is removed. b. Stopping the volume with gluster volume stop nfs works and the processes exit, but the volume still cannot be restarted, because no suitable port is available:

[2018-11-10 09:45:20.501848] I [glusterd-utils.c:6089:glusterd_brick_start] 0-management: starting a fresh brick process for brick /data0/nfs
[2018-11-10 09:45:20.501903] E [MSGID: 106611] [glusterd-utils.c:2036:glusterd_volume_start_glusterfs] 0-management: All the ports in the range are exhausted, can't start brick /data0/nfs for volume nfs
[2018-11-10 09:45:20.501918] E [MSGID: 106005] [glusterd-utils.c:6095:glusterd_brick_start] 0-management: Unable to start brick 10.211.105.75:/data0/nfs
[2018-11-10 09:45:20.501957] E [MSGID: 106122] [glusterd-mgmt.c:333:gd_mgmt_v3_commit_fn] 0-management: Volume start commit failed.
[2018-11-10 09:45:20.501968] E [MSGID: 106122] [glusterd-mgmt.c:1637:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Start on local node
[2018-11-10 09:45:20.501978] E [MSGID: 106122] [glusterd-mgmt.c:2251:glusterd_mgmt_v3_initiate_all_phases] 0-management: Commit Op Failed
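Both observations come back to port bookkeeping: with only one port allowed, anything still bound to it blocks every subsequent start. It can help to compare what is actually listening with what glusterd believes:

ss -tlnp | grep glusterfsd     # brick processes still holding ports
gluster volume status nfs      # port assignments as seen by glusterd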

4. The management log is located at /var/log/glusterfs/glusterd.log, but its timestamps do not match the machine's local time.
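The timestamps appear to be UTC rather than local time, which can be checked by comparing the newest entry against the system clock:

date && date -u                               # local time vs UTC
tail -n 1 /var/log/glusterfs/glusterd.log     # timestamp should match the UTC value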

5. When running gluster volume stop nfs, bricks on some machines may fail to stop:

[root@10-211-105-73 ~]# gluster volume start nfs
volume start: nfs: failed: Pre Validation failed on 10.211.105.75. Volume nfs already started

Volume info shows the volume's status as Stopped:

[root@10-211-105-73 ~]# gluster volume info nfs
 
Volume Name: nfs
Type: Distributed-Replicate
Volume ID: 6f869b3b-a1d4-4fda-8567-4c3db9e1a43e
Status: Stopped
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.211.105.73:/data0/nfs
Brick2: 10.211.105.75:/data0/nfs
Brick3: 10.211.105.77:/data0/nfs
Brick4: 10.211.105.78:/data0/nfs
Brick5: 10.211.105.76:/data0/nfs
Brick6: 10.211.105.74:/data0/nfs
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

Logging in to the machine that reported the error shows the brick process is still running:

[root@xdata-control glusterfs]# ssh 10.211.105.75
Last login: Sat Nov 10 19:05:13 2018 from 172.19.17.58
[root@10-211-105-75 ~]# ps -ef | grep gluster
root     182115      1  0 Nov08 ?        00:00:03 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root     451997      1  0 17:50 ?        00:00:00 /usr/sbin/glusterfsd -s 10.211.105.75 --volfile-id nfs.10.211.105.75.data0-nfs -p /var/run/gluster/vols/nfs/10.211.105.75-data0-nfs.pid -S /var/run/gluster/647967c9a8384620.socket --brick-name /data0/nfs -l /var/log/glusterfs/bricks/data0-nfs.log --xlator-option *-posix.glusterd-uuid=d94b305b-e5b2-4f6b-87dd-9b25d5042de2 --process-name brick --brick-port 49000 --xlator-option nfs-server.listen-port=49000
root     452022      1  0 17:50 ?        00:00:19 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/5f69239b6d271b28.socket --xlator-option *replicate*.node-uuid=d94b305b-e5b2-4f6b-87dd-9b25d5042de2 --process-name glustershd
root     458546 458466  0 19:06 pts/0    00:00:00 grep --color=auto gluster

The brick process can be stopped with kill, but as long as the glusterd process exists, glusterd keeps its record of that brick process.
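One recovery path, assuming it is acceptable to restart glusterd on the offending node, is to kill the stale brick, restart glusterd so it rebuilds its brick state, and then force-start the volume:

kill 451997                      # the stale glusterfsd PID from the ps output above
systemctl restart glusterd       # refreshes glusterd's record of brick processes
gluster volume start nfs force   # then retry from any node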

6. Some peers show "State: Peer Rejected (Connected)":

[root@10-211-106-108 ~]# gluster peer status
Number of Peers: 11

Hostname: 10.211.105.202
Uuid: 7b719b24-3aae-45da-b5c6-240ce0429007
State: Peer Rejected (Connected)

Hostname: 10.211.105.238
Uuid: f5499be2-6a3f-4f84-b480-50b7b75194d8
State: Peer in Cluster (Connected)

Hostname: 10.211.106.45
Uuid: 9643e5f4-6090-4275-a97b-ed18e7c5b894
State: Peer in Cluster (Connected)

Hostname: 10.211.105.168
Uuid: f2248e44-18ac-4f3e-825f-b62f69bcf31d
State: Peer Rejected (Connected)

Hostname: 10.211.106.42
Uuid: f34f6655-1376-4e4c-b81c-7f6d646a6131
State: Peer Rejected (Connected)

Hostname: 10.211.105.71
Uuid: 981d32ab-c0f3-4b80-b42f-e6a5435a5ea1
State: Peer Rejected (Connected)

Hostname: 10.211.106.76
Uuid: cd917750-b7f1-4837-a192-7705e77252a7
State: Peer Rejected (Connected)

Hostname: 10.211.105.109
Uuid: b7f2c400-ee59-4343-824f-4350995e09f5
State: Peer Rejected (Connected)

Hostname: 10.211.106.113
Uuid: a625c870-49f4-47ce-ab4f-34055e41f6ff
State: Peer Rejected (Connected)

Hostname: 10.211.106.10
Uuid: cba79243-1de6-436b-95ad-e5f8dab06d2f
State: Peer Rejected (Connected)

Hostname: 10.211.105.141
Uuid: 63cc37c5-1575-4bb1-bd81-4b8ba8b62763
State: Peer Rejected (Connected)
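Peer Rejected usually means the node's volume configuration checksum disagrees with the rest of the cluster. A commonly documented recovery, assuming the rejected node's configuration can simply be resynced from a healthy peer, is run on the rejected node:

systemctl stop glusterd
cp /var/lib/glusterd/glusterd.info /tmp/     # preserve this node's UUID
rm -rf /var/lib/glusterd/*
cp /tmp/glusterd.info /var/lib/glusterd/
systemctl start glusterd
gluster peer probe 10.211.105.238            # probe any healthy peer to resync config
systemctl restart glusterd                   # then verify with gluster peer status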
 

7. In a distributed-replicated volume, if an instance hosting a brick crashes unexpectedly, the volume does not migrate that brick anywhere. This limits the value of distributed-replicated volumes; plain replicated volumes are recommended instead.
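For comparison, a plain replica-3 volume consists of a single replica set with no distribution layer. The volume name nfsrep and brick paths below are only illustrative:

gluster volume create nfsrep replica 3 10.211.105.73:/data0/nfsrep 10.211.105.75:/data0/nfsrep 10.211.105.77:/data0/nfsrep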

8. The glustershd process is the self-heal daemon for replicated volumes (it performs data self-healing only for replicated volumes):

/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/e2a1f33edf6a8edf.socket --xlator-option *replicate*.node-uuid=7fe14f7e-1a8c-47d9-9157-ae7d5f48c9b9 --process-name glustershd
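Its progress can be inspected per volume, e.g. for the nfs volume used above:

gluster volume heal nfs info          # files currently pending self-heal, per brick
gluster volume heal nfs statistics    # heal counts from recent crawls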

9. How do you adjust the log level of a GlusterFS brick? Set the volume option:

diagnostics.brick-log-level

The available levels are:

DEBUG, INFO, WARNING, ERROR, CRITICAL, NONE, TRACE

To adjust the client log level, set:

diagnostics.client-log-level
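Both options are applied per volume with gluster volume set; for example, on the nfs volume (levels chosen arbitrarily):

gluster volume set nfs diagnostics.brick-log-level DEBUG
gluster volume set nfs diagnostics.client-log-level WARNING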

Reposted from blog.csdn.net/zhinengyunwei/article/details/103976581