Environment:

| OS | IP address | Hostname (login user) | Roles |
| --- | --- | --- | --- |
| CentOS 7.4 64-bit 1611 | 10.199.100.170 | dlp (yzyu), ceph-client (root) | admin-node, ceph-client |
| CentOS 7.4 64-bit 1611 | 10.199.100.171 | node1 (yzyu), one extra disk added | mon-node, osd0-node, mds-node |
| CentOS 7.4 64-bit 1611 | 10.199.100.172 | node2 (yzyu), one extra disk added | mon-node, osd1-node |
Procedure:
- Configure the base environment;
- Configure the NTP time service;
- Install the Ceph packages on the dlp node, node1, node2, and the client node;
- From the dlp admin node, register and manage the storage nodes;
- Configure the Ceph mon (monitor) daemons;
- Configure the Ceph osd (storage) daemons;
- Verify and view the Ceph cluster status;
- Configure the Ceph mds (metadata) daemon;
- Configure the Ceph client;
- Test client-side storage;
- Troubleshooting notes;
- Configure the base environment:
[root@dlp ~]# useradd dhhy
[root@dlp ~]# echo "dhhy" |passwd --stdin dhhy
[root@dlp ~]# cat <<END >>/etc/hosts
192.168.100.101 dlp
192.168.100.102 node1
192.168.100.103 node2
192.168.100.104 ceph-client
END
[root@dlp ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy
[root@dlp ~]# chmod 0440 /etc/sudoers.d/dhhy
[root@node1 ~]# useradd dhhy
[root@node1 ~]# echo "dhhy" |passwd --stdin dhhy
[root@node1 ~]# cat <<END >>/etc/hosts
192.168.100.101 dlp
192.168.100.102 node1
192.168.100.103 node2
192.168.100.104 ceph-client
END
[root@node1 ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy
[root@node1 ~]# chmod 0440 /etc/sudoers.d/dhhy
[root@node2 ~]# useradd dhhy
[root@node2 ~]# echo "dhhy" |passwd --stdin dhhy
[root@node2 ~]# cat <<END >>/etc/hosts
192.168.100.101 dlp
192.168.100.102 node1
192.168.100.103 node2
192.168.100.104 ceph-client
END
[root@node2 ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy
[root@node2 ~]# chmod 0440 /etc/sudoers.d/dhhy
[root@ceph-client ~]# useradd dhhy
[root@ceph-client ~]# echo "dhhy" |passwd --stdin dhhy
[root@ceph-client ~]# cat <<END >>/etc/hosts
192.168.100.101 dlp
192.168.100.102 node1
192.168.100.103 node2
192.168.100.104 ceph-client
END
[root@ceph-client ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy
[root@ceph-client ~]# chmod 0440 /etc/sudoers.d/dhhy
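Before moving on, a quick sanity check (not part of the original transcript) confirms that the dhhy account really has passwordless sudo on each host and that the /etc/hosts entries resolve:
[root@dlp ~]# su - dhhy -c "sudo whoami"                  ## should print "root" without asking for a password
[root@dlp ~]# getent hosts dlp node1 node2 ceph-client    ## should list all four 192.168.100.x entries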
- Configure the NTP time service:
[root@dlp ~]# yum -y install ntp ntpdate
[root@dlp ~]# sed -i '/^server/s/^/#/g' /etc/ntp.conf
[root@dlp ~]# sed -i '25aserver 127.127.1.0\nfudge 127.127.1.0 stratum 8' /etc/ntp.conf
[root@dlp ~]# systemctl start ntpd
[root@dlp ~]# systemctl enable ntpd
[root@dlp ~]# netstat -utpln
[root@node1 ~]# yum -y install ntpdate
[root@node1 ~]# /usr/sbin/ntpdate 192.168.100.101
[root@node1 ~]# echo "/usr/sbin/ntpdate 192.168.100.101" >>/etc/rc.local
[root@node1 ~]# chmod +x /etc/rc.local
[root@node2 ~]# yum -y install ntpdate
[root@node2 ~]# /usr/sbin/ntpdate 192.168.100.101
[root@node2 ~]# echo "/usr/sbin/ntpdate 192.168.100.101" >>/etc/rc.local
[root@node2 ~]# chmod +x /etc/rc.local
[root@ceph-client ~]# yum -y install ntpdate
[root@ceph-client ~]# /usr/sbin/ntpdate 192.168.100.101
[root@ceph-client ~]# echo "/usr/sbin/ntpdate 192.168.100.101" >>/etc/rc.local
[root@ceph-client ~]# chmod +x /etc/rc.local
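A quick way to confirm that time synchronization is actually working before installing Ceph (an illustrative check, not part of the original transcript):
[root@dlp ~]# ntpq -p                                    ## on dlp, the LOCAL(0) peer should be marked with "*" once the local server has synced
[root@node1 ~]# /usr/sbin/ntpdate -q 192.168.100.101     ## query-only: shows each node's offset against dlp without changing the clock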
- Install Ceph on the dlp node, node1, node2, and the client node:
[root@dlp ~]# yum -y install yum-utils
[root@dlp ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
[root@dlp ~]# yum -y install epel-release --nogpgcheck
[root@dlp ~]# cat <<END >>/etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
END
[root@dlp ~]# ls /etc/yum.repos.d/    ## the default CentOS repos, the EPEL repo, and the 163 (NetEase) Ceph repo must all be present for the install to work
bak CentOS-fasttrack.repo ceph.repo
CentOS-Base.repo CentOS-Media.repo dl.fedoraproject.org_pub_epel_7_x86_64_.repo
CentOS-CR.repo CentOS-Sources.repo epel.repo
CentOS-Debuginfo.repo CentOS-Vault.repo epel-testing.repo
[root@dlp ~]# su - dhhy
[dhhy@dlp ~]$ mkdir ceph-cluster    ## create the ceph-deploy working directory
[dhhy@dlp ~]$ cd ceph-cluster
[dhhy@dlp ceph-cluster]$ sudo yum -y install ceph-deploy    ## install the ceph-deploy admin tool
[dhhy@dlp ceph-cluster]$ sudo yum -y install ceph --nogpgcheck    ## install the main Ceph packages
[root@node1 ~]# yum -y install yum-utils
[root@node1 ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
[root@node1 ~]# yum -y install epel-release --nogpgcheck
[root@node1 ~]# cat <<END >>/etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
END
[root@node1 ~]# ls /etc/yum.repos.d/    ## the default CentOS repos, the EPEL repo, and the 163 (NetEase) Ceph repo must all be present for the install to work
bak CentOS-fasttrack.repo ceph.repo
CentOS-Base.repo CentOS-Media.repo dl.fedoraproject.org_pub_epel_7_x86_64_.repo
CentOS-CR.repo CentOS-Sources.repo epel.repo
CentOS-Debuginfo.repo CentOS-Vault.repo epel-testing.repo
[root@node1 ~]# su - dhhy
[dhhy@node1 ~]$ mkdir ceph-cluster
[dhhy@node1 ~]$ cd ceph-cluster
[dhhy@node1 ceph-cluster]$ sudo yum -y install ceph-deploy
[dhhy@node1 ceph-cluster]$ sudo yum -y install ceph --nogpgcheck
[dhhy@node1 ceph-cluster]$ sudo yum -y install deltarpm
[root@node2 ~]# yum -y install yum-utils
[root@node2 ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
[root@node2 ~]# yum -y install epel-release --nogpgcheck
[root@node2 ~]# cat <<END >>/etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
END
[root@node2 ~]# ls /etc/yum.repos.d/    ## the default CentOS repos, the EPEL repo, and the 163 (NetEase) Ceph repo must all be present for the install to work
bak CentOS-fasttrack.repo ceph.repo
CentOS-Base.repo CentOS-Media.repo dl.fedoraproject.org_pub_epel_7_x86_64_.repo
CentOS-CR.repo CentOS-Sources.repo epel.repo
CentOS-Debuginfo.repo CentOS-Vault.repo epel-testing.repo
[root@node2 ~]# su - dhhy
[dhhy@node2 ~]$ mkdir ceph-cluster
[dhhy@node2 ~]$ cd ceph-cluster
[dhhy@node2 ceph-cluster]$ sudo yum -y install ceph-deploy
[dhhy@node2 ceph-cluster]$ sudo yum -y install ceph --nogpgcheck
[dhhy@node2 ceph-cluster]$ sudo yum -y install deltarpm
[root@ceph-client ~]# yum -y install yum-utils
[root@ceph-client ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
[root@ceph-client ~]# yum -y install epel-release --nogpgcheck
[root@ceph-client ~]# cat <<END >>/etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
END
[root@ceph-client ~]# ls /etc/yum.repos.d/    ## the default CentOS repos, the EPEL repo, and the 163 (NetEase) Ceph repo must all be present for the install to work
bak CentOS-fasttrack.repo ceph.repo
CentOS-Base.repo CentOS-Media.repo dl.fedoraproject.org_pub_epel_7_x86_64_.repo
CentOS-CR.repo CentOS-Sources.repo epel.repo
CentOS-Debuginfo.repo CentOS-Vault.repo epel-testing.repo
[root@ceph-client ~]# yum -y install yum-plugin-priorities
[root@ceph-client ~]# yum -y install ceph ceph-radosgw --nogpgcheck
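With the repositories in place on all four hosts, it may help to confirm (an illustrative check) that the Jewel packages really came from the configured Ceph repo and that every host ended up on the same release:
[root@dlp ~]# yum repolist enabled | grep -i ceph        ## the Ceph and Ceph-noarch repos should be listed
[root@dlp ~]# ceph --version                             ## run on all four hosts; they should all report the same 10.2.x (Jewel) build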
- From the dlp admin node, register and manage the storage nodes:
[dhhy@dlp ceph-cluster]$ pwd    ## the current directory must be the ceph-deploy working directory
/home/dhhy/ceph-cluster
[dhhy@dlp ceph-cluster]$ ssh-keygen -t rsa    ## the admin node manages the other nodes over SSH, so create a key pair and copy the public key to each of them
[dhhy@dlp ceph-cluster]$ ssh-copy-id dhhy@dlp
[dhhy@dlp ceph-cluster]$ ssh-copy-id dhhy@node1
[dhhy@dlp ceph-cluster]$ ssh-copy-id dhhy@node2
[dhhy@dlp ceph-cluster]$ ssh-copy-id root@ceph-client
[dhhy@dlp ceph-cluster]$ cat <<END >>/home/dhhy/.ssh/config
Host dlp
Hostname dlp
User dhhy
Host node1
Hostname node1
User dhhy
Host node2
Hostname node2
User dhhy
END
[dhhy@dlp ceph-cluster]$ chmod 644 /home/dhhy/.ssh/config
[dhhy@dlp ceph-cluster]$ ceph-deploy new node1 node2    ## initialize the cluster with node1 and node2 as mon nodes
[dhhy@dlp ceph-cluster]$ cat <<END >>/home/dhhy/ceph-cluster/ceph.conf
osd pool default size = 2
END
[dhhy@dlp ceph-cluster]$ ceph-deploy install node1 node2    ## install Ceph on the nodes
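At this point the working directory on dlp should contain the files generated by ceph-deploy new; a quick look (illustrative, not from the original session):
[dhhy@dlp ceph-cluster]$ ls                                        ## expect ceph.conf, ceph.mon.keyring and ceph-deploy-ceph.log
[dhhy@dlp ceph-cluster]$ grep "osd pool default size" ceph.conf    ## confirms the replica count of 2 was appended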
- Configure the Ceph mon (monitor) daemons:
[dhhy@dlp ceph-cluster]$ ceph-deploy mon create-initial    ## initialize the mon nodes and gather the keys
Note: the node machines keep their configuration under /etc/ceph/, which is synchronized automatically from the dlp admin node.
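If the monitors formed quorum, ceph-deploy also gathers the admin and bootstrap keyrings into the working directory, and the mon daemons should be running on the nodes. A quick check (illustrative; the systemd unit name assumes the Jewel packaging):
[dhhy@dlp ceph-cluster]$ ls *.keyring                                               ## expect ceph.client.admin.keyring plus the bootstrap-mds/osd/rgw keyrings
[dhhy@dlp ceph-cluster]$ ssh dhhy@node1 "sudo systemctl is-active ceph-mon@node1"   ## expect "active"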
- Configure the Ceph osd (storage) daemons:
Configure the osd0 storage device on node1:
[dhhy@dlp ceph-cluster]$ ssh dhhy@node1    ## log in to the osd node to prepare its data directory
[dhhy@node1 ~]$ sudo fdisk /dev/sdb
n  p  Enter  Enter  Enter  p  w    ## fdisk keystrokes: create one primary partition spanning the whole disk, print it, write the table
[dhhy@node1 ~]$ sudo partx -a /dev/sdb
[dhhy@node1 ~]$ sudo mkfs -t xfs /dev/sdb1
[dhhy@node1 ~]$ sudo mkdir /var/local/osd0
[dhhy@node1 ~]$ sudo vi /etc/fstab
/dev/sdb1 /var/local/osd0 xfs defaults 0 0
:wq
[dhhy@node1 ~]$ sudo mount -a
[dhhy@node1 ~]$ sudo chmod 777 /var/local/osd0
[dhhy@node1 ~]$ sudo chown ceph:ceph /var/local/osd0/
[dhhy@node1 ~]$ ls -ld /var/local/osd0/
[dhhy@node1 ~]$ df -hT
[dhhy@node1 ~]$ exit
Configure the osd1 storage device on node2:
[dhhy@dlp ceph-cluster]$ ssh dhhy@node2
[dhhy@node2 ~]$ sudo fdisk /dev/sdb
n  p  Enter  Enter  Enter  p  w    ## fdisk keystrokes: create one primary partition spanning the whole disk, print it, write the table
[dhhy@node2 ~]$ sudo partx -a /dev/sdb
[dhhy@node2 ~]$ sudo mkfs -t xfs /dev/sdb1
[dhhy@node2 ~]$ sudo mkdir /var/local/osd1
[dhhy@node2 ~]$ sudo vi /etc/fstab
/dev/sdb1 /var/local/osd1 xfs defaults 0 0
:wq
[dhhy@node2 ~]$ sudo mount -a
[dhhy@node2 ~]$ sudo chmod 777 /var/local/osd1
[dhhy@node2 ~]$ sudo chown ceph:ceph /var/local/osd1/
[dhhy@node2 ~]$ ls -ld /var/local/osd1/
[dhhy@node2 ~]$ df -hT
[dhhy@node2 ~]$ exit
Register the node machines from the dlp admin node:
[dhhy@dlp ceph-cluster]$ ceph-deploy osd prepare node1:/var/local/osd0 node2:/var/local/osd1    ## prepare the osd nodes, pointing at the data directories created above
[dhhy@dlp ceph-cluster]$ chmod +r /home/dhhy/ceph-cluster/ceph.client.admin.keyring
[dhhy@dlp ceph-cluster]$ ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1    ## activate the osd nodes
[dhhy@dlp ceph-cluster]$ ceph-deploy admin node1 node2    ## copy the admin keyring and configuration to the node machines
[dhhy@dlp ceph-cluster]$ sudo cp /home/dhhy/ceph-cluster/ceph.client.admin.keyring /etc/ceph/
[dhhy@dlp ceph-cluster]$ sudo cp /home/dhhy/ceph-cluster/ceph.conf /etc/ceph/
[dhhy@dlp ceph-cluster]$ ls /etc/ceph/
ceph.client.admin.keyring ceph.conf rbdmap
[dhhy@dlp ceph-cluster]$ ceph quorum_status --format json-pretty    ## view detailed quorum information for the cluster
{
"election_epoch": 4,
"quorum": [
0,
1
],
"quorum_names": [
"node1",
"node2"
],
"quorum_leader_name": "node1",
"monmap": {
"epoch": 1,
"fsid": "dc679c6e-29f5-4188-8b60-e9eada80d677",
"modified": "2018-06-02 23:54:34.033254",
"created": "2018-06-02 23:54:34.033254",
"mons": [
{
"rank": 0,
"name": "node1",
"addr": "192.168.100.102:6789\/0"
},
{
"rank": 1,
"name": "node2",
"addr": "192.168.100.103:6789\/0"
}
]
}
}
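A quick sanity check (illustrative, not from the original session) that the two OSD daemons are actually running on the storage nodes; the unit names assume the osd IDs 0 and 1 created above:
[dhhy@dlp ceph-cluster]$ ssh dhhy@node1 "sudo systemctl is-active ceph-osd@0"   ## expect "active"
[dhhy@dlp ceph-cluster]$ ssh dhhy@node2 "sudo systemctl is-active ceph-osd@1"   ## expect "active"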
- Verify and view the Ceph cluster status:
[dhhy@dlp ceph-cluster]$ ceph health
HEALTH_OK
[dhhy@dlp ceph-cluster]$ ceph -s    ## view the overall cluster status
cluster 24fb6518-8539-4058-9c8e-d64e43b8f2e2
health HEALTH_OK
monmap e1: 2 mons at {node1=192.168.100.102:6789/0,node2=192.168.100.103:6789/0}
election epoch 6, quorum 0,1 node1,node2
osdmap e10: 2 osds: 2 up, 2 in
flags sortbitwise,require_jewel_osds
pgmap v20: 64 pgs, 1 pools, 0 bytes data, 0 objects
10305 MB used, 30632 MB / 40938 MB avail    ## used / available / total capacity
64 active+clean
[dhhy@dlp ceph-cluster]$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03897 root default
-2 0.01949 host node1
0 0.01949 osd.0 up 1.00000 1.00000
-3 0.01949 host node2
1 0.01949 osd.1 up 1.00000 1.00000
[dhhy@dlp ceph-cluster]$ ssh dhhy@node1    ## check node1's listening ports, configuration file, and disk usage
[dhhy@node1 ~]$ df -hT |grep sdb1
/dev/sdb1 xfs 20G 5.1G 15G 26% /var/local/osd0
[dhhy@node1 ~]$ du -sh /var/local/osd0/
5.1G /var/local/osd0/
[dhhy@node1 ~]$ ls /var/local/osd0/
activate.monmap active ceph_fsid current fsid journal keyring magic ready store_version superblock systemd type whoami
[dhhy@node1 ~]$ ls /etc/ceph/
ceph.client.admin.keyring ceph.conf rbdmap tmppVBe_2
[dhhy@node1 ~]$ cat /etc/ceph/ceph.conf
[global]
fsid = 0fcdfa46-c8b7-43fc-8105-1733bce3bfeb
mon_initial_members = node1, node2
mon_host = 192.168.100.102,192.168.100.103
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
[dhhy@node1 ~]$ exit
[dhhy@dlp ceph-cluster]$ ssh dhhy@node2    ## check node2's listening ports, configuration file, and disk usage
[dhhy@node2 ~]$ df -hT |grep sdb1
/dev/sdb1 xfs 20G 5.1G 15G 26% /var/local/osd1
[dhhy@node2 ~]$ du -sh /var/local/osd1/
5.1G /var/local/osd1/
[dhhy@node2 ~]$ ls /var/local/osd1/
activate.monmap active ceph_fsid current fsid journal keyring magic ready store_version superblock systemd type whoami
[dhhy@node2 ~]$ ls /etc/ceph/
ceph.client.admin.keyring ceph.conf rbdmap tmpmB_BTa
[dhhy@node2 ~]$ cat /etc/ceph/ceph.conf
[global]
fsid = 0fcdfa46-c8b7-43fc-8105-1733bce3bfeb
mon_initial_members = node1, node2
mon_host = 192.168.100.102,192.168.100.103
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
[dhhy@node2 ~]$ exit
- Configure the Ceph mds (metadata) daemon:
[dhhy@dlp ceph-cluster]$ ceph-deploy mds create node1
[dhhy@dlp ceph-cluster]$ ssh dhhy@node1
[dhhy@node1 ~]$ netstat -utpln |grep 68
(No info could be read for "-p": geteuid()=1000 but you should be root.)
tcp 0 0 0.0.0.0:6800 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6801 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6802 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6803 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6804 0.0.0.0:* LISTEN -
tcp 0 0 192.168.100.102:6789 0.0.0.0:* LISTEN -
[dhhy@node1 ~]$ exit
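An additional check (illustrative) that the new MDS registered with the cluster; before a file system exists it should appear as a standby daemon:
[dhhy@dlp ceph-cluster]$ ceph mds stat    ## expect output along the lines of "e2:, 1 up:standby"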
- Configure the Ceph client:
[dhhy@dlp ceph-cluster]$ ceph-deploy install ceph-client    ## when prompted for a password, enter dhhy
[dhhy@dlp ceph-cluster]$ ceph-deploy admin ceph-client
[dhhy@dlp ceph-cluster]$ ssh root@ceph-client
[root@ceph-client ~]# chmod +r /etc/ceph/ceph.client.admin.keyring
[root@ceph-client ~]# exit
[dhhy@dlp ceph-cluster]$ ceph osd pool create cephfs_data 128    ## data pool
pool 'cephfs_data' created
[dhhy@dlp ceph-cluster]$ ceph osd pool create cephfs_metadata 128    ## metadata pool
pool 'cephfs_metadata' created
[dhhy@dlp ceph-cluster]$ ceph fs new cephfs cephfs_metadata cephfs_data    ## create the file system (argument order is <fs name> <metadata pool> <data pool>)
new fs with metadata pool 2 and data pool 1
[dhhy@dlp ceph-cluster]$ ceph fs ls    ## list file systems
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[dhhy@dlp ceph-cluster]$ ceph -s
cluster 24fb6518-8539-4058-9c8e-d64e43b8f2e2
health HEALTH_WARN
clock skew detected on mon.node2
too many PGs per OSD (320 > max 300)
Monitor clock skew detected
monmap e1: 2 mons at {node1=192.168.100.102:6789/0,node2=192.168.100.103:6789/0}
election epoch 6, quorum 0,1 node1,node2
fsmap e5: 1/1/1 up {0=node1=up:active}
osdmap e17: 2 osds: 2 up, 2 in
flags sortbitwise,require_jewel_osds
pgmap v54: 320 pgs, 3 pools, 4678 bytes data, 24 objects
10309 MB used, 30628 MB / 40938 MB avail
320 active+clean
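The "too many PGs per OSD" warning is simple arithmetic: the default rbd pool has 64 PGs and the two new pools have 128 each, giving 320 PGs; with "osd pool default size = 2" that is 640 placement-group copies, and 640 copies spread across 2 OSDs is 320 PGs per OSD, above the default warning threshold of 300. In a two-OSD lab the warning can be avoided by creating the CephFS pools with fewer PGs in the first place (a sketch only; pg_num cannot be reduced after a pool is created):
[dhhy@dlp ceph-cluster]$ ceph osd pool create cephfs_data 64        ## (64 + 64 + 64) PGs * 2 copies / 2 OSDs = 192 PGs per OSD, under the 300 limit
[dhhy@dlp ceph-cluster]$ ceph osd pool create cephfs_metadata 64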
- Test client-side storage:
[dhhy@dlp ceph-cluster]$ ssh root@ceph-client
[root@ceph-client ~]# mkdir /mnt/ceph
[root@ceph-client ~]# grep key /etc/ceph/ceph.client.admin.keyring |awk '{print $3}' >>/etc/ceph/admin.secret
[root@ceph-client ~]# cat /etc/ceph/admin.secret
AQCd/x9bsMqKFBAAZRNXpU5QstsPlfe1/FvPtQ==
[root@ceph-client ~]# mount -t ceph 192.168.100.102:6789:/ /mnt/ceph/ -o name=admin,secretfile=/etc/ceph/admin.secret
[root@ceph-client ~]# df -hT |grep ceph
192.168.100.102:6789:/ ceph 40G 11G 30G 26% /mnt/ceph
[root@ceph-client ~]# dd if=/dev/zero of=/mnt/ceph/1.file bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 14.2938 s, 75.1 MB/s
[root@ceph-client ~]# ls /mnt/ceph/
1.file
[root@ceph-client ~]# df -hT |grep ceph
192.168.100.102:6789:/ ceph 40G 13G 28G 33% /mnt/ceph
[root@ceph-client ~]# mkdir /mnt/ceph1
[root@ceph-client ~]# mount -t ceph 192.168.100.103:6789:/ /mnt/ceph1/ -o name=admin,secretfile=/etc/ceph/admin.secret
[root@ceph-client ~]# df -hT |grep ceph
192.168.100.102:6789:/ ceph 40G 15G 26G 36% /mnt/ceph
192.168.100.103:6789:/ ceph 40G 15G 26G 36% /mnt/ceph1
[root@ceph-client ~]# ls /mnt/ceph1/
1.file 2.file
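If the CephFS mount on the client should survive a reboot, an /etc/fstab entry along these lines could be added (a sketch; the secretfile path is the one created above):
192.168.100.102:6789:/ /mnt/ceph ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev 0 0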
- Troubleshooting notes:
1. If something goes wrong during configuration and the cluster has to be recreated or Ceph reinstalled, first wipe the existing cluster data with the following commands:
[dhhy@dlp ceph-cluster]$ ceph-deploy purge node1 node2
[dhhy@dlp ceph-cluster]$ ceph-deploy purgedata node1 node2
[dhhy@dlp ceph-cluster]$ ceph-deploy forgetkeys && rm ceph.*
2. When installing Ceph on the node machines and the client from the dlp node, yum may time out; this is usually a network problem, and re-running the install command a few times generally succeeds.
3. When ceph-deploy is used on the dlp node to manage the node configuration, the current directory must be /home/dhhy/ceph-cluster/, otherwise it complains that ceph.conf cannot be found.
4. The data directories /var/local/osd*/ on the osd nodes must have mode 777 and be owned by user and group ceph.
5. If an error occurs while installing Ceph from the dlp admin node:
Workaround:
1. Re-install the epel-release package on node1 or node2 with yum;
2. If that still does not solve it, download the package and install it locally with the command shown below.
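Assuming the epel-release rpm for EL7 has already been downloaded to the node, a local install would look something like this (the exact file name depends on the version downloaded):
[root@node1 ~]# yum -y localinstall epel-release-latest-7.noarch.rpm --nogpgcheck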
6. If the main configuration file /home/dhhy/ceph-cluster/ceph.conf on the dlp admin node changes, it must be pushed to the node machines, and once the nodes have received the new file their daemons must be restarted; a typical command sequence is shown below.
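A typical way to do this with ceph-deploy (a sketch, assuming the same node names as above):
[dhhy@dlp ceph-cluster]$ ceph-deploy --overwrite-conf config push node1 node2    ## push the updated ceph.conf to the nodes
[dhhy@node1 ~]$ sudo systemctl restart ceph-mon.target ceph-osd.target           ## repeat on node2 so the daemons re-read the configuration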
7. If the cluster status viewed from the dlp admin node reports a clock-skew warning, the cause is that the node clocks are out of sync.
Fix: restart the ntpd service on the dlp node and have the node machines sync their time again, as shown below:
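For example (reusing the commands from the NTP setup above):
[root@dlp ~]# systemctl restart ntpd
[root@node1 ~]# /usr/sbin/ntpdate 192.168.100.101    ## repeat on node2, then check "ceph -s" again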
8. When managing the node machines from the dlp admin node, always work from /home/dhhy/ceph-cluster/, otherwise ceph-deploy cannot find the ceph.conf configuration file.