Ubuntu 18.04: Deploying Ceph Luminous 12.2.12 and Creating OSDs

1. Setting up Ceph on Ubuntu

  • Install ceph-deploy
wget --no-check-certificate -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
echo deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt update
sudo apt install ceph-deploy
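  • A quick sanity check: once installed, ceph-deploy should print its version (the exact number depends on which repository supplied the package).
# ceph-deploy --version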

2. Node Preparation

  • Install NTP (either of the following packages works)
sudo apt install ntpsec
apt-get install ntp
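  • Ceph monitors are sensitive to clock skew, so it is worth confirming that time synchronization is actually running on every node. Either of the following gives a rough check, depending on which NTP implementation was installed:
# timedatectl
# ntpq -p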
  • Configure hosts
  • Check the hostname
# hostname
node1
  • The admin node needs /etc/hosts entries for all the other nodes
# vim /etc/hosts
192.168.1.20 node1
192.168.1.21 node2
192.168.1.22 node3
  • Configure passwordless SSH for root so the admin node can log in to the other nodes without a password
ssh-keygen
ssh-copy-id root@node2
ssh-copy-id root@node3
  • If logging in to node2 works without a password, the key setup succeeded.
ssh node2
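  • Optionally, a ~/.ssh/config on the admin node tells ceph-deploy which user to log in as; a minimal sketch (host names match the /etc/hosts entries above):
# vim ~/.ssh/config
Host node2
    Hostname node2
    User root
Host node3
    Hostname node3
    User root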

3. Building the Cluster

mkdir my-cluster
cd my-cluster
  • Create the cluster from the admin node
# ceph-deploy new node1
  • Every node needs python-minimal installed
apt install python-minimal -y 
  • Install the Ceph packages on all nodes
ceph-deploy install node1 node2 node3
  • Installation log
[node3][DEBUG ] ceph is already the newest version (12.2.12-0ubuntu0.18.04.4)
[node3][DEBUG ] ceph-mon is already the newest version (12.2.12-0ubuntu0.18.04.4)
[node3][DEBUG ] ceph-osd is already the newest version (12.2.12-0ubuntu0.18.04.4)
[node3][DEBUG ] radosgw is already the newest version (12.2.12-0ubuntu0.18.04.4)
[node3][DEBUG ] ceph-mds is already the newest version (12.2.12-0ubuntu0.18.04.4)
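  • If apt resolves to an unexpected Ceph version, ceph-deploy install can also pin the release explicitly. This is optional here, since the run above already picked up 12.2.12 from the Ubuntu archive:
ceph-deploy install --release luminous node1 node2 node3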

3.1 Using Mirror Sources (Tsinghua for Ubuntu, NetEase 163 for Ceph)

  • On Ubuntu 18.04, add the following sources on every node, then run apt update
# Source-code mirrors are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse

  • On the admin node, point ceph-deploy at the NetEase Ceph repository and rerun the install
export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/debian-luminous
export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
ceph-deploy install node1 node2 node3 node4
  • Add public_network to ceph.conf (a sketch of the resulting [global] section follows below)
# vim ceph.conf 
public_network = 192.168.1.0/24
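  • For reference, after this edit the [global] section of ceph.conf should look roughly like the sketch below. The fsid is generated by ceph-deploy new and will differ; the value shown is only a placeholder:
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon_initial_members = node1
mon_host = 192.168.1.20
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.1.0/24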

4. Creating Monitors (mon)

ceph-deploy mon create-initial
  • Copy the configuration files to every node
ceph-deploy admin node1 node2 node3
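  • If running ceph commands later fails with a keyring permission error, relaxing the admin keyring's read permission on each node is the usual fix (harmless in a test cluster):
sudo chmod +r /etc/ceph/ceph.client.admin.keyring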
  • Create three monitors
ceph-deploy mon create node1 node2 node3
  • Check the cluster status
# ceph -s
    health: HEALTH_OK
    mon: 3 daemons, quorum node1,node2,node3

5. Creating Managers (mgr)

ceph-deploy mgr create node1 node2 node3
ceph mgr module enable dashboard
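  • In Luminous the dashboard is served by the active mgr (port 7000 by default); the exact URL can be read back from the cluster:
# ceph mgr services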

6. Creating OSDs

  • Check the disk's filesystem type
# blkid -o value -s TYPE /dev/sda1
ext4
# umount /dev/sda1
# mkfs.ext4 /dev/sda
  • Zap the disk
ceph-deploy disk zap node1:/dev/sda
  • ~The ceph-deploy prepare/activate commands can no longer be used~
ceph-deploy osd prepare --fs-type xfs node1:/dev/sda
sda           8:0    0   7.3T  0 disk 
├─sda1        8:1    0   100M  0 part /var/lib/ceph/osd/ceph-0
└─sda2        8:2    0   7.3T  0 part 
  • ~Activate the OSD~
ceph-deploy osd activate node1:/dev/sda1
# ceph-deploy disk list node1
[node1][DEBUG ] /dev/sda :
[node1][DEBUG ]  /dev/sda1 ceph data, active, cluster ceph, osd.0, block /dev/sda2
[node1][DEBUG ]  /dev/sda2 ceph block, for /dev/sda1
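  • This is the old ceph-disk layout: the small sda1 partition holds the OSD metadata and is mounted at /var/lib/ceph/osd/ceph-0, while sda2 is the BlueStore block device. On the node, the mounted directory should contain a block symlink pointing at that partition:
# ls -l /var/lib/ceph/osd/ceph-0/block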

6.1 Removing an OSD

  • Check OSD status
# ceph osd tree
# ceph osd stat
# ceph osd status
# ceph -s
  • Mark the OSD out of the cluster
# ceph osd out 0
marked out osd.0. 
  • Mark the OSD down, remove it from the CRUSH map, and delete its authentication key
# ceph osd down 0
marked down osd.0. 
# ceph osd crush remove osd.0
removed item id 0 name 'osd.0' from crush map

# ceph auth del osd.0
updated
  • Stop the running OSD service
systemctl status ceph-osd@0.service
systemctl stop ceph-osd@0.service
# ceph osd rm 0
removed osd.0
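  • Note that marking the OSD out (the first step above) triggers data migration off it; it is safest to wait until the cluster settles back to HEALTH_OK before the final ceph osd rm:
# ceph health
# ceph -w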

6.2 Adding an OSD

  • Format the disk
# mkfs.xfs -f /dev/sda
# blkid -o value -s TYPE /dev/sda
xfs
  • Upgrade ceph-deploy
# ceph-deploy --version
1.5.38
# sudo apt-get install ceph-deploy
# ceph-deploy --version
2.0.1
# ceph-deploy disk zap node1:/dev/sda
# ceph-deploy osd create node1:/dev/sda
  • OSD creation failed; the error log:
[node1][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[node1][DEBUG ] --> Absolute path not found for executable: lvs
[node1][WARNIN] -->  OSError: [Errno 2] No such file or directory
[node1][DEBUG ] --> Ensure $PATH environment variable contains common executable locations
[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
  • Install LVM (a maddening missing dependency)
apt install -y lvm2
  • The OSD is created successfully
ceph-deploy osd create --data /dev/sda node1
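  • On the node itself, ceph-volume can confirm which logical volume now backs the OSD (the reported OSD id depends on the cluster state):
# ceph-volume lvm list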
  • Add other disks as OSDs
# mkfs.xfs -f /dev/sdb
# ceph-deploy osd create --data /dev/sdb node1
# lvdisplay 
  LV Path                /dev/lvm_01/lv01
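  • The post does not show how /dev/lvm_01/lv01 was created; on a spare disk it could be prepared roughly as follows (the device name /dev/sdc is only an assumption):
# pvcreate /dev/sdc
# vgcreate lvm_01 /dev/sdc
# lvcreate -l 100%FREE -n lv01 lvm_01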
  • Add an LVM logical volume as an OSD
ceph-deploy osd create --data  /dev/lvm_01/lv01 node2
  • Check OSD status
ceph osd status
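  • Besides ceph osd status, per-OSD utilization and placement in the CRUSH tree can be checked with:
# ceph osd df
# ceph osd tree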
