Ceph-deploy Installation

Ceph installation guide

1. Install NTP

We recommend installing NTP on all Ceph nodes (especially the Ceph Monitor nodes) to prevent failures caused by clock drift; see the clock settings documentation for details.

sudo yum install ntp ntpdate ntp-doc
# or: yum -y install ntpdate ntp
ntpdate ntp.aliyun.com
vim /etc/ntp.conf

server ntp1.aliyun.com iburst
systemctl restart ntpd
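To make ntpd start at boot and confirm it is actually syncing, a quick check with the standard NTP tools (the peer list will differ on your hosts):

systemctl enable ntpd
ntpq -p
# the peer prefixed with * is the currently selected time source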
2. Install an SSH server

Run the following on all Ceph nodes:

sudo yum install openssh-server
3. Disable the firewall and SELinux, and configure the hosts file

Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
Disable SELinux
setenforce 0
vim /etc/selinux/config
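setenforce 0 only disables enforcement for the running session; to keep SELinux off after a reboot, set the mode in the file opened above, for example:

SELINUX=disabled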
Configure the hosts file

vim /etc/hosts

192.168.0.88 controller
192.168.0.197 node1
192.168.0.245 node2
192.168.0.148 node3
Set the hostname (on each node, use the name defined in /etc/hosts)
hostnamectl set-hostname node1
4. Create a new user on each Ceph node

# Create the account
sudo useradd -d /home/ceph-admin -m ceph-admin
# Set the password
sudo passwd ceph-admin
echo "ceph-admin" | sudo passwd --stdin ceph-admin
# Make sure the newly created user has sudo privileges on every Ceph node
echo "ceph-admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-admin
sudo chmod 0440 /etc/sudoers.d/ceph-admin
# Verify
cat /etc/sudoers.d/ceph-admin
5. Enable passwordless SSH login

Because ceph-deploy does not support password input, you must generate an SSH key on the admin node and distribute its public key to each Ceph node. ceph-deploy will attempt to generate SSH key pairs for the initial monitors.

1. Generate an SSH key pair, but do not use sudo or the root user. When prompted "Enter passphrase", just press Enter so the passphrase is empty:
ssh-keygen

Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ceph-admin/.ssh/id_rsa.
Your public key has been saved in /home/ceph-admin/.ssh/id_rsa.pub.
2. Copy the public key to each Ceph node

ssh-copy-id ceph-admin@controller
ssh-copy-id ceph-admin@node1
ssh-copy-id ceph-admin@node2
ssh-copy-id ceph-admin@node3
3. Configure sudo to not require a tty (on the control node)

sudo sed -i 's/Defaults requiretty/#Defaults requiretty/' /etc/sudoers
4. (Recommended) Edit the ~/.ssh/config file on the ceph-deploy admin node so that ceph-deploy can log in to the Ceph nodes as the user you created, without having to pass --username {username} on every ceph-deploy run. This also simplifies ssh and scp usage. Replace {username} with the user name you created.

Host node1
Hostname node1
User {username}
Host node2
Hostname node2
User {username}
Host node3
Hostname node3
User {username}
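ssh will ignore a config file that other users can write to, so restrict its permissions; with the config in place, passwordless login can be verified from the admin node (host names as defined in /etc/hosts above):

chmod 600 ~/.ssh/config
ssh node1 hostname
# should print node1 without prompting for a password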
6. Install the ceph-deploy tool on the admin node

Add the yum repo configuration (the yum repo needs to be added on every node)
Repo for the Luminous release:
export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-luminous/el7
export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
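These environment variables are meant to point ceph-deploy at the 163 mirror when it installs packages. Alternatively, a Luminous repo file can be written by hand, mirroring the Jewel one shown below (a sketch using the same mirror, rpm-luminous path):

[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch/
gpgcheck=0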

Repo for the Jewel release:
yum clean all
rm -rf /etc/yum.repos.d/*.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
sed -i 's/$releasever/7/g' /etc/yum.repos.d/CentOS-Base.repo

vim /etc/yum.repos.d/ceph.repo
Add the following:
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
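With gpgcheck=0 the packages are not signature-checked; if you prefer verification, each section can instead use gpgcheck=1 together with the release key from the same mirror (the URL matches the GPG key exported for Luminous above):

gpgcheck=1
gpgkey=http://mirrors.163.com/ceph/keys/release.asc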
Refresh the yum repos and install the ceph-deploy management tool
[root@ceph01 ~]# yum clean all && yum list
[root@ceph01 ~]# yum -y install ceph-deploy
7. Create the monitor service

mkdir my-cluster && cd my-cluster
# the mon will be installed on node1
ceph-deploy new node1
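ceph-deploy new writes the initial cluster files into the current directory; after it finishes the working directory should contain roughly:

ls
# ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring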
8. Change the replica count

[ceph-admin@controller ceph]# vim ceph.conf
Change the default replica count in the configuration file from 3 to 2, so that the cluster can reach active+clean with only two OSDs; add the osd_pool_default_size line to the [global] section (optional):
[global]
fsid = c255a5ad-b772-402a-bd81-09738693fe10
mon_initial_members = node1
mon_host = 192.168.0.197
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2
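If the nodes have more than one network interface, monitor creation may also require a public_network entry in [global]; assuming the 192.168.0.0/24 subnet used in the hosts file above, it would look like:

public_network = 192.168.0.0/24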
9. Install Ceph on all nodes

Install Ceph

ceph-deploy install node1 node2 node3
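If ceph-deploy tries to pull a different release by default, the release can be pinned explicitly (jewel here, to match the repo configured above):

ceph-deploy install --release jewel node1 node2 node3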
Install the Ceph monitor

ceph-deploy mon create node1
Gather the keyring files from the node

ceph-deploy gatherkeys node1
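After gatherkeys completes, the admin and bootstrap keyrings should appear in the working directory, roughly:

ls *.keyring
# ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring
# ceph.client.admin.keyring   ceph.mon.keyring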
10. Deploy the OSD service

Format the disk

mkfs.xfs -f /dev/sdb
Mount it

mkdir -p /var/local/osd0
mount /dev/sdb /var/local/osd0/

# Unmount
fuser -km /dev/sdb
umount /dev/sdb

# Mount automatically at boot
vim /etc/fstab
/dev/sdb /var/local/osd0 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
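A quick way to confirm the fstab entry parses and the filesystem is mounted where expected (the mount point must already exist):

mount -a
df -h /var/local/osd0
# /dev/sdb should show up mounted on /var/local/osd0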
On each node, grant permissions on its OSD directory (/var/local/osd0/ on node1, /var/local/osd1/ on node2, /var/local/osd2/ on node3), for example:

chmod 777 -R /var/local/osd1/
Prepare and activate the OSDs

ceph-deploy osd prepare node1:/var/local/osd0 node2:/var/local/osd1 node3:/var/local/osd2

# Activate
ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1 node3:/var/local/osd2

# Re-run prepare, overwriting the existing configuration if needed
ceph-deploy --overwrite-conf osd prepare node1:/var/local/osd0 node2:/var/local/osd1 node3:/var/local/osd2
Check the status

# Unify the configuration (use ceph-deploy to copy the config file and admin key to all nodes, so that Ceph CLI commands no longer need the monitor address or ceph.client.admin.keyring specified each time)
ceph-deploy admin node1 node2 node3

ceph-deploy osd list node1 node2 node3
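With the admin keyring distributed, cluster health can be checked from any of those nodes; on Jewel the keyring copied by ceph-deploy admin is often not world-readable, so a non-root user may first need to relax its permissions:

sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph health
# should report HEALTH_OK once the OSDs are up and in
ceph -s
# fuller status: mon, osd and pg summary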

Issues

# Install python-pip
sudo yum -y install python-pip
sudo pip install --upgrade pip
11. Other operations

Removing a Ceph OSD

To remove an OSD (whether it is currently up or down), follow the steps below; a worked example follows the list.

A) If the OSD is up, first bring it down by running ceph osd down osd.num (num is the OSD id).

B) Once the OSD is down, run ceph osd out osd.id to mark it as out.

C) Then run ceph osd rm osd.id to remove the OSD.

D) Next, remove the CRUSH map entry for osd.id by running ceph osd crush rm osd.id.

E) Finally, delete the auth entry for osd.id with ceph auth del osd.id.
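Putting the steps together for a hypothetical OSD with id 2 (substitute your own id):

ceph osd down osd.2
ceph osd out osd.2
ceph osd rm osd.2
ceph osd crush rm osd.2
ceph auth del osd.2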


Reposted from blog.csdn.net/qq_33431394/article/details/107380003