Ceph Distributed Storage Deployment

Add the Ceph repository needed to install ceph-deploy

Ubuntu

1. Add the release key

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

2. Add the Ceph packages to your APT sources

echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
Mirror inside China:
## echo deb https://mirrors.aliyun.com/ceph/debian-mimic/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
echo deb https://mirrors.aliyun.com/ceph/debian-nautilus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
  • Be sure to use a mirror inside China; otherwise downloads will be very slow

3. Update the package index and install ceph-deploy

sudo apt update
sudo apt install ceph-deploy
CentOS 7

1. Register the target machine with subscription-manager, verify your subscription, and enable the "Extras" repository for package dependencies (this applies to RHEL; it can be skipped on CentOS)

sudo subscription-manager repos --enable=rhel-7-server-extras-rpms

2. Install and enable the Extra Packages for Enterprise Linux (EPEL) repository

sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

3. Add the Ceph repository to your yum configuration at /etc/yum.repos.d/ceph.repo with the following command. Replace {ceph-stable-release} with a stable Ceph release name (e.g. luminous).

cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-{ceph-stable-release}/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM

4. Update your repositories and install ceph-deploy

yum update
yum install ceph-deploy

Ceph node setup

The admin node must have passwordless SSH access to the Ceph nodes. When ceph-deploy logs in to a Ceph node as a particular user, that user must have passwordless sudo privileges.

Prerequisites

We recommend installing NTP on the Ceph nodes (especially the Ceph Monitor nodes) to prevent problems caused by clock drift.

apt-get install -y ntp ntpdate ntp-doc
ntpdate 0.us.pool.ntp.org  # synchronize the time once
hwclock --systohc
systemctl enable ntp # enable the ntp service at boot
systemctl start ntp  # start the ntp service
  • If the time synchronization fails, stop the ntp service first with systemctl stop ntp, and restart it once the time has been synchronized
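
A quick way to confirm that the clock really is in sync (a sketch; ntpq ships with the ntp package installed above):

ntpq -p        # the peer prefixed with '*' is the currently selected time source
timedatectl    # on systemd hosts, look for the clock-synchronized line (wording varies by version)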

Create a Ceph deployment user

1. Create a new user on every Ceph node

useradd -m -s /bin/bash cephuser
passwd cephuser
  • Never create a user named ceph

2. Give the new user passwordless sudo privileges

echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
sudo chmod 0440 /etc/sudoers.d/cephuser
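
A quick check that the sudoers entry works as intended (a sketch; run it on each node):

su - cephuser -c 'sudo whoami'   # should print "root" without asking for a password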

Set the hostname on each node (node1, node2, and node3 respectively)

hostnamectl set-hostname node1 
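
ceph-deploy and the SSH steps below reach the nodes by hostname, so every node (and the admin node) must be able to resolve node1, node2, and node3. A minimal /etc/hosts sketch, using the placeholder addresses 192.168.1.2/5/7 that also appear in the PV example later; substitute your own IPs:

cat >> /etc/hosts << EOF
192.168.1.2 node1
192.168.1.5 node2
192.168.1.7 node3
EOF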

Passwordless SSH

ceph-deploy will not prompt you for a password, so you must generate an SSH key pair on the admin node and distribute the public key to every Ceph node. ceph-deploy will attempt to generate SSH keys for the initial monitors.

1. Generate an SSH key pair

ssh-keygen

Generating public/private key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

2. Copy the key to every Ceph node, replacing {username} with the user created in the "Create a Ceph deployment user" step.

ssh-copy-id cephuser@node1
ssh-copy-id cephuser@node2
ssh-copy-id cephuser@node3

3. (Recommended) Edit the /root/.ssh/config file on the admin node so that ceph-deploy can log in to the Ceph nodes as the user you created without you having to pass --username {username} on every run. This also has the benefit of simplifying plain ssh and scp usage. Replace {username} with the user you created:

Host node1
   Hostname node1
   User cephuser
Host node2
   Hostname node2
   User cephuser
Host node3
   Hostname node3
   User cephuser
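
With the keys copied and ~/.ssh/config in place, a quick sanity check from the admin node (a sketch):

for n in node1 node2 node3; do ssh $n 'hostname && sudo whoami'; done
# each node should print its hostname followed by "root", with no password prompts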

Install Ceph on each node

1. Use a mirror inside China

Ubuntu, using a Chinese mirror:

export CEPH_DEPLOY_REPO_URL=https://mirrors.aliyun.com/ceph/debian-mimic/
export CEPH_DEPLOY_GPG_URL=https://mirrors.aliyun.com/ceph/keys/release.asc

CentOS 7, using a Chinese mirror:

export CEPH_DEPLOY_REPO_URL=https://mirrors.163.com/ceph/rpm-mimic/el7
export CEPH_DEPLOY_GPG_URL=https://mirrors.163.com/ceph/keys/release.asc
vi /etc/profile      # add the two exports above here so they persist
source /etc/profile

2. Install the Python prerequisites on every node (ceph-deploy will report errors without them)

apt install python-minimal        # Ubuntu
yum install python-setuptools     # CentOS 7

3. Create the cluster (this step only needs to be run on the ceph-deploy admin host)

ceph-deploy new node1 node2 node3

4. Install the Ceph packages on every node (again, run this only on the ceph-deploy admin host)

ceph-deploy install node1 node2 node3

5. Deploy the initial monitor(s) (mon) and gather the keys

ceph-deploy mon create-initial

When this completes, the local directory contains the following keyrings:

  • ceph.bootstrap-mds.keyring

  • ceph.bootstrap-osd.keyring

  • ceph.client.admin.keyring

  • ceph.bootstrap-mgr.keyring

  • ceph.bootstrap-rgw.keyring

  • ceph.mon.keyring

6. Use ceph-deploy to copy the configuration file and the admin keyring to your admin node and your Ceph nodes, so that you can use the ceph CLI without specifying the monitor address and ceph.client.admin.keyring on every command.

ceph-deploy admin node1 node2 node3
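
Once the admin keyring has been pushed out, any node can query the cluster (a sketch; the keyring is only root-readable by default, hence sudo):

sudo ceph -s    # cluster status; HEALTH_WARN is expected until the OSDs are added below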

7. Deploy the manager daemons (mgr)

ceph-deploy mgr create node1 node2 node3

8. Add three OSDs. For the purposes of these instructions, assume each node has an unused disk at /dev/sda. Make sure the device is not currently in use and does not contain any important data.

ceph-deploy disk zap node1 /dev/sda

ceph-deploy osd create  --data /dev/sda node1
ceph-deploy osd create  --data /dev/sda node2
ceph-deploy osd create  --data /dev/sda node3
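
After the three OSDs are created, verify that they are up and in (a sketch):

sudo ceph -s          # expect "3 osds: 3 up, 3 in" and, shortly after, HEALTH_OK
sudo ceph osd tree    # shows each OSD placed under its host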

Commands you may need along the way

# Push the configuration to the other nodes with ceph-deploy
ceph-deploy --overwrite-conf config push

# If necessary, remove a previously installed monitor first
ceph-deploy mon destroy k8s02
ceph-deploy mon add k8s03

# When adding an OSD, if the node has hosted an OSD before, zap the disk first
ceph-deploy disk zap k8s02 /dev/sdb
# If the disk is reported as busy, it may still be held by LVM; run the following on the node first
vgdisplay # find the LVM volume group name
vgremove ceph-fb95a3c5-cb15-48d1-a2fa-5b444d493f53

# List the available disks on a node
ceph-deploy disk list node1

# Roll back after a failed Ceph deployment
ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*

9. Create a pool

ceph osd pool create k8s 64
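
On Luminous and newer releases the cluster warns about pools that carry no application tag, so it may be worth tagging the new pool for RBD right away (a sketch):

ceph osd pool application enable k8s rbd
ceph osd pool ls detail    # confirm the pool exists with 64 placement groups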

Create a k8s user

ceph auth get-or-create client.k8s mon 'allow r' osd 'allow rwx pool=k8s' -o ceph.client.k8s.keyring
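
To double-check the capabilities that were granted (a sketch):

ceph auth get client.k8s   # should list mon 'allow r' and osd 'allow rwx pool=k8s'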

Create a block device (RBD image) for PostgreSQL on the Ceph cluster

rbd create k8s/pgdata-image -s 200G --image-feature layering

Get the k8s user's key and base64-encode it
Create a Kubernetes Secret with the key obtained above (see ceph-k8s-secret.yaml below)
Create the RBD image
Test: run the following on one of the cluster hosts

rbd map k8s/pgdata-image

# If you hit the following error
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (110) Connection timed out

# Check /var/log/syslog; it contains the following error messages
Jan  3 14:56:51 node1 kernel: [16817.926622] libceph: mon1 124.108.9.46:6789 feature set mismatch, my 106b84a842a42 < server's
40106b84a842a42, missing 400000000000000
Jan  3 14:56:51 node1 kernel: [16817.926740] libceph: mon1 124.108.9.46:6789 missing required protocol features

# This happens because the pool's features are not fully supported by the kernel client on the mounting node; fix it by running
ceph osd crush tunables hammer

PV and PVC

## pv-pgdata.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pgdata
  namespace: default
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.1.2:6789
      - 192.168.1.5:6789
      - 192.168.1.7:6789
    pool: k8s
    image: pgdata-image
    user: k8s
    secretRef:
      name: ceph-k8s-secret
  persistentVolumeReclaimPolicy: Retain

## pvc-pgdata.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pgdata
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pgdata

## pod
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-pvc-busybox2
spec:
  containers:
    - image: busybox
      name: ceph-rbd-pvc-busybox2-rw
      command: ["sleep", "60000"]
      volumeMounts:
      - name: pgdata-rdb
        mountPath: /mnt
  volumes:
  - name: pgdata-rdb
    persistentVolumeClaim:
      claimName: pgdata

## pod (nginx variant)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pvc
  labels:
    name: nginx-pvc
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: pgdata-rdb
      mountPath: /usr/share/nginx/html
  volumes:
  - name: pgdata-rdb
    persistentVolumeClaim:
      claimName: pgdata

kubectl create -f pv-pgdata.yaml
kubectl create -f pvc-pgdata.yaml
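
After both objects are created, the claim should bind to the pre-created volume (a sketch of the check):

kubectl get pv pgdata    # STATUS should become Bound, CLAIM default/pgdata
kubectl get pvc pgdata   # STATUS Bound, VOLUME pgdata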

Create and mount CephFS

Install ceph-common

Every k8s worker node needs ceph-common installed

yum -y install ceph-common
yum -y install librbd1  &&   modprobe rbd

Copy the Ceph configuration to the k8s worker nodes

Copy ceph.conf and ceph.client.admin.keyring to the /etc/ceph/ directory on each worker
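
A minimal sketch of the copy, assuming a hypothetical worker hostname worker1 and that you run it from a node that already has the files under /etc/ceph/:

scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@worker1:/etc/ceph/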

Create a test pod based on busybox, or one based on nginx (see the pod manifests above).
Exec into the pod to check that the mount works:

kubectl exec -it ceph-rbd-pvc-busybox2 sh

To produce the base64-encoded key used in the Secret, run:

grep key ceph.client.k8s.keyring | awk '{printf "%s", $NF}' | base64

VBGFaeN3OWJYdUZPSHhBQTNrU2E2QlUyaEF5UUV0SnNPRHdXeRT8PQ==

You can also obtain it with the following command:
ceph auth get-key client.k8s | base64
apiVersion: v1
kind: Secret
metadata:
  name: ceph-k8s-secret
type: "kubernetes.io/rbd"
data:
  key: QVFCZzduNWRxUmZKT0JBQUE3UG8wNGIvSWdTK2JKNUlDNXpHVkE9PQo=
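
Then create the Secret in the same namespace as the PV/PVC and confirm it exists (a sketch; the manifest above is assumed to be saved as ceph-k8s-secret.yaml):

kubectl create -f ceph-k8s-secret.yaml
kubectl get secret ceph-k8s-secret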
rbd create k8s/pgdata-image -s 10G --image-feature layering
rbd info k8s/pgdata-image

rbd map k8s/pgdata-image
/dev/rbd0
root@node1:~# rbd showmapped
id pool image        snap device
0  k8s  pgdata-image -    /dev/rbd0

You can then treat the block device like any other disk, e.g. format and mount it:

mke2fs /dev/rbd0
mount /dev/rbd0 /mnt
cd /mnt/
root@node1:/mnt# rm -fr lost+found/

Finally, remember to unmount and unmap it:

umount /mnt
rbd unmap k8s/pgdata-image

Mount CephFS

The Ceph File System (CephFS) is a POSIX-compliant file system that uses a Ceph storage cluster to store its data. CephFS requires at least one Ceph Metadata Server (MDS) in the storage cluster.

1. Add an MDS. Continuing from the previous section, the ceph01 node is used as the metadata server (MDS) here.

#ceph-deploy mds create ceph01
#netstat -tnlp | grep mds
tcp        0      0 0.0.0.0:6804       0.0.0.0:*               LISTEN      12787/ceph-mds

2. Create two pools. The MDS needs two pools: one to store data and one to store metadata.

#ceph osd pool create fs_data 32
#ceph osd pool create fs_metadata 32
#rados lspools

3. Create the CephFS file system

#ceph fs new cephfs fs_metadata fs_data
#ceph fs ls
name: cephfs, metadata pool: fs_metadata, data pools: [fs_data ]

4. Check the MDS status

#ceph mds stat
e5: 1/1/1 up {0=ceph01=up:active}

Mount CephFS

CephFS can be mounted in several ways; only one of them is covered here, and it is the method used later with Kubernetes.

1. Load the rbd kernel module

#modprobe rbd
#lsmod | grep rbd
rbd                    83938  0
libceph               287066  2 rbd,ceph

2. Get the admin key

#cat ceph.client.admin.keyring
[client.admin]
    key = AQDchXhYTtjwHBAAk2/H1Ypa23WxKv4jA1NFWw==
    caps mds = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"

3. Create a mount point and try a local mount

#mkdir /cephfs_test
#mount -t ceph 0.0.0.0:6789:/ /cephfs_test -o name=admin,secret=AQDchXhYTtjwHBAAk2/H1Ypa23WxKv4jA1NFWw==
#df -hT
0.0.0.0:/ ceph       60G  104M   60G   1% /cephfs_test

4. If there are multiple mon nodes, you can list all of them in the mount command. This keeps CephFS highly available: reads and writes are unaffected when a single node goes down.

#mount -t ceph 0,1,2:6789:/ /cephfs_test -o name=admin,secret=AQDchXhYTtjwHBAAk2/H1Ypa23WxKv4jA1NFWw==
#df -hT
0,1,2:6789:/ ceph       60G  104M   60G   1% /cephfs_test
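
To keep the key off the command line (and out of shell history), the kernel client also accepts a secretfile option; a sketch, reusing the placeholder monitor addresses above and saving the admin key to /etc/ceph/admin.secret:

echo 'AQDchXhYTtjwHBAAk2/H1Ypa23WxKv4jA1NFWw==' > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
mount -t ceph 0,1,2:6789:/ /cephfs_test -o name=admin,secretfile=/etc/ceph/admin.secret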

Reposted from blog.csdn.net/qq_36607860/article/details/115413680