Ceph storage cluster deployment and block device testing

Cluster environment

Configure the basic environment

Add ceph.repo

wget -O /etc/yum.repos.d/ceph.repo https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/ceph.repo
yum makecache

Configuring NTP

yum -y install ntpdate ntp
ntpdate cn.ntp.org.cn
systemctl restart ntpd ntpdate;systemctl enable ntpd ntpdate
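
Optionally verify that time is now synchronized (a quick check; ntpq and timedatectl ship with CentOS 7):

ntpq -p
timedatectl status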

Create a deployment user with passwordless sudo

useradd ceph-admin
echo "ceph-admin"|passwd --stdin ceph-admin
echo "ceph-admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-admin
sudo chmod 0440 /etc/sudoers.d/ceph-admin
Configure hostname resolution
cat >> /etc/hosts << EOF
10.1.10.201 ceph01
10.1.10.202 ceph02
10.1.10.203 ceph03
EOF

Configure sudo to not require a tty

sed -i 's/Defaults requiretty/#Defaults requiretty/' /etc/sudoers

Deploy the cluster with ceph-deploy

Configure passwordless SSH login

su - ceph-admin
ssh-keygen
ssh-copy-id ceph-admin@ceph01
ssh-copy-id ceph-admin@ceph02
ssh-copy-id ceph-admin@ceph03
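
Optionally, add an ~/.ssh/config so that ceph-deploy connects as ceph-admin without a user name on every command (a minimal sketch; the host names match the /etc/hosts entries above):

cat >> ~/.ssh/config << EOF
Host ceph01
    User ceph-admin
Host ceph02
    User ceph-admin
Host ceph03
    User ceph-admin
EOF
chmod 600 ~/.ssh/config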

Install ceph-deploy

sudo yum install -y ceph-deploy python-pip

Create the cluster from the deployment node

mkdir my-cluster;cd my-cluster
ceph-deploy new ceph01 ceph02 ceph03

Edit the configuration file ceph.conf

cat >> /home/ceph-admin/my-cluster/ceph.conf << EOF
public network = 10.1.10.0/16
cluster network = 10.1.10.0/16
EOF
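
After the edit, the file should look roughly like the following (a sketch; the fsid is generated by ceph-deploy new and will differ):

[global]
fsid = <generated-uuid>
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 10.1.10.201,10.1.10.202,10.1.10.203
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 10.1.10.0/16
cluster network = 10.1.10.0/16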

Install the ceph packages (instead of running ceph-deploy install node1 node2, run the following commands on each node)

sudo wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
sudo yum install -y ceph ceph-radosgw

Configure the initial monitor(s) and collect all keys

ceph-deploy mon create-initial
ls -l *.keyring

Copy the configuration and admin keyring to each node

ceph-deploy admin ceph01 ceph02 ceph03
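
At this point the cluster should respond to status queries from any node (a quick check; with no OSDs yet, a health warning is expected):

sudo ceph -s
sudo ceph mon stat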

Configure the OSDs

su - ceph-admin
cd /home/ceph-admin/my-cluster
for dev in /dev/sdb /dev/sdc /dev/sdd
do
ceph-deploy disk zap ceph01 $dev
ceph-deploy osd create ceph01 --data $dev
ceph-deploy disk zap ceph02 $dev
ceph-deploy osd create ceph02 --data $dev
ceph-deploy disk zap ceph03 $dev
ceph-deploy osd create ceph03 --data $dev
done
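
With three disks on each of the three nodes, nine OSDs should now be up and in (a quick verification; run from a node that holds the admin keyring):

sudo ceph osd tree
sudo ceph -s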

Deploy mgr (required only for Luminous and later releases)

ceph-deploy mgr create ceph01 ceph02 ceph03

Enable the dashboard module

sudo chown -R ceph-admin /etc/ceph/
ceph mgr module enable dashboard
netstat -lntup|grep 7000

http://10.1.10.201:7000
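
If the dashboard does not answer on port 7000, confirm where the active mgr published it (ceph mgr services prints the enabled module endpoints, including the dashboard URL):

ceph mgr services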

Configure Ceph block storage

Check that the kernel supports the RBD block device module

uname -r
modprobe rbd
echo $?

Create a pool and block device

ceph osd lspools
ceph osd pool create rbd 128

The pg_num value must be specified explicitly because it is not calculated automatically; common guidelines are:

Fewer than 5 OSDs: set pg_num to 128
5 to 10 OSDs: set pg_num to 512
10 to 50 OSDs: set pg_num to 4096
More than 50 OSDs: understand the trade-offs and calculate the pg_num value yourself (a worked example of the usual formula follows this list)
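
A common rule of thumb (a sketch, assuming the nine OSDs deployed above and a replica count of 3) is (OSDs x 100) / replicas, rounded up to the next power of two:

echo $(( 9 * 100 / 3 ))    # 300; round up to the next power of two -> pg_num 512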

Create a block device on the client

rbd create rbd1 --size 1G --image-feature layering --name client.admin
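
Optionally confirm the image was created with the expected size and features (rbd info is part of the standard rbd CLI):

rbd info rbd1 --name client.admin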

Map the block device

rbd map --image rbd1 --name client.admin
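
The map command prints the device path; rbd showmapped lists all current mappings (the first image typically appears as /dev/rbd0):

rbd showmapped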

Create a filesystem and mount it

fdisk -l /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir /mnt/ceph-disk1
mount /dev/rbd0 /mnt/ceph-disk1
df -h /mnt/ceph-disk1
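
The kernel mapping does not persist across reboots by default; a minimal sketch using the rbdmap service (assuming the admin keyring at its default path) re-maps the image at boot:

cat >> /etc/ceph/rbdmap << EOF
rbd/rbd1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
EOF
systemctl enable rbdmap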

Write test data

dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M

Stress testing with fio

Install the fio stress-testing tool

yum install zlib-devel -y
yum install ceph-devel -y
git clone git://git.kernel.dk/fio.git
cd fio/
./configure
make;make install

Test Disk Performance

fio -direct=1 -iodepth=1 -rw=read -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=readiops
fio -direct=1 -iodepth=1 -rw=write -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=writeiops
fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=randreadiops
fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=randwriteiops
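
Because ceph-devel was installed before building fio, the librbd engine should also be available; it benchmarks the image through librbd directly, bypassing the kernel mapping (a sketch, assuming fio detected rbd support at configure time; a read workload avoids overwriting the filesystem created above):

fio -ioengine=rbd -clientname=admin -pool=rbd -rbdname=rbd1 -rw=randread -bs=4k -iodepth=32 -runtime=30 -group_reporting -name=rbd-randread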
