Ceph Installation

 

Official documentation: http://docs.ceph.org.cn/start/quick-ceph-deploy/

Reference blog: http://blog.csdn.net/younger_china/article/details/51823571

 

 

Install the Ceph Deployment Tool

 

1. Install yum-utils and enable the EPEL repository:

sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*

 

2. Create a Ceph yum repository on every machine: sudo vim /etc/yum.repos.d/ceph.repo with the following contents (a scripted version of this step is sketched after the file):

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch/
#http://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
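A minimal, non-interactive sketch of the same step (it assumes sudo works on each node; run it on every machine):

sudo tee /etc/yum.repos.d/ceph.repo >/dev/null <<'EOF'
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOF
sudo yum makecache    # refresh yum metadata so the new repository is picked up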

 

 

3. Install NTP for time synchronization: run ntpdate ntp1.aliyun.com for a one-off sync, or sudo yum install ntp ntpdate ntp-doc to install the service (a combined sketch follows below)
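A short sketch of the NTP setup, assuming the aliyun time server mentioned above is reachable:

sudo yum -y install ntp ntpdate ntp-doc
sudo ntpdate ntp1.aliyun.com                              # one-off clock synchronization
sudo systemctl enable ntpd && sudo systemctl start ntpd   # keep clocks in sync afterwards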

4. Install SSH and make sure passwordless login works: sudo yum install openssh-server (on Debian/Ubuntu: sudo apt-get install openssh-server)

5. It is recommended to create a dedicated Ceph deployment user and pass it to ceph-deploy: ceph-deploy --username {username}

6. Set up passwordless SSH login to each node: ssh-keygen, then ssh-copy-id {username}@node1 (a sketch covering all nodes follows below)
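A minimal sketch for all nodes, assuming node1..node3 are the hostnames used in this guide and {username} is the deployment user from step 5 (replace the placeholder):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa       # generate a key pair once on the admin node
for node in node1 node2 node3; do
    ssh-copy-id {username}@"$node"             # copy the public key to each node
done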

7. Disable the firewall and SELinux (see the sketch below)
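On CentOS 7 this roughly amounts to the following (a sketch):

sudo systemctl stop firewalld && sudo systemctl disable firewalld
sudo setenforce 0                                                          # switch SELinux to permissive for the running system
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persist the change across reboots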

8. Set a static IP (otherwise the address may change over time)

9. If a copied virtual machine image comes up without an IP address:

1) Use ifconfig to find the NIC name (the ip address command, or the files under /etc/sysconfig/network-scripts/, also show NIC information)

2) Copy the existing NIC configuration /etc/sysconfig/network-scripts/ifcfg-eno16777736, delete the UUID line, and modify the DEVICE and NAME fields

3) Run ifdown <nic-name> and then ifup <nic-name> to restart the NIC (the whole step is sketched below)
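Put together, a sketch of the whole step (ifcfg-eno16777736 is the existing config from above; <new-nic> is a placeholder for the NIC name reported by ifconfig):

cd /etc/sysconfig/network-scripts
sudo cp ifcfg-eno16777736 ifcfg-<new-nic>
sudo vi ifcfg-<new-nic>                        # delete the UUID line; set DEVICE= and NAME= to <new-nic>
sudo ifdown <new-nic> && sudo ifup <new-nic>   # restart the NIC with the new configuration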

10. Permission changes: edit sudoers with visudo

1) Comment out Defaults requiretty

Change Defaults requiretty to #Defaults requiretty, so that sudo no longer requires a controlling terminal.

Otherwise you get: sudo: sorry, you must have a tty to run sudo

2) Add the line Defaults visiblepw

Otherwise you get: sudo: no tty present and no askpass program specified

3) Allow the installation user to run sudo without a password: tony ALL=(ALL) NOPASSWD:ALL

4) Modify the hostname: hostnamectl set-hostname node4
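After the visudo edits, the relevant lines in /etc/sudoers should look roughly like this (tony is the example deployment user from above):

#Defaults requiretty
Defaults    visiblepw
tony        ALL=(ALL)       NOPASSWD:ALL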

 

11. Create a working directory for the cluster configuration:

mkdir my-cluster && cd my-cluster

12. If a reinstallation is required:

  ceph-deploy purgedata node1 node2 node3 && ceph-deploy forgetkeys && ceph-deploy purge node1 node2 node3

13. Add to the configuration file ceph.conf (under [global]): osd pool default size = 2
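A one-line way to do this from inside the my-cluster directory (a sketch; it assumes [global] is the only section in the generated ceph.conf, so appending lands in the right place):

echo "osd pool default size = 2" >> ceph.conf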

14. Install Ceph on the nodes: ceph-deploy install {ceph-node} [{ceph-node} ...]

15. Exception: ceph-deploy times out while running sudo ceph --version, with an error pointing at /usr/lib/python2.7/site-packages/ceph_deploy/lib/vendor/remoto/process.py

    Work around the timeout by installing the packages manually on each node: sudo yum -y install ceph and sudo yum -y install ceph-radosgw

 

16. ceph-deploy osd activate exception:

    Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /var/local/osd0

    Grant permissions on the OSD data directories (e.g. /var/local/osd0/, /var/local/osd1/) on each node (a loop covering all nodes is sketched after the commands below):

    As follows: chmod 777 /var/local/osd0/

   chmod 777  /var/local/osd0/*
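To apply the fix to every OSD directory on every node from the admin node, a sketch (node1..node3 and the /var/local/osd* paths are the ones used in this guide; it relies on the passwordless sudo set up in step 10):

for node in node1 node2 node3; do
    ssh $node "sudo chmod -R 777 /var/local/osd*"    # the glob expands on the remote host
done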

17. Starting and stopping Ceph (the whole cluster, one type of daemon, or a single daemon on one machine)

   http://docs.ceph.com/docs/master/rados/operations/operating/#running-ceph-with-sysvinit
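The page above covers sysvinit; on this CentOS 7 / systemd deployment (ceph-disk was invoked with --mark-init systemd in step 16) the equivalent calls look roughly like:

sudo systemctl start ceph.target        # start all Ceph daemons on this machine
sudo systemctl stop ceph.target         # stop all Ceph daemons on this machine
sudo systemctl restart ceph-osd@0       # operate on a single daemon, here OSD 0
sudo systemctl status ceph-mon@node1    # status of the monitor running on node1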

 

18. Displaying various Ceph statistics

 (ceph health shows the cluster health status, ceph -w watches events happening in the cluster, ceph df shows the cluster's data usage and its distribution across storage pools)

   ceph osd tree|stat (status of osd),

   ceph mon stat (mon status),

   ceph quorum_status (detail of monitor)

   ceph mds stat|dump (status of mds)

   http://docs.ceph.org.cn/rados/operations/monitoring/

19. Adding a new OSD node:

   1) Create a disk directory on the node: mkdir /var/local/osd2

   2) From the admin node, prepare the OSD: ceph-deploy osd prepare node1:/var/local/osd2

   3) Activate the OSD (ceph-deploy osd activate node1:/var/local/osd2), then run ceph status to confirm one more OSD appears (a consolidated sketch follows below)
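The whole step in one place, as a sketch (node1 and /var/local/osd2 are the values used above; permissions are set as in step 16):

ssh node1 "sudo mkdir -p /var/local/osd2 && sudo chmod 777 /var/local/osd2"
ceph-deploy osd prepare node1:/var/local/osd2
ceph-deploy osd activate node1:/var/local/osd2
ceph status                                      # should now report one additional OSD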

20. Metadata service creation

   1)ceph-deploy mds create node1

   2)ceph mds stat

21. RGW service

   1) Create rgw gateway: ceph-deploy rgw create node2

   2) ceph mds stat
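A quick way to check that the gateway is answering (a sketch; Ceph jewel's RGW civetweb frontend listens on port 7480 by default, and node2 is the gateway host created above):

curl http://node2:7480        # should return an XML response from the gateway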

 

22. Add a new mon to the cluster

  Exception: admin_socket: exception getting command descriptions: [Errno 2] No such file or directory

   Solution: when the server has multiple network cards, you need to add the public-facing network that Ceph should use to the cluster configuration (ceph.conf)

Such as: public network=192.168.145.140/24

Distribute configuration files to other servers: ceph-deploy --overwrite-conf admin node2 node3 node4 

    Monitor quorum information for the cluster: ceph quorum_status --format json-pretty
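Put together, a sketch of adding the monitor (node4 is assumed to be the new monitor host; ceph.conf already contains the public network line from above):

ceph-deploy --overwrite-conf admin node2 node3 node4   # push the updated ceph.conf to the nodes
ceph-deploy mon add node4
ceph quorum_status --format json-pretty                # confirm the new monitor joined the quorum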

 

23. Viewing pools (ceph df shows the cluster's data usage and its distribution across storage pools)

  Documentation: http://docs.ceph.org.cn/rados/operations/pools/

  rados lspools, ceph osd lspools, ceph osd dump | grep pool

 

24. Ceph object storage:

    http://docs.ceph.org.cn/start/quick-ceph-deploy/#id4

25. Operation of pg in ceph

ceph pg dump (overall PG information; the first column is the PG id, composed of {pool-num}.{pg-id}), ceph pg map {pg-id} (map a PG to its OSDs), ceph pg stat

 

26. Store files in ceph

  1) List all pools: ceph osd lspools

  2) Store the created file into the pool: rados put test-object-2 testfile.txt --pool=default.rgw.data.root

  3) List all objects in the pool: rados -p default.rgw.data.root ls

  4) Show which PG and OSDs an object maps to: ceph osd map default.rgw.data.root test-object-2

  5) Delete an object in the pool: rados rm test-object-2 --pool=default.rgw.data.root

 

 

 
