Build a small Ceph cluster

    Learn ceph

    A DFS (distributed file system) is a distributed storage system: the physical storage resources it manages are not necessarily attached to the local node, but are reached from the nodes over a network. There are many such systems; Ceph is one of the most widely used distributed storage systems today and follows a typical client-server model.

    1. Ceph has many notable features, such as high scalability (nodes can be added without a practical limit), high availability (data is protected by replicated copies), and high performance (the CRUSH algorithm spreads data evenly across the cluster and allows highly parallel access);

    2. Ceph can provide block storage, file system storage, and object storage;

    3. Its basic components are the OSD (the daemon that manages a storage device), the monitor (MON, which watches over the cluster), the RGW object storage gateway, the MDS (which stores file system metadata), and the client.

    For more in-depth Ceph material, see the official documentation: http://docs.ceph.org/start/intro

   Here we build a small Ceph cluster as a basis for further study.

    The general flow: build the environment -> set up the cluster -> verify.

    Topology: one client host (client, 192.168.4.10) and three storage nodes (node1/node2/node3, 192.168.4.11-13), with the real (physical) machine at 192.168.4.254 serving the installation files.

First, set up the environment

    Ideas: (1) create four virtual machines: one as the client, the other three as the storage cluster;

             (2) configure host names, IP addresses, yum repositories, and mount the Ceph installation media;

             (3) configure passwordless SSH among the 4 hosts so operations can be pushed to all of them;

             (4) configure NTP time synchronization;

             (5) add disks to the virtual machines for the clustered storage built later.

    Specific steps:

    // Idea: do the work on node1 first, then sync the results to the other hosts.

    1) On the real (physical) machine, create the mount point /var/ftp/ceph and mount the ceph10.iso image on it;
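      A minimal sketch of this step, assuming the ISO sits at /root/ceph10.iso and that vsftpd exports /var/ftp on the real machine (adjust paths to your setup):

      #mkdir -p /var/ftp/ceph

      #mount -o loop /root/ceph10.iso /var/ftp/ceph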

    2) Configure passwordless SSH among the 4 hosts, including the local host itself;

      #ssh-keygen -f /root/.ssh/id_rsa -N ''

      #for i in 10 11 12 13 // push the public key to every host, including this one

       do

           ssh-copy-id 192.168.4.$i

      done
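      As a quick check, each host should now answer over SSH without a password prompt, e.g.:

      #ssh 192.168.4.12 hostname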

   3) Resolve host names locally via /etc/hosts and sync the file to all 4 hosts (no separate DNS server is set up here)

     #vim /etc/hosts

      192.168.4.10 client

      192.168.4.11  node1

      192.168.4.12 node2

      192.168.4.13 node3

     for i in 10 11 12 13 // synchronize to the other hosts

     do

        scp /etc/hosts 192.168.4.$i:/etc/

     done
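     With the file in place, name resolution can be spot-checked from node1, e.g.:

     #ping -c 1 node2

     #ssh node3 hostname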

   4) Configure the yum repositories on all hosts, pointing them at the Ceph mount point exported by the real machine

     #vim /etc/yum.repos.d/ceph.repo

      [mon]

      name=mon

      baseurl=ftp://192.168.4.254/ceph/MON

      gpgcheck=0

      [osd]

      name=osd

      baseurl=ftp://192.168.4.254/ceph/OSD

      gpgcheck=0

      [tools]

      name=tools

      baseurl=ftp://192.168.4.254/ceph/Tools

      gpgcheck=0

    for i in 10 11 12 13 // sync the yum repo file to the other hosts

    do

       scp /etc/yum.repos.d/ceph.repo  192.168.4.$i:/etc/yum.repos.d/

    done
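    After syncing, it is worth confirming that each host can actually see the three repositories, e.g.:

    #yum clean all

    #yum repolist // mon, osd and tools should each list packages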

 5) Configure NTP on all hosts so their time stays consistent with the real machine

   #vim /etc/chrony.conf

     server 192.168.4.254 iburst

     for i in 10 11 12 13 // synchronize to other hosts

     do

          scp /etc/chrony.conf  192.168.4.$i:/etc/

     done
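     chronyd needs a restart before the new server line takes effect; a small follow-up loop (assuming chronyd is the time service on these hosts, as on RHEL/CentOS 7):

     for i in 10 11 12 13

     do

          ssh 192.168.4.$i "systemctl restart chronyd"

     done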

 6) On the real machine, run virt-manager to open the virtual machine manager and add three disks (vdb, vdc, vdd) to each of the storage node VMs (node1, node2, node3); a command-line alternative is sketched below.
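   If you prefer the command line to the virt-manager GUI, qemu-img and virsh can do the same job; a sketch only, with the image directory and 20G size picked here as assumptions:

   #qemu-img create -f qcow2 /var/lib/libvirt/images/node1-vdb.qcow2 20G

   #virsh attach-disk node1 /var/lib/libvirt/images/node1-vdb.qcow2 vdb --persistent --subdriver qcow2

   Repeat for vdc and vdd, and for node2 and node3.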

Second, set up the cluster

  Ideas: (1) install the ceph-deploy tool

            (2) create the Ceph cluster

            (3) prepare the journal and data disk partitions

            (4) create the OSD storage space

            (5) check the Ceph status and verify

 Specific steps:

  (1) Install the tool and create a working directory

     yum -y install ceph-deploy

     mkdir ceph-cluster
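     The ceph-deploy commands in the steps below are all run from inside this directory (hence the node1 ceph-cluster]# prompts), so change into it first:

     cd ceph-cluster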

  (2) Create the Ceph cluster

      a. Define the monitor hosts in the configuration file ceph.conf

         node1 ceph-cluster]#ceph-deploy new node1 node2 node3

      b. Install the Ceph-related packages on all nodes

        node1 ceph-cluster]#for i in node1 node2 node3

        do

            ssh $i "yum -y install ceph-mon ceph-osd ceph-mds ceph-radosgw"

        done

      c. Initialize the monitor service on all nodes, which starts the mon daemons

        node1 ceph-cluster]# ceph-deploy mon create-initial

 (3) Create the OSD storage

     a. Partition vdb into vdb1 and vdb2, to be used as the journal (cache) disks for the OSDs

        node1 ceph-cluster]# for i in node1 node2 node3

        do

          ssh 192.168.4.$i "parted /dev/vdb mklabel gpt"

          ssh 192.168.4.$i "parted /dev/vdb mkpart primary 1 50%"

          ssh 192.168.4.$i "parted /dev/vdb mkpart primary 50% 100%"

        done
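        The resulting layout can be spot-checked on each node, e.g.:

        #ssh node1 "lsblk /dev/vdb" // should now show vdb1 and vdb2 of roughly equal size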

     b. After partitioning, the default permissions on vdb1 and vdb2 do not allow the ceph software to read and write them, so the permissions need to be changed.

       Note: the permissions must be changed on every storage node; taking node1 as the example:

       node1 ceph-cluster]# chown ceph.ceph /dev/vdb1 // temporary permission change (lost after reboot)

       node1 ceph-cluster]# chown ceph.ceph /dev/vdb2 // temporary permission change (lost after reboot)

       Set the permissions permanently with a udev rule:

       node1 ceph-cluster]# vim /etc/udev/rules.d/70-vdb.rules

       ENV{DEVNAME}=="/dev/vdb1",OWNER="ceph",GROUP="ceph"

       ENV{DEVNAME}=="/dev/vdb2",OWNER="ceph",GROUP="ceph"

       #for i in node1 node2 node3 // sync the rule to all storage nodes

         do

             scp /etc/udev/rules.d/70-vdb.rules  $i:/etc/udev/rules.d/

        done
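        The udev rule is only applied the next time udev processes the devices (for example after a reboot); to apply it immediately on each node, an optional extra step is:

        #udevadm control --reload-rules

        #udevadm trigger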

      c. Initialize (wipe) the data disks.

         #for i in node1 node2 node3

           do

              ceph-deploy disk zap $i:vdc $i:vdd

           done

       d. Create the OSD storage space

          #for i in node1 node2 node3

           do

              ceph-deploy osd create $i:vdc:/dev/vdb1 $i:vdd:/dev/vdb2 

             // Create the OSDs: vdc provides the data storage with vdb1 as its journal; vdd provides the data storage with vdb2 as its journal.

           done

Third, verify

      node1 ~]# ceph -s // check the cluster status

       // If it reports an error, try restarting the ceph services: # systemctl restart ceph\*.service ceph\*.target
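       A couple of additional standard checks, offered as a suggestion:

       node1 ~]# ceph osd tree // all six OSDs (two per storage node) should show as up

       node1 ~]# ceph df // total and available capacity of the cluster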

With that, the small cluster build is complete.
