CephFS Distributed Systems


 

CephFS: Distributed File System

What is CephFS:

A distributed file system is one in which the physical storage resources managed by the file system are not necessarily attached directly to the local node; instead, the nodes reach the storage over a computer network.

CephFS uses a Ceph cluster to provide a POSIX-compliant file system.

It allows Linux to mount Ceph storage directly as a local directory.

Metadata server

  • What is metadata

Metadata:

In any file system, the contents are divided into data and metadata.

Data refers to the actual contents of ordinary files,

while metadata describes the characteristics of the data in the file system.

For example: access permissions, the file owner, and the distribution of the file's data blocks (inode information, etc.).

This is why CephFS must have an MDS node.
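
A minimal sketch of what counts as metadata, using plain Python on an ordinary local file (not CephFS-specific; the path /etc/hosts is only an example). The fields printed below are the kind of information an MDS manages.

# Minimal sketch: inspect file metadata, the kind of information an MDS manages.
# Plain Python on a local file; /etc/hosts is only an example path.
import os
import pwd
import stat

st = os.stat("/etc/hosts")                    # any existing file will do
print("inode:      ", st.st_ino)              # inode number
print("owner:      ", pwd.getpwuid(st.st_uid).pw_name)
print("permissions:", stat.filemode(st.st_mode))
print("size:       ", st.st_size, "bytes")    # the size is metadata; the bytes themselves are data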

 

Ceph Object Storage

  • What is object storage

Object Store:

That is, a key-value store: data is uploaded to and downloaded from the storage service through a simple command interface of GET, PUT, DEL and a few other extensions.

An object store treats all data as objects, so any kind of data can be stored on the object storage servers: pictures, video, audio, and so on.
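
As a purely conceptual sketch (this is not a Ceph API, just an in-memory illustration of the GET/PUT/DEL interface described above):

# Conceptual sketch only: an in-memory "object store" with a GET/PUT/DEL interface.
# Real object stores such as Ceph RGW expose the same operations over HTTP.
class TinyObjectStore:
    def __init__(self):
        self._objects = {}                 # key -> bytes (the object data)

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data          # upload: store any blob under a key

    def get(self, key: str) -> bytes:
        return self._objects[key]          # download: retrieve the blob by key

    def delete(self, key: str) -> None:
        del self._objects[key]             # remove the object

store = TinyObjectStore()
store.put("photos/cat.jpg", b"\xff\xd8...")    # any kind of data: pictures, video, audio
print(store.get("photos/cat.jpg"))
store.delete("photos/cat.jpg")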

RGW is short for RADOS Gateway.

RGW is the Ceph object storage gateway: it presents a storage interface to client applications by providing a RESTful API.

 

Ceph in practice

Goal: create KVM virtual machines whose virtual hard disks live on Ceph storage.

 

1. Create images on the Ceph server.

[root@node1 ceph-cluster]# rbd  create  vm1-image  --image-feature  layering  --size  10G    // vm1-image is the image name

[root@node1 ceph-cluster]# rbd  create  vm2-image  --image-feature  layering --size  10G

[root@node1 ceph-cluster]# rbd  list

vm1-image

vm2-image
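
The same images can also be created from a script if the python-rados and python-rbd bindings are installed; the following is only a sketch under that assumption, and vm3-image is a hypothetical image name.

# Sketch assuming the python-rados and python-rbd bindings are installed.
# vm3-image is a hypothetical image name used for illustration.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')              # the default 'rbd' pool
try:
    rbd_inst = rbd.RBD()
    size = 10 * 1024 ** 3                      # 10 GiB
    # layering only, matching --image-feature layering on the command line
    rbd_inst.create(ioctx, 'vm3-image', size,
                    old_format=False, features=rbd.RBD_FEATURE_LAYERING)
    print(rbd_inst.list(ioctx))                # should now include vm3-image
finally:
    ioctx.close()
    cluster.shutdown()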

2. View the image information.

[root@node1 ceph-cluster]# qemu-img   info  rbd:rbd/vm1-image

image: rbd:rbd/vm1-image

file format: raw

virtual size: 10G (10737418240 bytes)

disk size: unavailable

3. Set up the physical host as a Ceph client.

[root@room9pc01 ~]# yum  -y  install   ceph-common

[root@node1 ceph-cluster]# scp  /etc/ceph/ceph.conf  192.168.4.254:/etc/ceph/

[root@node1 ceph-cluster]# scp  /etc/ceph/ceph.client.admin.keyring  192.168.4.254:/etc/ceph/

4. Create a virtual machine in the normal way. When you click Finish the virtual machine starts running; at that point force it to shut down.

5. Dump the virtual machine's configuration into a configuration file.

[root@room9pc01 ~]# virsh   dumpxml  vm1  >  /tmp/vm1.xml

[root@room9pc01 ~]# cat  /tmp/vm1.xml

6. Delete the virtual machine vm1; it will be regenerated later from the modified vm1.xml.

7. For the virtual machine to use Ceph storage it needs a "pass" (a libvirt secret), written as an XML file in the following form:

[root@room9pc01 ~]# vim  /tmp/secret.xml

<secret ephemeral='no'  private='no'>

        <usage  type='ceph'>

                <name>client.admin  secret</name>

        </usage>

</secret>

8. Generate the UUID.

[root@room9pc01 ~]# virsh   secret-define  /tmp/secret.xml

Secret 51a11275-f9fa-41cd-a358-ff6d00bd8085 created

[root@room9pc01 ~]# virsh   secret-list    // view the UUID

 

UUID                                  Usage

--------------------------------------------------------------------------------

 51a11275-f9fa-41cd-a358-ff6d00bd8085  ceph client.admin  secret

 

9. Check the key of Ceph's client.admin account.

[root@room9pc01 ~]# ceph  auth  get-key  client.admin

AQD0vAJby0NiERAAcdzYc//ONLqlyNXO37xlJA==

10. Associate the virtual machine secret from steps 7 and 8 with Ceph's client.admin account.

[root@room9pc01 ~]# virsh   secret-set-value  --secret   51a11275-f9fa-41cd-a358-ff6d00bd8085  --base64   AQD0vAJby0NiERAAcdzYc//ONLqlyNXO37xlJA==

Secret value set

// Here --secret is followed by the UUID of the secret created earlier,

// and --base64 is followed by the key of the client.admin account.

The secret now holds both the account information and the key.

11. Modify the generated virtual machine configuration file.

[root@room9pc01 ~]# vim  /tmp/vm1.xml

32      <disk type='network' device='disk'>

33       <driver name='qemu' type='raw'/>

Comment out line 34 and manually add the following content:

34         <auth  username='admin'>

35         <secret type='ceph' uuid='51a11275-f9fa-41cd-a358-ff6d00bd8085'/>    // this uuid is the UUID of the secret, which carries the client.admin account and key information

36         </auth>

37    <source protocol='rbd'  name='rbd/vm1-image'>

38         <host  name='192.168.4.2' port='6789' />

39         </source>

These lines specify the account used for the connection, the Ceph host and port, and the pool and image to access.

40 <target dev='vda' bus='virtio'/>

The image obtained this way is used as the virtual machine's vda disk.

12. Regenerate the virtual machine.

[root@room9pc01 ~]# virsh  define  /tmp/vm1.xml

Domain vm1 defined from /tmp/vm1.xml

You can now see that the virtual machine that was just deleted has been restored. Click the light-bulb (details) icon, add the installation image, then under Boot Options enable VirtIO Disk 1 and move it to the top.

Click IDE CDROM 1 -> add the image -> Boot Options -> check IDE CDROM 1 -> start the new virtual machine.

Using CephFS (note: this feature is not yet mature; do not use it in a production environment)

 

Deploying the MDS server

1. Configure the hostname, yum repository, NTP, and name resolution, and set up passwordless SSH login from node1 to the MDS node.

[root@node4 ~]# yum  -y  install   ceph-mds

2. Create the metadata server. This must be done from the ceph-cluster directory.

[root@node1 ceph-cluster]# pwd

/root/ceph-cluster

[root@node1 ceph-cluster]# ceph-deploy   mds  create  node4

3. Synchronize the configuration file and keys.

[root@node1 ceph-cluster]# ceph-deploy   admin  node4

Then check on the node4 node:

[root@node4 ~]# ceph  -s

health HEALTH_OK

4. Create a data pool and a metadata pool for CephFS, each with 128 PGs (placement groups).

For a description of PGs, see: http://www.wzxue.com/ceph-osd-and-pg/

[root@node4 ~]# ceph  osd  pool  create  cephfs_data  128

pool 'cephfs_data' created

[root@node4 ~]# ceph  osd  pool  create  cephfs_metadata  128

pool 'cephfs_metadata' created
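
If the python-rados bindings are installed, the pools can also be checked from a script; a minimal sketch under that assumption:

# Sketch assuming the python-rados bindings are installed:
# verify that the CephFS pools created above exist.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    print(cluster.list_pools())                # should include cephfs_data and cephfs_metadata
    for name in ('cephfs_data', 'cephfs_metadata'):
        print(name, 'exists:', cluster.pool_exists(name))
finally:
    cluster.shutdown()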

5. Check the MDS status.

[root@node4 ~]# ceph  mds  stat

e2:, 1 up:standby

6. Create a file system named myfs1.

[root@node4 ~]# ceph  fs  new  myfs1  cephfs_metadata   cephfs_data

new fs with metadata pool 2 and data pool 1

By default only one file system can be created; attempting to create more will produce an error.

7. View information

[root@node4 ~]# ceph  mds  stat

e5: 1/1/1 up {0=node4=up:active}

[root@node4 ~]# ceph  fs  ls

name: myfs1, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

8. The Linux kernel already supports CephFS, so the file system can simply be mounted.

(the -t option specifies the file system type)

[root@client ~]# mkdir  /mnt/ceph_root

[root@client ~]# ceph   auth  list    // view the key of client.admin

client.admin

key: AQD0vAJby0NiERAAcdzYc//ONLqlyNXO37xlJA==

[root@client ~]# mount  -t  ceph  192.168.4.11:6789:/  /mnt/ceph_root/  -o  name=admin,secret=AQD0vAJby0NiERAAcdzYc//ONLqlyNXO37xlJA==

[root@client ~]# df  -h

Filesystem              Size   Used  Avail  Use%  Mounted on

192.168.4.11:6789:/     60G  1008M   59G    2% /mnt/ceph_root
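
Because the mount point now behaves like an ordinary POSIX file system, normal file operations work on it. A small sanity check in plain Python, assuming the mount point /mnt/ceph_root used above:

# Sanity check on the mounted CephFS: ordinary POSIX file I/O just works.
# Assumes the file system is mounted at /mnt/ceph_root as shown above.
import os

path = "/mnt/ceph_root/hello.txt"
with open(path, "w") as f:
    f.write("hello from cephfs\n")

print(os.path.getsize(path), "bytes")          # metadata, served via the MDS
with open(path) as f:
    print(f.read(), end="")                    # data, served by the OSDs
os.remove(path)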


 

Object Storage

 

1. Create a ceph-deploy working directory.

[root@node5 ~]# mkdir  ceph-cluster

[root@node5 ~]# cd  ceph-cluster

2. Recall what object storage is (see the explanation above).

3. Install RGW.

[root@node5 ceph-cluster]# ceph-deploy  install  --rgw  node5

4. Synchronize the configuration file and keys.

[root@node5 ceph-cluster]# ceph-deploy   admin  node5

5. Start the RGW service.

[root@node5 ceph-cluster]# ceph-deploy  rgw  create  node5

[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host node5 and default port 7480    // the gateway port

6. Modify the RGW port. The default port is 7480; changing it is optional.

[root@node5 ceph-cluster]# vim  /etc/ceph/ceph.conf

Add roughly the following lines:

[client.rgw.node5]

host = node5

rgw_frontends = "civetweb port=8081"    // any port will do; pick one that is easy to remember

7. Restart the service for the change to take effect.

[root@node5 ~]# systemctl   restart  ceph-radosgw@\*

8. Verify access to RGW from the client.

[root@client ~]# curl   http://node5:8081

<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

If this output appears, the gateway is working normally.
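
The same anonymous check can be made from Python; a sketch assuming the requests library is installed:

# Sketch: anonymous check of the RGW endpoint, equivalent to the curl command above.
# Assumes the 'requests' library is installed and node5 resolves (or use http://192.168.4.15:8081).
import requests

resp = requests.get("http://node5:8081")
print(resp.status_code)                        # expect 200
print(resp.text)                               # the ListAllMyBucketsResult XML shown above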

9. Create a user for object storage access.

[root@node5 ~]# radosgw-admin  user  create  --uid="testuser" --display-name="First User"     

 // First User is a display name; anything will do. The access_key and secret_key below will be needed later.

"user": "testuser",

"access_key": "CJ38ADJYNMR3F3DJ3C9J",

"secret_key": "k5DZgUXBMJs3fdv5bT7yNFJiNw2bKac7D1IxO12I"

10. View User Information

[root@node5 ~]# radosgw-admin  user  info  --uid="testuser"

11. Install the s3 tool on the client.

[root@room9pc01 ~]# scp  -r  cluster-software/ceph/s3cmd-2.0.1-1.el7.noarch.rpm  192.168.4.10:/root    // copy the s3cmd package to the client

[root@client ~]# yum  -y  localinstall  s3cmd-2.0.1-1.el7.noarch.rpm

12. Configure the client.

[root@client ~]# s3cmd   --configure

Access Key: CJ38ADJYNMR3F3DJ3C9J

Secret Key: k5DZgUXBMJs3fdv5bT7yNFJiNw2bKac7D1IxO12I

Default Region [US]:    (press Enter)

S3 Endpoint [s3.amazonaws.com]: 192.168.4.15:8081

DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: %(bucket)s.192.168.4.15:8081

Encryption password:    (press Enter)

Path to GPG program [/usr/bin/gpg]:    (press Enter)

Use HTTPS protocol [Yes]: N

HTTP Proxy server name:    (press Enter)

Test access with supplied credentials? [Y/n] y

Save settings? [y/N] y

13. Upload and download test from the client.

[root@client ~]# s3cmd  ls    // view existing data

[root@client ~]# s3cmd  mb  s3://my_bucket    // create the bucket my_bucket

Bucket 's3://my_bucket/' created

 

Upload

[root@client ~]# s3cmd  put  /var/log/messages  s3://my_bucket/log/

upload: '/var/log/messages' -> 's3://my_bucket/log/'  [1 of 1]

 403379 of 403379   100% in    4s    97.49 kB/s  done

[root@client ~]# s3cmd  ls  s3://my_bucket

2018-05-22 09:24    403379   s3://my_bucket/log

 

Download

[root@client ~]# s3cmd  get  s3://my_bucket/log/messages   /tmp

download: 's3://my_bucket/log/messages' -> '/tmp/messages'  [1 of 1]

 403379 of 403379   100% in    0s    32.63 MB/s  done

 

Delete

[root@client ~]# s3cmd  del  s3://my_bucket/log/messages

delete: 's3://my_bucket/log/messages'
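
Besides s3cmd, any S3-compatible client library can talk to RGW. A sketch using boto3 (assuming it is installed), reusing the endpoint and keys from the steps above; the bucket name my-bucket2 is only an example:

# Sketch: accessing RGW with boto3, an S3-compatible client library.
# Endpoint and keys are taken from the steps above; my-bucket2 is a hypothetical bucket.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://192.168.4.15:8081",
    aws_access_key_id="CJ38ADJYNMR3F3DJ3C9J",
    aws_secret_access_key="k5DZgUXBMJs3fdv5bT7yNFJiNw2bKac7D1IxO12I",
    region_name="us-east-1",                   # any value works with a custom endpoint
)

s3.create_bucket(Bucket="my-bucket2")                               # like: s3cmd mb
s3.upload_file("/var/log/messages", "my-bucket2", "log/messages")   # like: s3cmd put
print([b["Name"] for b in s3.list_buckets()["Buckets"]])            # like: s3cmd ls
s3.download_file("my-bucket2", "log/messages", "/tmp/messages")     # like: s3cmd get
s3.delete_object(Bucket="my-bucket2", Key="log/messages")           # like: s3cmd del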

 

 

 
