Using MinIO to implement distributed storage for GitLab

1, Advantages of distributed MinIO

1. Introduction to distributed MinIO

In distributed mode, MinIO lets you pool multiple drives (even drives on different machines) into a single object storage server. Because the drives are spread across multiple nodes, distributed MinIO can tolerate multiple node failures while still ensuring complete data protection.

Distributed MinIO lets you build a highly available storage system out of a single object storage deployment. With distributed MinIO you can make optimal use of your storage devices, regardless of where they sit in the network.

2. Data protection

Distributed MinIO uses erasure coding to protect against multiple node/drive failures and bit rot. Since distributed MinIO requires at least 4 disks (the same minimum required by erasure coding), erasure coding is enabled automatically when distributed MinIO starts.

3. High Availability

A standalone MinIO server shuts down if the disk it manages goes offline. By contrast, a distributed MinIO setup with n disks keeps your data safe as long as n/2 or more disks are online; however, you need a quorum of at least (n/2 + 1) disks to create new objects.

For example, a 16-node distributed MinIO setup with 16 disks per node can keep serving objects even if up to 8 of the servers go offline; however, at least 9 servers must be online to create new objects.

4. Limits

As with standalone MinIO, each distributed MinIO tenant is limited to a minimum of 2 and a maximum of 32 servers. There is no such restriction on the number of disks per server. If you need multi-tenancy, you can easily start multiple MinIO instances managed by an orchestration tool such as Kubernetes or Docker Swarm.

Note that distributed MinIO works with any combination of nodes and drives as long as these limits are respected: for example, 2 nodes with 4 drives each, 4 nodes with 4 drives each, 8 nodes with 2 drives each, 32 servers with 64 drives each, and so on.
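As a minimal sketch (outside of Kubernetes, and assuming placeholder hostnames node{1...4}.example.com, mount paths /mnt/export{1...4}, and example credentials), a 4-node, 4-drives-per-node cluster is started by running the same command on every node:

# Run on each of the 4 nodes; hostnames, paths and credentials are placeholders
export MINIO_ACCESS_KEY=myaccesskey
export MINIO_SECRET_KEY=mysecretkey12345
minio server http://node{1...4}.example.com/mnt/export{1...4}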

5. Consistency guarantees

MinIO follows a strict read-after-write consistency model for all I/O operations, in both distributed and standalone mode.

2, Deploying MinIO with a Helm chart

Prerequisites:

1. A Kubernetes (k8s) cluster

2. A working Helm environment
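If the stable chart repository is not configured yet, add it first (the URL below is the current archive location of the deprecated stable repository and may differ from the one in use when this was written):

# Register the stable chart repository and refresh the local index
helm repo add stable https://charts.helm.sh/stable
helm repo update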

1. Deploy MinIO

Create the PV that MinIO needs:

#vim pv1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv1
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /helm/minio/pv1

A hostPath volume is used here, so create the corresponding directory on the worker node(s) first.
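For example (the path matches the PV definition above; run the mkdir on each worker node that may schedule the pod):

# On the worker node(s): create the directory backing the hostPath PV
mkdir -p /helm/minio/pv1

# From a machine with kubectl access: create and verify the PV
kubectl apply -f pv1.yaml
kubectl get pv minio-pv1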

(screenshot: image-20191212211059077.png)

2. Install MinIO

# Option 1: pull the chart locally, unpack it and install from the local copy
helm pull stable/minio
tar -zxvf minio-3.0.4.tgz
helm install minio ./minio

# Option 2: install directly from the repository
helm install minio stable/minio

(screenshot: image-20191212211624813.png)
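The chart defaults can also be overridden at install time. As a sketch, assuming the stable/minio chart of that era exposes the values accessKey, secretKey and persistence.size (names may differ in other chart versions):

# Override the default credentials and the requested volume size
# (value names depend on the chart version)
helm install minio stable/minio \
  --set accessKey=myaccesskey \
  --set secretKey=mysecretkey12345 \
  --set persistence.size=10Gi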

View the pods:
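For example (assuming the release is named minio, as above):

kubectl get pods | grep minio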

(screenshot: image-20191212212110685.png)

3. Log in to MinIO

Default parameters:

(screenshot: 1576157102(1).png)
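The access key and secret key can also be read from the Kubernetes Secret that the chart creates (named minio here, with the keys accesskey and secretkey, as referenced again below when wiring up GitLab):

# Decode the generated credentials from the chart's Secret
kubectl get secret minio -o jsonpath='{.data.accesskey}' | base64 -d; echo
kubectl get secret minio -o jsonpath='{.data.secretkey}' | base64 -d; echo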

Access the web interface
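If the MinIO service is only reachable inside the cluster, one simple option is port-forwarding (this assumes the Service is named minio and listens on the default port 9000; adjust if your chart values differ):

# Forward local port 9000 to the MinIO service, then open http://localhost:9000
kubectl port-forward svc/minio 9000:9000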

(screenshot: image-20191212213046502.png)

(screenshot: image-20191212213103278.png)

The Access Key and Secret Key are the chart defaults, as shown in the configuration below.

(screenshot: image-20191212213237181.png)

3, Associating GitLab with MinIO

Run kubectl get deploy minio -o yaml to see which Secret keys the MinIO deployment references.

(screenshot: image-20191212213506646.png)

Add the following environment variables to GitLab's Deployment:

kubectl edit deploy gitlab-gitlab-ce

- name: MINIO_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      key: accesskey
      name: minio
- name: MINIO_SECRET_KEY
  valueFrom:
    secretKeyRef:
      key: secretkey
      name: minio

(screenshot: image-20191212213738745.png)

Check that the pods return to a normal state.
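For example (the pod name below is illustrative; use the one reported by kubectl get pods):

# Check that the GitLab pod is Running again
kubectl get pods | grep gitlab
# Confirm the MinIO variables were injected into the container
kubectl exec gitlab-gitlab-ce-xxxxxxxxxx-yyyyy -- env | grep MINIO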

(screenshot: image-20191212214106629.png)

Log in to GitLab and create a file.

(screenshot: image-20191212222659865.png)

Delete the original pod so that it is recreated with the new configuration.

(screenshot: image-20191212223057279.png)

The GitLab pod takes a while to start; once it has finished starting, you can access the web interface.
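One way to wait for the restart to finish (the deployment name is the one edited above):

# Block until the new GitLab pod is rolled out and ready
kubectl rollout status deployment/gitlab-gitlab-ce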

(screenshot: image-20191212223514760.png)

Origin: blog.51cto.com/14268033/2458435