Summary of MinIO deployment

Official introduction

MinIO is an object storage service released under the Apache License v2.0. It is compatible with the Amazon S3 API and is well suited to storing large volumes of unstructured data such as pictures, videos, log files, backups, and container/virtual machine images. An object can be any size, from a few KB up to a maximum of 5 TB.

MinIO can be deployed as a single node or as a distributed cluster. It is easy to operate and deploy and supports expansion. The SDK is similar to Aliyun OSS, which is exactly what I need. The only flaw is that it does not support dynamic expansion.
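
Because MinIO speaks the S3 API, standard S3 tooling works against it unchanged. A minimal sketch with the AWS CLI (assumptions for illustration: the CLI is installed, and a server from the sections below is running at 192.168.10.153:9000 with the default minioadmin credentials):

export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin
export AWS_DEFAULT_REGION=us-east-1   # MinIO accepts the default region
# point the CLI at MinIO instead of AWS and list buckets
aws --endpoint-url http://192.168.10.153:9000 s3 ls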

Docker single-node deployment

First pull the latest version of the image

docker pull minio/minio

Start the container. The service port is 9000. "-v /mnt/data:/data" maps a host directory onto the container's storage path, so uploaded files are stored on the host. The startup argument "server /data" tells MinIO to use /data inside the container as its storage directory. Note that with --net=host below, the container shares the host's network stack, so the -p 9000:9000 mapping is effectively ignored.

docker run -p 9000:9000 --name minio1 \
    --restart=always \
    --net=host \
    -e MINIO_ACCESS_KEY=minioadmin \
    -e MINIO_SECRET_KEY=minioadmin \
    -v /mnt/data:/data \
    -v /mnt/config:/root/.minio \
    minio/minio server /data


After the startup succeeds, visit http://{ip}:9000 in a browser. Login requires an accessKey and secretKey; the Docker container defaults both to minioadmin, and the values are shown in the startup output.
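
As a quick smoke test, you can exercise the S3 API with MinIO's mc client (a sketch assuming a recent mc is installed; {ip} and the bucket name are placeholders):

mc alias set local http://{ip}:9000 minioadmin minioadmin
mc mb local/test-bucket                   # create a bucket
echo "hello" > /tmp/hello.txt
mc cp /tmp/hello.txt local/test-bucket/   # upload an object
mc ls local/test-bucket                   # list it back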

Linux single-node binary deployment

Download

mkdir /home/minio/{app,config,data,logs} -p
cd /home/minio/app
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio

Create data directory

mkdir /minio_data

Startup script

min_server_single.sh

chmod 755 min_server_single.sh

#!/bin/bash
export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=12345678
MINIO_HOME=/home/minio/app
nohup ${MINIO_HOME}/minio server /minio_data --address :9000 --console-address :8000 > ${MINIO_HOME}/minio.log 2>&1 &

Run the script

bash min_server_single.sh

After the startup succeeds, open the web console at http://{ip}:8000 in a browser (as set by --console-address; the S3 API itself listens on :9000). Login requires an accessKey and secretKey, which the script sets to admin and 12345678.
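
To verify from the shell instead of the browser, two quick checks (the /minio/health/live endpoint is unauthenticated):

ss -tlnp | grep -E ':(9000|8000)'                 # both ports should be listening
curl -i http://127.0.0.1:9000/minio/health/live   # expect HTTP 200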

Linux distributed deployment

Official introduction: distributed MinIO requires at least 4 hard disks, and using distributed MinIO automatically enables the erasure coding feature.

Advantages of distributed MinIO

Data protection
Distributed MinIO uses erasure coding to guard against multiple node failures and bit rot. It requires at least 4 disks, and erasure coding is enabled automatically.

High availability
A stand-alone MinIO service has a single point of failure. In contrast, a distributed MinIO with N disks keeps your data safe as long as N/2 of the disks are online, but you need at least N/2+1 disks to create new objects.
For example, in a 16-node cluster with 16 disks per node, the cluster stays readable even if 8 servers go down, but at least 9 servers are needed to write data.

Consistency
In both distributed and stand-alone mode, all read and write operations in MinIO strictly follow the read-after-write consistency model.

Prepare the environment

Two nodes (add these entries to /etc/hosts on both servers, since the scripts below reference the hostnames):

192.168.10.159 minio-2
192.168.10.153 minio-1

4 hard drives per node

Note: there must be four disks and they must be empty, otherwise an error will be reported.

# create mount points
mkdir /mnt/mongo1
mkdir /mnt/mongo2
mkdir /mnt/mongo3
mkdir /mnt/mongo4
# partition the disks
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde
fdisk /dev/sdf
# format the partitions
mkfs.ext4 /dev/sdc1
mkfs.ext4 /dev/sdd1
mkfs.ext4 /dev/sde1
mkfs.ext4 /dev/sdf1
# mount
mount /dev/sdc1 /mnt/mongo2/
mount /dev/sdd1 /mnt/mongo3/
mount /dev/sde1 /mnt/mongo4/
mount /dev/sdf1 /mnt/mongo1/
# persist the mounts across reboots
echo "/dev/sdc1 /mnt/mongo2 ext4 defaults 0 0" >> /etc/fstab
echo "/dev/sdd1 /mnt/mongo3 ext4 defaults 0 0" >> /etc/fstab
echo "/dev/sde1 /mnt/mongo4 ext4 defaults 0 0" >> /etc/fstab
echo "/dev/sdf1 /mnt/mongo1 ext4 defaults 0 0" >> /etc/fstab

df -h
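
One caveat about the fstab entries above: /dev/sdX names can change across reboots, so referencing filesystems by UUID is safer. A hedged sketch:

blkid /dev/sdc1   # prints the partition's UUID
# then reference it in /etc/fstab instead of the device name, e.g.:
# UUID=<uuid-from-blkid> /mnt/mongo2 ext4 defaults 0 0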

Make sure the clocks are in sync

On CentOS, configure automatic time synchronization on every server.

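One way to do this on CentOS 7 is chrony (a sketch; ntpd works just as well):

yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
chronyc tracking   # verify the offset against the time source
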
Deployment script

Note: non-expandable and pseudo-distributed deployment methods are skipped here.

First start the cluster on the minio-1 node alone (we will expand it later).

minio_cluster.sh

chmod 755 minio_cluster.sh

#!/bin/bash
export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=12345678
MINIO_HOME=/home/minio/app
nohup ${MINIO_HOME}/minio server --address :9000 --console-address :8000 \
        http://minio-1/mnt/mongo{1...4} > ${MINIO_HOME}/minio.log 2>&1 &

Start the cluster

bash minio_cluster.sh

Verify
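
The console should come up at http://minio-1:8000. From the shell, a sketch using mc (the alias name cluster1 is arbitrary):

mc alias set cluster1 http://minio-1:9000 admin 12345678
mc admin info cluster1   # should report 1 server with 4 drives online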

Cluster expansion method

Peer-to-peer expansion

MinIO's minimalist design means a distributed cluster does not support adding a single node and rebalancing automatically: the data rebalancing and erasure-set re-division triggered by one new node would impose complex scheduling on the whole cluster and make it harder to maintain. Instead, MinIO offers peer-to-peer expansion, in which the number of nodes and disks added must match the original cluster.

Note: each zone you add must have the same erasure set size as the original zone in order to maintain the same data-redundancy SLA.
For example, if the first zone has 8 disks, you can expand the cluster with zones of 16, 32, or 1024 disks; just make sure the new zone's disk count is a multiple of the original zone's.

Add the new nodes to the minio-1 startup script

To expand, append the new nodes to the end of the server command and restart the cluster.

#!/bin/bash
export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=12345678
MINIO_HOME=/home/minio/app
nohup ${MINIO_HOME}/minio server --address :9000 --console-address :8000 \
        http://minio-1/mnt/mongo{1...4} http://minio-2/mnt/mongo{1...4} > ${MINIO_HOME}/minio.log 2>&1 &

Sync the startup script

Synchronize the startup script to the minio-2 node, then start the service on both nodes, as sketched below.
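
A sketch of that step, assuming root SSH access and the paths used above:

scp /home/minio/app/minio_cluster.sh root@minio-2:/home/minio/app/
# then, on each node:
bash /home/minio/app/minio_cluster.sh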

Verify
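
With both nodes up, the earlier mc check should reflect the new topology:

mc admin info cluster1   # should now report 2 servers and 8 drives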

Federation expansion

MinIO officially provides another expansion mechanism, federation: by introducing etcd, multiple MinIO distributed clusters are logically combined into one federation that serves requests as a whole under a unified namespace.

Advantages and disadvantages of federation expansion

Compared with peer-to-peer expansion, federation has three advantages: (1) the clusters in a federation do not need equal numbers of nodes and disks; (2) the federation can be expanded indefinitely by continually adding new clusters; (3) if one cluster in the federation fails, the failure does not stop the other clusters from serving. The disadvantage is that etcd must be introduced, and the configuration process is more complex.

Verified in practice, federated clusters have one flaw: when cluster A is full, only cluster A itself can be expanded, otherwise buckets that live on cluster A cannot take new objects; expanding cluster B does not help them.

Install etcd

Installation location: 192.168.10.153

yum install -y etcd

vi /etc/etcd/etcd.conf

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:3380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:3379"

#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd_minio"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.153:3380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.10.153:3379"

#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
 

Confirm that etcd installed successfully:

rpm -qa | grep etcd

If the etcd version information is displayed in the output, etcd is already installed.

Start the etcd service:

systemctl start etcd

Verify that the etcd service started successfully:

systemctl status etcd

If active (running) is displayed in the output, the etcd service started successfully.
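
You can also query etcd directly (an assumption: the yum-packaged etcd here is 3.x, where ETCDCTL_API=3 selects the v3 API; note the non-default client port 3379 configured above):

export ETCDCTL_API=3
etcdctl --endpoints=http://192.168.10.153:3379 endpoint health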

Cluster 1 startup script

minio_cluster.sh

#!/bin/bash
export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=12345678
MINIO_HOME=/home/minio/app

export MINIO_ETCD_ENDPOINTS="http://192.168.10.153:3379"
export MINIO_PUBLIC_IPS=192.168.10.153
export MINIO_DOMAIN=bo.test.com

nohup ${MINIO_HOME}/minio server --address :9000 --console-address :8000 \
        http://minio-1/mnt/mongo{1...4} > ${MINIO_HOME}/minio.log 2>&1 &

Cluster 2 startup script

minio_cluster.sh

#!/bin/bash
export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=12345678
MINIO_HOME=/home/minio/app

export MINIO_ETCD_ENDPOINTS="http://192.168.10.153:3379"
export MINIO_PUBLIC_IPS=192.168.10.159
export MINIO_DOMAIN=bo.test.com

nohup ${MINIO_HOME}/minio server --address :9000 --console-address :8000 \
        http://minio-2/mnt/mongo{1...4} > ${MINIO_HOME}/minio.log 2>&1 &

Configuration explanation

The MINIO_ETCD_ENDPOINTS parameter must list the client endpoints of all nodes in the etcd cluster you built;

The MINIO_PUBLIC_IPS parameter lists the IPs of all nodes in the local cluster;

The MINIO_DOMAIN parameter must be configured even if you do not access buckets through a domain name, otherwise the federation will not take effect. Only clusters with the same MINIO_DOMAIN value form a federation.

Verify

Create a bucket in cluster A; if it appears immediately in cluster B, the federation is working.
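
The same check from the shell, assuming mc aliases for both clusters (the names fedA/fedB and the bucket name are placeholders):

mc alias set fedA http://192.168.10.153:9000 admin 12345678
mc alias set fedB http://192.168.10.159:9000 admin 12345678
mc mb fedA/fed-test
mc ls fedB   # fed-test should be visible from cluster B as well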

Docker deployment with peer-to-peer expansion

Scenario

The cluster deployment above requires at least 4 empty disks per node, which takes a lot of resources. When resources are limited, you may still want a cluster that can expand its storage; Docker makes this possible.

A MinIO cluster installed with Docker can use plain folders instead of whole disks, whereas the standard cluster install requires an empty, dedicated disk for each data directory.
The cluster version still uses erasure coding to protect the data (even when the "disks" are only folders); the stand-alone version does not use erasure coding at all.
In production, one data directory per physical disk is recommended.

Server list

192.168.10.159, 192.168.10.160

Key points

1. The network must use --net=host mode. Port-mapping mode did not work in my experiments; my guess is that with port mapping, when each node reports its identity to the cluster, the IP it obtains is the container-internal IP, which does not match the data-address IPs in the parameters below.

2. If the data needs to persist, map the data directories and the configuration directory to the host.

3. The service port and console port can be customized with the --address and --console-address parameters.

4. The total number of disks should be a power of 2, and at least 4.

Build the Dockerfile

FROM centos:centos7.9.2009
WORKDIR /opt
# install wget (not included in the base image), then fetch the binary
RUN yum install -y wget && \
    wget https://dl.min.io/server/minio/release/linux-amd64/minio && \
    chmod +x minio
ENTRYPOINT ["./minio"]
CMD ["server", "--address", ":9000", "--console-address", ":9999", "http://10.22.1.27/data{1...4}"]

If the minio binary has already been downloaded, you can use:

FROM centos:centos7.9.2009
COPY ./minio /opt/
WORKDIR /opt
RUN chmod +x minio
ENTRYPOINT ["./minio"]
CMD ["server", "--address", ":9000", "--console-address", ":9999", "http://10.22.1.27/data{1...4}"]

Build the image

sudo docker build -t myminio .

Startup script

docker run --name minio1 -d \
    --restart=always \
    --net=host \
    -e MINIO_ACCESS_KEY=minioadmin \
    -e MINIO_SECRET_KEY=minioadmin \
    -v /data1:/data1 \
    -v /data2:/data2 \
    -v /data3:/data3 \
    -v /data4:/data4 \
    myminio server \
    --address :29000 \
    --console-address :29001 \
    http://192.168.10.159/data{1...4} http://192.168.10.160/data{1...4}

Notes:

1. The minio binary is fetched with wget above instead of using the default minio/minio image because experiments showed the {1...4} expansion syntax reported errors with the minio/minio image, so the latest binary is used directly.

2. If a mapped directory already exists with leftover content, startup may fail. The safest approach is to make sure the directory does not exist and let Docker create it automatically at startup.

A reverse expansion idea

Mount as many directories as possible up front. When the data directory is almost full, mv some of the data directories to a new empty disk, delete the container, adjust the mount paths in the startup script, and restart. Splitting in half each time gives roughly log2(N) expansion opportunities: about 5 with 32 mounted directories, 4 with 16, and 3 with 8. When there is no room left to split, add machine nodes.

For example, first mount 16 directories

docker run --name minio1 -d \
    --restart=always \
    --net=host \
    -e MINIO_ACCESS_KEY=minioadmin \
    -e MINIO_SECRET_KEY=minioadmin \
    -v /mnt/mongo1/data1:/data1 \
    -v /mnt/mongo1/data2:/data2 \
    -v /mnt/mongo1/data3:/data3 \
    -v /mnt/mongo1/data4:/data4 \
    -v /mnt/mongo1/data5:/data5 \
    -v /mnt/mongo1/data6:/data6 \
    -v /mnt/mongo1/data7:/data7 \
    -v /mnt/mongo1/data8:/data8 \
    -v /mnt/mongo1/data9:/data9 \
    -v /mnt/mongo1/data10:/data10 \
    -v /mnt/mongo1/data11:/data11 \
    -v /mnt/mongo1/data12:/data12 \
    -v /mnt/mongo1/data13:/data13 \
    -v /mnt/mongo1/data14:/data14 \
    -v /mnt/mongo1/data15:/data15 \
    -v /mnt/mongo1/data16:/data16 \
    myminio server \
    --address :29000 \
    --console-address :29001 \
    http://192.168.10.159:29000/data{1...16}

When the disk holding /mnt/mongo1 is almost full, move data9 through data16 to a new disk:

mv /mnt/mongo1/data9 /total_min/
mv /mnt/mongo1/data10 /total_min/
mv /mnt/mongo1/data11 /total_min/
mv /mnt/mongo1/data12 /total_min/
mv /mnt/mongo1/data13 /total_min/
mv /mnt/mongo1/data14 /total_min/
mv /mnt/mongo1/data15 /total_min/
mv /mnt/mongo1/data16 /total_min/

Delete the container, modify the startup script, and restart the container; this achieves the expansion effect and has been verified experimentally.
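
Removing the old container is one command (assuming it is named minio1 as above):

docker rm -f minio1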

docker run --name minio1 -d \
    --restart=always \
    --net=host \
    -e MINIO_ACCESS_KEY=minioadmin \
    -e MINIO_SECRET_KEY=minioadmin \
    -v /mnt/mongo1/data1:/data1 \
    -v /mnt/mongo1/data2:/data2 \
    -v /mnt/mongo1/data3:/data3 \
    -v /mnt/mongo1/data4:/data4 \
    -v /mnt/mongo1/data5:/data5 \
    -v /mnt/mongo1/data6:/data6 \
    -v /mnt/mongo1/data7:/data7 \
    -v /mnt/mongo1/data8:/data8 \
    -v /total_min/data9:/data9 \
    -v /total_min/data10:/data10 \
    -v /total_min/data11:/data11 \
    -v /total_min/data12:/data12 \
    -v /total_min/data13:/data13 \
    -v /total_min/data14:/data14 \
    -v /total_min/data15:/data15 \
    -v /total_min/data16:/data16 \
    myminio server \
    --address :29000 \
    --console-address :29001 \
    http://192.168.10.159:29000/data{1...16}

Alternatively, you can keep the container: after moving the files, use ln -s to create symlinks from the original locations to the new disk (in this example the new disk is mounted at /mnt/mongo2):

 ln -s /mnt/mongo2/data9 /mnt/mongo1/data9
 ln -s /mnt/mongo2/data10 /mnt/mongo1/data10
 ln -s /mnt/mongo2/data11 /mnt/mongo1/data11
 ln -s /mnt/mongo2/data12 /mnt/mongo1/data12
 ln -s /mnt/mongo2/data13 /mnt/mongo1/data13
 ln -s /mnt/mongo2/data14 /mnt/mongo1/data14
 ln -s /mnt/mongo2/data15 /mnt/mongo1/data15
 ln -s /mnt/mongo2/data16 /mnt/mongo1/data16

This way you can restart the container without modifying the docker startup command at all.

To test hard disk usage, fallocate is a very handy command; it instantly allocates a file of the given size:

fallocate -l 8G /mnt/mongo1/test2.zip

