Docker from entry to application

Three months have passed in a flash. I didn't originally plan to write about CSDN, but thinking back, the first article I ever published was on the CSDN platform. My current focus is operations and maintenance (O&M) development. Every IT person has a full-stack dream, so let's keep setting sail.


1. Docker (CentOS) quick installation
_docker_centos(){
#install the yum-utils helper
yum install -y yum-utils
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
#install Docker Engine, the CLI, containerd and the compose plugin
yum install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
}
_docker_centos
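
The script above only installs the packages; on a fresh CentOS host the daemon still has to be started. A minimal follow-up, assuming systemd (not part of the original script), might look like this:

#enable Docker at boot and start it now
systemctl enable --now docker
#confirm the client can talk to the daemon
docker version
docker run --rm hello-world
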
2. Docker (Ubuntu) quick installation
_docker_install(){
#remove old docker versions
sudo apt-get remove docker docker-engine docker.io containerd runc
#update the apt repositories and make sure the https transport and CA certificate packages are installed
sudo apt-get update
sudo apt-get install \
        ca-certificates \
        apt-transport-https \
        curl \
        gnupg \
        lsb-release \
        vim
#add Docker's official GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
#set up the stable repository (use the nightly or test repository here instead if you need it)
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
#install Docker Engine, the CLI, containerd and the compose plugin
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl enable docker.service
sudo systemctl start docker.service
#configure a registry mirror (writing to /etc/docker requires root)
sudo tee /etc/docker/daemon.json > /dev/null <<EOF
{
          "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn/"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
}
_docker_install
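
A quick sanity check that the installation and the mirror configuration took effect (not part of the original script):

sudo docker version
#the mirror configured in daemon.json should appear under "Registry Mirrors"
sudo docker info | grep -A1 "Registry Mirrors"
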
3. docker-compose installation
docker-compose_install(){
#download the docker-compose 1.25.4 binary (DaoCloud mirror) and make it executable
curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.4/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x  /usr/local/bin/docker-compose 
}
docker-compose_install
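
To confirm the binary is on the PATH and runnable, it should report the 1.25.4 version installed above:

docker-compose --version
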
4. Docker installs Elasticsearch

Elasticsearch is a search engine built on the full-text search library Apache Lucene; Lucene is arguably the most advanced and efficient full-featured open-source search framework available today.
It was developed by Shay Banon and first released in 2010, and is now maintained by Elastic (Elasticsearch BV). The version used in this article is 7.6.2.
Elasticsearch is a real-time, distributed, open-source full-text search and analytics engine. It is accessed through a RESTful web interface and stores data as schema-less JSON (JavaScript Object Notation) documents. Because it is written in Java, Elasticsearch runs on many different platforms and lets users search very large amounts of data very quickly.

ElasticSearch advantages:
  • Developed in Java, which makes it compatible with almost every platform.
  • Near real time: a document becomes searchable about one second after it is added.
  • Distributed, which makes it easy to scale and integrate in any large organization.
  • Creating a full backup is easy using the gateway concept in Elasticsearch.
  • Handling multi-tenancy is very easy compared to Apache Solr.
  • Responses are JSON objects, so the Elasticsearch server can be called from almost any programming language.
  • Supports almost all document types, except document types used purely for text rendering.

Elasticsearch needs at least about 1.2 GiB of memory to run, so if memory is tight you can tune the JVM heap through the -e flag; add the ES_JAVA_OPTS parameters below as needed.
Xms #initial heap size
Xmx #maximum heap size
docker run -d --name elasticsearch  -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx1000m" elasticsearch:7.6.2

Access from a browser: http://<ip address>:9200

{
  "name" : "8fecaad4e721",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "qUtP0qneSiqbn2SS69VQ6A",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Access: http://<ip address>:9200/_cat/

/_cat/allocation
/_cat/shards
/_cat/shards/{index}
/_cat/master
/_cat/nodes
/_cat/tasks
/_cat/indices
/_cat/indices/{index}
/_cat/segments
/_cat/segments/{index}
/_cat/count
/_cat/count/{index}
/_cat/recovery
/_cat/recovery/{index}
/_cat/health
/_cat/pending_tasks
/_cat/aliases
/_cat/aliases/{alias}
/_cat/thread_pool
/_cat/thread_pool/{thread_pools}
/_cat/plugins
/_cat/fielddata
/_cat/fielddata/{fields}
/_cat/nodeattrs
/_cat/repositories
/_cat/snapshots/{repository}
/_cat/templates
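
Any of these endpoints can be queried with curl; the v query parameter adds column headers. For example (substitute your host's IP address):

curl "http://<ip address>:9200/_cat/health?v"
curl "http://<ip address>:9200/_cat/indices?v"
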
5. Docker installs vscode (code-server)
# DOMAIN is strongly recommended: set it to a domain name or IP; once set, plugins such as Draw.io are supported
# PASSWORD: put the login password inside the quotes
__run_vscode() {
    #remove any old container and its cached code-server state, then pull the image
    docker rm -f vscode
    rm -rf /data/docker-data/vscode/.local/share/code-server
    docker pull registry.cn-hangzhou.aliyuncs.com/lwmacct/code-server:v3.12.0-shell
    docker run -itd --name=vscode \
        --hostname=code \
        --restart=always \
        --privileged=true \
        --net=host \
        -v /proc:/host \
        -v /docker/vscode:/root/ \
        -e DOMAIN='' \
        -e PASSWORD="123456" \
        registry.cn-hangzhou.aliyuncs.com/lwmacct/code-server:v3.12.0-shell
}
__run_vscode
6. Docker deploys 100 nginx services

Start 100 nginx services in a loop

 for i in $(seq 0 99);do docker run -itd -p 80$i:80 --name=nginx$i --privileged nginx:latest ;done


View the IP addresses of the one hundred nginx containers

for i in $(docker ps|grep -aiE nginx|awk '{print $1}');do docker inspect $i |grep -aiE ipaddr|tail -1|grep -aiowE "([0-9]{1,3}\.){3}[0-9]{1,3}";done


Extract each container's ID and IP address and print them on one line together with the sizing info (2C 2G 40GB Nginx)

for i in $(docker ps|grep -aiE nginx|awk '{print $1}');do echo $i; docker inspect $i |grep -aiE ipaddr|tail -1|grep -aiowE "([0-9]{1,3}\.){3}[0-9]{1,3}";done|sed 'N;s/\n/ /g'|awk '{print NR,$0" 2C 2G 40GB Nginx"}'
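
When you are done experimenting, the hundred containers can be removed with the same kind of loop (this force-removes them, so only run it against the test containers created above):

for i in $(seq 0 99);do docker rm -f nginx$i ;done
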



7. Docker installs an EFK distributed log platform


Usually we use the ELK distributed log management stack (Elasticsearch, Logstash, Kibana, all open-source software):

  • Elasticsearch is a distributed search engine that collects and stores data and provides search with automatic load balancing
  • Logstash is a tool for collecting, parsing, and filtering logs and supports a large number of input methods. The usual deployment is a client/server architecture: the client is installed on every host whose logs need to be collected, and the server filters and transforms the logs received from each node before sending them to Elasticsearch.
  • Kibana is also an open-source and free tool. It provides a friendly web interface for log analysis on top of Logstash and Elasticsearch and helps aggregate, analyze, and search important log data

Here we use EFK instead, where F is Filebeat, which collects data from log files.

Filebeat compared with Logstash:
Logstash is written in Java (its plugins in JRuby) and consumes more resources.
Filebeat is written in Go; it is very lightweight and uses few system resources, but Beats has fewer plugins than Logstash.

Docker deployment of EFK

#Pull the images, preferably version 7 or above
#Keep the Elasticsearch, Filebeat, and Kibana versions consistent

Pull three images

#pull the three images
docker pull elasticsearch:7.6.2
docker pull kibana:7.6.2
docker pull elastic/filebeat:7.6.2

Elasticsearch configuration

#start ES
docker run -d --name elasticsearch  -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx1000m" elasticsearch:7.6.2
#fix CORS issues: append two lines to the config file inside the container
docker exec -it elasticsearch bash -c "echo -e 'http.cors.enabled: true\nhttp.cors.allow-origin: \"*\"' >>  config/elasticsearch.yml"
#restart the container so the new settings take effect
docker restart elasticsearch

kibana configuration

#start kibana
#elasticsearch is the ES container name; --link connects the two containers so they can reach each other (Kibana needs to pull data from Elasticsearch)
docker run --link elasticsearch:elasticsearch -p 5601:5601 -d --name kibana kibana:7.6.2
#edit kibana's yml file so it can find ES, and switch the UI language to Chinese
docker exec  -it kibana /bin/bash
vim config/kibana.yml 
#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN"
#restart kibana
docker restart kibana

Filebeat configuration
Filebeat configuration help

#Filebeat configuration fields (yml format)
- type		#input type; in this example the input is log files
- enabled	#whether this input is enabled, usually true
- paths		#file paths; several can be configured. Directories are not searched recursively: with a pattern like /var/log/*/*.log only ".log" files in the immediate subdirectories of /var/log are picked up, not ".log" files directly under /var/log itself.
- multiline	#multi-line merge settings, e.g. for stack traces, where several log lines should be combined into one event.
- setup.template #template settings; filebeat loads a default template. Specify name and the matching pattern here when you want your own index name
- setup.ilm.enabled 			#if the template name does not take effect, this switch has to be turned off
- output.elasticsearch.hosts 	#the ES address to output to
- output.elasticsearch.index	#the ES index name to output to; this works together with setup.template.name and setup.template.pattern above
- setup.kibana.host				#the Kibana address. Normally filebeat sends data straight to ES and Kibana only displays it, so this is optional, but it is required if you want to use Kibana dashboards and similar features

My configuration file

#create the filebeat.docker.yml file
cd /usr/src/
vim filebeat.docker.yml
filebeat.inputs:
# multiple backend services, each with a different log directory; give each log directory its own tag
- type: log
  # change to true to enable this input configuration
  enabled: true
  # fix garbled Chinese characters
  encoding: GB2312
  # extra fields used to filter log data
  # You can add fields that can be used to filter log data. Fields can be scalar values, arrays,
  # dictionaries, or any nested combination of these. By default, the fields you specify here are
  # grouped under a "fields" sub-dictionary in the output document. To store custom fields as
  # top-level fields, set the fields_under_root option to true. If a duplicate field is declared in
  # the general configuration, its value is overwritten by the value declared here.
  fields:
    type: s1
  paths:
    #- /var/log/*.logs
    #- c:\programdata\elasticsearch\logs\*
    - G:\\s1.log
- type: log
  # change to true to enable this input configuration
  enabled: true
  # fix garbled Chinese characters
  encoding: GB2312
  # extra fields used to filter log data (same notes as for the first input above)
  fields:
    type: s2
  # Paths that should be crawled and fetched. Glob based paths.
  # paths is also an array (note the leading "-"); each entry specifies a log path.
  paths:
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
    - G:\s2.log
  # pattern: the pattern that matches the first line of a multi-line log entry
  # negate: whether the pattern condition should be negated
  # match: whether matching lines are merged with the content before (before) or after (after) them
  multiline.pattern: '^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}|^\[|^[[:space:]]+(at|\.{3})\b|^Caused by:'
  multiline.negate: false
  multiline.match: after
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1 #number of shards for the index
  # the two lines below were misplaced here in the original config; the actual Elasticsearch
  # output is configured in the Outputs section further down
  #output.elasticsearch:
  #hosts: ["127.0.0.1:9300"]
  ## output to the console instead
   #if filebeat shuts down abnormally (e.g. power loss) it may flip enabled: true to false in this yml file on its own, so check it
   #Filebeat console (stdout): filebeat writes the collected data to the console; generally used only for debugging in development
 #output.console:
 #   pretty: true
 #   enable: true
  #index.codec: best_compression
  #_source.enabled: false
# =================================== Kibana ===================================
setup.kibana:
  host: "127.0.0.1:5601"
# ================================== Outputs ==================================
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["127.0.0.1:9200"]
  indices:
  # NOTE: these index names must match what gets created in ES. Use a common prefix or suffix so
  # that several services can be queried together and one Kibana index pattern can match them all.
    - index: "s_1_%{+yyyy.MM.dd}"
      when.equals:
         #corresponds to the fields.type value set under filebeat.inputs above
         fields.type: "s1"
    - index: "s_2_%{+yyyy.MM.dd}"
      when.equals:
         fields.type: "s2"
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"
   #indices:
processors:
#fields to drop from each event
- drop_fields:  
    fields: ["input_type", "log.offset", "host.name", "input.type", "agent.hostname", "agent.type", "ecs.version", "agent.ephemeral_id", "agent.id", "agent.version", "fields.ics", "log.file.path", "log.flags", "host.os.version", "host.os.platform","host.os.family", "host.os.name", "host.os.kernel", "host.os.codename", "host.id", "host.containerized", "host.hostname", "host.architecture"]
#start filebeat
docker run -d -v /usr/src/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml -v /var/log/es/:/var/log/es/ --link elasticsearch:kibana --name filebeat elastic/filebeat:7.6.2
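
Once Filebeat is running, a quick way to check that events are arriving is to list the indices on Elasticsearch and look for the s_1_* / s_2_* names configured above (assuming ES is reachable on 127.0.0.1:9200 from where you run curl):

curl "http://127.0.0.1:9200/_cat/indices?v" | grep -E "s_1_|s_2_"
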
8. Docker visual container management

Portainer is a visual Docker management interface. It provides status dashboards, quick deployment from application templates, basic operations on containers, images, networks and data volumes (including uploading and downloading images, creating containers, and so on), event log display, a container console, centralized management and operation of Swarm clusters, login user management and access control, and more. The feature set is very complete and can basically cover all the container-management needs of a small or medium-sized organization.
Rancher is an open-source software platform that enables organizations to run and manage Docker and Kubernetes in production. With Rancher, organizations no longer need to build a container service platform from scratch out of a unique set of open-source technologies; Rancher provides the entire software stack needed to manage containers in production.

#--privileged=true 	#grant this container extended privileges
#--restart=always 	#restart the container automatically with the Docker daemon
docker run -d -p 9000:9000 \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
--privileged=true \
--name portainer \
docker.io/portainer/portainer

After setting the admin password, click the local connection entry to reach the home page. Operations on the web platform will be covered later.


9. Docker Redis distributed cluster
9.2 Three masters and three slaves based on hash slot partition

Hash slot introduction:
A hash slot is essentially an array; the range [0, 2^14 - 1] forms the hash slot space.

Why?
To distribute data evenly, another layer, the hash slot, is added between the data and the nodes to manage the relationship between them.

Hashing solves the mapping problem: the hash value of a key determines the slot it lives in, which makes data movement easy.

The number of slots
A cluster has only 2^14 = 16384 slots, numbered 0 to 16383. The slots are assigned to the master nodes by number, and the cluster records the mapping between nodes and slots.
For each key a hash is computed and taken modulo 16384: slot = CRC16(key) % 16384 decides which slot the key is placed in.

Why 16384 slots rather than 65536?
1. With 65536 slots, the heartbeat message header would reach 8 KB, which is too large and wastes bandwidth.
2. A Redis cluster with more than 1000 master nodes is basically unrealistic (it would cause network congestion), so 16384 slots are plenty.
3. The fewer the slots, the higher the compression ratio of the slot bitmap when there are few nodes, which makes it easier to transmit.
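
You can ask Redis directly which slot a key hashes to: the CLUSTER KEYSLOT command performs exactly the CRC16 % 16384 calculation described above. Run against the cluster built in 9.3 (node port assumed from that section), k1 lands in slot 12706, which is why the writes in 9.4 get redirected to the 6373 node:

redis-cli -p 6371 cluster keyslot k1
#(integer) 12706
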

9.3 docker redis three-master three-slave deployment

The environment is a single server; everything is brought up in seconds as Docker containers.
IP address: 192.168.142.128

redis cluster configuration

#disable the firewall and SELinux
systemctl disable --now firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config

#start six redis services with docker in seconds
#--cluster-enabled turns on cluster mode; --appendonly enables AOF persistence
for i in $(seq 1 6);do docker run -d --name redis-node-$i --net host --privileged=true -v /mydata/redis/redis-node$i:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 637$i;done

#check the container status
docker ps -a
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS     NAMES
c9b8b822a86a   redis:6.0.8   "docker-entrypoint.s…"   41 seconds ago   Up 41 seconds             redis-node-6
4a6f36a1c8b7   redis:6.0.8   "docker-entrypoint.s…"   41 seconds ago   Up 41 seconds             redis-node-5
98390020eaa3   redis:6.0.8   "docker-entrypoint.s…"   42 seconds ago   Up 41 seconds             redis-node-4
4f9436b43bc6   redis:6.0.8   "docker-entrypoint.s…"   42 seconds ago   Up 41 seconds             redis-node-3
4aeb79c8dd54   redis:6.0.8   "docker-entrypoint.s…"   42 seconds ago   Up 42 seconds             redis-node-2
568e4c387b0a   redis:6.0.8   "docker-entrypoint.s…"   42 seconds ago   Up 42 seconds             redis-node-1


#set up the cluster relationships
docker exec -it redis-node-1 /bin/bash
#--cluster-replicas 1 means create one slave for every master
redis-cli --cluster create 192.168.142.128:6371 192.168.142.128:6372 192.168.142.128:6373 192.168.142.128:6374 192.168.142.128:6375 192.168.142.128:6376 --cluster-replicas 1
#the following output is shown
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.142.128:6375 to 192.168.142.128:6371
Adding replica 192.168.142.128:6376 to 192.168.142.128:6372
Adding replica 192.168.142.128:6374 to 192.168.142.128:6373
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: e6a761497e0fe78c066db91e32449ff5b71e29ae 192.168.142.128:6371
   slots:[0-5460] (5461 slots) master
M: e7fd3352b9999ba0e75c9972971652e3d694eca3 192.168.142.128:6372
   slots:[5461-10922] (5462 slots) master
M: 84bcd2ee94cc91edcb1b4298b2a280b4f1588a52 192.168.142.128:6373
   slots:[10923-16383] (5461 slots) master
S: 267df70ab24d26946ef0b9ddde5dc34f6a4eb83b 192.168.142.128:6374
   replicates 84bcd2ee94cc91edcb1b4298b2a280b4f1588a52
S: 6f09a00eb85e485ba2da608bc106a8b8641647e4 192.168.142.128:6375
   replicates e6a761497e0fe78c066db91e32449ff5b71e29ae
S: 551c2fa30f0c0b72843cd51097c74539981c8e93 192.168.142.128:6376
   replicates e7fd3352b9999ba0e75c9972971652e3d694eca3
Can I set the above configuration? (type 'yes' to accept): yes         #type yes here
M: 84bcd2ee94cc91edcb1b4298b2a280b4f1588a52 192.168.142.128:6373
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.                #all slots covered, done

Enter the 6371 master node to view the cluster status

redis-cli -p 6371

#view the cluster information
127.0.0.1:6371> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384     					#16384 slots in total
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6								#6 known nodes
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:104
cluster_stats_messages_pong_sent:108
cluster_stats_messages_sent:212
cluster_stats_messages_ping_received:108
cluster_stats_messages_pong_received:104
cluster_stats_messages_received:212

#view how the master and slave nodes are assigned
127.0.0.1:6371> cluster nodes
267df70ab24d26946ef0b9ddde5dc34f6a4eb83b 192.168.142.128:6374@16374 slave 84bcd2ee94cc91edcb1b4298b2a280b4f1588a52 0 1658891898700 3 connected
551c2fa30f0c0b72843cd51097c74539981c8e93 192.168.142.128:6376@16376 slave e7fd3352b9999ba0e75c9972971652e3d694eca3 0 1658891898000 2 connected
6f09a00eb85e485ba2da608bc106a8b8641647e4 192.168.142.128:6375@16375 slave e6a761497e0fe78c066db91e32449ff5b71e29ae 0 1658891897000 1 connected
e6a761497e0fe78c066db91e32449ff5b71e29ae 192.168.142.128:6371@16371 myself,master - 0 1658891898000 1 connected 0-5460
e7fd3352b9999ba0e75c9972971652e3d694eca3 192.168.142.128:6372@16372 master - 0 1658891899708 2 connected 5461-10922
84bcd2ee94cc91edcb1b4298b2a280b4f1588a52 192.168.142.128:6373@16373 master - 0 1658891897693 3 connected 10923-16383

One-to-one correspondence between Redis masters and slaves:

Master Slave
1 ----> 5
2 ----> 6
3 ----> 4

9.4 Redis master-slave switch migration case
#the correct way to insert data: connect to redis-node-1 in cluster mode (-c)
redis-cli -p 6371 -c
127.0.0.1:6371> FLUSHALL      #clear all data (optional)
OK
127.0.0.1:6371> set k1 v1      #insert one key
-> Redirected to slot [12706] located at 192.168.142.128:6373   
OK
192.168.142.128:6373>     #slot 12706 belongs to node 6373, so the client jumps straight to 6373


#insert 10 keys
redis-cli -p 6371 -c
127.0.0.1:6371> set k1 v1                   #all writes are handled by the three masters; the three slaves are read-only copies
-> Redirected to slot [12706] located at 192.168.142.128:6373
OK
192.168.142.128:6373> set k2 v2
-> Redirected to slot [449] located at 192.168.142.128:6371
OK
192.168.142.128:6371> set k3 v3
OK
192.168.142.128:6371> set k4 v4
-> Redirected to slot [8455] located at 192.168.142.128:6372
OK
192.168.142.128:6372> set k5 v5
-> Redirected to slot [12582] located at 192.168.142.128:6373
OK
192.168.142.128:6373> set k6 v6
-> Redirected to slot [325] located at 192.168.142.128:6371
OK
192.168.142.128:6371> set k7 v7
OK
192.168.142.128:6371> set k8 v8
-> Redirected to slot [8331] located at 192.168.142.128:6372
OK
192.168.142.128:6372> set k9 v9
-> Redirected to slot [12458] located at 192.168.142.128:6373
OK
192.168.142.128:6373> set k10 v10


#view the data on master 6371
root@k8s-master:/data# redis-cli -p 6371 -c
127.0.0.1:6371> keys *
1) "k7"
2) "k6"
3) "k3"
4) "k2"

#check on slave 6375 whether the data has been replicated
root@k8s-master:/data# redis-cli -p 6375 -c
127.0.0.1:6375> keys *
1) "k6"
2) "k3"
3) "k2"
4) "k7"

Redis master-slave switch

#simulate node 6371 going down
docker stop redis-node-1

#connect to 6375, the slave of the failed master
root@k8s-master:/data# redis-cli -p 6375 -c
127.0.0.1:6375> cluster nodes
6f09a00eb85e485ba2da608bc106a8b8641647e4 192.168.142.128:6375@16375 master - 0 1658898284597 7 connected 0-5460    #6375 has been promoted to master
551c2fa30f0c0b72843cd51097c74539981c8e93 192.168.142.128:6376@16376 slave e7fd3352b9999ba0e75c9972971652e3d694eca3 0 1658898283587 2 connected
e7fd3352b9999ba0e75c9972971652e3d694eca3 192.168.142.128:6372@16372 myself,master - 0 1658898282000 2 connected 5461-10922
84bcd2ee94cc91edcb1b4298b2a280b4f1588a52 192.168.142.128:6373@16373 master - 0 1658898282579 3 connected 10923-16383
267df70ab24d26946ef0b9ddde5dc34f6a4eb83b 192.168.142.128:6374@16374 slave 84bcd2ee94cc91edcb1b4298b2a280b4f1588a52 0 1658898284000 3 connected
e6a761497e0fe78c066db91e32449ff5b71e29ae 192.168.142.128:6371@16371 master,fail - 1658897948468 1658897944000 1 disconnected   #6371 is now marked master,fail

#as you can see, node 6375 can now handle writes
127.0.0.1:6375> set k12 v12
OK

As we can see, after node 6371 went down, node 6375 was promoted from slave to master.
Now think about this: if 6371 comes back, will it immediately become the master again, with 6375 dropping back to slave? It will not.

#bring node 6371 back up
docker start redis-node-1
#check the cluster state: clearly node 6371 is now in the myself,slave state
docker exec -it redis-node-1 /bin/bash
root@k8s-master:/data# cat nodes.conf 
84bcd2ee94cc91edcb1b4298b2a280b4f1588a52 192.168.142.128:6373@16373 master - 0 1658898881521 3 connected 10923-16383
e7fd3352b9999ba0e75c9972971652e3d694eca3 192.168.142.128:6372@16372 master - 0 1658898881521 2 connected 5461-10922
267df70ab24d26946ef0b9ddde5dc34f6a4eb83b 192.168.142.128:6374@16374 slave 84bcd2ee94cc91edcb1b4298b2a280b4f1588a52 0 1658898881521 3 connected
6f09a00eb85e485ba2da608bc106a8b8641647e4 192.168.142.128:6375@16375 master - 0 1658898881521 7 connected 0-5460
551c2fa30f0c0b72843cd51097c74539981c8e93 192.168.142.128:6376@16376 slave e7fd3352b9999ba0e75c9972971652e3d694eca3 1658898881521 1658898881520 2 connected
e6a761497e0fe78c066db91e32449ff5b71e29ae 192.168.142.128:6371@16371 myself,slave 6f09a00eb85e485ba2da608bc106a8b8641647e4 0 1658898881520 7 connected

Usually we still want 6371 to be the master and 6375 the slave, so we simply stop 6375 briefly and then start it again.

#stop node 6375, wait, then start it again
docker stop redis-node-5
sleep 10
docker start redis-node-5

#check the cluster state from node 6371
docker exec -it redis-node-1 /bin/bash
root@k8s-master:/data# redis-cli -p 6371 -c
127.0.0.1:6371> cluster nodes
e7fd3352b9999ba0e75c9972971652e3d694eca3 192.168.142.128:6372@16372 master - 0 1658900310064 2 connected 5461-10922
6f09a00eb85e485ba2da608bc106a8b8641647e4 192.168.142.128:6375@16375 slave e6a761497e0fe78c066db91e32449ff5b71e29ae 0 1658900309054 8 connected
84bcd2ee94cc91edcb1b4298b2a280b4f1588a52 192.168.142.128:6373@16373 master - 0 1658900311073 3 connected 10923-16383
267df70ab24d26946ef0b9ddde5dc34f6a4eb83b 192.168.142.128:6374@16374 slave 84bcd2ee94cc91edcb1b4298b2a280b4f1588a52 0 1658900308045 3 connected
e6a761497e0fe78c066db91e32449ff5b71e29ae 192.168.142.128:6371@16371 myself,master - 0 1658900309000 8 connected 0-5460
551c2fa30f0c0b72843cd51097c74539981c8e93 192.168.142.128:6376@16376 slave e7fd3352b9999ba0e75c9972971652e3d694eca3 0 1658900309000 2 connected

Clearly, node 6371 is back in the myself,master state and node 6375 has returned to being a slave.

9.5 Master-slave expansion case

Requirement: when high-concurrency traffic floods in and three masters with three slaves can no longer cope, what do we do?
On top of the three-master/three-slave cluster, add one more master and one more slave to absorb the traffic.
The main steps for going from three masters and three slaves to four masters and four slaves:
1. Start two new Redis nodes (that is, two more Redis services).
2. Add the new master node to the cluster and reallocate the slots.
3. Attach the new slave node to the new master.

for i in $(seq 7 8);do docker run -d --name redis-node-$i --net host --privileged=true -v /mydata/redis/redis-node$i:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 637$i ;done
[root@k8s-master ~]# docker ps -a
CONTAINER ID   IMAGE         COMMAND                  CREATED         STATUS          PORTS     NAMES
88ce7c01b591   redis:6.0.8   "docker-entrypoint.s…"   6 minutes ago   Up 6 minutes              redis-node-8
7adb00fc0187   redis:6.0.8   "docker-entrypoint.s…"   6 minutes ago   Up 6 minutes              redis-node-7
c9b8b822a86a   redis:6.0.8   "docker-entrypoint.s…"   26 hours ago    Up 3 hours                redis-node-6
4a6f36a1c8b7   redis:6.0.8   "docker-entrypoint.s…"   26 hours ago    Up 27 minutes             redis-node-5
98390020eaa3   redis:6.0.8   "docker-entrypoint.s…"   26 hours ago    Up 3 hours                redis-node-4
4f9436b43bc6   redis:6.0.8   "docker-entrypoint.s…"   26 hours ago    Up 3 hours                redis-node-3
4aeb79c8dd54   redis:6.0.8   "docker-entrypoint.s…"   26 hours ago    Up 3 hours                redis-node-2
568e4c387b0a   redis:6.0.8   "docker-entrypoint.s…"   26 hours ago    Up 34 minutes             redis-node-1

Node 6377 (with empty slots) joins the existing cluster as a master (new master node)

#enter the redis-node-7 container
docker exec -it redis-node-7 /bin/bash

#add-node adds a node; 6371 acts as the sponsor that introduces 6377 into the cluster as a new master
redis-cli --cluster add-node 192.168.142.128:6377 192.168.142.128:6371
....
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.142.128:6377 to make it join the cluster.
[OK] New node added correctly.     #the new node has joined

#check the cluster state from port 6377
root@k8s-master:/data# redis-cli --cluster check 192.168.142.128:6377
192.168.142.128:6377 (2d932fa6...) -> 0 keys | 0 slots | 0 slaves.
192.168.142.128:6373 (84bcd2ee...) -> 5 keys | 5461 slots | 1 slaves.
192.168.142.128:6371 (e6a76149...) -> 5 keys | 5461 slots | 1 slaves.
192.168.142.128:6372 (e7fd3352...) -> 3 keys | 5462 slots | 1 slaves.
[OK] 13 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.142.128:6377)
M: 2d932fa60af3b3658e7dd5cf74cfaac66e068cef 192.168.142.128:6377
   slots: (0 slots) master         #already a master, but with 0 slots
M: 84bcd2ee94cc91edcb1b4298b2a280b4f1588a52 192.168.142.128:6373
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: e6a761497e0fe78c066db91e32449ff5b71e29ae 192.168.142.128:6371
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 267df70ab24d26946ef0b9ddde5dc34f6a4eb83b 192.168.142.128:6374
   slots: (0 slots) slave
   replicates 84bcd2ee94cc91edcb1b4298b2a280b4f1588a52
S: 6f09a00eb85e485ba2da608bc106a8b8641647e4 192.168.142.128:6375
   slots: (0 slots) slave
   replicates e6a761497e0fe78c066db91e32449ff5b71e29ae
S: 551c2fa30f0c0b72843cd51097c74539981c8e93 192.168.142.128:6376
   slots: (0 slots) slave
   replicates e7fd3352b9999ba0e75c9972971652e3d694eca3
M: e7fd3352b9999ba0e75c9972971652e3d694eca3 192.168.142.128:6372
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.

Redistribute slots
reshard #re-allocate slots across the master nodes
After a new node joins, the slots must be re-planned: 16384 / number of master nodes = the number of slots each master should hold.

Checking the cluster information after the reshard shows that the slots now held by the 6377 node are slots handed over by the other three masters, rather than a freshly recalculated allocation.
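
The original screenshots show this being done interactively with redis-cli's reshard command. A sketch of the same step, assuming the node ID of 6377 from the cluster check above and an even split of 16384 slots across four masters:

#run the interactive reshard against any cluster node
redis-cli --cluster reshard 192.168.142.128:6371
How many slots do you want to move (from 1 to 16384)? 4096                 #16384 / 4 masters
What is the receiving node ID? 2d932fa60af3b3658e7dd5cf74cfaac66e068cef    #the new master 6377
Source node #1: all                                                        #take slots evenly from all existing masters
Do you want to proceed with the proposed reshard plan (yes/no)? yes
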

Add node 6378 as a slave of the new master 6377, then check the cluster again; a sketch of the commands is shown below.
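
The original screenshots show the add-node command with the --cluster-slave option. Assuming 6377's node ID from the check above, the step would look roughly like this:

#join 6378 as a slave of the master whose ID is given
redis-cli --cluster add-node 192.168.142.128:6378 192.168.142.128:6371 --cluster-slave --cluster-master-id 2d932fa60af3b3658e7dd5cf74cfaac66e068cef
#verify: 6377 should now show "1 additional replica(s)"
redis-cli --cluster check 192.168.142.128:6371
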
At this point the expansion to four masters and four slaves is complete!

9.6 Master-slave scale-down case

The traffic peak will not reappear for a long time, so how do we go back from four masters and four slaves to three masters and three slaves?

Step 1: remove node 6378 from the Redis cluster

#get the node ID of 6378
root@k8s-master:/data# redis-cli --cluster check 192.168.142.128:6378
S: e21816512f027ad07db8a945578f75961add9029 192.168.142.128:6378
   slots: (0 slots) slave

#delete the node
redis-cli --cluster del-node 192.168.142.128:6378 e21816512f027ad07db8a945578f75961add9029
>>> Removing node e21816512f027ad07db8a945578f75961add9029 from cluster 192.168.142.128:6378
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

Step 2: Empty the slots of the 6377 machine and reassign them

redis-cli --cluster reshard 192.168.142.128:6371
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? e6a761497e0fe78c066db91e32449ff5b71e29ae  #the receiving node ID; this is node 6371
Source node #1: 2d932fa60af3b3658e7dd5cf74cfaac66e068cef    #the ID of node 6377; all of its slots go to 6371
Source node #2: done

Step 3: remove node 6377 from the Redis cluster

#get the node ID of 6377
redis-cli --cluster check 192.168.142.128:6371
M: 2d932fa60af3b3658e7dd5cf74cfaac66e068cef 192.168.142.128:6377

#delete the node
redis-cli --cluster del-node 192.168.142.128:6377 2d932fa60af3b3658e7dd5cf74cfaac66e068cef
>>> Removing node 2d932fa60af3b3658e7dd5cf74cfaac66e068cef from cluster 192.168.142.128:6377
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

At this point the scale-down is complete!


10. Docker container monitoring: CAdvisor + InfluxDB + Grafana

CAdvisor is an open-source tool for monitoring the resource usage of a host and its containers; it gathers its information by reading directories such as /proc and /sys.
The collected data can be exported to storage backends such as Zabbix, Graphite, InfluxDB, OpenTSDB, and Elasticsearch.

CAdvisor (data collection) + InfluxDB (data storage) + Grafana (data display)

influxdb deployment

docker run -d --name influxdb -p 8086:8086 -p 8083:8083 tutum/influxdb

Create database user and password

docker exec -it influxdb influx
CREATE DATABASE "test"
CREATE USER "root" WITH PASSWORD 'root' WITH ALL PRIVILEGES

CAdvisor deployment

-storage_driver=influxdb #storage driver
-storage_driver_db=test #the database created in InfluxDB
-storage_driver_user=root #the user created above
-storage_driver_password=root #the password created above
-storage_driver_host=192.168.142.128:8086 #the InfluxDB host, reachable on port 8086

docker run -d \
--name=cadvisor \
-p 18080:8080 \
-v /:/rootfs:ro \
-v /var/run:/var/run \
-v /sys:/sys:ro \
-v /var/lib/docker/:/var/lib/docker:ro \
-v /etc/machine-id:/etc/machine-id:ro \
-v /dev/disk/:/dev/disk:ro \
--privileged=true \
google/cadvisor:latest \
-storage_driver=influxdb \
-storage_driver_host=192.168.142.128:8086 \
-storage_driver_db=test \
-storage_driver_user=root \
-storage_driver_password=root

Test whether CAdvisor data is being written to InfluxDB:
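
One way to check from the command line (rather than via the screenshot in the original post) is to open an influx shell against the "test" database created earlier and list the measurements; container and database names are assumed from the commands above:

docker exec -it influxdb influx -database test
> SHOW MEASUREMENTS
#measurements such as cpu_usage_total and memory_usage should appear once CAdvisor has written data
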
Grafana deployment

docker run -d \
-p 3000:3000 \
-e INFLUXDB_HOST=localhost \
-e INFLUXDB_port=8086 \
-e INFLUXDB_NAME=cadvisor \
-e INFLUXDB_USER=test \
-e INFLUXDB_PASS=root \
--link influxdb:influxdb \
--name grafana \
 grafana/grafana

Add the InfluxDB data source in Grafana, then add monitoring metrics to build the visualization panels.

11. Docker Harbor registry
11.1 Harbor Introduction


Harbor Registry (the image registry described on Harbor's official site) was created by the cloud-native lab of VMware's China R&D center. On top of a basic registry it adds features such as permission management, LDAP integration, log auditing, a management UI, self-registration, image replication, and Chinese localization, and it pairs well with Jenkins.

version requirements

docker version: 20.10.17
docker-compose version: 1.25.4

11.2 Docker deploys Harbor
#install docker-compose (same helper as in section 3)
docker-compose_install(){
curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.4/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x  /usr/local/bin/docker-compose 
}
docker-compose_install

#download the harbor offline installation package
wget https://mirrors.yangxingzhen.com/harbor/harbor-offline-installer-v1.10.3.tgz
#unpack it
tar -zxf harbor-offline-installer-v1.10.3.tgz -C /usr/local/
cd /usr/local/harbor/
#contents of the package
#common.sh  harbor.v1.10.3.tar.gz  harbor.yml  install.sh  LICENSE  prepare

#back up the configuration file
cp harbor.yml harbor.yml.bak
vim harbor.yml
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.142.128              #this host's IP address
# https related config              
#https:                                #comment out the https block if you have not configured a TLS certificate for port 443
#  # https port for harbor, default is 443
#  port: 443
  # The path of cert and key files for nginx
  #  certificate: /your/certificate/path
  # private_key: /your/private/key/path
  
#run the installation script
./install.sh

#output when the installation succeeds
#Creating network "harbor_harbor" with the default driver
#Creating harbor-log ... done
#Creating harbor-portal ... done
#Creating harbor-db     ... done
#Creating redis         ... done
#Creating registry      ... done
#Creating registryctl   ... done
#Creating harbor-core   ... done
#Creating harbor-jobservice ... done
#Creating nginx             ... done
#✔ ----Harbor has been installed and started successfully.----

[root@k8s-master harbor]# ss -antl
State              Recv-Q             Send-Q                           Local Address:Port                           Peer Address:Port             Process             
LISTEN             0                  128                                    0.0.0.0:80                                  0.0.0.0:*                                    
LISTEN             0                  128                                    0.0.0.0:22                                  0.0.0.0:*                                    
LISTEN             0                  128                                  127.0.0.1:1514                                0.0.0.0:*                                    
LISTEN             0                  128                                       [::]:80                                     [::]:*                                    
LISTEN             0                  128                                       [::]:22                                     [::]:*    

Access the web interface

User: admin
Password: Harbor12345


11.3 Upload an image to the Harbor registry

Create a project test_images

Tag the image

[root@k8s-master harbor]# docker tag nginx 192.168.142.128/test_images/nginx:latest
[root@k8s-master harbor]# docker images
REPOSITORY                          TAG       IMAGE ID       CREATED         SIZE
192.168.142.128/test_images/nginx   latest    605c77e624dd   7 months ago    141MB
nginx                               latest    605c77e624dd   7 months ago    141MB

Change Docker's default HTTPS (port 443) registry access

By default Docker accesses registries over HTTPS on port 443; since this Harbor instance is only exposed over HTTP on port 80, add it to the insecure registries:

cat > /etc/docker/daemon.json <<EOF
{
	    "registry-mirrors": ["http://hub-mirror.c.163.com"],
    	"insecure-registries":["主机ip地址"]
}
EOF
systemctl daemon-reload
systemctl restart docker

Log in to the local warehouse

[root@k8s-master harbor]# docker login 192.168.142.128
Username: admin
Password:             #enter the harbor registry password (Harbor12345)
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Push the image to the registry

[root@k8s-master harbor]# docker push 192.168.142.128/test_images/nginx:latest
The push refers to repository [192.168.142.128/test_images/nginx]
d874fd2bc83b: Pushed 
32ce5f6a5106: Pushed 
f1db227348d0: Pushed 
b8d6e692a25e: Pushed 
e379e8aedd4d: Pushed 
2edcec3590a4: Pushed 
latest: digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3 size: 1570


Pull the image from the registry


[root@k8s-master harbor]# docker pull 192.168.142.128/test_images/nginx:latest

Source: blog.csdn.net/qq_47945825/article/details/125913252