A Highly Available Cluster Based on k8s + Docker

Network topology diagram


IP address plan

Server        IP
k8s-master    192.168.1.200
k8s-node1     192.168.1.201
k8s-node2     192.168.1.202
prometheus    192.168.1.230
nfs           192.168.1.231
ansible       192.168.1.232
harbor        192.168.1.233

Project description

This project simulates how k8s is used in production: a web cluster managed by k8s, NFS keeping the front-end pages consistent across pods, a self-hosted Harbor registry for our own images, and Prometheus monitoring cluster performance, so that the cluster stays highly available.

Project environment

CentOS 7.9, Ansible 2.9.27, Docker 20.10.6, Kubernetes 1.20.6, Harbor 2.4.1, NFS v4, Prometheus 2.34.0

I. Building the k8s cluster (one master, two nodes)

vim /etc/hosts
 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.200 k8smaster
192.168.1.201 k8snode1
192.168.1.202 k8snode2
# 1. Set up passwordless SSH between all nodes
ssh-keygen      # press Enter through every prompt
 
ssh-copy-id k8smaster
ssh-copy-id k8snode1
ssh-copy-id k8snode2
 
# 2. Disable swap (kubeadm checks for it during init)
# Temporary: swapoff -a
# Permanent: comment out the swap entry in /etc/fstab
[root@k8smaster ~]# cat /etc/fstab
 
#
# /etc/fstab
# Created by anaconda on Thu Mar 23 15:22:20 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=00236222-82bd-4c15-9c97-e55643144ff3 /boot                   xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
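
# Quick check that swap is really off: the Swap row of free should be all zeros
[root@k8smaster ~]# free -m | grep -i swap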
 
# 3. Load the required kernel modules
modprobe br_netfilter
 
echo "modprobe br_netfilter" >> /etc/profile
 
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
 
# Reload so the settings take effect
sysctl -p /etc/sysctl.d/k8s.conf
 
 
# Why run modprobe br_netfilter?
#   The br_netfilter module lets tools such as iptables filter and manage traffic
#   that crosses a Linux bridge. It is needed whenever the host filters, forwards
#   or NATs packets between interfaces, as k8s networking does.

# Why set net.ipv4.ip_forward = 1?
#   net.ipv4.ip_forward controls the kernel's IP forwarding: 0 means forwarding
#   between interfaces is disabled, 1 means the host may route packets.
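
# Quick sanity check that the module and parameters are active:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward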
 
# 4. Configure the Aliyun repo
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
 
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack ntpdate telnet ipvsadm
 
# 5. Configure the Aliyun repo for the k8s packages
[root@k8smaster ~]# vim  /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
 
# 6. Configure time synchronization (hourly)
[root@k8smaster ~]# crontab -e
0 */1 * * * /usr/sbin/ntpdate   cn.pool.ntp.org
 
# Restart the crond service
[root@k8smaster ~]# service crond restart
 
# 7. Install Docker
yum install docker-ce-20.10.6 -y
 
 
# Start Docker and enable it at boot
systemctl start docker && systemctl enable docker.service
 
# 8. Configure Docker registry mirrors and the cgroup driver
vim  /etc/docker/daemon.json 
 
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
 
# Reload the configuration and restart Docker
systemctl daemon-reload  && systemctl restart docker
 
# 9. Install the packages needed to initialize k8s
yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
 
# Enable kubelet at boot
systemctl enable kubelet 
 
# Note: what each package does
# kubeadm:  the tool that initializes the k8s cluster
# kubelet:  runs on every node in the cluster and starts the Pods
# kubectl:  deploys and manages applications; views, creates, deletes and updates resources
 
# 10. Prepare the offline images for kubeadm
# Upload the offline image bundle to k8smaster, k8snode1 and k8snode2, then load it
docker load -i k8simage-1-20-6.tar.gz
 
# Copy the bundle to the node machines
[root@k8smaster ~]# scp k8simage-1-20-6.tar.gz root@k8snode1:/root
[root@k8smaster ~]# scp k8simage-1-20-6.tar.gz root@k8snode2:/root
 
# Check the images
[root@k8snode1 ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.20.6    9a1ebfd8124d   2 years ago   118MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.6    b93ab2ec4475   2 years ago   47.3MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.6    560dd11d4550   2 years ago   116MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.6    b05d611c1af9   2 years ago   122MB
calico/pod2daemon-flexvol                                         v3.18.0    2a22066e9588   2 years ago   21.7MB
calico/node                                                       v3.18.0    5a7c4970fbc2   2 years ago   172MB
calico/cni                                                        v3.18.0    727de170e4ce   2 years ago   131MB
calico/kube-controllers                                           v3.18.0    9a154323fbf7   2 years ago   53.4MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   2 years ago   253MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   3 years ago   45.2MB
registry.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   3 years ago   683kB
 
# 11. Generate and edit the kubeadm configuration
kubeadm config print init-defaults > kubeadm.yaml
 
[root@k8smaster ~]# vim kubeadm.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.200         # the control node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8smaster                        # the control node's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # change to the Aliyun registry
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16         # add this line to specify the pod network
scheduler: {}
# Append the following lines
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
 
# 12. Initialize k8s from kubeadm.yaml
[root@k8smaster ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
 
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
kubeadm join 192.168.1.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c 
 
# 13. Add the worker nodes to the cluster
[root@k8snode1 ~]# kubeadm join 192.168.1.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c 
 
[root@k8snode2 ~]# kubeadm join 192.168.1.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c 
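
# The token above expires after 24 hours (the ttl in kubeadm.yaml); if another node
# joins later, a fresh join command can be printed on the master:
[root@k8smaster ~]# kubeadm token create --print-join-command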
 
# 14. Check the cluster nodes on k8smaster
[root@k8smaster ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
k8smaster   NotReady   control-plane,master   2m49s   v1.20.6
k8snode1    NotReady   <none>                 19s     v1.20.6
k8snode2    NotReady   <none>                 14s     v1.20.6
 
# 15. The ROLES column for k8snode1 and k8snode2 shows <none>, which marks them as worker nodes.
# Label them so ROLES reads worker:
[root@k8smaster ~]# kubectl label node k8snode1 node-role.kubernetes.io/worker=worker
node/k8snode1 labeled
 
[root@k8smaster ~]# kubectl label node k8snode2 node-role.kubernetes.io/worker=worker
node/k8snode2 labeled
[root@k8smaster ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
k8smaster   NotReady   control-plane,master   2m43s   v1.20.6
k8snode1    NotReady   worker                 2m15s   v1.20.6
k8snode2    NotReady   worker                 2m11s   v1.20.6
# Note: every node is NotReady because no network plugin has been installed yet
 
# 16. Install the Kubernetes network plugin - Calico
# Upload calico.yaml to k8smaster and install Calico from the manifest.
# The manifest version should match the preloaded v3.18 images:
wget https://docs.projectcalico.org/v3.18/manifests/calico.yaml --no-check-certificate
 
[root@k8smaster ~]# kubectl apply -f  calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
 
# Check the cluster again
[root@k8smaster ~]# kubectl get nodes
NAME        STATUS   ROLES                  AGE     VERSION
k8smaster   Ready    control-plane,master   5m57s   v1.20.6
k8snode1    Ready    worker                 3m27s   v1.20.6
k8snode2    Ready    worker                 3m22s   v1.20.6
# STATUS is Ready: the k8s cluster is up and running

II. Deploying Ansible

Ansible's main value is simplifying and automating system administration, configuration management and application deployment: it raises efficiency, lowers risk, and enables infrastructure-as-code practices, which helps modernize operations.

[root@ansible ~]# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:YGv9ScKv4RTUxlIrhGpD3tVttUkUm2yjT8aA28sEK6M root@ansible
The key's randomart image is:
+---[RSA 2048]----+
|       ..... o=. |
|    . ...+.ooo = |
|   o oo.+ B.. O  |
|    =..* + = = . |
|   . .o S + + +  |
|     . . O + =   |
|      E o + o .  |
|       o o       |
|        o        |
+----[SHA256]-----+
[root@ansible ~]# 
[root@ansible ~]# cd /root/.ssh/
[root@ansible .ssh]# ls
id_rsa  id_rsa.pub
# Set up the passwordless SSH channel
[root@ansible .ssh]# ssh-copy-id  -i id_rsa.pub  [email protected]
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_rsa.pub"
The authenticity of host '192.168.1.230 (192.168.1.230)' can't be established.
ECDSA key fingerprint is SHA256:GhEQWCholuMPMqDpZvuk5UpFFhgy8N3NV+45MdJwWu4.
ECDSA key fingerprint is MD5:b2:d2:40:7b:77:39:b5:4e:fa:e7:1e:eb:17:d1:8e:6b.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '[email protected]'"
and check to make sure that only the key(s) you wanted were added.

[root@ansible .ssh]# ssh [email protected]
Last login: Tue Sep 12 21:31:34 2023 from 192.168.1.110
[root@prometheus ~]# ls
anaconda-ks.cfg
[root@prometheus ~]# exit
logout
Connection to 192.168.1.230 closed.
Install Ansible
[root@ansible .ssh]# yum install epel-release -y
[root@ansible .ssh]# yum  install ansible -y
Write the host inventory
[root@ansible .ssh]# cd /etc/ansible/
[root@ansible ansible]# ls
ansible.cfg  hosts  roles
[root@ansible ansible]# vim hosts 
[k8smaster]
192.168.1.200

[k8snode]
192.168.1.201
192.168.1.202

[nfs]
192.168.1.231

[harbor]
192.168.1.233

[prometheus]
192.168.1.230
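
With the passwordless SSH channels and the inventory in place, connectivity to every host can be verified with an ad-hoc command (Ansible's ping module checks SSH and Python, not ICMP):

[root@ansible ansible]# ansible all -m ping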

III. Deploying the bastion host

Prepare a 64-bit Linux host with at least 2 CPU cores and 4 GB of RAM that can reach the Internet, then install JumpServer with its quick-start script:

curl -sSL https://resource.fit2cloud.com/jumpserver/jumpserver/releases/latest/download/quick_start.sh | bash


IV. Deploying the NFS server

NFS provides the data for the whole web cluster: every web pod accesses it through a PV, a PVC and a volume mount.

1. Install NFS

# Install nfs-utils on the NFS server and on every k8s node
[root@nfs ~]# yum install nfs-utils -y
[root@master ~]# yum install nfs-utils -y
[root@node1 ~]# yum install nfs-utils -y
[root@node2 ~]# yum install nfs-utils -y

2. Configure the shared directory

[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/web   192.168.1.0/24(rw,no_root_squash,sync)
[root@nfs ~]# mkdir /web
[root@nfs ~]# cd /web
[root@nfs web]# echo "have a nice day" >index.html
[root@nfs web]# ls
index.html
[root@nfs web]# exportfs -rv		# re-export the NFS shares
exporting 192.168.1.0/24:/web
# Restart the service and enable it at boot
[root@nfs web]# systemctl restart nfs && systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
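
Before wiring the share into a PV, the export can be checked from any k8s node with showmount (part of nfs-utils):

[root@master ~]# showmount -e 192.168.1.231
Export list for 192.168.1.231:
/web 192.168.1.0/24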

3. Create a PV backed by the NFS share

[root@master ~]# mkdir /pv
[root@master ~]# cd /pv/
[root@master pv]# vim  nfs-pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi 
  accessModes:
    - ReadWriteMany
  storageClassName: nfs         # storage class name; the PVC must request the same one
  nfs:
    path: "/web"       # the directory exported over NFS
    server: 192.168.1.231   # the NFS server's IP address
    readOnly: false   # allow read-write access
[root@master pv]#  kubectl apply -f nfs-pv.yml
persistentvolume/pv-web created
[root@master pv]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-web   10Gi       RWX            Retain           Available           nfs                     12s

# Create a PVC that binds to the PV
[root@master pv]# vim nfs-pvc.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
spec:
  accessModes:
  - ReadWriteMany      
  resources:
     requests:
       storage: 1Gi
  storageClassName: nfs # request the nfs storage class
[root@master pv]# kubectl apply -f nfs-pvc.yml
persistentvolumeclaim/pvc-web created
[root@master pv]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-web   Bound    pv-web   10Gi       RWX            nfs            13s
# Create pods that use the PVC
[root@master pv]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: sc-pv-storage-nfs
          persistentVolumeClaim:
            claimName: pvc-web
      containers:
        - name: sc-pv-container-nfs
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: sc-pv-storage-nfs
[root@master pv]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@master pv]# kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-76855d4d79-mbbf7    1/1     Running   0          13s   10.244.166.131   node1   <none>           <none>
nginx-deployment-76855d4d79-qgvth    1/1     Running   0          13s   10.244.104.4     node2   <none>           <none>
nginx-deployment-76855d4d79-xkgz7    1/1     Running   0          13s   10.244.166.132   node1   <none>           <none>

4. Test access

[root@master pv]# curl 10.244.166.131
have a nice day
[root@master pv]# curl 10.244.166.132
have a nice day
[root@master pv]# curl 10.244.104.4
have a nice day
# Change the content on the NFS server
[root@nfs web]# echo "hello" >> index.html
# Access again
[root@master pv]# curl 10.244.104.4
have a nice day
hello
[root@master pv]# curl 10.244.166.132
have a nice day
hello
[root@master pv]# curl 10.244.166.131
have a nice day
hello

V. Building the Harbor registry

Build and use a private Docker registry (Harbor).
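
As a minimal sketch, assuming the Harbor 2.4.1 offline installer and a host that already runs Docker and docker-compose, the installation on 192.168.1.233 looks like this:

[root@harbor ~]# tar xf harbor-offline-installer-v2.4.1.tgz
[root@harbor ~]# cd harbor
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml
[root@harbor harbor]# vim harbor.yml    # set hostname: 192.168.1.233, http port: 80, comment out the https section
[root@harbor harbor]# ./install.sh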

VI. Deploying pods on k8s

1. Deploying a MySQL pod

1. Write the YAML file containing the Deployment and the Service
[root@master xm]# cat mysql_deploy_svc.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7.42
        name: mysql
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "xzx527416"
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: svc-mysql
  name: svc-mysql
spec:
  selector:
    app: mysql
  type: NodePort
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 30007

2. Deploy
[root@master xm]# kubectl apply -f mysql_deploy_svc.yaml 
deployment.apps/mysql created
service/svc-mysql unchanged
[root@master xm]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          3h
svc-mysql    NodePort    10.99.226.10   <none>        3306:30007/TCP   47s  




[root@master xm]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
mysql-7964c6f547-v7m8d   1/1     Running   0          2m36s
[root@master xm]# kubectl exec -it mysql-7964c6f547-v7m8d -- bash
bash-4.2# mysql -uroot -pxzx527416
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.42 MySQL Community Server (GPL)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 
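
Since svc-mysql is a NodePort service, MySQL is also reachable from outside the cluster through any node IP on port 30007 (assuming a mysql client is installed on the calling machine):

mysql -h 192.168.1.201 -P 30007 -uroot -pxzx527416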

2. Deploying nginx with HPA

First configure daemon.json so Docker can pull images from our own Harbor registry (do this on every node):
vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.1.233:80"]
}


# Reload and restart the Docker service
systemctl daemon-reload  && systemctl restart docker

Log in to Harbor (default account: admin, password: Harbor12345)

[root@master docker]# docker login 192.168.1.233:80
Username: admin
Password: 
Error response from daemon: Get "http://192.168.1.233:80/v2/": unauthorized: authentication required
[root@master docker]# docker login 192.168.1.233:80
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded 
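
For reference, the 192.168.1.233:80/library/nginx:1.0 image pulled below was pushed to Harbor in roughly this way (tag chosen by us; library is Harbor's default public project):

[root@master ~]# docker tag nginx:latest 192.168.1.233:80/library/nginx:1.0
[root@master ~]# docker push 192.168.1.233:80/library/nginx:1.0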

Pull the nginx image from Harbor (do this on every node):
[root@node1 ~]# docker pull 192.168.1.233:80/library/nginx:1.0
1.0: Pulling from library/nginx
360eba32fa65: Extracting [===========================================>       ]  25.07MB/29.12MB
c5903f3678a7: Downloading [==========================================>        ]  34.74MB/41.34MB
27e923fb52d3: Download complete 
72de7d1ce3a4: Download complete 
94f34d60e454: Download complete
e42dcfe1730b: Download complete 
907d1bb4e931: Download complete    


Install metrics-server

Download the manifest:

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml 
# Replace the image
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
        imagePullPolicy: IfNotPresent
        args:
#        add the following two args
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname 

Deploy:
kubectl apply -f components.yaml

[root@master ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6949477b58-tbkl8   1/1     Running   1          7h10m
calico-node-4t8kx                          1/1     Running   1          7h10m
calico-node-6lbdw                          1/1     Running   1          7h10m
calico-node-p6ghl                          1/1     Running   1          7h10m
coredns-7f89b7bc75-dxc9v                   1/1     Running   1          7h15m
coredns-7f89b7bc75-kw7ph                   1/1     Running   1          7h15m
etcd-master                                1/1     Running   1          7h15m
kube-apiserver-master                      1/1     Running   2          7h15m
kube-controller-manager-master             1/1     Running   1          7h15m
kube-proxy-87ptg                           1/1     Running   1          7h15m
kube-proxy-8gbsd                           1/1     Running   1          7h15m
kube-proxy-x4fbj                           1/1     Running   1          7h15m
kube-scheduler-master                      1/1     Running   1          7h15m
metrics-server-7787b94d94-jt9sc            1/1     Running   0          47s
[root@master ~]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   151m         7%     1221Mi          64%       
node1    85m          8%     574Mi           65%       
node2    193m         19%    573Mi           65%   

Using HPA:
[root@master ~]# mkdir hpa
[root@master ~]# cd hpa  
[root@master hpa]# cat myweb.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.1.233:80/library/nginx:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30001



Deploy: kubectl apply -f myweb.yaml


Enable the HPA:
[root@master hpa]# kubectl autoscale deployment myweb --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/myweb autoscaled

[root@master hpa]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myweb   Deployment/myweb   0%/50%    1         10        3          40s 

3. Load balancing the web service with Ingress

[root@master ingress]# ls
ingress-controller-deploy.yaml  ingress-nginx-controllerv1.1.0.tar.gz  kube-webhook-certgen-v1.1.0.tar.gz  my-ingress.yaml  my-nginx-svc.yaml

What each file is:
ingress-controller-deploy.yaml   manifest used to deploy the ingress controller
ingress-nginx-controllerv1.1.0.tar.gz    the ingress-nginx-controller image
kube-webhook-certgen-v1.1.0.tar.gz       the kube-webhook-certgen image
my-ingress.yaml    manifest that creates the Ingress
my-nginx-svc.yaml  manifest that starts the sc-nginx-svc services and their pods


Copy the image archives to node1 and node2:
[root@master ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz root@node1:/root
ingress-nginx-controllerv1.1.0.tar.gz                                                                                                                           100%  276MB  81.6MB/s   00:03    
[root@master ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz root@node2:/root
ingress-nginx-controllerv1.1.0.tar.gz                                                                                                                           100%  276MB  81.4MB/s   00:03    
[root@master ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz root@node2:/root
kube-webhook-certgen-v1.1.0.tar.gz                                                                                                                              100%   47MB 100.7MB/s   00:00    
[root@master ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz root@node1:/root
kube-webhook-certgen-v1.1.0.tar.gz                                                                                                                              100%   47MB 120.5MB/s   00:00  

Load the images on the node machines:
docker load -i ingress-nginx-controllerv1.1.0.tar.gz 
docker load -i kube-webhook-certgen-v1.1.0.tar.gz   


Deploy the controller:
kubectl apply -f ingress-controller-deploy.yaml


[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   5m33s
ingress-nginx     Active   65s
kube-node-lease   Active   5m34s
kube-public       Active   5m34s
kube-system       Active   5m34s  

Create the Services that expose the pods:


kubectl apply -f my-nginx-svc.yaml  


[root@master ingress]# kubectl describe svc sc-nginx-svc
Name:              sc-nginx-svc-4
Namespace:         default
Labels:            app=sc-nginx-svc-4
Annotations:       <none>
Selector:          app=sc-nginx-feng-4
Type:              ClusterIP
IP Families:       <none>
IP:                10.104.90.254
IPs:               10.104.90.254
Port:              name-of-service-port  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.104.2:80,10.244.104.3:80,10.244.166.130:80
Session Affinity:  None
Events:            <none>

[root@master ingress]# curl 10.104.90.254
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>


Create the Ingress:
[root@master ingress]# cat my-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    kubernetes.io/ingress.class: nginx         # use nginx as the ingress controller
spec:
  ingressClassName: nginx
  rules:
  - host: www.wen.com
    http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-3
            port:
              number: 80
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-4
            port:
              number: 80

[root@master ingress]# kubectl apply -f my-ingress.yaml
ingress.networking.k8s.io/simple-fanout-example created


[root@master ingress]# kubectl get ingress
NAME                    CLASS   HOSTS         ADDRESS                       PORTS   AGE
simple-fanout-example   nginx   www.wen.com   192.168.1.201,192.168.1.202   80      7


[root@master ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.98.48.252   <none>        80:31174/TCP,443:30047/TCP   63m
ingress-nginx-controller-admission   ClusterIP   10.111.62.4    <none>        443/TCP                      63m

Per the path rules in the Ingress above, we need to enter the pods and create the matching directories:
[root@master ingress]# kubectl get pod 
NAME                                 READY   STATUS    RESTARTS   AGE
nginx-deployment-76855d4d79-mbbf7    1/1     Running   0          160m
nginx-deployment-76855d4d79-qgvth    1/1     Running   0          160m
nginx-deployment-76855d4d79-xkgz7    1/1     Running   0          160m 
sc-nginx-deploy-4-766c99dd77-6f4xm   1/1     Running   0          171m
sc-nginx-deploy-4-766c99dd77-h79r2   1/1     Running   0          171m
sc-nginx-deploy-4-766c99dd77-pk4w6   1/1     Running   0          171m  

[root@master ingress]# kubectl exec -it sc-nginx-deploy-3-7496c84fcf-b8tkm -- bash
root@sc-nginx-deploy-3-7496c84fcf-b8tkm:/# cd /usr/share/nginx/html
root@sc-nginx-deploy-3-7496c84fcf-b8tkm:/usr/share/nginx/html# ls
50x.html  index.html
root@sc-nginx-deploy-3-7496c84fcf-b8tkm:/usr/share/nginx/html# mkdir bar
root@sc-nginx-deploy-3-7496c84fcf-b8tkm:/usr/share/nginx/html# cp index.html bar
root@sc-nginx-deploy-3-7496c84fcf-b8tkm:/usr/share/nginx/html# exit
exit


Add hosts entries on another machine:
[root@ansible ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.201 www.wen.com
192.168.1.202 www.wen.com

Access:
[root@ansible ~]# curl www.wen.com/bar/index.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>           

Here the Ingress fans requests for a single hostname out to different backend Services based on the path prefix. This lets one cluster host several applications behind a unified entry point (one domain name) instead of a separate domain and load balancer per application, which simplifies the overall architecture and management and improves flexibility.


4. Managing the cluster with the dashboard

1. First get the yaml file from the official site
[root@master dashboard]# ls
recommended.yaml
2. Deploy
[root@master dashboard]# kubectl apply -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

A new namespace named kubernetes-dashboard now exists:
[root@master dashboard]# kubectl get ns
NAME                   STATUS   AGE
default                Active   3h43m
ingress-nginx          Active   3h39m
kube-node-lease        Active   3h43m
kube-public            Active   3h43m
kube-system            Active   3h43m
kubernetes-dashboard   Active   7s

3. Check the pod and svc information
[root@master dashboard]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS              RESTARTS   AGE
dashboard-metrics-scraper-66dd8bdd86-s27cx   0/1     ContainerCreating   0          34s
kubernetes-dashboard-785c75749d-hbn6f        0/1     ContainerCreating   0          35s
[root@master dashboard]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS              RESTARTS   AGE
dashboard-metrics-scraper-66dd8bdd86-s27cx   0/1     ContainerCreating   0          42s
kubernetes-dashboard-785c75749d-hbn6f        1/1     Running             0          43s

[root@master dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.96.10.205     <none>        8000/TCP   51s
kubernetes-dashboard        ClusterIP   10.108.234.121   <none>        443/TCP    51s

# The kubernetes-dashboard Service is of type ClusterIP, which a browser cannot conveniently reach, so change it to NodePort
First delete the existing svc
[root@master dashboard]# kubectl delete svc kubernetes-dashboard -n kubernetes-dashboard
service "kubernetes-dashboard" deleted
Then write our own svc manifest
[root@master dashboard]# vim dashboard-svc.yaml
[root@master dashboard]# cat dashboard-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard




4. Deploy
[root@master dashboard]# kubectl apply -f dashboard-svc.yaml 
service/kubernetes-dashboard created
[root@master dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.96.10.205    <none>        8000/TCP        3m23s
kubernetes-dashboard        NodePort    10.104.39.223   <none>        443:30389/TCP   16s

5. Create a user that can log in to the dashboard
[root@master dashboard]# vim dashboard-svc-account.yaml
[root@master dashboard]# cat dashboard-svc-account.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
[root@master dashboard]# kubectl apply -f dashboard-svc-account.yaml 
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created


6. Look up the secret
[root@master dashboard]# kubectl get secret -n kube-system|grep admin|awk '{print $1}'
dashboard-admin-token-d65vh
The token used to log in can be seen in the secret's describe output.
[root@master dashboard]# kubectl describe secret dashboard-admin-token-d65vh -n kube-system
Name:         dashboard-admin-token-d65vh
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 6e65a1b3-0669-47b7-a8d4-a16b0cacc069

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IloyczRwd1g0WGFxMXdfdG5TUVBRRy1sUW5mT0FEcEpYMWwwdC1EYnBHT1kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZDY1dmgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNmU2NWExYjMtMDY2OS00N2I3LWE4ZDQtYTE2YjBjYWNjMDY5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.2homyNjWI18vJA81aNoyQ0cQkNhsRxHk-4PFeWrkqtX-DSidbg68nNEyEFWf2b3lswdJ33szLM51ulDr5qp8cmpBlPUCw8Wcl-5k2sY3eZoaMJDFdWARdbs20xmxA73wYNcHNhttkncrmuDXKuJs39j_Nff17kHJYCj9wOKAwfezvwDQEqOb7u7riUle2w54aELornD4AGemDGivdBR5AWOguSoLl3RTZ74cPycG_-IP-pggSNGCYc4LCnfkfMZdx6LFBh0Dzz10blWUSCUNFGXzD1rkG-TVvcug4infG8BYmGtYgl55_xAH_LMjGz9gSQdMnFOdC_hL27e9lONajg
[root@master dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.96.10.205    <none>        8000/TCP        9m17s
kubernetes-dashboard        NodePort    10.104.39.223   <none>        443:30389/TCP   6m10s
[root@master dashboard]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-66dd8bdd86-s27cx   1/1     Running   0          9m32s
kubernetes-dashboard-785c75749d-hbn6f        1/1     Running   0          9m33s
[root@master dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.96.10.205    <none>        8000/TCP        9m42s
kubernetes-dashboard        NodePort    10.104.39.223   <none>        443:30389/TCP   6m35s
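
The dashboard is now reachable in a browser at https://192.168.1.200:30389/ (the NodePort shown above works on any node IP); log in by pasting the token from the describe output.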

VII. Setting up Prometheus + Grafana monitoring

1. Install the Prometheus server

# Upload the downloaded release tarball to the Linux server
[root@prometheus ~]# mkdir /prom
[root@prometheus ~]# cd /prom
[root@prometheus prom]# ls
prometheus-2.34.0.linux-amd64.tar.gz
# Unpack the tarball
[root@prometheus prom]# tar xf prometheus-2.34.0.linux-amd64.tar.gz
[root@prometheus prom]# ls
prometheus-2.34.0.linux-amd64  prometheus-2.34.0.linux-amd64.tar.gz
[root@prometheus prom]# mv prometheus-2.34.0.linux-amd64 prometheus
[root@prometheus prom]# ls
prometheus  prometheus-2.34.0.linux-amd64.tar.gz
# Modify PATH temporarily and permanently, adding the prometheus directory
[root@prometheus prometheus]# PATH=/prom/prometheus:$PATH		# temporary
[root@prometheus prometheus]# cat /root/.bashrc
PATH=/prom/prometheus:$PATH   # appended line
# Run the prometheus binary
[root@prometheus prometheus]# nohup prometheus  --config.file=/prom/prometheus/prometheus.yml &
[1] 8431
[root@prometheus prometheus]# nohup: ignoring input and appending output to 'nohup.out'

2. Manage Prometheus as a systemd service

[root@prometheus prometheus]# vim /usr/lib/systemd/system/prometheus.service 
[Unit]
Description=prometheus
[Service]
ExecStart=/prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
# Reload the systemd units
[root@prometheus prometheus]# systemctl daemon-reload
[root@prometheus prometheus]#  service prometheus start
[root@prometheus system]# ps aux|grep prometheu
root       7193  2.0  4.4 782084 44752 ?        Ssl  13:16   0:00 /prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
root       7201  0.0  0.0 112824   972 pts/1    S+   13:16   0:00 grep --color=auto prometheu
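
# Enable the service so Prometheus also starts at boot
[root@prometheus prometheus]# systemctl enable prometheus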

3. Restart Prometheus through systemd and verify the listening port

[root@prometheus prometheus]# service prometheus restart
Redirecting to /bin/systemctl restart prometheus.service
[root@prometheus prometheus]# netstat -anplut|grep prome
tcp6       0      0 :::9090                 :::*                    LISTEN      1543/prometheus     
tcp6       0      0 ::1:9090                ::1:42776               ESTABLISHED 1543/prometheus     
tcp6       0      0 ::1:42776               ::1:9090                ESTABLISHED 1543/prometheus 

4. Install node_exporter on the servers to be monitored
Download node_exporter-1.4.0-rc.0.linux-amd64.tar.gz and upload it to each server:

 wget https://github.com/prometheus/node_exporter/releases/download/v1.4.0-rc.0/node_exporter-1.4.0-rc.0.linux-amd64.tar.gz

Unpack it and move it into its own /node_exporter directory:

[root@mysql-master ~]# ls
anaconda-ks.cfg  node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
[root@mysql-master ~]# tar xf node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
[root@mysql-master ~]# ls
node_exporter-1.4.0-rc.0.linux-amd64         
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz  
Move it into the dedicated /node_exporter directory
[root@mysql-master ~]# mv node_exporter-1.4.0-rc.0.linux-amd64 /node_exporter
[root@mysql-master ~]# cd /node_exporter/
[root@mysql-master node_exporter]# ls
LICENSE  node_exporter  NOTICE
[root@mysql-master node_exporter]#
# Update the PATH variable
[root@mysql-master node_exporter]# PATH=/node_exporter/:$PATH
[root@mysql-master node_exporter]# vim /root/.bashrc 
[root@mysql-master node_exporter]# tail -1 /root/.bashrc 
PATH=/node_exporter/:$PATH
# Run the node_exporter agent
[root@mysql-master node_exporter]# nohup node_exporter --web.listen-address 0.0.0.0:8090  &
[root@mysql-master node_exporter]# ps aux | grep node_exporter 
root      64281  0.0  2.1 717952 21868 pts/0    Sl   19:03   0:04 node_exporter --web.listen-address 0.0.0.0:8090
root      82787  0.0  0.0 112824   984 pts/0    S+   20:46   0:00 grep --color=auto node_exporter
[root@mysql-master node_exporter]# netstat -anplut | grep 8090
tcp6       0      0 :::8090                 :::*                    LISTEN      64281/node_exporter 
tcp6       0      0 192.168.17.152:8090     192.168.17.156:43576    ESTABLISHED 64281/node_exporter 
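
The nohup-started exporter dies with the login session; a small systemd unit mirroring the prometheus one above keeps it running across reboots (a sketch, using the path and port chosen above):

[root@mysql-master ~]# vim /usr/lib/systemd/system/node_exporter.service
[Unit]
Description=node_exporter
[Service]
ExecStart=/node_exporter/node_exporter --web.listen-address 0.0.0.0:8090
Restart=on-failure
[Install]
WantedBy=multi-user.target

[root@mysql-master ~]# systemctl daemon-reload && systemctl enable node_exporter && systemctl restart node_exporter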

5. Add the exporters to the Prometheus server's scrape config

[root@prometheus prometheus]# vim prometheus.yml 
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "master"
    static_configs:
      - targets: ["192.168.1.200:8090"]
  - job_name: "node1"
    static_configs:
      - targets: ["192.168.1.201:8090"]
  - job_name: "node2"
    static_configs:
      - targets: ["192.168.1.203:8090"]
  - job_name: "harbor"
    static_configs:
      - targets: ["192.168.1.233:8090"]
  - job_name: "nfs"
    static_configs:
      - targets: ["192.168.1.231:8090"]

[root@prometheus prometheus]# service  prometheus restart
Redirecting to /bin/systemctl restart prometheus.service
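
After any future edit, the configuration can be validated before restarting with promtool, which ships in the same release tarball:

[root@prometheus prometheus]# promtool check config /prom/prometheus/prometheus.yml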

VIII. Stress-testing the whole k8s cluster with ab

# 1. Run the php-apache server and expose the service
[root@k8smaster hpa]# ls
php-apache.yaml
[root@k8smaster hpa]# cat php-apache.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php-apache
        image: k8s.gcr.io/hpa-example
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache
 
[root@k8smaster hpa]# kubectl apply -f php-apache.yaml 
deployment.apps/php-apache created
service/php-apache created
[root@k8smaster hpa]# kubectl get deploy
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
php-apache   1/1     1            1           93s
[root@k8smaster hpa]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
php-apache-567d9f79d-mhfsp   1/1     Running   0          44s
 
# Create the HPA
[root@k8smaster hpa]# kubectl autoscale deployment php-apache --cpu-percent=10 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/10%   1         10        0          7s
 
# Test: generate load
[root@k8smaster hpa]# kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
If you don't see a command prompt, try pressing enter.
OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/10%    1         10        1          3m24s
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   238%/10%   1         10        1          3m41s
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   250%/10%   1         10        4          3m57s
# Once CPU utilization drops, the HPA scales the replicas back down to 1; the resize can take a few minutes
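# The scaling activity can also be followed live instead of polling by hand:
[root@k8smaster hpa]# kubectl get hpa php-apache --watch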
# 2. Stress-test the web service while watching Prometheus and the dashboard
# Point ab at the myweb service (NodePort 30001 on any node) and observe the pods
# in four ways:
kubectl top pod
http://192.168.1.230:3000/          # Grafana
http://192.168.1.230:9090/targets   # Prometheus
https://192.168.1.200:30389/        # dashboard
[root@nfs ~]# yum install httpd-tools -y
[root@nfs data]# ab -n 1000000 -c 10000 -g output.dat http://192.168.1.201:30001/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.1.201 (be patient)
apr_socket_recv: Connection reset by peer (104)
Total of 3694 requests completed
# 1000 requests at concurrency 10: ab -n 1000 -c 10 -g output.dat http://192.168.1.201:30001/
# -t 60 sends as many requests as possible within 60 seconds

Source: blog.csdn.net/zheng_long_/article/details/132880178