Building a Kubernetes Cluster on CentOS 7 in Practice

1. Preliminary preparation: master and slave nodes

Four installation methods:
1. Use kubeadm to install from offline images (recommended)
2. Use Alibaba Cloud's managed k8s on the public cloud (paid)
3. Install from the official yum repository (the version is relatively old)
4. Install from binary packages, e.g. kubeasz (GitHub)

This article uses the kubeadm method. Each virtual machine is given 2 CPU cores, because kubeadm requires at least 2.

Prepare three virtual machines: one master node and two worker (slave) nodes.
node1 master

node2 slave

node3 slave

# Set the same timezone on all three VMs
timedatectl set-timezone Asia/Shanghai
# Set a hostname for each VM to make them easier to work with
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3
# Map hostnames to IPs (edit on all three VMs)
vi /etc/hosts
ip node1 # entry for node1
ip node2 # entry for node2
ip node3 # entry for node3
# Disable SELinux and the firewall on all three VMs (skip this step in production)
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
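
As a quick sanity check (optional; a minimal sketch using standard CentOS commands), confirm on each VM that SELinux is off and the firewall is stopped:

# Should print Permissive (or Disabled after a reboot)
getenforce
# Both should report the firewall as inactive/disabled
systemctl is-active firewalld
systemctl is-enabled firewalld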

2. Install Docker (nothing special; run on all three nodes)

Installation from an offline package

# Upload the package docker-ce-18.09.tar.gz to the server and extract it
tar -zxvf docker-ce-18.09.tar.gz
# Enter the extracted folder
cd docker
# Install
yum localinstall -y *.rpm
# Start docker
systemctl start docker
# Enable docker at boot (required)
systemctl enable docker

Yum installation method (not used here)

# Check whether docker was installed before
yum list installed | grep docker
or
rpm -qa docker*
# If it was installed, remove every package found in the previous step
yum remove docker-xx
# Reinstall docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast # let yum detect the fastest mirror
yum list docker-ce --showduplicates | sort -r  # list available docker-ce versions
# Install docker (Kubernetes 1.14.1 only supports Docker 18.09)
yum install docker-ce-18.09.5 docker-ce-cli-18.09.5 containerd.io

# Start docker
systemctl start docker
# Enable docker at boot (required)
systemctl enable docker

Make sure Docker's cgroup driver is cgroupfs on all nodes

Cgroups is short for control groups, a Linux kernel mechanism for aggregating and partitioning tasks and organizing them into one or more subsystems through a set of parameters.
Cgroups is the underlying basis for the resource-management and control part of IaaS virtualization (KVM, LXC, etc.) and PaaS container sandboxes (Docker, etc.).
A subsystem uses cgroups' task-partitioning capability to group tasks by a specified attribute and is mainly used for resource control.
In cgroups, the partitioned task groups are organized hierarchically; multiple subsystems form a structure similar to a forest of trees. A cgroup contains multiple isolated subsystems, each of which represents a single resource.

docker info | grep cgroup 

If the output shows a driver other than cgroupfs (for example systemd), Docker is not using the expected cgroup driver; change it as follows.

# Create the folder first, so the cgroup config command below does not fail
mkdir /etc/docker

# Run the following command to switch Docker to the cgroupfs cgroup driver
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
systemctl daemon-reload && systemctl restart docker

If docker info | grep cgroup now reports cgroupfs, the setting succeeded.

3. Install kubeadm, the cluster deployment tool (nothing special; run on all three nodes)

# Create a folder
mkdir /usr/local/k8s
cd /usr/local/k8s
# Upload the image package to the server, then extract it
tar -zxvf kube114-rpm.tar.gz
# Enter the extracted folder
cd kube114-rpm
# Install the rpm packages in the folder
yum localinstall -y *.rpm

# Disable swap
swapoff -a
vi /etc/fstab 
# Comment out the swap line:
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
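
If you prefer not to edit /etc/fstab by hand, a non-interactive alternative (a sketch that simply prefixes # to any line containing " swap ") is:

# Comment out every fstab line that mounts swap, then confirm swap is off
sed -i '/ swap / s/^/#/' /etc/fstab
swapoff -a
free -m   # the Swap row should show 0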

# Configure bridge network parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
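
To confirm the bridge settings took effect (optional; if the keys are missing, loading the br_netfilter kernel module first may be needed):

# Load the bridge netfilter module if the sysctl keys are not present
modprobe br_netfilter
# Both should print a value of 1
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables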

4. Install the k8s images from offline archives (nothing special; run on all three nodes)

# Upload k8s-114-images.tar.gz (the k8s images) and flannel-dashboard.tar.gz to /usr/local/k8s
cd /usr/local/k8s
docker load -i k8s-114-images.tar.gz
docker load -i flannel-dashboard.tar.gz
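
To verify that the images were loaded, list them; the grep pattern below is only an assumption, since the exact repository names depend on the offline bundle:

# List the loaded Kubernetes and flannel images
docker images | grep -E 'k8s|flannel'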

5. Deploy the k8s cluster with kubeadm

1. Master server configuration

# Specify the kubernetes version and the pod network CIDR (virtual IP range)
kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=192.168.0.0/16

Note: after kubeadm init finishes, its output contains two important parts: the first (the kubeconfig setup commands) must be run manually on the master, and the second (the kubeadm join command below) must be run on the slave nodes and is the key step in building the cluster. Copy it and keep it for later.

  kubeadm join 192.168.2.167:6443 --token hmwaai.hy98osxiyvx33u41 \
    --discovery-token-ca-cert-hash sha256:578be171f98c29f1b6be09992f24c4e644144f655b368c56ef2f7ca37b4c10c1 

# Manually run the kubeconfig setup commands from the init output
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
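
Once the kubeconfig is in place, kubectl should be able to reach the cluster; an optional sanity check:

# Print the API server address and check control-plane component health
kubectl cluster-info
kubectl get cs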

# Check the current node status
kubectl get nodes
# Check for problematic pods
kubectl get pod --all-namespaces

Two of the pods are in the Pending state. The reason is that k8s relies on the flannel component for inter-node communication, so flannel must be installed before they reach the Running state.

# First upload kube-flannel.yml to /usr/local/k8s
# Install the flannel network component
kubectl create -f kube-flannel.yml
# Check the pod status again; all pods must be in the Running state
kubectl get pod --all-namespaces

2. Slave node configuration

# Run the join command printed by the master node during kubeadm init; run it on node2 first, and on node3 after node2 succeeds
 kubeadm join 192.168.2.167:6443 --token hmwaai.hy98osxiyvx33u41 \
    --discovery-token-ca-cert-hash sha256:578be171f98c29f1b6be09992f24c4e644144f655b368c56ef2f7ca37b4c10c1 

# If you lost the command above, look it up on the master node, then run the join command on the slave nodes
kubeadm token list
kubeadm join 192.168.2.167:6443 --token aoeout.9k0ybvrfy09q1jf6 --discovery-token-unsafe-skip-ca-verification

# After running the command above, go to the master node and check the node status
kubectl get nodes

# If a slave node's status is NotReady, log in to that node and check the kubelet logs (Ready means the slave node is healthy)
journalctl -f -u kubelet

Checking the log shows that the cni configuration files are missing.

# Fix: copy the cni config from the master node (run this on the slave node; node1 is the master in this setup)
scp -r node1:/etc/cni /etc/cni
# Restart the kubelet service on the slave node
systemctl restart kubelet

# Enable kubelet at boot
systemctl enable kubelet

# Go back to the master node and confirm that all slave nodes are in the Ready state
kubectl get node

When the master and all slave nodes are in the Ready state and all pods are in the Running state, the cluster is healthy.

 

3. Enable the dashboard on the master

Enabling the dashboard requires three files: kubernetes-dashboard.yaml, admin-role.yaml, and kubernetes-dashboard-admin.rbac.yaml. First upload them to the master node.

# kubernetes-dashboard.yaml is the dashboard's configuration file
kubectl apply -f kubernetes-dashboard.yaml
# admin-role.yaml defines what the dashboard is allowed to do
kubectl apply -f admin-role.yaml
# kubernetes-dashboard-admin.rbac.yaml defines the system-level RBAC permissions
kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
# To uninstall the dashboard
kubectl delete -f kubernetes-dashboard.yaml

# List deployments
kubectl get deployment
kubectl delete deployment <deployment-name>
kubectl get service
kubectl delete service <service-name>

# List services in the kube-system namespace
kubectl -n kube-system get svc

# Check the dashboard status
kubectl get po -n kube-system |grep dashboard

The dashboard is exposed externally on port 32000.
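
To confirm the exposed port, inspect the dashboard service; the service name kubernetes-dashboard below is the one usually created by kubernetes-dashboard.yaml and is assumed here:

# The PORT(S) column should show 32000 as the NodePort
kubectl -n kube-system get svc kubernetes-dashboard
# The UI is then reachable at http://<any-node-ip>:32000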

If there is an error, you can troubleshoot it in the following ways:

# Inspect the dashboard pod's details (when the pod is unhealthy)
kubectl describe pod kubernetes-dashboard-6647f9f49-6bqh9 --namespace=kube-system

# Check the cni logs (when a node is unhealthy)
sudo journalctl -xe | grep cni

4. Use the dashboard to deploy the tomcat container

After the deployment succeeds, the dashboard shows the tomcat workload as running.
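
For reference, roughly the same result can be achieved from the command line instead of the dashboard; this is only a sketch, and the name tomcat-demo is hypothetical:

# Create a deployment from the tomcat image and expose it on a NodePort
kubectl create deployment tomcat-demo --image=tomcat:latest
kubectl expose deployment tomcat-demo --type=NodePort --port=8080
# Check the pod and the assigned NodePort
kubectl get pod,svc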

5. Cluster file sharing based on NFS

Operate on the master node, using it as the file sharing server.

# Install nfs-utils and rpcbind on the master node
yum install -y nfs-utils rpcbind
# Go to /usr/local and create a data directory
cd /usr/local
mkdir data
# Go into data and create www-data
cd data
mkdir www-data
cd www-data
# Configure which folder is exported
vim /etc/exports

# Export the folder as read-write, then save and exit
/usr/local/data/www-data 192.168.2.167/24(rw,sync)

# Start the nfs and rpcbind services
systemctl start nfs.service
systemctl start rpcbind.service

# Enable them at boot
systemctl enable nfs.service
systemctl enable rpcbind.service

# Check that the export is configured
exportfs

Slave node settings

# Install nfs-utils
yum install -y nfs-utils

# List the folders shared by the master node (the IP is the master's IP)
showmount -e 192.168.2.167
# Mount the shared folder onto the local /mnt folder (the IP is the master's IP)
mount 192.168.2.167:/usr/local/data/www-data /mnt
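
The mount command above does not survive a reboot; to make it persistent, an fstab entry along these lines can be added on each node (a sketch; adjust the IP and paths to your environment):

# Persist the NFS mount across reboots, then apply and verify it
echo '192.168.2.167:/usr/local/data/www-data /mnt nfs defaults 0 0' >> /etc/fstab
mount -a
df -h | grep www-data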

Deploying tomcat from the command line

Under the /usr/local/k8s folder, create tomcat-service and tomcat-deploy folders, and create a tomcat-service.yml and a tomcat-deploy.yml file in them respectively.

The content of the tomcat-service.yml file is as follows

apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
spec:
  type: NodePort
  selector:
    app: tomcat-cluster
  ports:
  - port: 8000
    targetPort: 8080
    nodePort: 32500

The content of tomcat-deploy.yml is as follows

apiVersion: extensions/v1beta1
kind: Deployment
metadata: 
  name: tomcat-deploy
spec:
  replicas: 2 
  template: 
    metadata:
      labels:
        app: tomcat-cluster
    spec:
      volumes: 
      - name: web-app
        hostPath:
          path: /mnt
      containers:
      - name: tomcat-cluster
        image: tomcat:latest
        resources:
          requests:
            cpu: 0.5
            memory: 200Mi
          limits:
            cpu: 1
            memory: 512Mi
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: web-app
          mountPath: /usr/local/tomcat/webapps

Commands to create the tomcat containers

# Deploy tomcat
kubectl create -f tomcat-deploy.yml
# Check whether the deployment succeeded
kubectl get deployment
# Check the pods
kubectl get pod -o wide

6. Use Rinetd to provide external load balancing support for the Service

# Prerequisite: make sure the tomcat-deploy started earlier is running
# Delete the previously deployed service
kubectl get deployment
kubectl delete deployment <deployment-name>
kubectl get service
kubectl delete service <service-name>

Modify tomcat-service.yml to expose the same port through the cluster IP only (with the NodePort settings commented out).

apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
spec:
#  type: NodePort
  selector:
    app: tomcat-cluster
  ports:
  - port: 8000
    targetPort: 8080
#    nodePort: 32500

Create the tomcat service

kubectl create -f tomcat-service.yml
# List services
kubectl get service

# Show the service details
kubectl describe service tomcat-service

Create index.jsp in the shared directory with the following content (it returns the server IP):

<%=request.getLocalAddr()%>

Access it via the tomcat-service cluster IP; the returned IP alternates between the two pods.

curl 10.108.28.103:8000/index.jsp
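
Repeating the request a few times makes the alternation easier to see (assuming 10.108.28.103 is the cluster IP that kubectl get service reports in your environment):

# The returned pod IP should alternate between the two replicas
for i in 1 2 3 4; do curl 10.108.28.103:8000/index.jsp; done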

Install the port forwarding tool Rinetd in the Linux environment

cd /usr/local
# Download the source package; if the download fails, install from an offline package
wget http://www.boutell.com/rinetd/http/rinetd.tar.gz
# Extract it
tar -zxvf rinetd.tar.gz
cd rinetd
# Adjust the allowed port range
sed -i 's/65536/65535/g' rinetd.c
# Create the man directory under /usr
mkdir -p /usr/man/
yum install -y gcc
# Compile and install
make && make install

If make && make install completes without errors, the installation succeeded.

# Add a port mapping
vim /etc/rinetd.conf
# Add the mapping (10.108.28.103 is the tomcat-service cluster IP, 8000 is the port, 0.0.0.0 means requests from any IP are allowed)
0.0.0.0 8000 10.108.28.103 8000

# Load the configuration file
rinetd -c /etc/rinetd.conf

At this point, the service can be accessed via the host IP.
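
A quick way to confirm the forwarding works (assuming rinetd runs on the master node, 192.168.2.167 in this setup):

# The request hits host port 8000 and is forwarded by rinetd to the service
curl 192.168.2.167:8000/index.jsp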
