Chapter 1: Installing Kubernetes v1.20 with kubeadm -- Basic Environment Configuration and Kernel Configuration (Part 1)

1. Kubernetes High-Availability Architecture Overview

[Figure: Kubernetes high-availability architecture diagram]

2. Basic Environment Configuration

The kubeadm installation procedure has hardly changed since v1.14, so this document can also be used to install the latest Kubernetes release. CentOS 7.x is used here.

Kubernetes documentation: https://kubernetes.io/docs/setup/

High-availability installation guide for the latest release: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

Table 1-1 High-availability Kubernetes cluster plan

Role     Hostname                    Specs  IP address                        Software
master1  k8s-master01.example.local  2C4G   172.31.3.101                      chrony-client, docker, kubeadm, kubelet, kubectl
master2  k8s-master02.example.local  2C4G   172.31.3.102                      chrony-client, docker, kubeadm, kubelet, kubectl
master3  k8s-master03.example.local  2C4G   172.31.3.103                      chrony-client, docker, kubeadm, kubelet, kubectl
ha1      k8s-ha01.example.local      2C2G   172.31.3.104, 172.31.3.188 (VIP)  chrony-server, haproxy, keepalived
ha2      k8s-ha02.example.local      2C2G   172.31.3.105                      chrony-server, haproxy, keepalived
harbor1  k8s-harbor01.example.local  2C2G   172.31.3.106                      chrony-client, docker, docker-compose, harbor
harbor2  k8s-harbor02.example.local  2C2G   172.31.3.107                      chrony-client, docker, docker-compose, harbor
node1    k8s-node01.example.local    2C4G   172.31.3.108                      chrony-client, docker, kubeadm, kubelet
node2    k8s-node02.example.local    2C4G   172.31.3.109                      chrony-client, docker, kubeadm, kubelet
node3    k8s-node03.example.local    2C4G   172.31.3.110                      chrony-client, docker, kubeadm, kubelet

Software versions and Pod/Service network planning:

Item                   Value
Supported OS versions  CentOS 7.9 / CentOS Stream 8 / Rocky 8 / Ubuntu 18.04/20.04
Docker version         19.03.15
kubeadm version        1.20.14
Pod CIDR               192.168.0.0/12
Service CIDR           10.96.0.0/12

Note:

Three network ranges are involved when installing the cluster:

Host network: the network that the Kubernetes servers themselves are on.

Pod CIDR: the address range assigned to Pods, i.e. the container IPs.

Service CIDR: the address range used by Services for communication inside the cluster.

In this document the Service CIDR will be set to 10.96.0.0/12,

the Pod CIDR will be set to 192.168.0.0/12,

and the host network is 172.31.0.0/21.

These three ranges must not overlap in any way.

For example, if the host IPs are 10.105.0.x, the Service CIDR cannot be 10.96.0.0/12, because the usable addresses of 10.96.0.0/12 are:

10.96.0.1 ~ 10.111.255.255

10.105 lies inside that range, so the networks overlap and the Service CIDR has to be changed.

It can be changed to 192.168.0.0/16 instead (note that if the Service CIDR starts with 192.168, the prefix length should be /16 rather than /12, because a /12 network covering that space starts at 192.160.0.1, not 192.168.0.1).

The same rule applies to the other ranges: none of them may overlap. You can verify a range with a subnet calculator such as http://tools.jb51.net/aideddesign/ip_net_calc/.
As a general recommendation, simply make the first octet of each range different. For example, if your hosts start with 192, the Service CIDR can be 10.96.0.0/12;

if your hosts start with 10, change the Service CIDR to 192.168.0.0/16;

if your hosts start with 172, change the Pod CIDR to 192.168.0.0/12.

Combining one 10.x range, one 172.x range and one 192.x range means the leading octets never collide, which rules out any overlap and saves you the calculation. The check can also be scripted, as in the sketch below.
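
A minimal bash sketch of that overlap check (not part of the original article; the helper names ip_to_int and in_cidr are made up for illustration), testing whether a host IP falls inside a candidate Service or Pod CIDR:

#!/bin/bash
# Convert a dotted-quad IPv4 address to an integer.
ip_to_int() {
    local IFS=.
    read -r a b c d <<< "$1"
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Succeed if the IP in $1 lies inside the CIDR in $2.
in_cidr() {
    local ip=$1 net=${2%/*} bits=${2#*/}
    local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

# Example from the text: a 10.105.0.x host overlaps with 10.96.0.0/12.
in_cidr 10.105.0.1 10.96.0.0/12 && echo "overlap - change the Service CIDR" || echo "no overlap"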

The VIP (virtual IP) must not clash with any IP already in use on the corporate network. Ping it first: only if it does not answer can it be used as the VIP. The VIP must be in the same LAN as the hosts!
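
For example, a quick check on the VIP planned in Table 1-1 (172.31.3.188):

ping -c 2 -W 1 172.31.3.188 && echo "172.31.3.188 is already in use, pick another VIP" || echo "172.31.3.188 does not answer and can be used as the VIP"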

On a public cloud the VIP is the address of the cloud load balancer, for example the internal SLB address on Alibaba Cloud or the internal ELB address on Tencent Cloud.

Set the hostname on each node:

hostnamectl set-hostname k8s-master01.example.local
hostnamectl set-hostname k8s-master02.example.local
hostnamectl set-hostname k8s-master03.example.local
hostnamectl set-hostname k8s-ha01.example.local
hostnamectl set-hostname k8s-ha02.example.local
hostnamectl set-hostname k8s-harbor01.example.local
hostnamectl set-hostname k8s-harbor02.example.local
hostnamectl set-hostname k8s-node01.example.local
hostnamectl set-hostname k8s-node02.example.local
hostnamectl set-hostname k8s-node03.example.local

Configure a static IP address on each node in the following format:

#CentOS
[root@k8s-master01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
NAME=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.31.3.101
PREFIX=21
GATEWAY=172.31.0.2
DNS1=223.5.5.5
DNS2=180.76.76.76

#Ubuntu
root@k8s-master01:~# cat /etc/netplan/01-netcfg.yaml 
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses: [172.31.3.101/21] 
      gateway4: 172.31.0.2
      nameservers:
        addresses: [223.5.5.5, 180.76.76.76]
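
The netplan file only takes effect once it is applied; this step is implied rather than shown in the original:

root@k8s-master01:~# netplan apply
root@k8s-master01:~# ip addr show eth0    #confirm 172.31.3.101/21 is assigned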

Configure /etc/hosts on all nodes as follows:

cat >> /etc/hosts <<EOF
172.31.3.101 k8s-master01.example.local k8s-master01
172.31.3.102 k8s-master02.example.local k8s-master02
172.31.3.103 k8s-master03.example.local k8s-master03
172.31.3.104 k8s-ha01.example.local k8s-ha01
172.31.3.105 k8s-ha02.example.local k8s-ha02
172.31.3.106 k8s-harbor01.example.local k8s-harbor01
172.31.3.107 k8s-harbor02.example.local k8s-harbor02
172.31.3.108 k8s-node01.example.local k8s-node01
172.31.3.109 k8s-node02.example.local k8s-node02
172.31.3.110 k8s-node03.example.local k8s-node03
172.31.3.188 k8s-lb
172.31.3.188 harbor.raymonds.cc
EOF

Configure the yum repositories on all CentOS 7 nodes as follows:

[root@k8s-master01 ~]# rm -f /etc/yum.repos.d/*.repo
[root@k8s-master01 ~]# cat > /etc/yum.repos.d/base.repo <<'EOF' #quote EOF so that $releasever and $basearch are written literally instead of being expanded by the shell
[base]
name=base
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever

[extras]
name=extras
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever

[updates]
name=updates
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever

[centosplus]
name=centosplus
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever

[epel]
name=epel
baseurl=https://mirrors.cloud.tencent.com/epel/$releasever/$basearch/
gpgcheck=1
gpgkey=https://mirrors.cloud.tencent.com/epel/RPM-GPG-KEY-EPEL-$releasever
EOF
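
Optionally refresh the cache and confirm the repositories resolve (a verification step, not in the original):

[root@k8s-master01 ~]# yum clean all
[root@k8s-master01 ~]# yum repolist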

Configure the yum repositories on all Rocky 8 nodes as follows:

[root@k8s-master01 ~]# cat /etc/yum.repos.d/base.repo 
[BaseOS]
name=BaseOS
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/BaseOS/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial

[AppStream]
name=AppStream
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/AppStream/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial

[extras]
name=extras
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/extras/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
enabled=1

[plus]
name=plus
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/plus/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial

[PowerTools]
name=PowerTools
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/PowerTools/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial

Configure the yum repositories on all CentOS Stream 8 nodes as follows:

[root@k8s-master01 ~]# cat /etc/yum.repos.d/base.repo 
[BaseOS]
name=BaseOS
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/BaseOS/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

[AppStream]
name=AppStream
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/AppStream/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

[extras]
name=extras
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/extras/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

[centosplus]
name=centosplus
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/centosplus/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

[PowerTools]
name=PowerTools
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/PowerTools/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

Configure the apt sources on all Ubuntu nodes as follows:

root@k8s-master01:~# cat > /etc/apt/sources.list <<EOF
deb http://mirrors.cloud.tencent.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic main restricted universe multiverse

deb http://mirrors.cloud.tencent.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic-security main restricted universe multiverse

deb http://mirrors.cloud.tencent.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic-updates main restricted universe multiverse

deb http://mirrors.cloud.tencent.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic-proposed main restricted universe multiverse

deb http://mirrors.cloud.tencent.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic-backports main restricted universe multiverse
EOF

Install the essential tools:

#Install on CentOS
yum -y install vim tree lrzsz wget jq psmisc net-tools telnet yum-utils device-mapper-persistent-data lvm2 git 

#Install on Ubuntu
apt -y install tree lrzsz jq

Disable the firewall, SELinux and swap on all nodes. Configure the servers as follows:

#CentOS
systemctl disable --now firewalld

#CentOS 7
systemctl disable --now NetworkManager

#Ubuntu
systemctl disable --now ufw

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable the swap partition:

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a

#On Ubuntu 20.04, run the following commands
sed -ri 's/.*swap.*/#&/' /etc/fstab
SD_NAME=`lsblk|awk -F"[ └─]" '/SWAP/{printf $3}'`
systemctl mask dev-${SD_NAME}.swap
swapoff -a
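
An optional check (not in the original) to confirm swap is fully disabled:

free -h           #the Swap line should read 0B
cat /proc/swaps   #only the header line should remain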

Install the chrony server on ha01 and ha02:

[root@k8s-ha01 ~]# cat install_chrony_server.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2021-11-22
#FileName:      install_chrony_server.sh
#URL:           raymond.blog.csdn.net
#Description:   install_chrony_server for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

os(){
    # Detect the distribution name from /etc/os-release
    OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

install_chrony(){
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
        yum -y install chrony &> /dev/null
        # Replace the default pool/server entries with domestic NTP servers, allow all clients and serve local time at stratum 10
        sed -i -e '/^pool.*/d' -e '/^server.*/d' -e '/^# Please consider .*/a\server ntp.aliyun.com iburst\nserver time1.cloud.tencent.com iburst\nserver ntp.tuna.tsinghua.edu.cn iburst' -e 's@^#allow.*@allow 0.0.0.0/0@' -e 's@^#local.*@local stratum 10@' /etc/chrony.conf
        systemctl enable --now chronyd &> /dev/null
        systemctl is-active chronyd &> /dev/null || { ${COLOR}"chrony failed to start, exiting!"${END}; exit; }
        ${COLOR}"chrony installed successfully"${END}
    else
        apt -y install chrony &> /dev/null
        sed -i -e '/^pool.*/d' -e '/^# See http:.*/a\server ntp.aliyun.com iburst\nserver time1.cloud.tencent.com iburst\nserver ntp.tuna.tsinghua.edu.cn iburst' /etc/chrony/chrony.conf
        echo "allow 0.0.0.0/0" >> /etc/chrony/chrony.conf
        echo "local stratum 10" >> /etc/chrony/chrony.conf
        systemctl enable --now chronyd &> /dev/null
        systemctl is-active chronyd &> /dev/null || { ${COLOR}"chrony failed to start, exiting!"${END}; exit; }
        ${COLOR}"chrony installed successfully"${END}
    fi
}

main(){
    os
    install_chrony
}

main

[root@k8s-ha01 ~]# bash install_chrony_server.sh 
chrony installed successfully

[root@k8s-ha02 ~]# bash install_chrony_server.sh 
chrony installed successfully

[root@k8s-ha01 ~]# chronyc sources -nv
210 Number of sources = 3
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   6    17    39  -1507us[-8009us] +/-   37ms
^- 139.199.215.251               2   6    17    39    +10ms[  +10ms] +/-   48ms
^? 101.6.6.172                   0   7     0     -     +0ns[   +0ns] +/-    0ns

[root@k8s-ha02 ~]# chronyc sources -nv
210 Number of sources = 3
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   6    17    40    +90us[-1017ms] +/-   32ms
^+ 139.199.215.251               2   6    33    37    +13ms[  +13ms] +/-   25ms
^? 101.6.6.172                   0   7     0     -     +0ns[   +0ns] +/-    0ns

Install the chrony client on the master, node and harbor machines:

[root@k8s-master01 ~]# cat install_chrony_client.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2021-11-22
#FileName:      install_chrony_client.sh
#URL:           raymond.blog.csdn.net
#Description:   install_chrony_client for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
SERVER1=172.31.3.104
SERVER2=172.31.3.105

os(){
    # Detect the distribution name from /etc/os-release
    OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

install_chrony(){
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
        yum -y install chrony &> /dev/null
        # Point the client at the two chrony servers (ha01/ha02)
        sed -i -e '/^pool.*/d' -e '/^server.*/d' -e '/^# Please consider .*/a\server '${SERVER1}' iburst\nserver '${SERVER2}' iburst' /etc/chrony.conf
        systemctl enable --now chronyd &> /dev/null
        systemctl is-active chronyd &> /dev/null || { ${COLOR}"chrony failed to start, exiting!"${END}; exit; }
        ${COLOR}"chrony installed successfully"${END}
    else
        apt -y install chrony &> /dev/null
        sed -i -e '/^pool.*/d' -e '/^# See http:.*/a\server '${SERVER1}' iburst\nserver '${SERVER2}' iburst' /etc/chrony/chrony.conf
        systemctl enable --now chronyd &> /dev/null
        systemctl is-active chronyd &> /dev/null || { ${COLOR}"chrony failed to start, exiting!"${END}; exit; }
        systemctl restart chronyd
        ${COLOR}"chrony installed successfully"${END}
    fi
}

main(){
    os
    install_chrony
}

main

[root@k8s-master01 ~]# for i in k8s-master02 k8s-master03 k8s-harbor01 k8s-harbor02 k8s-node01 k8s-node02 k8s-node03;do scp install_chrony_client.sh $i:/root/ ; done

[root@k8s-master01 ~]# bash install_chrony_client.sh 
chrony installed successfully

[root@k8s-master01 ~]# chronyc sources -nv
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^+ k8s-ha01                      3   6    17     8    +84us[  +74us] +/-   55ms
^* k8s-ha02                      3   6    17     8    -82us[  -91us] +/-   45ms

Set the time zone on all nodes. Time synchronization is configured as follows:

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
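
Optionally verify that the time zone took effect (an assumed check, not in the original):

timedatectl | grep "Time zone"    #should show Asia/Shanghai
date +"%Z %z"                     #should print CST +0800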

Configure resource limits on all nodes:

ulimit -SHn 65535

cat >>/etc/security/limits.conf <<EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
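
The limits.conf entries apply to new login sessions; after logging in again they can be verified (optional check, not in the original):

ulimit -n    #soft nofile, expect 65536
ulimit -u    #soft nproc; note that files under /etc/security/limits.d/ may override this value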

Set up key-based (password-less) SSH from Master01 to the other nodes. The configuration files and certificates generated during installation are created on Master01, and the cluster is also administered from Master01; on Alibaba Cloud or AWS a separate kubectl server is required. Configure the keys as follows:

[root@k8s-master01 ~]# cat ssh_key_push.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2021-11-19
#FileName:      ssh_key_push.sh
#URL:           raymond.blog.csdn.net
#Description:   ssh_key_push for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

export SSHPASS=123456
HOSTS="
172.31.3.101
172.31.3.102
172.31.3.103
172.31.3.104
172.31.3.105
172.31.3.106
172.31.3.107
172.31.3.108
172.31.3.109
172.31.3.110"

os(){
    # Detect the distribution name from /etc/os-release
    OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

ssh_key_push(){
    # Generate a fresh key pair, install sshpass if missing, then push the public key to every host in parallel
    rm -f ~/.ssh/id_rsa*
    ssh-keygen -f /root/.ssh/id_rsa -P '' &> /dev/null
    if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
        rpm -q sshpass &> /dev/null || { ${COLOR}"Installing the sshpass package"${END}; yum -y install sshpass &> /dev/null; }
    else
        dpkg -S sshpass &> /dev/null || { ${COLOR}"Installing the sshpass package"${END}; apt -y install sshpass &> /dev/null; }
    fi
    for i in $HOSTS;do
        {
            sshpass -e ssh-copy-id -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa.pub $i &> /dev/null
            [ $? -eq 0 ] && echo $i is finished || echo $i is false
        }&
    done
    wait
}

main(){
    os
    ssh_key_push
}

main

[root@k8s-master01 ~]# bash ssh_key_push.sh 
Installing the sshpass package
172.31.3.105 is finished
172.31.3.108 is finished
172.31.3.109 is finished
172.31.3.106 is finished
172.31.3.101 is finished
172.31.3.110 is finished
172.31.3.104 is finished
172.31.3.107 is finished
172.31.3.102 is finished
172.31.3.103 is finished

Upgrade the system on all nodes and reboot. The kernel is not upgraded here; it is upgraded separately in the next section:

yum update -y --exclude=kernel* && reboot #Required on CentOS 7; on CentOS 8 upgrade the system as needed

[root@k8s-master01 ~]# cat /etc/redhat-release 
CentOS Linux release 7.9.2009 (Core)
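
For Ubuntu nodes, a roughly equivalent upgrade step (an assumption; the original only shows the CentOS command) would be:

apt update && apt -y dist-upgrade && reboot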

3. Kernel Configuration

CentOS 7 needs its kernel upgraded to 4.18+; the version used for this upgrade is 4.19.

Download the kernel packages on the master01 node:

[root@k8s-master01 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm

[root@k8s-master01 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

Copy them from master01 to the other nodes:

[root@k8s-master01 ~]# for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done

Install the kernel on all nodes:

cd /root && yum localinstall -y kernel-ml*

Change the default boot kernel on all nodes:

grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg

grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

Check that the default kernel is 4.19:

grubby --default-kernel

[root@k8s-master01 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64

Reboot all nodes, then check that the running kernel is 4.19:

reboot

uname -a

[root@k8s-master01 ~]# uname -a
Linux k8s-master01 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

Install ipvsadm on the master and node machines:

#CentOS
yum -y install ipvsadm ipset sysstat conntrack libseccomp

#Ubuntu
apt -y install ipvsadm ipset sysstat conntrack libseccomp-dev

Configure the IPVS modules on all nodes. In kernel 4.19+ nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels below 4.18 nf_conntrack_ipv4 can still be used (a version-aware sketch follows the modprobe commands below):

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack #if the kernel is older than 4.18, change this line to nf_conntrack_ipv4
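
A small version-aware sketch (an assumption, not part of the original) that loads the right conntrack module for the running kernel:

KVER=$(uname -r | cut -d. -f1-2)
if [ "$(printf '%s\n' 4.18 "$KVER" | sort -V | head -n1)" = "4.18" ] && [ "$KVER" != "4.18" ]; then
    modprobe -- nf_conntrack        #kernel 4.19 and newer
else
    modprobe -- nf_conntrack_ipv4   #older kernels
fi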

cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
#if the kernel is older than 4.18, replace the next line with nf_conntrack_ipv4
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

Then run systemctl enable --now systemd-modules-load.service so the modules are loaded now and on every boot.

Enable the kernel parameters required by a Kubernetes cluster. Configure them on all nodes:

cat > /etc/sysctl.d/k8s.conf <<EOF 
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

sysctl --system
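
Optionally spot-check a few of the applied values (an assumed verification step, not in the original):

sysctl net.ipv4.ip_forward net.core.somaxconn net.netfilter.nf_conntrack_max
#net.netfilter.nf_conntrack_max is only visible once the nf_conntrack module is loaded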

Explanation of the common Kubernetes kernel tuning parameters:

net.ipv4.ip_forward = 1 #0 means IP forwarding is disabled; 1 means it is enabled.
net.bridge.bridge-nf-call-iptables = 1 #Make packets forwarded by layer-2 bridges pass through the iptables FORWARD rules, so that L3 iptables rules can also filter L2 frames.
net.bridge.bridge-nf-call-ip6tables = 1 #Whether IPv6 packets crossing a bridge are filtered by ip6tables.
fs.may_detach_mounts = 1 #Should be set to 1 on systems that run containers.

vm.overcommit_memory=1
#0: the kernel checks whether enough free memory is available; if so the allocation succeeds, otherwise it fails and an error is returned to the process.
#1: the kernel always allows allocations, regardless of the current memory state.
#2: the kernel refuses allocations that would exceed the commit limit (swap plus a configurable share of physical memory), i.e. no overcommit.

vm.panic_on_oom=0
#OOM is short for "out of memory": memory is exhausted and cannot be allocated. This parameter decides how the kernel reacts.
#0: on OOM, start the OOM killer.
#1: on OOM, the kernel may panic (reboot) or may start the OOM killer.
#2: on OOM, force a kernel panic (reboot).

fs.inotify.max_user_watches=89100 #Maximum number of inotify watches one user may add (watches usually target directories, so this limits how many directories one user can monitor at the same time).

fs.file-max=52706963 #System-wide maximum number of open files across all processes.
fs.nr_open=52706963 #Maximum number of file descriptors a single process may allocate.
net.netfilter.nf_conntrack_max=2310720 #Size of the connection-tracking table. A common sizing rule is CONNTRACK_MAX = RAMSIZE (in bytes) / 16384 / (x / 32), together with nf_conntrack_max = 4 * nf_conntrack_buckets; the default is 262144.

net.ipv4.tcp_keepalive_time = 600 #Idle time before TCP keepalive probes start, i.e. the normal keepalive period; default 7200s (2 hours).
net.ipv4.tcp_keepalive_probes = 3 #Number of unanswered keepalive probes after tcp_keepalive_time before the connection is dropped; default 9.
net.ipv4.tcp_keepalive_intvl =15 #Interval between keepalive probes; default 75s.
net.ipv4.tcp_max_tw_buckets = 36000 #Upper bound on TIME_WAIT sockets. Proxies such as Nginx should watch this value: once ports are exhausted the service misbehaves, and this limit lowers the chance of that happening and buys time to react.
net.ipv4.tcp_tw_reuse = 1 #Only affects the client side; when enabled, the client can reuse TIME_WAIT sockets after about 1s.
net.ipv4.tcp_max_orphans = 327680 #Maximum number of sockets not attached to any process that the system will handle; worth tuning when large numbers of connections must be built quickly.

net.ipv4.tcp_orphan_retries = 3
#Helps when many connections are stuck in FIN-WAIT-1.
#A FIN segment can be lost and must be retransmitted; retransmissions use exponential backoff (2s, 4s, ...), and tcp_orphan_retries limits how many retries are made.

net.ipv4.tcp_syncookies = 1 #Switch for SYN cookies, which mitigate some SYN flood attacks. tcp_synack_retries and tcp_syn_retries define the SYN retry counts.
net.ipv4.tcp_max_syn_backlog = 16384 #Maximum queue length for incoming SYN requests; default 1024. Raising it clearly helps on heavily loaded servers.
net.ipv4.ip_conntrack_max = 65536 #Legacy limit on the maximum number of tracked TCP connections; default 65536.
net.ipv4.tcp_max_syn_backlog = 16384 #Maximum number of clients whose SYN packets may be queued, i.e. the half-open connection limit.
net.ipv4.tcp_timestamps = 0 #With iptables NAT, internal machines could ping a domain but curl to it failed; the cause was net.ipv4.tcp_timestamps = 1 (timestamps enabled), so it is disabled here.
net.core.somaxconn = 16384	#Kernel limit on the listen() backlog of a socket. The backlog is the queue of connections that have not yet been accepted; when the server is slow and the queue fills up, new requests are rejected, so a larger value helps under load.

After the kernel configuration is finished on all nodes, reboot the servers and verify that the modules are still loaded after the reboot:

reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

[root@k8s-master01 ~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0 
nf_nat                 32768  1 ip_vs_ftp
ip_vs_sed              16384  0 
ip_vs_nq               16384  0 
ip_vs_fo               16384  0 
ip_vs_sh               16384  0 
ip_vs_dh               16384  0 
ip_vs_lblcr            16384  0 
ip_vs_lblc             16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs_wlc              16384  0 
ip_vs_lc               16384  0 
ip_vs                 151552  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          143360  2 nf_nat,ip_vs
nf_defrag_ipv6         20480  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs


Reprinted from blog.csdn.net/qq_25599925/article/details/122472395