k8s Offline Installation and Deployment Tutorial, x86 (Part 1)

| Component | Version | Architecture |
| --- | --- | --- |
| Docker | 20.10.9 | x86 |
| Kubernetes | v1.22.4 | x86 |
| Kuboard | v3 | x86 |

I. k8s (x86)

1. Installing the Docker environment

1.1. Download

Download docker-20.10.9-ce.tgz (download address: link); choose the CentOS 7 x86_64 build.

Note: you can also refer to the official documentation for the installation.

1.2. Upload

Upload docker-20.10.9-ce.tgz to /opt/tools.

1.3. Extract

tar  -zxvf  docker-20.10.9-ce.tgz

cp docker/*  /usr/bin/

1.4. Create docker.service

vi /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker

# Expose the remote API on tcp://0.0.0.0:2375 in addition to the local socket
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
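With the unit file in place, a minimal sketch of bringing Docker up and checking it (standard systemd/docker commands, assuming the binaries were copied to /usr/bin as above; section 1.5 below restarts the service again after configuring the registry):

systemctl daemon-reload
systemctl enable --now docker

# Both commands should succeed and report engine version 20.10.9
docker version
docker info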

1.5. Point Docker at the Harbor registry

vi /etc/docker/daemon.json

{
	"insecure-registries":["192.168.xx.xx"] 
}

After modifying the file, restart the Docker service:

systemctl daemon-reload 

service docker restart    # or: systemctl restart docker

After restarting Docker, log in to Harbor:

docker login <harbor-ip-address>
# then enter the account name
# and the password when prompted

2. Kubernetes installation prerequisites

Installation guide

  • Let iptables see bridged traffic (also included in the script below)

Make sure the br_netfilter module is loaded. You can check with lsmod | grep br_netfilter; to load it explicitly, run sudo modprobe br_netfilter.

For iptables on your Linux nodes to correctly see bridged traffic, make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration. For example:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
  • hostname, selinux, swap, iptables
#########################################################################
# Disable the firewall. On a cloud server, open the required ports in the security-group policy instead
# https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
systemctl stop firewalld
systemctl disable firewalld

# Set the hostname
hostnamectl set-hostname k8s-01
# Check the result
hostnamectl status
# Add a hosts entry for the new hostname
echo "127.0.0.1   $(hostname)" >> /etc/hosts

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Let iptables see bridged traffic
# https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#%E5%85%81%E8%AE%B8-iptables-%E6%A3%80%E6%9F%A5%E6%A1%A5%E6%8E%A5%E6%B5%81%E9%87%8F
## Load br_netfilter
## sudo modprobe br_netfilter
## Confirm it is loaded
## lsmod | grep br_netfilter

## Adjust the sysctl configuration

# Pass bridged IPv4 traffic to the iptables chains by editing /etc/sysctl.conf
# If the keys already exist, rewrite them in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf

# If the keys do not exist yet, append them
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf

# Apply the settings
sysctl -p

#################################################################
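After running the script, a quick check that the prerequisites took effect (a minimal sketch; the values in the comments are what you should see):

getenforce                              # Permissive now, Disabled after reboot
free -h | grep -i swap                  # swap total should be 0
lsmod | grep br_netfilter               # module loaded
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables   # both should be 1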
  • Firewall ports

Control plane node (master)

| Protocol | Direction | Port Range | Purpose | Used By |
| --- | --- | --- | --- | --- |
| TCP | Inbound | 6443 | Kubernetes API server | All components |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | kubelet itself, control plane components |
| TCP | Inbound | 10251 | kube-scheduler | kube-scheduler itself |
| TCP | Inbound | 10252 | kube-controller-manager | kube-controller-manager itself |

Worker nodes

| Protocol | Direction | Port Range | Purpose | Used By |
| --- | --- | --- | --- | --- |
| TCP | Inbound | 10250 | Kubelet API | kubelet itself, control plane components |
| TCP | Inbound | 30000-32767 | NodePort Services | All components |

Note: in a production environment, open these ports rather than disabling the firewall outright; a sketch of doing so follows.
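A minimal sketch with firewalld, assuming it is kept running on the master (port list taken from the table above):

# Open the control-plane ports
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
# On worker nodes, open 10250/tcp and 30000-32767/tcp instead
firewall-cmd --reload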

  • Install CNI plugins (required by most Pod networks):
CNI_VERSION="v0.8.2"
ARCH="amd64"
sudo mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz" | sudo tar -C /opt/cni/bin -xz

Offline deployment: download cni-plugins-linux-amd64-v0.8.2.tgz in advance and upload it to /opt/tools/k8s.

Extract and install:

mkdir -p /opt/cni/bin
tar -zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin
  • Install crictl (required by kubeadm/kubelet for the Container Runtime Interface (CRI))
DOWNLOAD_DIR=/usr/local/bin
sudo mkdir -p $DOWNLOAD_DIR
CRICTL_VERSION="v1.17.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz

Offline deployment: download crictl-v1.17.0-linux-amd64.tar.gz in advance and upload it to /opt/tools/k8s.

Extract and install:

tar -zxvf crictl-v1.17.0-linux-amd64.tar.gz -C /usr/local/bin
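Optionally, crictl can be pointed at the runtime's CRI socket. A minimal sketch, assuming this guide's setup (Docker driven through the kubelet's built-in dockershim in v1.22; the socket only appears once the kubelet is managing containers):

cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
timeout: 10
EOF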

3. Installing the Kubernetes services

Official guide

Version to install: v1.22.4

Install kubeadm, kubelet, and kubectl, and add the kubelet systemd service:

DOWNLOAD_DIR=/usr/local/bin
sudo mkdir -p $DOWNLOAD_DIR
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"   # resolves to the latest stable release; set RELEASE="v1.22.4" instead to match this guide

ARCH="amd64"

cd $DOWNLOAD_DIR

sudo curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}

sudo chmod +x {kubeadm,kubelet,kubectl}

RELEASE_VERSION="v0.4.0"

curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service

sudo mkdir -p /etc/systemd/system/kubelet.service.d

curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Offline deployment:

Download kubeadm, kubelet, and kubectl in advance and upload them to /opt/tools/k8s.

chmod +x kubeadm kubectl kubelet
cp kube* /usr/local/bin/
kubeadm version
kubectl version
kubelet --version

Download kubelet.service and 10-kubeadm.conf in advance and upload them to /opt/tools/k8s.

DOWNLOAD_DIR=/usr/local/bin
sed -i "s:/usr/bin:${DOWNLOAD_DIR}:g" kubelet.service  
cp kubelet.service /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
sed -i "s:/usr/bin:${DOWNLOAD_DIR}:g" 10-kubeadm.conf
cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d

Enable and start the kubelet:

systemctl enable --now kubelet

The kubelet will now restart every few seconds, since it is stuck in a crash loop waiting for kubeadm to tell it what to do.
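You can watch this with the usual systemd tooling (a quick check, not required for the installation):

systemctl status kubelet
journalctl -u kubelet -f    # Ctrl-C to stop following the log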

  • Configure the cgroup driver

Note:

Docker uses cgroupfs as its cgroup driver by default, while the kubelet here uses systemd; the two must match, otherwise the kubelet will not start.

Change Docker's cgroup driver to systemd:

# Edit (or create) the daemon.json file
vi /etc/docker/daemon.json

# Add the following setting (keep the insecure-registries entry from section 1.5 in the same file)
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
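For reference, a sketch of the combined file after both sections (the registry address is the hypothetical one used earlier):

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["192.168.xx.xx"]
}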

Restart Docker:

systemctl restart docker
systemctl status docker
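Confirm the driver change took effect (the expected line comes from standard docker info output):

docker info 2>/dev/null | grep -i "cgroup driver"
# Cgroup Driver: systemd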

4. Creating the cluster with kubeadm

4.1. Prepare the required container images

This step is optional and only applies if you want kubeadm init / kubeadm join to avoid downloading the default container images hosted on k8s.gcr.io.

Running kubeadm without an Internet connection

To run kubeadm without an Internet connection, you must pull the required control-plane images in advance.

You can list and pull the images with the kubeadm config images subcommands:

kubeadm config images list
kubeadm config images pull


The required images are listed below; pull them on a server that has Internet access.

k8s.gcr.io/kube-apiserver:v1.22.4
k8s.gcr.io/kube-controller-manager:v1.22.4
k8s.gcr.io/kube-scheduler:v1.22.4
k8s.gcr.io/kube-proxy:v1.22.4
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

Export the downloaded images with docker save -o xxx.tar <image>, upload the tarballs to /opt/tools/k8s/images, then load them into the local Docker environment with docker load -i xxx.tar.

Note: there is an odd quirk here. Loaded this way, the coredns image ends up named k8s.gcr.io/coredns:v1.8.4, missing one coredns path segment, so it has to be re-tagged:

docker tag k8s.gcr.io/coredns:v1.8.4  k8s.gcr.io/coredns/coredns:v1.8.4
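For convenience, a minimal sketch of pulling and exporting all of the images on the Internet-connected machine and loading them on the offline host (the image list matches `kubeadm config images list` for v1.22.4; the tarball names are arbitrary):

# On the machine with Internet access
IMAGES="kube-apiserver:v1.22.4 kube-controller-manager:v1.22.4 kube-scheduler:v1.22.4 kube-proxy:v1.22.4 pause:3.5 etcd:3.5.0-0 coredns/coredns:v1.8.4"
for img in $IMAGES; do
  docker pull "k8s.gcr.io/${img}"
  # e.g. coredns/coredns:v1.8.4 -> coredns-coredns-v1.8.4.tar
  docker save -o "$(echo "${img}" | tr '/:' '--').tar" "k8s.gcr.io/${img}"
done

# On the offline host, after uploading the tarballs to /opt/tools/k8s/images
for f in /opt/tools/k8s/images/*.tar; do
  docker load -i "$f"
done

If the loaded coredns image still shows the shortened name, apply the docker tag fix shown above.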

4.2. Run kubeadm init

kubeadm init \
--apiserver-advertise-address=192.168.4.45 \
--kubernetes-version v1.22.4 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16

apiserver-advertise-address: the address the master's API server advertises, i.e. the master host's IP.

kubernetes-version: the Kubernetes version to install.

service-cidr: the IP range used for Service (cluster-internal load balancing) addresses.

pod-network-cidr: the IP range used for Pod addresses.

Note on pod-network-cidr and service-cidr:

CIDR (Classless Inter-Domain Routing) denotes a reachable network range. The Pod subnet, the Service subnet, and the subnet of the host's own IP must not overlap.

4.3. Follow the printed instructions

The log printed by kubeadm init looks like this:

[init] Using Kubernetes version: v1.22.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.503100 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: jokbeq.logz5fixljdrna6r
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.4.45:6443 --token jokbeq.logz5fixljdrna6r \
	--discovery-token-ca-cert-hash sha256:16ba5133d2ca72714c4a7dd864a5906baa427dc53d8eb7cf6d890388300a052a 
  • Step 1: copy the kubeconfig into place
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Step 2: export the environment variable
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
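After either of the two options above, kubectl should be able to reach the API server (a quick check; the master will show NotReady until a Pod network is installed in the next step):

kubectl cluster-info
kubectl get nodes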
  • Step 3: deploy a Pod network (Calico is used here)
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Calico installation guide, version v3.21.1

Install Calico using the Kubernetes API datastore (50 nodes or fewer)

Online deployment:

1. Download the Calico networking manifest for the Kubernetes API datastore.

curl https://docs.projectcalico.org/manifests/calico.yaml -O

2. If you are using pod CIDR 192.168.0.0/16, skip to the next step. If you are using a different pod CIDR, set an environment variable POD_CIDR to your pod CIDR and use the commands below to replace 192.168.0.0/16 in the manifest. (Since we use 192.168.0.0/16 here, this step is not needed.)

POD_CIDR="<your-pod-cidr>" \
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml

3. Apply the manifest with the following command.

kubectl apply -f calico.yaml

Offline deployment:

Applying calico.yaml pulls several images, so on a host without Internet access, download them first on a server that can reach the Internet.

cat calico.yaml | grep image: | awk '{print $2}'

docker.io/calico/cni:v3.21.1
docker.io/calico/pod2daemon-flexvol:v3.21.1
docker.io/calico/node:v3.21.1
docker.io/calico/kube-controllers:v3.21.1

# Pull all of the images
cat calico.yaml \
    | grep image: \
    | awk '{print "docker pull " $2}' \
    | sh

# (You can of course also pull them one by one)

# Export the images as tarballs in the current directory
docker save -o calico-cni-v3.21.1.tar calico/cni:v3.21.1
docker save -o calico-pod2daemon-flexvol-v3.21.1.tar calico/pod2daemon-flexvol:v3.21.1
docker save -o calico-node-v3.21.1.tar calico/node:v3.21.1
docker save -o calico-kube-controllers-v3.21.1.tar calico/kube-controllers:v3.21.1

# Load them into the Docker environment on the offline host
docker load -i calico-cni-v3.21.1.tar
docker load -i calico-pod2daemon-flexvol-v3.21.1.tar
docker load -i calico-node-v3.21.1.tar
docker load -i calico-kube-controllers-v3.21.1.tar

# Install Calico
kubectl apply -f calico.yaml

# To remove Calico
kubectl delete -f calico.yaml

After the installation succeeds, the Calico Pods move to the Running state.


At the same time, the coredns Pods also become Running.
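A quick way to verify (Pod names will differ per cluster):

kubectl get pods -n kube-system -o wide   # calico-node and coredns Pods should be Running
kubectl get nodes                         # the master should now be Ready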

  • Step 4: join the worker nodes
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.4.45:6443 --token jokbeq.logz5fixljdrna6r \
	--discovery-token-ca-cert-hash sha256:16ba5133d2ca72714c4a7dd864a5906baa427dc53d8eb7cf6d890388300a052a 
# What to do if the token has expired
kubeadm token create --print-join-command

kubeadm join 192.168.4.45:6443 --token l7smzu.ujy68m80prq526nh --discovery-token-ca-cert-hash sha256:16ba5133d2ca72714c4a7dd864a5906baa427dc53d8eb7cf6d890388300a052a 

The Kubernetes cluster is now installed.


5. Verify the cluster

# List all nodes
kubectl get nodes

# Label nodes
## Everything in k8s is an object. Node: a machine. Pod: an application container
### Add a label
kubectl label node k8s-worker1 node-role.kubernetes.io/worker=''

### Remove a label
kubectl label node k8s-worker1 node-role.kubernetes.io/worker-

## In a k8s cluster, a rebooted machine automatically rejoins the cluster, and a rebooted master automatically rejoins the control plane

Next, tell Kubernetes to drain the node:

kubectl drain <node name>

Once it returns (without errors), you can power down the node (or, equivalently on a cloud platform, delete the virtual machine backing it). If instead you keep the node in the cluster during the maintenance, run the following afterwards to tell Kubernetes it may resume scheduling new Pods onto the node:

kubectl uncordon <node name>

Origin: juejin.im/post/7067702149613879332