k8s - Simple Cluster Deployment and Installation

1. Nodes that will run k8s must have more than 1 CPU core.
2. Network plan for the install nodes: 192.168.66.0/24, using hosts .10, .20, .21 and .100.
3. The default password of the koolshare soft router is koolshare.

Set the IP address of centos1

IPADDR=192.168.66.20
NETMASK=255.255.255.0
GATEWAY=192.168.66.1
DNS1=192.168.66.1
DNS2=114.114.114.114
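The lines above are only the fields that change. A minimal complete file could look like this (a sketch: the interface name ens33 and the path /etc/sysconfig/network-scripts/ifcfg-ens33 are assumptions, adjust them to the actual NIC), followed by a network restart:

# /etc/sysconfig/network-scripts/ifcfg-ens33  (assumed interface name)
TYPE=Ethernet
BOOTPROTO=static
DEVICE=ens33
NAME=ens33
ONBOOT=yes
IPADDR=192.168.66.20
NETMASK=255.255.255.0
GATEWAY=192.168.66.1
DNS1=192.168.66.1
DNS2=114.114.114.114

systemctl restart network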

Prerequisites for enabling IPVS in kube-proxy

modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules &&
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
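Two related points not spelled out above: on kernels 4.19 and newer the conntrack module is nf_conntrack rather than nf_conntrack_ipv4, and kubeadm's preflight also expects bridged traffic to be visible to iptables. A commonly used sysctl snippet (added here as a companion step, not part of the original notes):

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system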

Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum update -y && yum install -y docker-ce

## Create the /etc/docker directory
mkdir /etc/docker
# Configure daemon.json
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Reload systemd, restart Docker and enable it on boot
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
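To confirm that the systemd cgroup driver actually took effect after the restart (output wording varies a little between Docker versions):

docker info | grep -i cgroup
# expected: Cgroup Driver: systemd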

Install kubeadm (master/worker setup)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet.service
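Not listed in the original notes, but kubeadm's preflight checks normally also require swap to be off, and on a default CentOS install SELinux and firewalld tend to get in the way. A typical preparation sketch:

swapoff -a                                                    # disable swap immediately
sed -ri 's/.*swap.*/#&/' /etc/fstab                           # keep it disabled after reboot
setenforce 0                                                  # SELinux permissive for this session
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
systemctl stop firewalld && systemctl disable firewalld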

Initialize the master node

kubeadm config print init-defaults > kubeadm-config.yaml

Edit the generated kubeadm-config.yaml, changing or adding the fields below:

localAPIEndpoint:
  advertiseAddress: 192.168.66.10
kubernetesVersion: v1.15.1
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
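For context, these fields sit in the generated file roughly like this (a trimmed sketch of the v1beta2 layout that kubeadm 1.15 prints; keep the other generated fields such as nodeRegistration and etcd as they are, and the imageRepository line is an optional addition explained in the note further below):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.66.10
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs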



# Initialization command (for v1.15.x)
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log

# On newer versions (e.g. kubernetesVersion: v1.18.0) the flag was renamed; use --upload-certs
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
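After a successful init, kubeadm's output (also captured in kubeadm-init.log) asks you to set up kubectl for the current user; the standard steps are:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config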


Tutorial: https://jingyan.baidu.com/article/93f9803f73fa7de0e46f552b.html


Note:
Use a domestic (China) mirror to download the k8s images.
The default kube image registry stopped being reachable after version 1.8, so configure imageRepository in the config file instead. This gives kubelet an extra parameter so it does not try to pull images from the blocked upstream k8s registry when starting pods.

Example in the older v1alpha1 format (kubeadm 1.10):

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: "10.20.79.10"
networking:
  podSubnet: "10.244.0.0/16"
kubernetesVersion: "v1.10.3"
imageRepository: "registry.cn-hangzhou.aliyuncs.com/google_containers"

View kubelet error logs:

journalctl -xefu kubelet

journalctl -f -u kubelet

Restart kubelet

systemctl restart kubelet.service

Script to load the k8s images on worker nodes

#!/bin/bash
# Load every image tarball under /root/kubeadm-basic.images into Docker

ls /root/kubeadm-basic.images > /tmp/image-list.txt
cd /root/kubeadm-basic.images

for i in $( cat /tmp/image-list.txt )
do
    docker load -i "$i"
done

echo "docker load of k8s images finished----"
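The script assumes /root/kubeadm-basic.images already contains tar files produced by docker save. For completeness, a hypothetical companion loop that would produce such tar files on a machine that already has the images (the k8s.gcr.io filter and the file-naming scheme are assumptions, not from the original notes):

mkdir -p /root/kubeadm-basic.images && cd /root/kubeadm-basic.images
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep k8s.gcr.io); do
    docker save -o "$(echo "$img" | tr '/:' '__').tar" "$img"
done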

Manually download the k8s images

Pull the images:
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0        
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0            
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0  
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0        
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.3-0        
docker pull registry.aliyuncs.com/google_containers/coredns:1.6.7                                                  
docker pull registry.aliyuncs.com/google_containers/pause:3.2

Re-tag the images to the names kubeadm expects

docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag registry.aliyuncs.com/google_containers/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
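The same pull-and-retag sequence can be written as one short loop over the image list (a sketch; the names and versions match the commands above):

for img in kube-apiserver:v1.18.0 kube-proxy:v1.18.0 kube-controller-manager:v1.18.0 \
           kube-scheduler:v1.18.0 etcd:3.4.3-0 coredns:1.6.7 pause:3.2; do
    docker pull "registry.aliyuncs.com/google_containers/${img}"
    docker tag  "registry.aliyuncs.com/google_containers/${img}" "k8s.gcr.io/${img}"
done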

Join the master node and the remaining worker nodes

Run the join command recorded in the installation log.

# Note: worker nodes are NOT initialized with kubeadm init (that is only run on the master);
# if kubeadm init was run on a worker by mistake, undo it first:
kubeadm reset

Be sure to copy the join command printed on the master node and run it on each worker node:
kubeadm join 192.168.66.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cc37bc29e9ee92c34983d2a32f3fce84f2d384a0c1e88a51875d67d8a1db22fc
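If the token from the install log has expired (the default TTL is 24 hours), a fresh join command can be printed on the master at any time:

kubeadm token create --print-join-command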

Deploy the network: run this on the master node; once the workers have joined, they automatically get the flannel network.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If this step hangs, you can download the file locally
and then run:
kubectl apply -f kube-flannel.yml

Check whether the flannel pods are running normally:

kubectl get pod -n kube-system


# Check that the master node is in Ready state
kubectl get node
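Since kube-proxy was configured for ipvs mode, that can also be verified directly (ipvsadm is a separate package; the k8s-app=kube-proxy label is the standard one on the kube-proxy pods):

yum install -y ipvsadm
ipvsadm -Ln                                                   # should list virtual servers for the service IPs
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs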

# Manually pull the flannel image (if pulling from quay.io fails)
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64
# Re-tag it to the name the manifest requires
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64 quay.io/coreos/flannel:v0.12.0-amd64
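The tag you re-tag to has to match whatever the downloaded manifest actually references; a quick way to check (assuming kube-flannel.yml was saved to the current directory):

grep 'image:' kube-flannel.yml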

Problems encountered:

1. On a node: error: open /var/lib/kubelet/config.yaml: no such file or directory

Solution: a key file is missing. This usually happens when systemctl start kubelet is run before kubeadm init (or kubeadm join on a worker). Run kubeadm init successfully first.

2. [ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists

Solution: run kubeadm reset
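kubeadm reset does not clean up everything on its own; a commonly suggested follow-up (destructive, only on a node that is being rebuilt) is:

kubeadm reset -f
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf /etc/cni/net.d $HOME/.kube/config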


Reposted from blog.csdn.net/weixin_40558290/article/details/105961193