Detailed steps for installing Kubernetes 1.20.0

Prepare the environment

Name      IP                 OS
master    192.168.1.43       CentOS 7.9
node1     192.168.1.44       CentOS 7.9
node2     192.168.1.45       CentOS 7.9
NFS       192.168.1.46       CentOS 7.9

Install Docker
Docker is installed here with a quick-install script; it can also be installed with yum, which won't be covered in detail here.
Or see this post (a Harbor installation guide that also includes the Docker script): https://blog.csdn.net/weixin_44657145/article/details/119763246
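For reference, a minimal sketch of the script-based route mentioned above, using Docker's official convenience script (one option among several; run on all nodes and adapt as needed):

# Quick install with Docker's official convenience script
curl -fsSL https://get.docker.com | sh
systemctl enable docker && systemctl start docker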
Set the hostname on each machine according to the plan

hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2

Then add the hosts entries

cat >> /etc/hosts << EOF
192.168.1.43 master
192.168.1.44 node1
192.168.1.45 node2
EOF

Disable the firewall

systemctl stop firewalld && systemctl disable firewalld

Disable SELinux

sed -i 's/enforcing/disabled/' /etc/selinux/config && setenforce 0

Disable swap

swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab

Synchronize the time

yum install ntpdate -y && ntpdate time.windows.com

Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Apply the configured kernel parameters (all of the steps above are executed on every node)

sysctl --system
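Note that the bridge-nf sysctls above only exist once the br_netfilter kernel module is loaded; a minimal sketch for loading it now and on every boot (run on all nodes):

# Load br_netfilter immediately and have systemd load it at boot
modprobe br_netfilter
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF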

Add the Kubernetes Aliyun YUM repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubectl, kubelet and kubeadm, and enable kubelet at boot

yum install kubectl-1.20.0 kubelet-1.20.0 kubeadm-1.20.0 -y && systemctl enable kubelet && systemctl start kubelet

Initialize the Kubernetes cluster

kubeadm init --kubernetes-version=1.20.0  \
--apiserver-advertise-address=192.168.1.43   \
--image-repository registry.aliyuncs.com/google_containers  \
--service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

At this point Kubernetes itself is installed. After a successful initialization, run the generated kubeadm join command on the node machines to add them to the cluster (note: the init output prints two join commands, one for joining a node as an additional master and one for joining it as a worker node), as sketched below. Once the nodes have been added, running kubectl get node lists all nodes in the cluster; at this point they are all in the NotReady state, and they switch to Ready after the network plugin is installed.
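The join command printed by kubeadm init contains a cluster-specific token and CA certificate hash; the values below are placeholders only, so copy the real command from your own init output:

# Run on node1 and node2 (token and hash are placeholders)
kubeadm join 192.168.1.43:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Verify from the master; nodes remain NotReady until the network plugin is installed
kubectl get node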
Install the Calico network plugin

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Copy admin.conf to all nodes so that the cluster can be managed from any node

scp /etc/kubernetes/admin.conf root@node1:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf root@node2:/etc/kubernetes/admin.conf

Add the environment variable on the node machines

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile

Install NFS

Install nfs-utils and rpcbind (on both the master and the nodes)

yum install nfs-utils rpcbind -y

Edit exports on the NFS server and set the permissions (the exported NFS directory must be writable; a sketch for creating it follows the entry below)

vim /etc/exports
/data/nfs *(rw,no_root_squash,sync)
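A minimal sketch for preparing the exported directory on the NFS server (192.168.1.46); the permissive chmod 777 is an assumption suited to a test environment:

# Create the directory before exporting it and make it writable
mkdir -p /data/nfs
chmod 777 /data/nfs
# Once the services below are running, later changes to /etc/exports can be
# reloaded with: exportfs -rv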

Start the services (note the startup order: rpcbind first, then nfs)

systemctl start rpcbind
systemctl start nfs

Mount the directory (create the corresponding mount point on the client host first)

mount -t nfs 192.168.1.46:/data/nfs /mnt

Check that the export is visible

showmount -e 192.168.1.46

Create the StorageClass
Edit the 1.yaml file

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Edit the 2.yaml file

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: vbouchaud/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.1.46
            - name: NFS_PATH
              value: /data/nfs
      volumes:
        - name: nfs-client
          nfs:
            server: 192.168.1.46
            path: /data/nfs


Edit the 3.yaml file (the provisioner field must match the PROVISIONER_NAME environment variable set in 2.yaml)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain

Apply them in order on the master

kubectl apply -f 1.yaml
kubectl apply -f 2.yaml
kubectl apply -f 3.yaml

Set the default StorageClass (check whether it has become the default with: kubectl get sc)

kubectl patch storageclass nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
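As an optional check that dynamic provisioning works, a small test PVC can be created against the default StorageClass; the name test-claim below is a hypothetical example, not part of the original setup:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF

# The PVC should become Bound and a matching PV should be created automatically
kubectl get pvc test-claim
kubectl delete pvc test-claim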

Once all of the steps above have completed successfully, install KubeSphere (you can also download the files locally first and adjust the cluster configuration to your needs; see the sketch after the commands below)

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/cluster-configuration.yaml
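A sketch of the local-download route mentioned above, assuming wget is available; edit cluster-configuration.yaml (for example to enable pluggable components) before applying:

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/cluster-configuration.yaml
# ... edit cluster-configuration.yaml as needed ...
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml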

Check the logs

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

If the log shows no failures, the end of the log will print the KubeSphere access address and the login account information.

References

Install Kubernetes: https://blog.csdn.net/u013218587/article/details/111186022
NFS related: https://www.cnblogs.com/whych/p/9196537.html
Set the default storage: https://blog.csdn.net/u011943534/article/details/100887530
Install KubeSphere: https://kubesphere.com.cn/docs/quick-start/minimal-kubesphere-on-k8s/
Enable pluggable components: https://kubesphere.com.cn/docs/quick-start/enable-pluggable-components/#%E5%9C%A8%E5%AE%89%E8%A3%85%E5%90%8E%E5%90%AF%E7%94%A8%E5%8F%AF%E6%8F%92%E6%8B%94%E7%BB%84%E4%BB%B6
