Build a k8s cluster and install KubeSphere on it



Basic setup (get three servers)

Method 1: create three CentOS virtual machines on a single Windows PC

Windows environment:
  1. CPU: 16 cores
  2. Memory: 16 GB
  3. Windows 10

Use Vagrant together with VirtualBox to build the VMs quickly:

  1. Download the latest Vagrant: Vagrant by HashiCorp (vagrantup.com)

  2. Download the latest VirtualBox: Oracle VM VirtualBox

  3. Then build the virtual machines. (I had already created mine before taking these notes, so please ignore the three machines shown as running in the screenshots.)

    1. Check the VirtualBox network adapter settings. (screenshot lost)

    2. In the global settings, set the folder where the virtual machines will be stored. (screenshot lost)

    3. Go to the folder where the Vagrant files will be kept. (screenshot lost)

    4. Open a cmd window there and initialize the project:

      vagrant init centos/7
      
    5. After the command finishes, a Vagrantfile is generated in the directory where it was run (the user's home directory by default).

    6. Edit the Vagrantfile so it looks like this:

      Vagrant.configure("2") do |config|
        # Loop 3 times to create 3 VMs
        (1..3).each do |i|
          config.vm.define "k8s-node#{i}" do |node|
            # Box for the VM (the local box file added below)
            node.vm.box = "centos/7"
            # Hostname of the VM
            node.vm.hostname = "k8s-node#{i}"
            # Private-network IP of the VM
            node.vm.network "private_network", ip: "192.168.56.#{99+i}", netmask: "255.255.255.0"
            # Shared folder between host and VM (optional)
            # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"
            # VirtualBox-specific settings
            node.vm.provider "virtualbox" do |v|
              # VM name
              v.name = "k8s-node#{i}"
              # Memory size (MB)
              v.memory = 4096
              # Number of CPUs
              v.cpus = 4
            end
          end
        end
      end
      
  4. Then proceed with the following steps.

  5. Downloading the centos/7 box from the remote source takes a very long time, so it is faster to download the image yourself from the CentOS website first (in theory Vagrant downloads it automatically, but mine could not find it).

    1. Official download page: Download (centos.org)

    2. Choose the Vagrant image. (screenshot lost)

    3. After downloading, place it in the same directory as the Vagrantfile. (screenshot lost)

  6. Then add the centos/7 box to Vagrant:

    vagrant box add centos/7 D:\developsoft\CentOS-7-x86_64-Vagrant1905_01.VirtualBox.box
    
  7. Go to the project directory (make sure VirtualBox is running), open a cmd window, and run:

    vagrant up
    
  8. The three virtual machines are created. (screenshot lost)
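
    Optionally verify that all three VMs are up (a quick check):

    vagrant status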

Enable SSH password access:

Connect to a VM with Vagrant's built-in SSH:

vagrant ssh <vm-name>


Then switch to the root user; when prompted for a password, enter vagrant (the box's default root password):

su root
# password: vagrant

Then edit the SSH daemon configuration:

vi /etc/ssh/sshd_config

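The screenshot of the edit is lost; based on the standard Vagrant workflow (an assumption), the change here is to allow SSH password login. A minimal sketch that makes the same edit non-interactively:

# Allow password authentication over SSH, then verify the relevant settings.
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
grep -E 'PasswordAuthentication|PermitRootLogin' /etc/ssh/sshd_config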

Then restart the sshd service so the change takes effect:

service sshd restart


Set up the other machines the same way, assembly-line style. Once all of them are done, you can connect with XShell.

Configure the network (a private network for the three VMs):

First power off all three virtual machines:

  1. In the VirtualBox manager, open the network settings and create a NAT Network. (screenshot lost)

  2. Switch each VM's network adapter to that NAT Network. (screenshot lost)


  1. Then restart the VMs and check whether each one still uses the same network address; after that, ping between the machines to verify that the network is reachable.

  2. Then disable the firewall on every VM:

    systemctl stop firewalld
    systemctl disable firewalld
    

Next, check which NIC each of the three servers uses for its default route (with this laptop setup each VM ends up with two NICs):

ip route show

If the default-route NIC has the same address on all three servers, it needs to be changed;

if the addresses already differ, nothing needs to be done. A quick way to compare them is sketched below.
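
A minimal check, run on each of the three VMs (the interface name eth0 is an assumption and may differ):

ip route show        # the "default via ..." line shows which NIC carries the default route
ip addr show eth0    # compare this address across the three VMs
# If all three report the same address (e.g. 10.0.2.15 under plain NAT mode),
# switch the adapters to the NAT Network created above so each VM gets a unique address.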

Method 2: rent three servers from QingCloud:

In the QingCloud console, request three public IPs. (screenshot lost)

Create a VPC private network:


Then just create it and leave the remaining options at their defaults.

Create three servers.


Create three of them; two would also work, but the result is not as good:

Then connect to them with XShell.

Set up the prerequisite environment:

Install Docker:

sudo yum remove docker*
sudo yum install -y yum-utils

# Configure the Docker yum repository
sudo yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo


# Install the pinned versions
sudo yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6

# Start Docker now and enable it at boot
systemctl enable docker --now

# Configure a Docker registry mirror and the systemd cgroup driver
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Host configuration:

Give each machine its own hostname to make the following steps easier (do this on all three servers):

# Set each machine's own hostname
hostnamectl set-hostname xxx

e.g.:
On the server acting as the master:  hostnamectl set-hostname master
On the two worker servers:
first:  hostnamectl set-hostname node1
second: hostnamectl set-hostname node2

Install the base environment:


# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a  
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

Install kubelet, kubeadm, and kubectl

# Configure the Kubernetes yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


# Install kubelet, kubeadm, kubectl
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9

# Enable and start kubelet
sudo systemctl enable --now kubelet

# Map the master's hostname on every machine
echo "10.13.111.175  k8s-master" >> /etc/hosts

The part that needs changing: replace the address with your own master's host address, i.e. the VPC private-network IP, not the public IP (a quick way to look it up is sketched after the command):

# Map the master's hostname on every machine
echo "10.13.111.175  k8s-master" >> /etc/hosts

Once configured, ping it to make sure every machine can resolve the name:

ping k8s-master

Aliyun Kubernetes repository for ARM:
This depends on the server's architecture: only configure this repository if your servers are ARM-based; otherwise skip this step.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Initialize the master node:

kubeadm init \
--apiserver-advertise-address=10.13.111.175 \
--control-plane-endpoint=k8s-master \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16

What to change: again, set the address to the master's host address:

--apiserver-advertise-address=10.13.111.175 \

To undo the master initialization (use this when it errored out or failed):

sudo kubeadm reset

Record the key output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-master:6443 --token yodfqt.bz8x38s644631ek6 \
    --discovery-token-ca-cert-hash sha256:198d4c123c7772ed3956d82fa8d556cd6e8109283441c4a5feaaede7d6440967 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token yodfqt.bz8x38s644631ek6 \
    --discovery-token-ca-cert-hash sha256:198d4c123c7772ed3956d82fa8d556cd6e8109283441c4a5feaaede7d6440967 

Then run the following on the master node, i.e. the commands from the key output:

 mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the network plugin (Calico):

curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O

kubectl apply -f calico.yaml
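
Optionally watch the Calico pods come up before joining the workers (a quick check):

kubectl get pods -n kube-system -w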

Join the worker nodes:

Run the following on the worker nodes, i.e. node1 and node2; the command comes from the key output:

kubeadm join k8s-master:6443 --token yodfqt.bz8x38s644631ek6 \
    --discovery-token-ca-cert-hash sha256:198d4c123c7772ed3956d82fa8d556cd6e8109283441c4a5feaaede7d6440967 
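
Back on the master, confirm that the workers have registered (a quick check; new nodes may stay NotReady for a short while until Calico starts on them):

kubectl get nodes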

NFS file system

# On every machine:
yum install -y nfs-utils
Then on the master node:
# Run the following on the master
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports


# Create the shared directory
mkdir -p /nfs/data


# On the master: enable and start rpcbind and the NFS server
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

# Make the export configuration take effect
exportfs -r


# Check that the export is in place
exportfs
On the worker nodes, i.e. node1 and node2:
showmount -e 10.0.2.15

mkdir -p /nfs/data

mount -t nfs 10.0.2.15:/nfs/data /nfs/data

Note: replace the address (10.0.2.15 above) with your master node's address. An optional fstab entry for a persistent mount is sketched below.
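
Optionally, make the mount survive reboots on node1/node2 (a sketch; 10.0.2.15 again stands for your master's address):

echo "10.0.2.15:/nfs/data /nfs/data nfs defaults 0 0" >> /etc/fstab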

Configure the default storage class:

On the master node:

vi sc.yaml

Paste the following content:

## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive the PV contents when the PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.0.2.15 ## your own NFS server address
            - name: NFS_PATH  
              value: /nfs/data  ## the directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.2.15
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Apply it:

kubectl apply -f sc.yaml

# Check
 kubectl get sc
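
To confirm that the default StorageClass actually provisions volumes, a small test PVC can be created and checked (a sketch; the name nfs-test-pvc is just an example):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
EOF

# The PVC should reach the Bound state once the provisioner creates a PV.
kubectl get pvc nfs-test-pvc

# Remove the test claim afterwards.
kubectl delete pvc nfs-test-pvc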

Install metrics-server:

vi metrics.yaml

Paste:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Apply it:

kubectl apply -f metrics.yaml

Check that it works:

kubectl top nodes


After installing metrics-server, the pod itself started successfully, but requests kept failing with the error referenced below ("Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)"). Many fixes found online did not help; the following one is worth trying.

Fix: add hostNetwork: true to metrics.yaml:

Reference: kubernetes - Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io) - Stack Overflow

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      hostNetwork: true ## newly added
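
Instead of editing the file and re-applying it, the same change can be pushed to the running Deployment with a patch (a sketch of an equivalent command):

kubectl -n kube-system patch deployment metrics-server -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'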

Install KubeSphere

https://kubesphere.com.cn/

1. Download the core files

If you cannot download them, copy the content from the appendix instead.

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml

2. Modify cluster-configuration (important)

That is, change every false in cluster-configuration.yaml to true (except metrics_server, since metrics-server is already installed), change the IP address (etcd endpointIps) to the master's address, and specify the network settings.

The file in the appendix can be used as a reference for the exact values; a rough sketch of the edit follows.
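
A rough sketch of the bulk edit (review the result by hand afterwards, since a blanket replace also flips fields such as basicAuth.enabled that should stay false):

# Turn the optional components on, then re-disable metrics_server and set the
# etcd address by hand in an editor.
sed -i 's/enabled: false/enabled: true/g' cluster-configuration.yaml
vi cluster-configuration.yaml   # set metrics_server.enabled back to false and
                                # endpointIps to the master's private IP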

Run the installation:

kubectl apply -f kubesphere-installer.yaml

kubectl apply -f cluster-configuration.yaml

Watch the progress:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

If you are on QingCloud, remember to open the port in the security-group rules. (screenshot lost)

Then open port 30880 on any of the machines.

Username: admin

Password: P@88w0rd

Fix the missing etcd monitoring certificate

Run this only if you actually hit the problem; otherwise it can do more harm than good.

kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key

Result after the installation completes:


Appendix:

kubesphere-installer.yaml

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
  - name: v1alpha1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
    - cc

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - autoscaling
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - iam.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - notification.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - auditing.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - events.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - core.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - installer.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - security.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kubeedge.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - types.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubesphere/ks-installer:v3.1.1
        imagePullPolicy: "Always"
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 20m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/localtime
          name: host-time
      volumes:
      - hostPath:
          path: /etc/localtime
          type: ""
        name: host-time

cluster-configuration.yaml

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: ""        # Add your private registry address if it is needed.
  etcd:
    monitoring: true       # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: 172.31.0.4  # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    redis:
      enabled: true
    openldap:
      enabled: true
    minioVolumeSize: 20Gi # Minio PVC size.
    openldapVolumeSize: 2Gi   # openldap PVC size.
    redisVolumSize: 2Gi # Redis PVC size.
    monitoring:
      # type: external   # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
    es:   # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1   # The total number of master nodes. Even numbers are not allowed.
      # elasticsearchDataReplicas: 1     # The total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi   # The volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi    # The volume size of Elasticsearch data nodes.
      logMaxAge: 7                     # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash              # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
    port: 30880
  alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true         # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                # Provide a security-relevant chronological set of records,recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: true         # Enable or disable the KubeSphere Auditing Log System. 
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true             # Enable or disable the KubeSphere DevOps System.
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true         # Enable or disable the KubeSphere Events System.
    ruler:
      enabled: true
      replicas: 2
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true         # Enable or disable the KubeSphere Logging System.
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false                   # Enable or disable metrics-server.
  monitoring:
    storageClass: ""                 # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    # prometheusReplicas: 1          # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    prometheusMemoryRequest: 400Mi   # Prometheus request memory.
    prometheusVolumeSize: 20Gi       # Prometheus PVC size.
    # alertmanagerReplicas: 1          # AlertManager Replicas.
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: true # Enable or disable network policies.
    ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: calico # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: true # Enable or disable the KubeSphere App Store.
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: true     # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
  kubeedge:          # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: true   # Enable or disable KubeEdge.
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
          - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

Reprinted from blog.csdn.net/qq_63946922/article/details/130925131