k8s Offline Installation and Deployment Tutorial, x86 (Part 2)

Component versions:

Component   Version   Architecture
docker      20.10.9   x86
k8s         v1.22.4   x86
kuboard     v3        x86

6. Set kube-proxy to ipvs mode

For pods to be reachable across the cluster, kube-proxy defaults to iptables mode, which degrades performance as the cluster grows (kube-proxy has to keep the iptables rules in sync across all nodes).

Every pod is assigned an IP, and kube-proxy on each node must sync the pod IPs of every other node so that the iptables rules stay consistent and cross-node traffic can be routed. This constant synchronization of iptables rules becomes a significant performance cost; ipvs mode scales much better for large clusters.
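Before switching the mode, note that ipvs requires the corresponding kernel modules on every node; if they are missing, kube-proxy silently falls back to iptables. A minimal sketch, assuming the commonly required module set (on kernels ≥ 4.19 `nf_conntrack` replaces the older `nf_conntrack_ipv4`; confirm against your kernel):

```shell
# Generate the modprobe commands for the ipvs modules kube-proxy needs.
# Run the printed commands as root on every node.
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
  echo "modprobe $mod"
done
```

You can check which modules are already loaded with `lsmod | grep ip_vs`.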

#1. Check which mode kube-proxy is currently using
kubectl get pod -A|grep kube-proxy
kubectl logs -n kube-system kube-proxy-xxxx

#2. Edit the kube-proxy ConfigMap and change mode to "ipvs". The default
#   iptables mode becomes slow once the cluster grows.
kubectl edit cm kube-proxy -n kube-system

Change the section as follows:
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
    
### After editing the ConfigMap, delete the existing kube-proxy pods so
### they restart with the new mode (the DaemonSet recreates them).
kubectl get pod -A|grep kube-proxy
kubectl delete pod kube-proxy-xxxx -n kube-system
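After the restart you can confirm the mode took effect by grepping the new kube-proxy pod's logs for the ipvs proxier line. The sketch below simulates that check against a sample log line (the pod name and timestamp are examples; the "Using ipvs Proxier" text is what kube-proxy actually logs):

```shell
# On the cluster (pod name is an example):
#   kubectl logs -n kube-system kube-proxy-xxxx | grep "ipvs"
# Simulated with a sample log line showing the match we expect:
sample='I0101 00:00:00.000001 1 server_others.go:274] Using ipvs Proxier.'
echo "$sample" | grep -o 'Using ipvs Proxier'
```

If the `ipvsadm` tool is installed on the node, `ipvsadm -Ln` will also list the virtual servers ipvs has programmed.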

7. Install Kuboard

Kuboard version: v3

Online installation:

kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml

Offline installation:

First download the kuboard-v3.yaml file.

Applying kuboard-v3.yaml pulls several images; on a server without internet access, first download the images on a machine that is online. List the images referenced in the manifest:

cat kuboard-v3.yaml | grep image: | awk '{print $2}'

eipwork/etcd-host:3.4.16-1
eipwork/kuboard:v3

The following two images are not captured by that command; the official documentation notes they must be pulled as well:

eipwork/kuboard-agent:v3
questdb/questdb:6.0.4

# Pull all images referenced in the manifest
cat kuboard-v3.yaml \
    | grep image: \
    | awk '{print "docker pull " $2}' \
    | sh

# Pull the two additional images
docker pull eipwork/kuboard-agent:v3
docker pull questdb/questdb:6.0.4

# Export the images as tarballs in the current directory
docker save -o kuboard-v3.tar eipwork/kuboard:v3
docker save -o etcd-host-3.4.16-1.tar eipwork/etcd-host:3.4.16-1
docker save -o kuboard-agent-v3.tar eipwork/kuboard-agent:v3
docker save -o questdb-6.0.4.tar questdb/questdb:6.0.4

# Load the tarballs into the Docker environment on the offline host
docker load -i kuboard-v3.tar
docker load -i etcd-host-3.4.16-1.tar
docker load -i kuboard-agent-v3.tar
docker load -i questdb-6.0.4.tar

# Install kuboard
kubectl apply -f kuboard-v3.yaml

# Uninstall kuboard
kubectl delete -f kuboard-v3.yaml
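The four save commands above can also be generated from the image list, which scales better if the manifest changes. A sketch (the tarball naming scheme is my own convention, not from the original):

```shell
# Print a "docker save" command for each image; pipe the output to sh
# in a connected environment to actually export the tarballs.
images="eipwork/etcd-host:3.4.16-1 eipwork/kuboard:v3 eipwork/kuboard-agent:v3 questdb/questdb:6.0.4"
for img in $images; do
  # Derive a file name by replacing '/' and ':' with '-'.
  tar="$(echo "$img" | tr '/:' '--').tar"
  echo "docker save -o $tar $img"
done
```

Remember that the loaded images must be present on every node that may schedule the kuboard pods, not just the node where you run kubectl.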

Note: in kuboard-v3.yaml, imagePullPolicy must be changed from Always to IfNotPresent.

    
          image: 'eipwork/etcd-host:3.4.16-1'
          # Change Always to IfNotPresent (use the local image when present instead of pulling)
          imagePullPolicy: IfNotPresent
         


---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  labels:
    k8s.kuboard.cn/name: kuboard-v3
  name: kuboard-v3
  namespace: kuboard
    
          image: 'eipwork/kuboard:v3'
          # Change Always to IfNotPresent (use the local image when present instead of pulling)
          imagePullPolicy: IfNotPresent
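Rather than editing each occurrence by hand, the pull policy can be patched in one pass with sed (GNU sed shown; demonstrated on a minimal sample file standing in for kuboard-v3.yaml, apply the same command to the real manifest):

```shell
# Create a minimal sample standing in for kuboard-v3.yaml.
cat > sample.yaml <<'EOF'
          image: 'eipwork/kuboard:v3'
          imagePullPolicy: Always
EOF
# Rewrite every Always pull policy to IfNotPresent in place.
sed -i 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/g' sample.yaml
grep 'imagePullPolicy' sample.yaml
```

On macOS/BSD sed, use `sed -i ''` instead of `sed -i`. Keep a backup of the original manifest before editing in place.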

Start Kuboard and check the result:

# Start kuboard-v3
kubectl apply -f kuboard-v3.yaml

# Check whether kuboard started successfully
kubectl get pods -n kuboard

If only three pods are listed and the kuboard-agent-xxx containers have not started, continue with the steps below:

8. Access Kuboard

  • Open http://your-node-ip-address:30080 in a browser
  • Log in with the initial username and password:
    • Username: admin
    • Password: Kuboard123

On the home page, the cluster shows as importing by default (while the agent is not running); click into the default cluster.

Export the kuboard-agent.yaml file from that page.

Note: in this kuboard-agent.yaml file, the image pull policy must also be changed: set imagePullPolicy from Always to IfNotPresent.

kubectl apply -f ./kuboard-agent.yaml

Final result: the kuboard-agent pods come up and the cluster is fully imported into Kuboard.

9. Install metrics-server

metrics-server makes node resource usage visible and monitorable in Kuboard.

metrics-server.yaml, version 0.5.0.

In Kuboard, choose to install metrics-server, follow the steps one by one, preview the generated yaml file at the end, and save it as metrics-server.yaml.

Offline installation:

The metrics-server.yaml file references the image below; on a server without internet access, first pull it on a machine that is online.

swr.cn-east-2.myhuaweicloud.com/kuboard-dependency/metrics-server:v0.5.0

# Pull the image
docker pull swr.cn-east-2.myhuaweicloud.com/kuboard-dependency/metrics-server:v0.5.0

# Export the image as a tarball in the current directory
docker save -o metrics-server-v0.5.0.tar swr.cn-east-2.myhuaweicloud.com/kuboard-dependency/metrics-server:v0.5.0

# Load the tarball into the Docker environment on the offline host
docker load -i metrics-server-v0.5.0.tar

# Install metrics-server
kubectl apply -f metrics-server.yaml

# Uninstall metrics-server
kubectl delete -f metrics-server.yaml

The metrics-server.yaml file is as follows:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    k8s-app: metrics-server

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: 'true'
    rbac.authorization.k8s.io/aggregate-to-edit: 'true'
    rbac.authorization.k8s.io/aggregate-to-view: 'true'
  name: 'system:aggregated-metrics-reader'
  namespace: kube-system
rules:
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
      - nodes
    verbs:
      - get
      - list
      - watch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: 'system:metrics-server'
  namespace: kube-system
rules:
  - apiGroups:
      - ''
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
    verbs:
      - get
      - list
      - watch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: 'metrics-server:system:auth-delegator'
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: 'system:auth-delegator'
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: 'system:metrics-server'
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: 'system:metrics-server'
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system

---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
  namespace: kube-system
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - preference:
                matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
              weight: 100
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  k8s-app: metrics-server
              namespaces:
                - kube-system
              topologyKey: kubernetes.io/hostname
      containers:
        - args:
            - '--cert-dir=/tmp'
            - '--secure-port=443'
            - '--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname'
            - '--kubelet-use-node-status-port'
            - '--kubelet-insecure-tls=true'
            - '--authorization-always-allow-paths=/livez,/readyz'
            - '--metric-resolution=15s'
          image: >-
            swr.cn-east-2.myhuaweicloud.com/kuboard-dependency/metrics-server:v0.5.0
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /livez
              port: https
              scheme: HTTPS
            periodSeconds: 10
          name: metrics-server
          ports:
            - containerPort: 443
              name: https
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readyz
              port: https
              scheme: HTTPS
            initialDelaySeconds: 20
            periodSeconds: 10
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      tolerations:
        - effect: ''
          key: node-role.kubernetes.io/master
          operator: Exists
      volumes:
        - emptyDir: {}
          name: tmp-dir

---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      k8s-app: metrics-server

Start metrics-server:

# Start metrics-server
kubectl apply -f metrics-server.yaml

Server resources such as memory and CPU can now be viewed in Kuboard.
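A quick way to confirm metrics-server is serving data is `kubectl top nodes`. The sketch below parses a sample line of its output to show where the CPU figure comes from (the values are made up; real output comes from your cluster):

```shell
# On the cluster: kubectl top nodes
# Sample output line (NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%):
sample='node1   250m   12%   1024Mi   27%'
# Extract the CPU usage column:
echo "$sample" | awk '{print $2}'
```

If `kubectl top nodes` returns "Metrics API not available", check the metrics-server pod logs and the `v1beta1.metrics.k8s.io` APIService status.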


Origin juejin.im/post/7067702580431831071