Kubernetes (K8s) + Sealos: Complete Private Deployment Guide
1. Deploy K8S cluster through sealos CLI
All hosts must be configured with a hostname and an IP address, and hostname-to-IP resolution must work between all of them (for example via /etc/hosts entries or DNS).
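For example, with the three-node layout used throughout this guide, name resolution can be provided by identical /etc/hosts entries on every host (the IPs below are this guide's; adjust them to your environment):

```shell
# Run on every host: set that host's own name, then add all three entries.
# hostnamectl set-hostname k8s-master01   # adjust the name per host
cat >> /etc/hosts << 'EOF'
192.168.10.140 k8s-master01
192.168.10.141 k8s-worker01
192.168.10.142 k8s-worker02
EOF
```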
1.1 Get the sealos CLI tool
[root@k8s-master01 ~]# wget https://github.com/labring/sealos/releases/download/v4.3.0/sealos_4.3.0_linux_amd64.rpm
[root@k8s-master01 ~]# yum -y install sealos_4.3.0_linux_amd64.rpm
[root@k8s-master01 ~]# sealos -h
1.2 Use the sealos CLI to deploy the K8s cluster
[root@k8s-master01 ~]# vim sealos-cli-install-k8s.sh
[root@k8s-master01 ~]# cat sealos-cli-install-k8s.sh
sealos gen labring/kubernetes:v1.25.6 \
labring/helm:v3.12.0 \
labring/calico:v3.24.1 \
labring/cert-manager:v1.8.0 \
labring/openebs:v3.4.0 \
--masters 192.168.10.140 \
--nodes 192.168.10.141,192.168.10.142 \
-p centos > Clusterfile
[root@k8s-master01 ~]# sh sealos-cli-install-k8s.sh
[root@k8s-master01 ~]# ls
sealos-cli-install-k8s.sh
Clusterfile
[root@k8s-master01 ~]# sealos apply -f Clusterfile
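Before running sealos apply, the generated Clusterfile can be reviewed and edited; the -p centos flag above supplies the SSH password sealos uses to reach each host. As a rough sketch only (field names vary between sealos v4.x releases, so verify against your generated file), its shape is:

```yaml
apiVersion: apps.sealos.io/v1beta1
kind: Cluster
metadata:
  name: default
spec:
  hosts:
    - ips:
        - 192.168.10.140:22
      roles: ["master", "amd64"]
    - ips:
        - 192.168.10.141:22
        - 192.168.10.142:22
      roles: ["node", "amd64"]
  image:
    - labring/kubernetes:v1.25.6
    - labring/helm:v3.12.0
    - labring/calico:v3.24.1
  ssh:
    passwd: centos
```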
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane 34m v1.25.6
k8s-worker01 Ready <none> 34m v1.25.6
k8s-worker02 Ready <none> 34m v1.25.6
[root@k8s-master01 ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-cf974b99d-k57xr 1/1 Running 0 35m
calico-apiserver calico-apiserver-cf974b99d-p8r7l 1/1 Running 0 35m
calico-system calico-kube-controllers-85666c5b94-rghh2 1/1 Running 0 35m
calico-system calico-node-64zcc 1/1 Running 0 35m
calico-system calico-node-887vb 1/1 Running 0 35m
calico-system calico-node-hccfd 1/1 Running 0 35m
calico-system calico-typha-fc74db775-dn47v 1/1 Running 0 35m
calico-system calico-typha-fc74db775-kkqz2 1/1 Running 0 35m
calico-system csi-node-driver-pp75r 2/2 Running 0 35m
calico-system csi-node-driver-q4z7j 2/2 Running 0 35m
calico-system csi-node-driver-q7ld9 2/2 Running 0 35m
cert-manager cert-manager-655bf9748f-wjxxh 1/1 Running 0 35m
cert-manager cert-manager-cainjector-7985fb445b-pl7hv 1/1 Running 0 35m
cert-manager cert-manager-webhook-6dc9656f89-wxtbq 1/1 Running 0 35m
kube-system coredns-565d847f94-c7s4p 1/1 Running 0 36m
kube-system coredns-565d847f94-wf4hz 1/1 Running 0 36m
kube-system etcd-k8s-master01 1/1 Running 0 36m
kube-system kube-apiserver-k8s-master01 1/1 Running 0 36m
kube-system kube-controller-manager-k8s-master01 1/1 Running 0 36m
kube-system kube-proxy-bl67f 1/1 Running 0 35m
kube-system kube-proxy-gn2qf 1/1 Running 0 35m
kube-system kube-proxy-kcrg5 1/1 Running 0 36m
kube-system kube-scheduler-k8s-master01 1/1 Running 0 36m
kube-system kube-sealos-lvscare-k8s-worker01 1/1 Running 0 35m
kube-system kube-sealos-lvscare-k8s-worker02 1/1 Running 0 35m
openebs openebs-localpv-provisioner-79f4c678cd-fvjt4 1/1 Running 0 35m
tigera-operator tigera-operator-6675dc47f4-jdxxt 1/1 Running 0 35m
2. Deploy dependency components through the sealos CLI
Use the following script to deploy the Sealos dependency components in one step. Because the script sets the ingress-nginx controller Service to type LoadBalancer, deploy the MetalLB load balancer first.
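A minimal MetalLB setup in L2 mode might look like the following; the MetalLB version and the address range are assumptions, so substitute unused addresses from your own subnet:

```shell
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml
cat << 'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.10.230-192.168.10.240
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
EOF
```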
[root@k8s-master01 ~]# cat sealos-dep.sh
#!/bin/bash
set -e
cat << EOF > ingress-nginx-config.yaml
apiVersion: apps.sealos.io/v1beta1
kind: Config
metadata:
  creationTimestamp: null
  name: ingress-nginx-config
spec:
  data: |
    controller:
      service:
        type: LoadBalancer
  match: docker.io/labring/ingress-nginx:v1.5.1
  path: charts/ingress-nginx/values.yaml
  strategy: merge
EOF
sealos run docker.io/labring/kubernetes-reflector:v7.0.151 \
  docker.io/labring/ingress-nginx:v1.5.1 \
  docker.io/labring/zot:v1.4.3 \
  docker.io/labring/kubeblocks:v0.5.3 \
  --env policy=anonymousPolicy \
  --config-file ingress-nginx-config.yaml
echo "patch ingress-nginx-controller tolerations to allow running on master nodes; if you don't want it on master nodes, skip this step"
kubectl -n ingress-nginx patch ds ingress-nginx-controller -p '{"spec":{"template":{"spec":{"tolerations":[{"key":"node-role.kubernetes.io/control-plane","operator":"Exists","effect":"NoSchedule"}]}}}}'
echo "waiting for the kubeblocks CRDs to be created, this may take a while"
while ! kubectl get clusterdefinitions.apps.kubeblocks.io redis >/dev/null 2>&1; do
sleep 5
done
echo "start patch redis clusterdefinition"
kubectl patch clusterdefinitions.apps.kubeblocks.io redis --type='json' -p '[{"op": "add", "path": "/spec/componentDefs/0/podSpec/containers/1/resources/limits", "value": {"cpu":"100m", "memory":"100Mi"}}]'
echo "patch redis success"
echo "wait for all pods to be ready, then install Sealos"
kubectl get po -A
[root@k8s-master01 ~]# sh sealos-dep.sh
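After the script finishes, the two patches it applies can be spot-checked (assuming a working cluster):

```shell
# Tolerations added to the ingress-nginx DaemonSet:
kubectl -n ingress-nginx get ds ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.tolerations}'
# Resource limits added to the second container of the redis ClusterDefinition:
kubectl get clusterdefinitions.apps.kubeblocks.io redis \
  -o jsonpath='{.spec.componentDefs[0].podSpec.containers[1].resources.limits}'
```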
[root@k8s-master01 ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-cf974b99d-k57xr 1/1 Running 0 65m
calico-apiserver calico-apiserver-cf974b99d-p8r7l 1/1 Running 0 65m
calico-system calico-kube-controllers-85666c5b94-rghh2 1/1 Running 0 65m
calico-system calico-node-64zcc 1/1 Running 0 65m
calico-system calico-node-887vb 1/1 Running 0 65m
calico-system calico-node-hccfd 1/1 Running 0 65m
calico-system calico-typha-fc74db775-dn47v 1/1 Running 0 65m
calico-system calico-typha-fc74db775-kkqz2 1/1 Running 0 65m
calico-system csi-node-driver-pp75r 2/2 Running 0 65m
calico-system csi-node-driver-q4z7j 2/2 Running 0 65m
calico-system csi-node-driver-q7ld9 2/2 Running 0 65m
cert-manager cert-manager-655bf9748f-wjxxh 1/1 Running 0 65m
cert-manager cert-manager-cainjector-7985fb445b-pl7hv 1/1 Running 0 65m
cert-manager cert-manager-webhook-6dc9656f89-wxtbq 1/1 Running 0 65m
ingress-nginx ingress-nginx-controller-m5lcx 1/1 Running 0 4m30s
ingress-nginx ingress-nginx-controller-tlnrq 1/1 Running 0 5m2s
ingress-nginx ingress-nginx-controller-tvqm8 1/1 Running 0 4m7s
kb-system kubeblocks-8d66dc669-j4k65 1/1 Running 0 5m2s
kube-system coredns-565d847f94-c7s4p 1/1 Running 0 66m
kube-system coredns-565d847f94-wf4hz 1/1 Running 0 66m
kube-system etcd-k8s-master01 1/1 Running 0 66m
kube-system kube-apiserver-k8s-master01 1/1 Running 0 66m
kube-system kube-controller-manager-k8s-master01 1/1 Running 0 66m
kube-system kube-proxy-bl67f 1/1 Running 0 66m
kube-system kube-proxy-gn2qf 1/1 Running 0 66m
kube-system kube-proxy-kcrg5 1/1 Running 0 66m
kube-system kube-scheduler-k8s-master01 1/1 Running 0 66m
kube-system kube-sealos-lvscare-k8s-worker01 1/1 Running 0 65m
kube-system kube-sealos-lvscare-k8s-worker02 1/1 Running 0 65m
openebs openebs-localpv-provisioner-79f4c678cd-fvjt4 1/1 Running 0 65m
reflector-system reflector-7979f4b985-88ph9 1/1 Running 0 5m43s
tigera-operator tigera-operator-6675dc47f4-jdxxt 1/1 Running 0 65m
zot zot-55dbc7598b-cszlw 1/1 Running 0 5m19s
3. Deploy Sealos Cloud through sealos CLI
[root@k8s-master01 ~]# vim sealos-cloud-install.sh
[root@k8s-master01 ~]# cat sealos-cloud-install.sh
sealos run docker.io/labring/sealos-cloud:latest \
--env cloudDomain="www.kubemsb.com"
Alternatively, install with a custom domain name and its certificate:
[root@k8s-master01 ~]# mkdir kubemsbcert
[root@k8s-master01 ~]# cd kubemsbcert/
[root@k8s-master01 kubemsbcert]# pwd
/root/kubemsbcert
[root@k8s-master01 kubemsbcert]# ls
kubemsb.com.key kubemsb.com.pem
[root@k8s-master01 ~]# vim sealos-cloud-install-script.sh
[root@k8s-master01 ~]# cat sealos-cloud-install-script.sh
#!/bin/bash
# Read the original certificate and key files
tls_crt_file="/root/kubemsbcert/kubemsb.com.pem"
tls_key_file="/root/kubemsbcert/kubemsb.com.key"
# Base64-encode them, stripping newlines
tls_crt_base64=$(base64 < "$tls_crt_file" | tr -d '\n')
tls_key_base64=$(base64 < "$tls_key_file" | tr -d '\n')
# Build the Config YAML
yaml_content="
apiVersion: apps.sealos.io/v1beta1
kind: Config
metadata:
  name: secret
spec:
  path: manifests/tls-secret.yaml
  match: docker.io/labring/sealos-cloud:latest
  strategy: merge
  data: |
    data:
      tls.crt: $tls_crt_base64
      tls.key: $tls_key_base64
"
# Write the config to tls-secret.yaml
echo "$yaml_content" > tls-secret.yaml
sealos run docker.io/labring/sealos-cloud:latest \
--env cloudDomain="www.kubemsb.com" \
--config-file tls-secret.yaml
[root@k8s-master01 ~]# sh sealos-cloud-install.sh
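The base64 step in the script can be illustrated standalone; the file content here is a dummy stand-in for a real certificate:

```shell
# base64-encode a file's bytes and strip newlines so the result fits on
# one YAML line; decoding restores the original bytes exactly.
printf 'dummy-cert-data' > /tmp/demo.pem
tls_crt_base64=$(base64 < /tmp/demo.pem | tr -d '\n')
echo "$tls_crt_base64"                 # ZHVtbXktY2VydC1kYXRh
echo "$tls_crt_base64" | base64 -d     # dummy-cert-data
```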
[root@k8s-master01 ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
account-system account-controller-manager-688db77bc6-7xs5q 2/2 Running 0 2m25s
app-system app-controller-manager-7679d46bff-47g7m 2/2 Running 0 2m30s
applaunchpad-frontend applaunchpad-frontend-7c67d4dc7f-6xbqv 1/1 Running 0 2m15s
calico-apiserver calico-apiserver-cf974b99d-k57xr 1/1 Running 0 150m
calico-apiserver calico-apiserver-cf974b99d-p8r7l 1/1 Running 0 150m
calico-system calico-kube-controllers-85666c5b94-rghh2 1/1 Running 0 151m
calico-system calico-node-64zcc 1/1 Running 0 151m
calico-system calico-node-887vb 1/1 Running 0 151m
calico-system calico-node-hccfd 1/1 Running 0 151m
calico-system calico-typha-fc74db775-dn47v 1/1 Running 0 151m
calico-system calico-typha-fc74db775-kkqz2 1/1 Running 0 151m
calico-system csi-node-driver-pp75r 2/2 Running 0 151m
calico-system csi-node-driver-q4z7j 2/2 Running 0 151m
calico-system csi-node-driver-q7ld9 2/2 Running 0 151m
cert-manager cert-manager-655bf9748f-wjxxh 1/1 Running 0 151m
cert-manager cert-manager-cainjector-7985fb445b-pl7hv 1/1 Running 0 151m
cert-manager cert-manager-webhook-6dc9656f89-wxtbq 1/1 Running 0 151m
costcenter-frontend costcenter-frontend-58c55df9f-qgvql 1/1 Running 0 2m1s
dbprovider-frontend dbprovider-frontend-65ff995c74-rtt5g 1/1 Running 0 2m4s
ingress-nginx ingress-nginx-controller-m5lcx 1/1 Running 0 90m
ingress-nginx ingress-nginx-controller-tlnrq 1/1 Running 0 90m
ingress-nginx ingress-nginx-controller-tvqm8 1/1 Running 0 89m
kb-system kubeblocks-8d66dc669-j4k65 1/1 Running 0 90m
kube-system coredns-565d847f94-c7s4p 1/1 Running 0 151m
kube-system coredns-565d847f94-wf4hz 1/1 Running 0 151m
kube-system etcd-k8s-master01 1/1 Running 0 152m
kube-system kube-apiserver-k8s-master01 1/1 Running 0 152m
kube-system kube-controller-manager-k8s-master01 1/1 Running 0 152m
kube-system kube-proxy-bl67f 1/1 Running 0 151m
kube-system kube-proxy-gn2qf 1/1 Running 0 151m
kube-system kube-proxy-kcrg5 1/1 Running 0 151m
kube-system kube-scheduler-k8s-master01 1/1 Running 0 152m
kube-system kube-sealos-lvscare-k8s-worker01 1/1 Running 0 151m
kube-system kube-sealos-lvscare-k8s-worker02 1/1 Running 0 151m
openebs openebs-localpv-provisioner-79f4c678cd-fvjt4 1/1 Running 0 151m
reflector-system reflector-7979f4b985-88ph9 1/1 Running 0 91m
resources-system resources-controller-manager-869f6cdfbc-tjr7b 2/2 Running 0 2m28s
resources-system resources-metering-manager-6775996cdf-rzkbm 1/1 Running 1 (110s ago) 2m26s
sealos-system licenseissuer-controller-manager-84df9dfcb6-smfr9 2/2 Running 0 2m22s
sealos desktop-frontend-7c9f4fb54d-5z7bw 1/1 Running 0 2m17s
sealos sealos-mongodb-mongodb-0 3/3 Running 0 2m40s
terminal-frontend terminal-frontend-7744ffd5d8-z9vnr 1/1 Running 0 2m6s
terminal-system terminal-controller-manager-74f9f5dcf4-t7dzw 2/2 Running 0 2m32s
tigera-operator tigera-operator-6675dc47f4-jdxxt 1/1 Running 0 151m
user-system user-controller-manager-5d978fb884-nmdq5 2/2 Running 0 2m34s
zot zot-55dbc7598b-cszlw 1/1 Running 0 90m
4. Access Sealos Cloud
[root@k8s-master01 ~]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.96.1.210 <none> 80:31296/TCP,443:30690/TCP 103m
ingress-nginx-controller-admission ClusterIP 10.96.2.60 <none> 443/TCP 103m
[root@k8s-master01 ~]# kubectl get ingress -A
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
applaunchpad-frontend applaunchpad-frontend <none> applaunchpad.www.kubemsb.com 10.96.1.210 80, 443 14m
costcenter-frontend sealos-costcenter <none> costcenter.www.kubemsb.com 10.96.1.210 80, 443 14m
dbprovider-frontend dbprovider-frontend <none> dbprovider.www.kubemsb.com 10.96.1.210 80, 443 14m
sealos sealos-desktop <none> www.kubemsb.com 10.96.1.210 80, 443 14m
terminal-frontend sealos-terminal <none> terminal.www.kubemsb.com 10.96.1.210 80, 443 14m
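With no public DNS for www.kubemsb.com, client machines need their own name mapping; one assumed approach is hosts-file entries pointing every ingress host at a node IP, after which the NodePort shown above serves HTTPS:

```shell
cat >> /etc/hosts << 'EOF'
192.168.10.140 www.kubemsb.com applaunchpad.www.kubemsb.com costcenter.www.kubemsb.com
192.168.10.140 dbprovider.www.kubemsb.com terminal.www.kubemsb.com
EOF
# The ingress-nginx Service is NodePort here, so HTTPS is on port 30690:
curl -k https://www.kubemsb.com:30690/
```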
5. Deploy applications using Sealos Cloud
5.1 Browser settings before access
When accessing with Chrome, a certificate that cannot be verified causes the site to be flagged as unsafe. You can start Chrome with the --ignore-certificate-errors flag to bypass the warning.
5.2 Browser access
5.3 Application deployment
[root@k8s-master01 ~]# kubectl get all -n ns-9yqndhll
NAME READY STATUS RESTARTS AGE
pod/nginxweb-786fcf6c9c-7dscc 1/1 Running 0 3m24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginxweb ClusterIP 10.96.0.149 <none> 80/TCP 3m24s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginxweb 1/1 1 1 3m24s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginxweb-786fcf6c9c 1 1 1 3m24s