Open Cluster Management: Multi-Cluster Management

What is Open Cluster Management

Components of Open Cluster Management

History of Open Cluster Management

Quick Installation of Open Cluster Management

Prerequisites

  • Ensure kubectl and kustomize are installed.
  • Ensure kind is installed (v0.9.0 or later; the latest version is preferred).
  • The hub cluster should be v1.19+. (To run on hub cluster versions between v1.16 and v1.18, manually enable the feature gate "V1beta1CSRAPICompatibility".)
  • The bootstrap process currently relies on client authentication via CSR, so Kubernetes distributions that do not support it cannot be used as the hub. For example: EKS.
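The version constraint in the third bullet can be expressed as a small shell helper. This is only an illustrative sketch: the function name `needs_csr_compat` and the `sort -V` comparison trick are assumptions of this example, not part of OCM.

```shell
# Decide whether the "V1beta1CSRAPICompatibility" feature gate is needed
# for a given hub cluster version (needed when v1.16 <= version < v1.19).
needs_csr_compat() {
  v="${1#v}"                    # strip a leading "v", e.g. v1.17.3 -> 1.17.3
  # v >= 1.16 when 1.16.0 sorts first among {1.16.0, v}
  min_low=$(printf '%s\n%s\n' "1.16.0" "$v" | sort -V | head -n1)
  # v < 1.19 when v (and not 1.19.0) sorts first among {1.19.0, v}
  min_high=$(printf '%s\n%s\n' "1.19.0" "$v" | sort -V | head -n1)
  [ "$min_low" = "1.16.0" ] && [ "$min_high" != "1.19.0" ]
}

needs_csr_compat v1.17.3 && echo "enable V1beta1CSRAPICompatibility"
needs_csr_compat v1.25.0 || echo "no feature gate needed"
```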

Install the hub cluster and managed clusters

Download and install the latest release of the clusteradm command-line tool:

curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash

Or install it with go:

# Installing clusteradm to $GOPATH/bin/
# NOTE: GO111MODULE=off go get only works with older Go releases;
# recent Go versions install binaries with `go install` instead.
GO111MODULE=off go get -u open-cluster-management.io/clusteradm/...

Quickly set up a hub cluster and two managed clusters:

curl -L https://raw.githubusercontent.com/open-cluster-management-io/OCM/main/solutions/setup-dev-environment/local-up.sh | bash

The script contents are as follows:

#!/bin/bash

cd $(dirname ${BASH_SOURCE})

set -e

hub=${HUB:-hub}
c1=${CLUSTER1:-cluster1}
c2=${CLUSTER2:-cluster2}

hubctx="kind-${hub}"
c1ctx="kind-${c1}"
c2ctx="kind-${c2}"

kind create cluster --name "${hub}"
kind create cluster --name "${c1}"
kind create cluster --name "${c2}"


kubectl config use-context ${hubctx}

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.10.0 \
  --set ingressShim.defaultIssuerName=letsencrypt-prod \
  --set ingressShim.defaultIssuerKind=ClusterIssuer \
  --set ingressShim.defaultIssuerGroup=cert-manager.io \
  --set featureGates="ExperimentalCertificateSigningRequestControllers=true" \
  --set installCRDs=true

cat <<EOF > cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: "[email protected]"
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
EOF

kubectl apply -f cluster-issuer.yaml


echo "Initialize the ocm hub cluster"
clusteradm init --wait --context ${hubctx}
joincmd=$(clusteradm get token --context ${hubctx} | grep clusteradm)

echo "Join cluster1 to hub"
$(echo ${joincmd} --force-internal-endpoint-lookup --wait --context ${c1ctx} | sed "s/<cluster_name>/$c1/g")

echo "Join cluster2 to hub"
$(echo ${joincmd} --force-internal-endpoint-lookup --wait --context ${c2ctx} | sed "s/<cluster_name>/$c2/g")

echo "Accept join of cluster1 and cluster2"
clusteradm accept --context ${hubctx} --clusters ${c1},${c2} --wait

kubectl get managedclusters --all-namespaces --context ${hubctx}
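The join plumbing in the script is worth unpacking: `clusteradm get token` prints a ready-made `clusteradm join` command containing a `<cluster_name>` placeholder, the script greps that line out, and `sed` substitutes the real cluster name before the command is executed. A minimal sketch of that substitution follows; the token and apiserver endpoint are made-up illustrative values.

```shell
# Hypothetical line as captured by `grep clusteradm` in the script;
# the token and apiserver endpoint are fake placeholder values.
joincmd='clusteradm join --hub-token abc.def --hub-apiserver https://172.18.0.2:6443 --cluster-name <cluster_name>'

# Substitute the placeholder exactly as the script does for cluster1.
c1=cluster1
echo "${joincmd}" | sed "s/<cluster_name>/$c1/g"
# prints: clusteradm join --hub-token abc.def --hub-apiserver https://172.18.0.2:6443 --cluster-name cluster1
```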

Install the OCM components and register managed clusters

Before actually installing the OCM components into your clusters, export the following environment variable in your terminal before running the clusteradm command-line tool, so that clusteradm can correctly identify the hub cluster.

export CTX_HUB_CLUSTER=<your hub cluster context>

clusteradm init:

 # By default, it installs the latest release of the OCM components.
 # Use e.g. "--bundle-version=latest" to install latest development builds.
 # NOTE: For hub cluster version between v1.16 to v1.19 use the parameter: --use-bootstrap-token
 clusteradm init --wait --context ${CTX_HUB_CLUSTER}

The clusteradm init command installs the registration-operator on the hub cluster, which is responsible for continuously installing and upgrading the core OCM components.

After the init command completes, it outputs a generated command on the console for registering your managed clusters. An example of the generated command is shown below.

clusteradm join \
    --hub-token <your token data> \
    --hub-apiserver <your hub kube-apiserver endpoint> \
    --wait \
    --cluster-name <cluster_name>

It is recommended to save the command somewhere safe for future use. If it is lost, you can run clusteradm get token to retrieve the generated command again.

$ kind get clusters
enabling experimental podman provider
cluster1
cluster2
hub
kind

$ kubectl get ns --context kind-hub
NAME                          STATUS   AGE
default                       Active   24h
kube-node-lease               Active   24h
kube-public                   Active   24h
kube-system                   Active   24h
local-path-storage            Active   24h
open-cluster-management       Active   23h
open-cluster-management-hub   Active   23h

$ kubectl get clustermanager --context kind-hub
NAME              AGE
cluster-manager   23h


$ kubectl -n open-cluster-management get pod --context kind-hub
NAME                               READY   STATUS    RESTARTS   AGE
cluster-manager-79dcdf496f-mfv72   1/1     Running   0          23h


$ kubectl -n open-cluster-management-hub get pod --context kind-hub
NAME                                                       READY   STATUS    RESTARTS   AGE
cluster-manager-placement-controller-6597644b5b-crcmp      1/1     Running   0          23h
cluster-manager-registration-controller-7d774d4866-vtqwc   1/1     Running   0          23h
cluster-manager-registration-webhook-f549cb5bd-lmgmx       2/2     Running   0          23h
cluster-manager-work-webhook-64f95b566d-drtv8              2/2     Running   0          23h


$ kubectl -n open-cluster-management-agent get pod --context  ${CTX_HUB_CLUSTER}
NAME                                             READY   STATUS    RESTARTS   AGE
klusterlet-registration-agent-57d7bf7749-4rck7   1/1     Running   0          28m
klusterlet-work-agent-5848786fdc-rzgrx           1/1     Running   0          27m

Verify that the ManagedCluster object has been created successfully:

$ kubectl get managedcluster --context ${CTX_HUB_CLUSTER}
NAME      HUB ACCEPTED   MANAGED CLUSTER URLS         JOINED   AVAILABLE   AGE
default   true           https://10.168.110.21:6443   True     True        28m
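On the hub, each registered cluster is represented by a cluster-scoped ManagedCluster resource. The sketch below shows a minimal example of such an object; the field values are illustrative.

```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: cluster1
spec:
  # Set to true by `clusteradm accept`: the hub only manages a cluster
  # whose join request it has accepted.
  hubAcceptsClient: true
```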

The overall installation information is visible on the clustermanager custom resource:

$ kubectl get clustermanager cluster-manager -o yaml --context kind-hub
apiVersion: operator.open-cluster-management.io/v1
kind: ClusterManager
metadata:
  creationTimestamp: "2023-03-14T03:10:16Z"
  finalizers:
  - operator.open-cluster-management.io/cluster-manager-cleanup
  generation: 2
  name: cluster-manager
  resourceVersion: "3178"
  uid: cd60535e-7264-4760-ad46-edca6e617da5
spec:
  deployOption:
    mode: Default
  nodePlacement: {}
  placementImagePullSpec: quay.io/open-cluster-management/placement:v0.10.0
  registrationConfiguration:
    featureGates:
    - feature: DefaultClusterSet
      mode: Enable
  registrationImagePullSpec: quay.io/open-cluster-management/registration:v0.10.0
  workImagePullSpec: quay.io/open-cluster-management/work:v0.10.0
status:
  conditions:
  - lastTransitionTime: "2023-03-14T03:10:43Z"
    message: Registration is managing credentials
    observedGeneration: 2
    reason: RegistrationFunctional
    status: "False"
    type: HubRegistrationDegraded
  - lastTransitionTime: "2023-03-14T03:11:03Z"
    message: Placement is scheduling placement decisions
    observedGeneration: 2
    reason: PlacementFunctional
    status: "False"
    type: HubPlacementDegraded
  - lastTransitionTime: "2023-03-14T03:10:22Z"
    message: Feature gates are all valid
    reason: FeatureGatesAllValid
    status: "True"
    type: ValidFeatureGates
  - lastTransitionTime: "2023-03-14T03:11:20Z"
    message: Components of cluster manager are up to date
    reason: ClusterManagerUpToDate
    status: "False"
    type: Progressing
  - lastTransitionTime: "2023-03-14T03:10:22Z"
    message: Components of cluster manager are applied
    reason: ClusterManagerApplied
    status: "True"
    type: Applied
  generations:
  - group: apps
    lastGeneration: 1
    name: cluster-manager-registration-controller
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: apps
    lastGeneration: 1
    name: cluster-manager-registration-webhook
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: apps
    lastGeneration: 1
    name: cluster-manager-work-webhook
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: apps
    lastGeneration: 1
    name: cluster-manager-placement-controller
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  observedGeneration: 2
  relatedResources:
  - group: apiextensions.k8s.io
    name: clustermanagementaddons.addon.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: managedclusters.cluster.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: managedclustersets.cluster.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: manifestworks.work.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: managedclusteraddons.addon.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: managedclustersetbindings.cluster.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: placements.cluster.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: addondeploymentconfigs.addon.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: placementdecisions.cluster.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: addonplacementscores.cluster.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: ""
    name: open-cluster-management-hub
    namespace: ""
    resource: namespaces
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-registration:controller
    namespace: ""
    resource: clusterroles
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-registration:controller
    namespace: ""
    resource: clusterrolebindings
    version: v1
  - group: ""
    name: cluster-manager-registration-controller-sa
    namespace: open-cluster-management-hub
    resource: serviceaccounts
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-registration:webhook
    namespace: ""
    resource: clusterroles
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-registration:webhook
    namespace: ""
    resource: clusterrolebindings
    version: v1
  - group: ""
    name: cluster-manager-registration-webhook-sa
    namespace: open-cluster-management-hub
    resource: serviceaccounts
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-work:webhook
    namespace: ""
    resource: clusterroles
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-work:webhook
    namespace: ""
    resource: clusterrolebindings
    version: v1
  - group: ""
    name: cluster-manager-work-webhook-sa
    namespace: open-cluster-management-hub
    resource: serviceaccounts
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-placement:controller
    namespace: ""
    resource: clusterroles
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-placement:controller
    namespace: ""
    resource: clusterrolebindings
    version: v1
  - group: ""
    name: cluster-manager-placement-controller-sa
    namespace: open-cluster-management-hub
    resource: serviceaccounts
    version: v1
  - group: ""
    name: cluster-manager-registration-webhook
    namespace: open-cluster-management-hub
    resource: services
    version: v1
  - group: ""
    name: cluster-manager-work-webhook
    namespace: open-cluster-management-hub
    resource: services
    version: v1
  - group: apps
    name: cluster-manager-registration-controller
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: apps
    name: cluster-manager-registration-webhook
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: apps
    name: cluster-manager-work-webhook
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: apps
    name: cluster-manager-placement-controller
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: admissionregistration.k8s.io
    name: managedclustervalidators.admission.cluster.open-cluster-management.io
    namespace: ""
    resource: validatingwebhookconfigurations
    version: v1
  - group: admissionregistration.k8s.io
    name: managedclustermutators.admission.cluster.open-cluster-management.io
    namespace: ""
    resource: mutatingwebhookconfigurations
    version: v1
  - group: admissionregistration.k8s.io
    name: managedclustersetbindingvalidators.admission.cluster.open-cluster-management.io
    namespace: ""
    resource: validatingwebhookconfigurations
    version: v1
  - group: admissionregistration.k8s.io
    name: managedclustersetbindingv1beta1validators.admission.cluster.open-cluster-management.io
    namespace: ""
    resource: validatingwebhookconfigurations
    version: v1
  - group: admissionregistration.k8s.io
    name: manifestworkvalidators.admission.work.open-cluster-management.io
    namespace: ""
    resource: validatingwebhookconfigurations
    version: v1

Accept the join requests and verify

Once the OCM agent is running on your managed cluster, it sends a "handshake" to the hub cluster and waits for approval by a hub cluster administrator. In this section, we walk through accepting the registration requests from the perspective of an OCM hub administrator.

Wait for the CSR object to be created on the hub cluster by the OCM agent of your managed cluster:

# or the previously chosen cluster name
kubectl get csr -w --context ${CTX_HUB_CLUSTER} | grep cluster1  

An example of a pending CSR request is shown below:

cluster1-tqcjj   33s   kubernetes.io/kube-apiserver-client   system:serviceaccount:open-cluster-management:cluster-bootstrap   Pending
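The CSR listing can be filtered for pending requests with standard text tools. The sketch below runs awk over a hard-coded copy of the listing above; the cluster2 line is an invented example of an already-approved request.

```shell
# Hypothetical `kubectl get csr` output (headers omitted); the cluster2
# line is a made-up example of a request that has already been approved.
csrs='cluster1-tqcjj   33s   kubernetes.io/kube-apiserver-client   system:serviceaccount:open-cluster-management:cluster-bootstrap   Pending
cluster2-abcde   10s   kubernetes.io/kube-apiserver-client   system:serviceaccount:open-cluster-management:cluster-bootstrap   Approved'

# Print the names of CSRs still awaiting approval.
echo "$csrs" | awk '$NF == "Pending" {print $1}'
# prints: cluster1-tqcjj
```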

Accept the join request with the clusteradm tool:

clusteradm accept --clusters cluster1 --context ${CTX_HUB_CLUSTER}

After running the accept command, the CSR from the managed cluster named "cluster1" is approved. In addition, it instructs the OCM hub control plane to automatically set up the related objects (such as a namespace named "cluster1" in the hub cluster) and RBAC permissions.

Verify the installation of the OCM agent on the managed cluster by running:

kubectl -n open-cluster-management-agent get pod --context ${CTX_MANAGED_CLUSTER}
NAME                                             READY   STATUS    RESTARTS   AGE
klusterlet-registration-agent-598fd79988-jxx7n   1/1     Running   0          19d
klusterlet-work-agent-7d47f4b5c5-dnkqw           1/1     Running   0          19d

Uninstall OCM from the control plane

Before uninstalling the OCM components from your cluster, detach the managed clusters from the control plane first.

clusteradm clean --context ${CTX_HUB_CLUSTER}

Check that the hub control plane instance of OCM has been removed.

$ kubectl -n open-cluster-management-hub get pod --context ${CTX_HUB_CLUSTER}
No resources found in open-cluster-management-hub namespace.
$ kubectl -n open-cluster-management get pod --context ${CTX_HUB_CLUSTER}
No resources found in open-cluster-management namespace.

Check that the clustermanager resource has been removed from the control plane.

$ kubectl get clustermanager --context ${CTX_HUB_CLUSTER}
error: the server doesn't have a resource type "clustermanager"

Unregister a managed cluster

Delete the resources generated during registration with the hub cluster.
The command format is:

clusteradm unjoin --cluster-name "cluster1" --context ${CTX_MANAGED_CLUSTER}

For example:

$ clusteradm unjoin --cluster-name "default" --context ${CTX_HUB_CLUSTER}
Remove applied resources in the managed cluster default ...
Applied resources have been deleted during the default joined stage. The status of mcl default will be unknown in the hub cluster.

Check that the OCM agent installation has been removed from the managed cluster.

kubectl -n open-cluster-management-agent get pod --context ${CTX_MANAGED_CLUSTER}
No resources found in open-cluster-management-agent namespace.

Check that the klusterlet corresponding to the unregistered cluster has been deleted:

$ kubectl get klusterlet --context  ${CTX_HUB_CLUSTER}
error: the server doesn't have a resource type "klusterlet"

Deploying Open Cluster Management

How Open Cluster Management manages k8s

How to develop Open Cluster Management

References:
https://open-cluster-management.io/getting-started/installation/register-a-cluster/
https://open-cluster-management.io/getting-started/installation/start-the-control-plane/
https://cert-manager.io/docs/usage/kube-csr/

Reposted from blog.csdn.net/xixihahalelehehe/article/details/129548346