Karmada with the CoreDNS multicluster plugin: unified cross-cluster domain name access

This article is shared from the HUAWEI CLOUD community post "Karmada with the CoreDNS multicluster plugin: unified cross-cluster domain name access", by the author "cloud container big future".

Now that multi-cloud and hybrid cloud have become the norm for enterprises, services are often deployed and consumed across more than one Kubernetes cluster, and making service access independent of any single cluster is a problem every cloud provider has to face. Based on Karmada v1.6.1, this article explores how to access services across clusters through a consistent domain name.

1. Walking through the official example

Following the official example (configure multi-cluster service discovery) [1], the detailed steps are as follows:

1. Deploy the workload

Take a Deployment plus a Service as the example workload. Create the Deployment and Service on the Karmada control plane and propagate them to cluster member1 through a PropagationPolicy. The combined YAML for this step is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: serve
spec:
  replicas: 2
  selector:
    matchLabels:
      app: serve
  template:
    metadata:
      labels:
        app: serve
    spec:
      containers:
      - name: serve
        image: jeremyot/serve:0a40de8
        args:
        - "--message='hello from cluster member1 (Node: {
   
   {env \"NODE_NAME\"}} Pod: {
   
   {env \"POD_NAME\"}} Address: {
   
   {addr}})'"
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
      dnsPolicy: ClusterFirst   # prefer the cluster's CoreDNS for resolution
---
apiVersion: v1
kind: Service
metadata:
  name: serve
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: serve
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: mcs-workload
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: serve
    - apiVersion: v1
      kind: Service
      name: serve
  placement:
    clusterAffinity:
      clusterNames:
        - member1
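
These manifests are applied to the Karmada control plane. A minimal sketch, assuming the manifests are saved as mcs-demo.yaml and the control plane is reachable through the usual karmada-apiserver context (both names are assumptions, adjust to your environment):

# Apply the Deployment, Service and PropagationPolicy to the Karmada control plane
kubectl apply -f mcs-demo.yaml --kubeconfig ~/.kube/karmada.config --context karmada-apiserver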

2. Create ClusterPropagationPolicy rules for the ServiceExport and ServiceImport CRDs

Before ServiceExport and ServiceImport objects can be created on the control plane, the two CRDs must be propagated to the member clusters. In this example, the two CRDs are installed into member1 and member2 through a ClusterPropagationPolicy. The YAML for this step is as follows:

# propagate ServiceExport CRD
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: serviceexport-policy
spec:
  resourceSelectors:
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: serviceexports.multicluster.x-k8s.io
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
---
# propagate ServiceImport CRD
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: serviceimport-policy
spec:
  resourceSelectors:
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: serviceimports.multicluster.x-k8s.io
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
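
Once the ClusterPropagationPolicies take effect, a quick check that the two CRDs have landed in the member clusters (a sketch using the members kubeconfig that appears later in this article):

kubectl --kubeconfig ~/.kube/members.config --context member1 get crd | grep multicluster.x-k8s.io
kubectl --kubeconfig ~/.kube/members.config --context member2 get crd | grep multicluster.x-k8s.io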

3. Export the service from the member cluster

Export the service from member1. That is, create a ServiceExport for the service on the Karmada control plane, together with a PropagationPolicy for that ServiceExport, so that Karmada manages the ServiceExport in member1.

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: serve
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: serve-export-policy
spec:
  resourceSelectors:
    - apiVersion: multicluster.x-k8s.io/v1alpha1
      kind: ServiceExport
      name: serve
  placement:
    clusterAffinity:
      clusterNames:
        - member1
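
Once propagated, the ServiceExport should be visible in member1; a quick check (again assuming the members kubeconfig):

kubectl --kubeconfig ~/.kube/members.config --context member1 get serviceexport serve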

4. Import the service into the member cluster

Import the service into member2. Likewise, create a ServiceImport and a PropagationPolicy on the Karmada control plane so that Karmada manages the ServiceImport in member2.

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: serve
spec:
  type: ClusterSetIP
  ports:
  - port: 80
    protocol: TCP
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: serve-import-policy
spec:
  resourceSelectors:
    - apiVersion: multicluster.x-k8s.io/v1alpha1
      kind: ServiceImport
      name: serve
  placement:
    clusterAffinity:
      clusterNames:
        - member2

5. Test results

In member2, create a test pod and use it to request the pods in member1. The traffic is forwarded through the derived-serve shadow service.

Switch to member2:

root@zishen:/home/btg/yaml/mcs# export KUBECONFIG="$HOME/.kube/members.config"
root@zishen:/home/btg/yaml/mcs# kubectl config use-context member2
Switched to context "member2".

Test the service in member2 by its ClusterIP:

root@zishen:/home/btg/yaml/mcs# kubectl get svc -A
NAMESPACE     NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       derived-serve   ClusterIP   10.13.166.120   <none>        80/TCP                   4m37s
default       kubernetes      ClusterIP   10.13.0.1       <none>        443/TCP                  6d18h
kube-system   kube-dns        ClusterIP   10.13.0.10      <none>        53/UDP,53/TCP,9153/TCP   6d18h
# test the service by its IP
root@zishen:/home/btg/yaml/mcs# kubectl --kubeconfig ~/.kube/members.config --context member2 run -i --rm --restart=Never --image=jeremyot/request:0a40de8 request -- --duration=3s --address=10.13.166.120
If you don't see a command prompt, try pressing enter.
2023/06/12 02:58:03 'hello from cluster member1 (Node: member1-control-plane Pod: serve-5899cfd5cd-6l27j Address: 10.10.0.17)'
2023/06/12 02:58:04 'hello from cluster member1 (Node: member1-control-plane Pod: serve-5899cfd5cd-6l27j Address: 10.10.0.17)'
pod "request" deleted
root@zishen:/home/btg/yaml/mcs#

The IP-based test succeeds. Inside member1, the service remains reachable through the default Kubernetes domain name serve.default.svc.cluster.local.

Next, test from member2 with the shadow service domain name derived-serve.default.svc.cluster.local:

root@zishen:/home/btg/yaml/mcs# kubectl --kubeconfig ~/.kube/members.config --context member2 run -i --rm --restart=Never --image=jeremyot/request:0a40de8 request -- --duration=3s --address=derived-serve.default.svc.cluster.local
If you don't see a command prompt, try pressing enter.
2023/06/12 03:30:41 'hello from cluster member1 (Node: member1-control-plane Pod: serve-5899cfd5cd-6l27j Address: 10.10.0.17)'
2023/06/12 03:30:42 'hello from cluster member1 (Node: member1-control-plane Pod: serve-5899cfd5cd-6l27j Address: 10.10.0.17)'
pod "request" deleted

The test succeeds; the official example works as documented.

As the official example shows, the cross-cluster access model in Karmada v1.6.1 is: deploy a ServiceImport in the consuming cluster, which generates a shadow service named derived-<service-name>, and then access the service across clusters through derived-<service-name>.default.svc.cluster.local.

This still falls short of the goal of a unified domain name: callers must be aware of whether a service is local or remote and adjust the name they use accordingly.

2. Exploring a consistent domain name across clusters

1. Overview of the candidate approaches

At present, the multi-cluster services proposal KEP-1645 [2] has mainly three kinds of implementations:

  • Use shadow services

That is, create a service with the same name in the consuming cluster, but point the endpoints of its EndpointSlice at the target pods. This approach has to resolve conflicts with a local service of the same name; the PR [4] by bivas [3] in the Karmada community discusses this solution.

  • Extend CoreDNS

This is similar to Submariner's service discovery [5]. It requires developing an extended DNS resolver and installing a corresponding plugin in CoreDNS that forwards domain names with a designated suffix (such as clusterset.local) to that resolver, which then completes the resolution.

▲ Flow chart of the extended-CoreDNS resolution path (figure not reproduced here)

The advantages of this approach are as follows:

  • Local services can be accessed preferentially. Because the extended DNS resolver is custom-built, priority policies for service access can be implemented inside it.
  • The original access pattern is preserved. The rules for choosing between local and remote services are decided in the custom plugin, so the existing name, in this case foo.default.svc.cluster.local, keeps working.
  • No additional DNS configuration is required. Because a dedicated custom resolver is used, there is no need to configure DNS search domains.

The disadvantages of this method are as follows:

  • The CoreDNS component has to be modified: CoreDNS must be recompiled to include the corresponding plugin, and its configuration must be changed accordingly.
  • New components must be deployed in member clusters: this approach requires installing a DNS extension resolver.
  • Use ServiceImport

Its principle is described in 1645-multi-cluster-services-api [6] and Multi-Cluster DNS [7]. The CoreDNS multicluster plugin [8] discussed in this article is one such implementation; the rest of this article explores and analyzes it.

2. Exploring the CoreDNS multicluster plugin

The principle of the multicluster plugin is fairly simple. The consuming cluster's ServiceImport must carry the name of the original service and the IP to be resolved in its ips field (this IP can belong to the source cluster, which requires network connectivity between the clusters, or to the local cluster). From this information the multicluster plugin generates CoreDNS resource records, and when it receives a query for a domain name it is responsible for, it answers from those records.

2.1 Compile and install the multicluster plugin of coreDNS

Compile CoreDNS with the multicluster plugin

Following the multicluster plugin documentation [9], download CoreDNS. The version matching Kubernetes 1.26 is v1.9.3 (the latest v1.10.1 also works):

git clone https://github.com/coredns/coredns
cd coredns
git checkout v1.9.3

Add the multicluster plugin: open the plugin.cfg file and add the multicluster line after the kubernetes entry:

kubernetes:kubernetes
multicluster:github.com/coredns/multicluster

Compile:

cd ../ 
make
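
To confirm that the plugin was actually compiled in, the resulting binary can list its built-in plugins (a quick check, assuming the build produced a coredns binary in the current directory):

./coredns -plugins | grep multicluster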

Build the image:

Run this directly in the directory (use a CoreDNS version below v1.10; otherwise you need to upgrade Docker, see the problem record below):

root@zishen:/usr/workspace/go/src/github.com/coredns# docker build -f Dockerfile -t registry.k8s.io/coredns/coredns:v1.9.3 ./

Check the image:

root@zishen:/usr/workspace/go/src/github.com/coredns# docker images|grep core
registry.k8s.io/coredns/coredns                         v1.9.3             9a15fc60cfea   27 seconds ago   49.8MB

Load the image into the member2 kind cluster:

root@zishen:/usr/workspace/go/src/github.com/coredns# kind load docker-image registry.k8s.io/coredns/coredns:v1.9.3 --name member2
Image: "registry.k8s.io/coredns/coredns:v1.9.3" with ID "sha256:9a15fc60cfea3f7e1b9847994d385a15af6d731f86b7f056ee868ac919255dca" not yet present on node "member2-control-plane", loading...
root@zishen:/usr/workspace/go/src/github.com/coredns#

Configure CoreDNS permissions in member2 so that it can list and watch ServiceImport objects (the full RBAC rules are shown in the problem record at the end of this article):

kubectl edit clusterrole system:coredns

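A quick way to confirm the permission is in place is kubectl's access review (assuming the members kubeconfig with the member2 context is active; it should print "yes" once the ClusterRole grants list on serviceimports):

kubectl auth can-i list serviceimports.multicluster.x-k8s.io --as=system:serviceaccount:kube-system:coredns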

Configure multicluster zone rules

Add multicluster handling rules to the Corefile of CoreDNS (using the clusterset.local suffix as an example).

Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        multicluster clusterset.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
[Note] clusterset.local must not be the system default cluster.local; otherwise queries are intercepted by the kubernetes plugin before they ever reach multicluster. If that is required, multicluster has to be placed before the kubernetes plugin in plugin.cfg before compiling; the impact of doing so has not been fully tested and needs further analysis.
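
The Corefile above lives in the coredns ConfigMap in the kube-system namespace, so one way to apply the change in member2 is simply:

# edit member2's CoreDNS ConfigMap and add the "multicluster clusterset.local" line shown above
kubectl edit configmap coredns -n kube-system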

Then restart coredns:

root@zishen:/home/btg/yaml/mcs# kubectl delete pod -n kube-system coredns-787d4945fb-mvsv4
pod "coredns-787d4945fb-mvsv4" deleted
root@zishen:/home/btg/yaml/mcs# kubectl delete pod -n kube-system coredns-787d4945fb-62nxv
pod "coredns-787d4945fb-62nxv" deleted

Add the clusterIP to the ServiceImport in member2

Because Karmada does not yet populate the ips field of ServiceImport, we fill it in by hand. First delete the ServiceImport in member2 and the ServiceImport on the Karmada control plane (for the YAML, see "Import the service into the member cluster" above), then create a new ServiceImport directly in member2. Since member2 no longer has a Karmada-managed ServiceImport, there is no shadow service and therefore no ClusterIP; for debugging, ips temporarily uses the pod IP of the source pod.

The yaml of the new ServiceImport is as follows:

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: serve
  namespace: default
spec:
  type: ClusterSetIP
  ports:
  - name: "http"
    port: 80
    protocol: TCP
  ips:
  - "10.10.0.5"

Switch to member2 and create the new ServiceImport.
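
For example (a sketch; the file name serviceimport-member2.yaml is an assumption):

kubectl config use-context member2
kubectl apply -f serviceimport-member2.yaml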

After creation, the effect is as follows:

root@zishen:/home/btg/yaml/mcs/test# kubectl get serviceImport -A
NAMESPACE   NAME    TYPE           IP              AGE
default     serve   ClusterSetIP   ["10.10.0.5"]   4h42m

Verify

The method introduced earlier ("Test results" above) would also work, but for easier debugging this article creates a long-lived client pod directly.

Create a pod on member2

kubectl --kubeconfig ~/.kube/members.config --context member2 run -i --image=ubuntu:18.04 btg

After it is created, press Ctrl+C to detach, then exec into the pod:

kubectl exec -it -n default btg -- bash

Install the required tools:

apt-get update
apt-get install dnsutils
apt install iputils-ping
apt-get install net-tools 
apt install curl
apt-get install vim

To test the configured domain name, add the clusterset.local suffix to the search line in /etc/resolv.conf (see the description of the access method below):

root@btg:/# cat /etc/resolv.conf 
search default.svc.cluster.local svc.cluster.local cluster.local clusterset.local
nameserver 10.13.0.10
options ndots:5

Access the domain name serve.default.svc.clusterset.local:

root@btg:/# curl serve.default.svc.clusterset.local:8080
'hello from cluster member1 (Node: member1-control-plane Pod: serve-5899cfd5cd-dvxz8 Address: 10.10.0.7)'root@btg:/#
[Note] Because the ips field points at a pod IP rather than a Service ClusterIP, the Service port mapping from port 80 to targetPort 8080 does not apply, so port 8080 has to be appended.
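
Resolution can also be checked directly from inside the test pod with the dnsutils installed earlier (10.13.0.10 is the kube-dns ClusterIP shown above; the expected answer is the 10.10.0.5 configured in ips):

dig +short serve.default.svc.clusterset.local @10.13.0.10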

The test succeeds. What remains is for Karmada to populate the ips field of member clusters' ServiceImport automatically, which is left as follow-up work.

Description of access method

Because the DNS search mechanism is used, access follows its behavior. For a service named foo, the access methods are as follows (a small experiment illustrating the precedence follows the list):

  • foo.default.svc: this form is resolved in the order of the CoreDNS search configuration. When no local service of that name exists, the remote cluster's service is reached; when a local service exists (whether or not it is available), the local service is reached.
  • foo.default.svc.clusterset.local: full domain name access; the request goes directly to the remote cluster.
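
A small, hedged experiment to observe the precedence described in the first bullet, run from the host against member2 with the serve example from this article (the throwaway local service exists only for the test):

# with no local "serve" service, the short name falls through the search list to clusterset.local
kubectl --context member2 exec -n default btg -- nslookup serve.default.svc
# create a dummy local service of the same name; the same lookup now returns its local ClusterIP
kubectl --context member2 create service clusterip serve --tcp=80:8080
kubectl --context member2 exec -n default btg -- nslookup serve.default.svc
# clean up the dummy service
kubectl --context member2 delete service serve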

Disadvantages

  • foo.default.svc access does not prefer local services; the remote end is only reached when no local service of the same name exists. It therefore cannot cover the scenario of falling back to the remote cluster's service when the local service is unavailable.
  • The foo.default.svc form also requires DNS configuration changes. With the resolv.conf approach, the resolv.conf of every consuming workload has to be modified; with the dnsConfig approach, the YAML (or API call) of every delivered pod has to be modified.
  • The multicluster plugin cannot be loaded dynamically: customers must recompile CoreDNS and modify the CoreDNS configuration of member clusters. The original access pattern also changes: this solution distinguishes local and remote services by suffix, so foo.default.svc.cluster.local becomes foo.default.svc.clusterset.local and client code must be adapted.

Advantages

  • The impact is controllable: no new components are introduced, adding multicluster to CoreDNS follows the official CoreDNS proposal, and the code is open.
  • The amount of modification is small; the existing multicluster code and design can be used as-is.

3. Problem record

1. Failed to watch *v1alpha1.ServiceImport

Symptom:

W0612 12:18:13.939070       1 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1alpha1.ServiceImport: serviceimports.multicluster.x-k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "serviceimports" in API group "multicluster.x-k8s.io" at the cluster scope
E0612 12:18:13.939096       1 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1alpha1.ServiceImport: failed to list *v1alpha1.ServiceImport: serviceimports.multicluster.x-k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "serviceimports" in API group "multicluster.x-k8s.io" at the cluster scope

Workaround: Add RBAC permissions:

root@zishen:/home/btg/yaml/mcs# kubectl edit clusterrole system:coredns

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2023-06-12T07:50:29Z"
  name: system:coredns
  resourceVersion: "225"
  uid: 51e7d961-29a6-43dc-ac0f-dbca68271e46
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  - ServiceImport
  verbs:
  - list
  - watch
...
- apiGroups:
  - multicluster.x-k8s.io
  resources:
  - serviceimports
  verbs:
  - list
  - watch

2. Building the image from the CoreDNS master branch fails: invalid argument

After the binary compiles successfully, building the image fails with the following error:

failed to parse platform : "" is an invalid component of "": platform specifier component must match "

Upgrading Docker resolves this; I upgraded to 24.0.2.

[Note] Do not simply apt-get install docker-ce from the default repository, or version 20 will be installed and the problem remains.
  • Switch the apt sources

Edit the file /etc/apt/sources.list and change its content to the following:

deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
  • Write the /etc/apt/sources.list.d/docker.list file (create it if it does not exist) with the following content:
deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable
  • Run the update:
apt-get update
sudo apt-get upgrade
  • Install Docker:
apt-get install docker-ce docker-ce-cli containerd.io

3. dig reports WARNING: recursion requested but not available

The symptom is as follows:

root@btg:/# dig @10.13.0.10 serve.default.svc.cluster.local  A

; <<>> DiG 9.11.3-1ubuntu1.18-Ubuntu <<>> @10.13.0.10 serve.default.svc.cluster.local A
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 57327
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 24c073ed74240563 (echoed)
;; QUESTION SECTION:
;serve.default.svc.cluster.local. IN A

Referring to the CoreDNS header plugin, simply add the following to the Corefile:

header {
  response set ra # set RecursionAvailable flag
}

4. Requirements

1. The CoreDNS version must be v1.9.3 or above; otherwise the multicluster feature is not supported. The matching Kubernetes version is at least v1.21.0.

2. It is best to specify dnsPolicy: ClusterFirst in the pod spec of the workload (a combined example follows).
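
A minimal sketch combining the two requirements: dnsPolicy: ClusterFirst plus a dnsConfig entry that appends clusterset.local to the search list, so the short-name form works without hand-editing resolv.conf (the pod name and image here are placeholders, not from the original example):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dns-client-example
spec:
  dnsPolicy: ClusterFirst          # resolve through the cluster's CoreDNS first
  dnsConfig:
    searches:
    - clusterset.local             # appended to the generated search list
  containers:
  - name: client
    image: ubuntu:18.04
    command: ["sleep", "infinity"]
EOF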

5. Reference

[1] Configure multi-cluster service discovery (Karmada documentation): https://karmada.io/zh/docs/userguide/service/multi-cluster-service

[2] Multi-Cluster Services API (KEP-1645): https://github.com/kubernetes/enhancements/blob/master/keps/sig-multicluster/1645-multi-cluster-services-api/README.md

[3] bivas in the Karmada community: https://github.com/bivas

[4] Proposal for native service discovery: https://github.com/karmada-io/karmada/pull/3694

[5] Submariner service discovery: https://submariner.io/getting-started/architecture/service-discovery/

[6] 1645-multi-cluster-services-api: https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api

[7] Multi-Cluster DNS: https://docs.google.com/document/d/1-jy1WM4Tb4iz4opBTxviap5PnpfRWd-NZvwhmZWOpRk/edit#heading=h.uk21d87cd2uk

[8] CoreDNS multicluster plugin: https://github.com/coredns/multicluster

[9] multicluster plugin documentation: https://github.com/coredns/multicluster

[10] CoreDNS: compile and install external plugins

[11] MCP multi-cloud cross-cluster pod mutual access: https://www.361way.com/mcp-multi-pod-access/6873.html

[12] Troubleshooting guide: Kubernetes domain name resolution with CoreDNS

▍Related learning

1. Principle of Submariner: Kubernetes CNI plugin selection and application scenario discussion

2. Principle of CoreDNS: K8s service registration and discovery (3) CoreDNS: https://cloud.tencent.com/developer/article/2126512

3. Principle of Service: Talking about Kubernetes Service: https://z.itpub.net/article/detail/A0463024894EF7FEEDBFC4DDF0E797C8

4. Principle of EndpointSlice: [Re-understanding Cloud Native] Chapter 6, Section 6.4.9.5, Endpoint Slices: https://blog.csdn.net/junbaozi/article/details/127857965

