HUAWEI CLOUD - Container Engine CCE - Basic Concepts

Cloud Container Engine (CCE) provides highly scalable, high-performance, enterprise-grade Kubernetes clusters that run Docker containers. With CCE, you can easily deploy, manage, and scale containerized applications on HUAWEI CLOUD.

CCE exposes the native Kubernetes API, supports kubectl, and provides a graphical console, giving you a complete end-to-end experience. Before using CCE, it is recommended that you familiarize yourself with the following basic concepts.

Cluster
A cluster is the combination of cloud resources required to run containers and is associated with cloud resources such as cloud server nodes and load balancers. You can think of a cluster as a group of one or more elastic cloud servers (also called nodes) in the same subnet, combined through related technologies into a pool of compute resources for running containers.

Node
Each node corresponds to a server (either a virtual machine instance or a physical server) on which container applications run. An agent (kubelet) runs on each node to manage the container instances running on that node. The number of nodes in a cluster can be scaled.

Node Pool (NodePool)
A node pool is a group of nodes with the same configuration within a cluster. A node pool contains one or more nodes.

Virtual Private Cloud (VPC)
A Virtual Private Cloud (VPC) provides a logically isolated, secure network environment. You can define a virtual network in a VPC that is indistinguishable from a traditional network, and use advanced network services such as elastic IP addresses and security groups.

Security Group
A security group is a logical grouping that provides access policies for elastic cloud servers within the same VPC that have the same security protection requirements and trust each other. After a security group is created, you can define access rules for it; when an elastic cloud server joins the security group, it is protected by these rules.

For details, see Security Group.

The relationship between clusters, virtual private clouds, security groups, and nodes

As shown in the figure below, a single region can contain multiple VPCs. A VPC consists of subnets; network traffic between subnets is routed through the subnet gateway, and a cluster is built in a particular subnet. This gives three scenarios:

  • Different clusters can be created in different VPCs.
  • Different clusters can be created in the same subnet.
  • Different clusters can be created in different subnets.

Figure: The relationship between clusters, VPCs, security groups, and nodes
Instance (Pod)
A Pod is the smallest basic unit for deploying an application or service in Kubernetes. A Pod encapsulates one or more application containers, storage resources, an independent network IP address, and the policy options that govern how the containers run.
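
As an illustration of what a Pod encapsulates, the following is a minimal sketch that creates a single-container Pod through the official Kubernetes Python client (the `kubernetes` package). The client library, image, and names are illustrative assumptions, not something prescribed by CCE.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig pointing at the cluster

# One Pod wrapping a single nginx container with an exposed port.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="nginx-demo", labels={"app": "nginx"}),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="nginx",
            image="nginx:1.25",
            ports=[client.V1ContainerPort(container_port=80)],
        )]
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```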

Figure: Instance (Pod)
Container
A container is a running instance created from a Docker image, and a node can run multiple containers. A container is essentially a process, but unlike a process executed directly on the host, a container process runs in its own isolated namespace.

Workload
A workload is the Kubernetes abstraction over a group of Pods and describes the carrier on which a business runs. Workload types include Deployment, StatefulSet, DaemonSet, Job, and CronJob.

  • Stateless workload: the "Deployment" in Kubernetes. Stateless workloads support elastic scaling and rolling upgrades and suit scenarios where instances are completely independent and functionally identical, such as nginx and wordpress (see the sketch after this list).
  • Stateful workload: the "StatefulSet" in Kubernetes. Stateful workloads support ordered deployment and deletion of instances as well as persistent storage, and suit scenarios where instances access one another, such as etcd and mysql-HA.
  • Daemon set: the "DaemonSet" in Kubernetes. A daemon set ensures that all (or some) nodes run a Pod instance, supports adding instances to new nodes dynamically, and suits scenarios where an instance must run on every node, such as ceph, fluentd, and Prometheus Node Exporter.
  • Ordinary task: the "Job" in Kubernetes. An ordinary task is a one-off, short-lived task that runs to completion once deployed. A typical scenario is uploading an image to the image repository before creating a workload.
  • Timed task: the "CronJob" in Kubernetes. A timed task is a short-lived task that runs on a specified schedule. A typical scenario is synchronizing the time of all running nodes at a fixed point in time.
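
As a sketch of the stateless workload type, the snippet below creates a two-replica Deployment with the Kubernetes Python client. The names, image, and replica count are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "nginx"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="nginx-deployment"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # elastic scaling: adjust this value and patch the object
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="nginx", image="nginx:1.25")
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```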

Figure: Relationship between a workload and its Pods
Orchestration Template
An orchestration template contains the definitions of a group of container services and their relationships. It can be used to deploy and manage multi-container applications.

Image
A Docker image is a template and the standard packaging format for container applications; it is used to create Docker containers. In other words, a Docker image is a special file system. Besides the programs, libraries, resources, and configuration files required by the container at runtime, it also contains configuration parameters (such as anonymous volumes, environment variables, and users). An image contains no dynamic data, and its content does not change after it is built. You can specify an image when deploying a containerized application; the image can come from Docker Hub, Huawei Cloud Container Image Service, or a user's private registry. For example, a Docker image can contain a complete Ubuntu operating system environment in which only the applications required by the user and their dependencies are installed.

The relationship between an image and a container is like that between a class and an instance in object-oriented programming: the image is a static definition, and the container is the runtime entity of the image. Containers can be created, started, stopped, deleted, paused, and so on.

Figure: The relationship between images, containers, and workloads

Namespace
A namespace is an abstract collection of a group of resources and objects. Multiple namespaces can be created in the same cluster, and data in different namespaces is isolated from each other, so they can share the services of the same cluster without interfering with one another. For example:

  • The workloads of a development environment and a test environment can be placed in different namespaces.
  • Common resources such as pods, services, replication controllers, and deployments belong to a namespace (default by default), whereas nodes, persistentVolumes, and similar resources do not belong to any namespace.
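
The following is a short sketch of namespace isolation using the Kubernetes Python client: two namespaces are created for a development and a test environment, and Pods are then listed per namespace. The namespace names are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Separate namespaces isolate the dev and test environments within one cluster.
for ns in ("dev", "test"):
    core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=ns)))

# Resources are queried per namespace; the two lists do not overlap.
for ns in ("dev", "test"):
    pods = core.list_namespaced_pod(namespace=ns)
    print(ns, [p.metadata.name for p in pods.items])
```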

Service
A Service is an abstract way to expose an application running on a set of Pods as a network service.

With Kubernetes, you do not need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives each Pod its own IP address, provides a single DNS name for a set of Pods, and can load balance across them.

Kubernetes allows you to specify the type of Service you need. The type values and their behavior are as follows:

  • ClusterIP: intra-cluster access. The Service is exposed through the cluster's internal IP address; if this value is selected, the Service is reachable only from inside the cluster. This is the default ServiceType.
  • NodePort: node access. The Service is exposed through the IP address and a static port (the NodePort) on each node. A NodePort Service is routed to a ClusterIP Service, which is created automatically. By sending a request to <NodeIP>:<NodePort>, the NodePort Service can be accessed from outside the cluster (see the sketch after this list).
  • LoadBalancer: load balancing. The Service is exposed externally through the cloud provider's load balancer, which routes to the NodePort and ClusterIP Services.
  • DNAT: DNAT gateway. A DNAT gateway provides network address translation for cluster nodes so that multiple nodes can share an elastic IP address. Compared with binding an elastic IP directly, reliability is improved: the elastic IP does not need to be bound to a single node, and a node failure does not affect access.
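
The sketch below, using the Kubernetes Python client, exposes the Pods labelled `app: nginx` through a NodePort Service. The port numbers and names are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="nginx-nodeport"),
    spec=client.V1ServiceSpec(
        type="NodePort",                    # node access
        selector={"app": "nginx"},          # Pods backing the Service
        ports=[client.V1ServicePort(
            port=80,          # ClusterIP port inside the cluster
            target_port=80,   # container port
            node_port=30080,  # static port opened on every node
        )],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
# The Service is then reachable from outside the cluster at <NodeIP>:30080.
```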

Seven-layer load balancing (Ingress)
Ingress provides a collection of routing rules for requests entering the cluster. It can give services externally reachable URLs and provide load balancing, SSL termination, HTTP routing, and more.
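
A minimal Ingress sketch with the Kubernetes Python client, routing HTTP requests for an assumed host to the Service shown above; the host, path, and names are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="nginx-ingress"),
    spec=client.V1IngressSpec(rules=[client.V1IngressRule(
        host="demo.example.com",   # assumed external host name
        http=client.V1HTTPIngressRuleValue(paths=[client.V1HTTPIngressPath(
            path="/", path_type="Prefix",
            backend=client.V1IngressBackend(service=client.V1IngressServiceBackend(
                name="nginx-nodeport",
                port=client.V1ServiceBackendPort(number=80),
            )),
        )]),
    )]),
)
client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```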

Network Policy (NetworkPolicy)
NetworkPolicy provides policy-based network control to isolate applications and reduce the attack surface. It uses label selectors to simulate traditional segmented networks, and controls traffic between them, as well as traffic from outside, through policies.
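
A sketch of label-based traffic control with the Kubernetes Python client: only Pods labelled `role: frontend` may reach the `app: nginx` Pods on port 80. The labels and port are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "nginx"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            # '_from' is the Python client's name for the 'from' field
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(match_labels={"role": "frontend"}),
            )],
            ports=[client.V1NetworkPolicyPort(protocol="TCP", port=80)],
        )],
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy(namespace="default", body=policy)
```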

Configuration Item (ConfigMap)
A ConfigMap stores configuration data as key-value pairs. It can hold a single property or an entire configuration file. A ConfigMap is similar to a Secret, but is more convenient for handling strings that do not contain sensitive information.
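
A short sketch with the Kubernetes Python client: a ConfigMap that stores both a single property and a whole configuration file as key-value pairs. The keys and values are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()

config_map = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="app-config"),
    data={
        "LOG_LEVEL": "info",                               # single property
        "app.properties": "cache.size=128\nretries=3\n",   # whole config file
    },
)
client.CoreV1Api().create_namespaced_config_map(namespace="default", body=config_map)
```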

Secret
A Secret addresses the configuration of sensitive data such as passwords, tokens, and keys without exposing that data in an image or in a Pod spec. A Secret can be consumed as a volume or as environment variables.
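
A matching Secret sketch with the Kubernetes Python client; the name, key, and password value are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="db-secret"),
    type="Opaque",
    string_data={"password": "changeit"},  # placeholder; stored base64-encoded by the API server
)
client.CoreV1Api().create_namespaced_secret(namespace="default", body=secret)
# The Secret can then be mounted as a volume or injected as environment variables.
```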

Label
A label is a key/value pair attached to an object such as a Pod. Labels are usually used to mark characteristics of the object that are meaningful to the user, but they have no direct meaning for the core system.

Label Selector (LabelSelector)
The label selector is the core grouping mechanism in Kubernetes. Through a label selector, a client or user can identify a group of resource objects that share common characteristics or attributes.
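
A small sketch of grouping by labels with the Kubernetes Python client: list the Pods whose labels match a selector expression. The label values are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()

# Select all Pods carrying both labels app=nginx and env=prod.
pods = client.CoreV1Api().list_namespaced_pod(
    namespace="default",
    label_selector="app=nginx,env=prod",
)
for pod in pods.items:
    print(pod.metadata.name, pod.metadata.labels)
```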

Annotation
Similar to a label, an annotation is defined as a key/value pair.
Labels have strict naming rules; they define metadata for Kubernetes objects and are used by label selectors.
Annotations are "additional" information defined freely by the user so that external tools can find objects more easily.

PersistentVolume
A PersistentVolume (PV) is a piece of network storage in the cluster. Like a node, it is a cluster resource.

Storage Claim (PersistentVolumeClaim)
A PV is a storage resource, and a PersistentVolumeClaim (PVC) is a request for a PV. A PVC is analogous to a Pod: a Pod consumes node resources, while a PVC consumes PV resources; a Pod can request CPU and memory, while a PVC requests a data volume of a specific size and access mode.
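
A PVC request sketch with the Kubernetes Python client: it asks for a 10 GiB volume with single-node read/write access. The size, access mode, and storage class name are assumptions (CCE storage classes differ by volume type).

```python
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="data-pvc"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],           # requested access mode
        resources=client.V1ResourceRequirements(  # requested size
            requests={"storage": "10Gi"},
        ),
        storage_class_name="csi-disk",            # assumed storage class
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```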

Elastic Scaling (HPA)
Horizontal Pod Autoscaling (HPA) is the Kubernetes feature that automatically scales Pods horizontally. A Kubernetes cluster can scale a service out or in through the scale mechanism of its controllers to keep the service elastic.
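
An HPA sketch with the Kubernetes Python client (autoscaling/v1): scale the `nginx-deployment` Deployment between 2 and 10 replicas, targeting 70% average CPU utilization. All numbers and names are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="nginx-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="nginx-deployment",
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above ~70% average CPU
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa,
)
```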

Affinity and anti-affinity
Before applications were containerized, multiple components were typically installed on one virtual machine and communicated with each other as processes. When an application is split into containers, however, the split is often made along process boundaries, for example the business process in one container and monitoring, log processing, or local data in another, each with its own life cycle. If these containers end up at two distant points in the network, requests are forwarded across multiple hops and performance suffers.

  • Affinity: enables co-located deployment, enhances network performance, keeps communication routed nearby, and reduces network loss. For example, if application A and application B interact frequently, affinity can be used to place the two applications as close together as possible, even on the same node, to reduce the performance loss caused by network communication.
  • Anti-affinity: used mainly for high reliability, spreading instances as far apart as possible. When a node fails, only one Nth of the instances, or a single instance, is affected. For example, when an application is deployed with multiple replicas, anti-affinity can be used to spread the instances across nodes to improve availability.

Node Affinity (NodeAffinity)
Node affinity constrains Pods to be scheduled onto specific nodes, selected by node labels.

Node Anti-Affinity (NodeAntiAffinity)
Node anti-affinity prevents Pods from being scheduled onto specific nodes, selected by node labels.

Workload Affinity (PodAffinity)
Workload affinity specifies that workloads are deployed on the same node. Users can deploy workloads close to each other according to business requirements so that communication between containers takes the nearest route, reducing network consumption.

Workload Anti-Affinity (PodAntiAffinity)
Workload anti-affinity specifies that workloads are deployed on different nodes. Multiple instances of the same workload are deployed with anti-affinity to reduce the impact of node failures; applications that interfere with each other are deployed with anti-affinity to avoid mutual interference.
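
A sketch of workload anti-affinity with the Kubernetes Python client: a Pod spec fragment that refuses to be scheduled onto a node already running another Pod with the same `app: nginx` label, so replicas spread across nodes. The labels and topology key are standard Kubernetes conventions used here as assumptions.

```python
from kubernetes import client

# Anti-affinity rule: spread replicas of app=nginx across different nodes.
anti_affinity = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(match_labels={"app": "nginx"}),
                topology_key="kubernetes.io/hostname",  # "different node" granularity
            )
        ]
    )
)

pod_spec = client.V1PodSpec(
    affinity=anti_affinity,
    containers=[client.V1Container(name="nginx", image="nginx:1.25")],
)
# pod_spec would be used as the Pod template spec of a Deployment or other workload.
```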

Resource Quota
Resource Quotas are a mechanism used to limit user resource usage.
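
A quota sketch with the Kubernetes Python client: it caps the total Pods, CPU requests, and memory requests allowed in an assumed `dev` namespace. The limits are illustrative.

```python
from kubernetes import client, config

config.load_kube_config()

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="dev-quota"),
    spec=client.V1ResourceQuotaSpec(hard={
        "pods": "10",             # at most 10 Pods in the namespace
        "requests.cpu": "4",      # total CPU requests
        "requests.memory": "8Gi", # total memory requests
    }),
)
client.CoreV1Api().create_namespaced_resource_quota(namespace="dev", body=quota)
```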

Resource Limit (LimitRange)
By default, containers in Kubernetes have no CPU or memory limits. A LimitRange (limits for short) adds resource limits to a namespace, including minimum, maximum, and default values. When a Pod is created, it is forced to allocate resources according to the LimitRange parameters.
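
A LimitRange sketch with the Kubernetes Python client: containers created in the assumed `dev` namespace without explicit limits receive the default values below and may not exceed the maximum. All values are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()

limits = client.V1LimitRange(
    metadata=client.V1ObjectMeta(name="dev-limits"),
    spec=client.V1LimitRangeSpec(limits=[client.V1LimitRangeItem(
        type="Container",
        default={"cpu": "500m", "memory": "512Mi"},          # default limit
        default_request={"cpu": "250m", "memory": "256Mi"},  # default request
        max={"cpu": "1", "memory": "1Gi"},                   # maximum allowed
        min={"cpu": "100m", "memory": "128Mi"},              # minimum allowed
    )]),
)
client.CoreV1Api().create_namespaced_limit_range(namespace="dev", body=limits)
```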

Environment Variable
An environment variable is a variable set in the container's runtime environment. You can set up to 30 environment variables when creating a container template. Environment variables can be modified after the workload is deployed, giving workloads great flexibility.

Setting environment variables in CCE has the same effect as "ENV" in a Dockerfile.
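
A sketch of environment variables on a container with the Kubernetes Python client: one plain value and one value taken from the Secret shown earlier. The variable names are assumptions.

```python
from kubernetes import client

# Container fragment with environment variables, comparable to ENV in a Dockerfile,
# plus a value injected from a Secret at run time.
container = client.V1Container(
    name="app",
    image="nginx:1.25",
    env=[
        client.V1EnvVar(name="LOG_LEVEL", value="info"),
        client.V1EnvVar(
            name="DB_PASSWORD",
            value_from=client.V1EnvVarSource(
                secret_key_ref=client.V1SecretKeySelector(name="db-secret", key="password"),
            ),
        ),
    ],
)
# 'container' would be placed in the Pod template of a workload.
```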

Application Service Mesh (Istio)
Istio is an open platform that provides traffic connection, protection, control, and observability.

CCE deeply integrates the application service mesh to provide a non-intrusive microservice governance solution that supports complete life cycle management and traffic governance, and is compatible with the Kubernetes and Istio ecosystems. After the application service mesh is enabled with one click, it provides non-intrusive, intelligent traffic management, including load balancing, circuit breaking, rate limiting, and other governance capabilities. The service mesh has built-in canary, blue-green, and other grayscale release processes for one-stop automated release management. Based on non-intrusive collection of monitoring data, it is deeply integrated with Huawei Cloud Application Performance Management (APM) to provide real-time traffic topology, call-chain tracing, and other service performance monitoring and diagnostics, building a panoramic view of service operation.

Source: blog.csdn.net/KH_FC/article/details/111373386