Kubernetes Microservice Architecture in Practice

The Kubernetes open source project, officially launched by Google in 2015, has attracted the attention of many IT companies, including well-known names such as Red Hat, CoreOS, IBM, and HP, as well as Chinese companies such as Huawei and Speed Cloud. Why has Kubernetes drawn the attention of so many companies? The fundamental reason is that Kubernetes is a new-generation microservice architecture platform built on container technology. It neatly combines two of today's most compelling trends, container technology and microservice architecture, and in doing so solves long-standing pain points in the development of traditional distributed systems.

This article assumes that you are already familiar with Docker, so no more time will be spent introducing it here. It is lightweight container isolation that lets Kubernetes realize its "microservice" characteristics, and it is the basic capabilities Docker provides that make the platform's automation possible.

Concepts and Principles

As architects, we have worked on distributed systems for many years. What we really care about is not the servers, switches, load balancers, monitoring, or deployment tooling; it is the "service" itself. Deep down, we long to realize the "vision" shown in Figure 1:

There are three services in my system: ServiceA, ServiceB, and ServiceC. ServiceA needs 3 deployed instances, while ServiceB and ServiceC each need 5. I hope there is a platform (or tool) that automatically deploys these 13 instances across machines and continuously monitors them. When a server goes down or a service instance fails, the platform repairs itself, ensuring that at any point in time the number of running service instances is what I expect. This way, my team and I only need to focus on developing the services themselves, rather than worrying about infrastructure and operations monitoring.

Figure 1 Distributed system architecture vision

Until Kubernetes appeared, no public platform claimed to have realized the "vision" above; this time it is once again a Google masterpiece that amazes us. Kubernetes gives a team more time to focus on business requirements and the business-related code itself, which greatly improves the whole software team's productivity and return on investment.
Kubernetes has only a few core concepts:

  • Service
  • Pod
  • Deployment (Replication Controller, RC)

A Service represents a "microservice" in the business system. Behind each concrete Service are process instances distributed across multiple machines that provide the service. In Kubernetes these process instances are packaged as Pods. A Pod is roughly equivalent to a Docker container, with one slight difference: a Pod is actually a group of Docker containers that are tightly bound together and share the same life cycle ("live and die together"). From a modeling perspective, there are indeed cases where one service instance needs several processes that must stay together.
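
To make this concrete, here is a minimal sketch of such a multi-container Pod (the names and images are illustrative, not from the original system): its containers share the Pod's network namespace and volumes, and are scheduled, started, and destroyed as one unit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar      # illustrative name
spec:
  containers:
    - name: app               # the main application process
      image: redis
      ports:
        - containerPort: 6379
    - name: log-shipper       # a sidecar that would ship the app's logs
      image: busybox
      command: ["sh", "-c", "tail -f /dev/null"]
```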

A Kubernetes Service differs in one obvious way from what we usually call a "service": it has a virtual IP address known as the ClusterIP. Services access one another via "ClusterIP + service port", with no need for a complex service discovery API. As a result, knowing a Service's ClusterIP is enough to access it directly. Kubernetes provides two ways to solve the ClusterIP discovery problem:

  • The first way is through environment variables. For example, if we define a Service named ORDER_SERVICE and it is assigned the ClusterIP 10.10.0.3, then in every service instance's container an environment variable mapping the service name to the ClusterIP is added automatically: ORDER_SERVICE_SERVICE_HOST=10.10.0.3. A program can therefore obtain the corresponding ClusterIP simply from the service name.
  • The second way is through DNS. The mapping between each service name and its ClusterIP is automatically synchronized to the DNS component built into the Kubernetes cluster, so the corresponding ClusterIP can be found directly by a DNS lookup of the service name. This way is more intuitive.
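
Both mechanisms can be used from application code. Below is a minimal Java sketch (the service name order-service is illustrative): it first tries the environment variable Kubernetes injects, then falls back to a DNS lookup of the service name; outside a cluster neither succeeds and a default is used:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ServiceDiscovery {

    // Approach 1: Kubernetes injects <SERVICE_NAME>_SERVICE_HOST variables,
    // e.g. ORDER_SERVICE_SERVICE_HOST=10.10.0.3 for a Service "order-service".
    static String hostFromEnv(String serviceName) {
        String key = serviceName.toUpperCase().replace('-', '_') + "_SERVICE_HOST";
        return System.getenv(key);
    }

    // Approach 2: resolve the Service name through the cluster's built-in DNS.
    static String hostFromDns(String serviceName) {
        try {
            return InetAddress.getByName(serviceName).getHostAddress();
        } catch (UnknownHostException e) {
            return null; // the name is not resolvable outside the cluster
        }
    }

    public static void main(String[] args) {
        String host = hostFromEnv("order-service");
        if (host == null) {
            host = hostFromDns("order-service");
        }
        System.out.println(host == null ? "localhost" : host);
    }
}
```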

Thanks to this unique design of the Kubernetes Service, any distributed system that communicates over TCP/IP can be migrated to the Kubernetes platform with little effort. As shown in Figure 2, when a client accesses a Service, the built-in kube-proxy component transparently provides traffic load balancing, session affinity, and automatic failover to the backend Pods.

Figure 2 Kubernetes load balancing principle

How does Kubernetes bind a Service to its Pods, and how does it know which Pods belong to the same Service? The answer is simple: labels. Each Pod can carry one or more labels (Label), and each Service has a label selector (Label Selector) that determines which labeled objects it selects. For example, the following YAML defines a Service named ku8-redis-master whose label selector is "app: ku8-redis-master", meaning that all Pods carrying the label "app=ku8-redis-master" serve this Service.

apiVersion: v1
kind: Service
metadata:
  name: ku8-redis-master
spec:
  ports:
    - port: 6379
  selector:
    app: ku8-redis-master

The following is the definition of the corresponding Pod; note the contents of its labels property:

apiVersion: v1
kind: Pod
metadata:
  name: ku8-redis-master
  labels:
    app: ku8-redis-master
spec:
  containers:
    - name: server
      image: redis
      ports:
        - containerPort: 6379
  restartPolicy: Never

Finally, let's look at the Deployment/RC concept. It tells Kubernetes that a certain kind of Pod (a Pod with specific labels) needs a given number of replica instances in the cluster. A Deployment/RC definition is, in essence, a declaration of a Pod creation template (Template) plus the number of Pod replicas (replicas):

apiVersion: v1
kind: ReplicationController
metadata:
  name: ku8-redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: ku8-redis-slave
    spec:
      containers:
        - name: server
          image: devopsbq/redis-slave
          env:
            - name: MASTER_ADDR
              value: ku8-redis-master
          ports:
            - containerPort: 6379

Kubernetes Development Guide

In this section we take a traditional Java application as an example and show how to transform it and migrate it to Kubernetes, an advanced microservice architecture platform.

As shown in Figure 3, our sample program is a web application running in Tomcat. To keep things simple it uses no framework; the JSP pages operate on the database directly through JDBC.

Figure 3 Java Web application to be transformed

In this system, we model the MySQL service and the web application each as a Kubernetes Service. The Service for MySQL is defined as follows:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql_pod

The definition of Deployment/RC corresponding to the MySQL service is as follows:

apiVersion: v1
kind: ReplicationController 
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql_pod
    spec:
      containers:
        - name: mysql
          image: mysql
          imagePullPolicy: IfNotPresent 
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"

Next, we modify the code in the web application that obtains the MySQL address, so that it reads the IP and port of the MySQL Service from the container's environment variables:

String ip = System.getenv("MYSQL_SERVICE_HOST");
String port = System.getenv("MYSQL_SERVICE_PORT");
ip = (ip == null) ? "localhost" : ip;
port = (port == null) ? "3306" : port;
conn = java.sql.DriverManager.getConnection(
        "jdbc:mysql://" + ip + ":" + port + "?useUnicode=true&characterEncoding=UTF-8",
        "root", "123456");

Next, we package the web application into a standard Docker image named k8s_myweb_image. The image simply adds our web application directory demo to the webapps directory of the official Tomcat image. The Dockerfile is quite simple:

FROM tomcat
MAINTAINER bestme <bestme@hpe.com>
ADD demo /usr/local/tomcat/webapps/demo

Similar to the MySQL Service above, here is the Service definition for the web application:

apiVersion: v1
kind: Service
metadata:
  name: hpe-java-web
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 31002 
  selector:
    app: hpe_java_web_pod

Note the special setting used here: type: NodePort with nodePort: 31002. It maps port 8080 of the web application's container, via NAT, to port 31002 on every Node in the Kubernetes cluster, so Tomcat's port 8080 can be reached through any Node's IP at port 31002. For example, on my machine the application is reachable at http://192.168.18.137:31002/demo/ .
The following is the Deployment/RC definition corresponding to the web application's Service:

apiVersion: v1
kind: ReplicationController
metadata:
  name: hpe-java-web-deployement
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hpe_java_web_pod
    spec:
      containers:
        - name: myweb
          image: k8s_myweb_image
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080

After defining all the Services and their corresponding Deployment/RC description files (four YAML files in total), we submit them to the cluster with the Kubernetes command-line tool: kubectl create -f xxx.yaml. If everything is normal, Kubernetes completes the deployment automatically within a few minutes, and we can see that the related resource objects have been created successfully:

-bash-4.2# kubectl get svc
NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
hpe-java-web   10.254.183.22   nodes         8080/TCP   36m
kubernetes     10.254.0.1      <none>        443/TCP    89d
mysql          10.254.170.22   <none>        3306/TCP   36m
-bash-4.2# kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
hpe-java-web-deployement-q8t9k   1/1       Running   0          36m
mysql-deployment-5py34           1/1       Running   0          36m
-bash-4.2# kubectl get rc
NAME                       DESIRED   CURRENT   AGE
hpe-java-web-deployement   1         1         37m
mysql-deployment           1         1         37m

Concluding Remarks

As the steps above show, it is relatively easy to migrate a traditional application to Kubernetes. With the advantages Kubernetes brings, even a small development team can quickly approach the system architecture and operations capabilities of a large R&D team.

In addition, to lower the barrier to adopting Kubernetes, we (the HP China CMS R&D team) have open sourced a Kubernetes management platform called Ku8 eye; the project address is https://github.com/bestcloud/ku8eye . Ku8 eye is well suited as an internal PaaS application management platform for small companies, and its functional architecture is similar to the Ku8 Manager Enterprise Edition shown in Figure 4. Ku8 eye is developed in Java and is, at the time of writing, the only open-source graphical management system for Kubernetes that we know of. We hope more capable peers in the open source community will join in and make it the best open source software in China's cloud computing field.

Figure 4 Architecture of PaaS platform based on Kubernetes

http://geek.csdn.net/news/detail/99478
