New Oriental's exploration of container technology for user self-service

Let me start with a little story:


A front-line support engineer on the operations team received a work order asking him to deploy a TiDB cluster for functional testing. He requested a machine on the cloud platform, installed and configured everything according to the internal standard deployment document, and finally handed it over to the requesting team. By the time he replied to the email it was already past 3:00 pm, and that was his whole day gone. That is how dull operations work can seem.


I believe most operations teams face this problem: a great deal of repetitive, transactional work. As the organization grows, more and more projects like this land on the operations team for support. The team hires more people, yet after a busy year it finds that nothing substantial seems to have been accomplished: the credit goes to other teams while the cost stays on its own books. Why does this happen? Because much of the operations team's time and energy is consumed by chores done on behalf of other teams.


This is also one of the original motivations behind New Oriental Cloud, proposed by Wang Wei of our Information Management Department. We asked ourselves: can the resources and services in our hands be turned into a platform? Through customization and development, what we ultimately deliver is not project-specific labor but a service platform, letting users handle most of the repetitive, routine deployment and maintenance work themselves. We hand operations capabilities directly to other teams and users, so the operations team can focus on improving the SLA and on technical improvements.
This is also the theme of what I am sharing today: some of New Oriental's explorations in using container technology for user self-service.


First, a brief introduction to New Oriental Cloud and container cloud services:


New Oriental Cloud adopts a hybrid architecture of self-built data centers, hosted IDCs, and public cloud. It provides IaaS-level services such as cloud servers, object storage, and a distributed file system; middleware-level services such as message queues and caching; plus operations big-data solutions, video services, and more.


1.jpg


The container cloud service is the newest offering on New Oriental Cloud and is still in beta. The platform is built on mainstream technologies such as Docker and Harbor, and uses Rancher's Cattle as the orchestration engine.


You may have heard of Rancher; here is a brief introduction.


Rancher is a lightweight container management platform that provides CNI-compatible networking, storage services, host management, load balancing, and more. It has now split into two branches:


  • 1.X: uses the Cattle orchestration engine developed by Rancher; the platform is already mature and has many users worldwide.

  • 2.X: now in beta; this branch builds Rancher entirely on top of Kubernetes and can manage Kubernetes clusters, whether on public clouds or self-built.


We chose Rancher mainly for the following reasons:


  • The project is 100% open source, with optional commercial support.

  • Provides a rich API for customization.

  • Low learning curve: it builds directly on the Docker ecosystem, so it is easy to get started.

  • Built-in app store framework (Rancher Catalog).


Rancher Catalog is similar in function to Kubernetes' Helm: it packages applications for customization and provides a complete user UI, so users can deploy an application with a few simple inputs. This is exactly the feature we value most. We package commonly used base software through the container cloud platform and publish it in an internal enterprise application store, so that when a team needs something, they simply fill in a form in the UI and get the service right away.


So what does the app store actually look like? Its principle and implementation are described in detail below. To continue the story from the opening: for the same job, our support engineer now only needs to open the application store:

2.jpg


Fill out the form:


3.jpg


After about three minutes, a complete TiDB environment is ready. During that time he does not need to watch the creation process at all and can work on other things.

4.jpg


Once created, you can connect directly using the exposed IP and port.


5.jpg


That is the whole process of creating an application from the app store. As you can see from the screenshot, we are still adding more internal applications.


Take the SQL Server 2017 on Docker entry in the screenshot as an example: customizing an application like that takes less than half a day, and complex applications can be written in two or three days.


The following is a brief description of how the application store is implemented:


Components of the App Store


Image registry


We chose the Harbor project, which is widely used in the industry, and I believe everyone is familiar with it.


A brief introduction: Harbor is an image registry project based on Docker Registry, open-sourced by the VMware China team. It integrates CoreOS's Clair vulnerability scanner and the Notary image-signing tool, and adds LDAP integration, a UI, and other features enterprises need. It is arguably the first choice for an enterprise image registry today. The project is open source, so users can download and customize it themselves.


We made some light customizations to Harbor; ours connects directly to Ceph object storage (through the Swift interface). This gives the backend storage both performance and reliability, while the front end can be scaled out as needed. In fact, apart from the MySQL and PostgreSQL databases, Harbor's services are stateless and can be scaled automatically. Harbor already has Kubernetes and Rancher deployments (in the community Catalog) that we can refer to.
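As a rough illustration (not our exact configuration), pointing Harbor's registry component at a Swift-compatible Ceph RadosGW endpoint comes down to the storage section of the registry's config.yml; the endpoint, credentials, and container name below are placeholders:

```yaml
# Illustrative only: registry storage backed by Ceph via the Swift-compatible API.
# The values below are placeholders, not New Oriental's actual settings.
storage:
  swift:
    authurl: http://radosgw.example.com/auth/v1.0   # RadosGW Swift auth endpoint
    authversion: 1
    username: harbor
    password: changeme
    container: docker-registry                      # Swift container that stores image layers
  cache:
    blobdescriptor: inmemory
```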


Our current practice is a standalone offline installation deployed with docker-compose. Later we will consider bringing it under Rancher's management as an independent environment, and we will share further practical results then. I won't expand on it here; interested readers can see my work notes: http://jiangjiang.space/2017/11/15/harbor-completely customized manual-making a better registry/ and http://jiangjiang.space/2017/11/03/harborregistry set ceph object storage/.


Source code repository


The source code repository is the carrier of the app store; it is simply a Git project. All orchestration files are stored as directories under a specified path, so Rancher can read each one in as an application.


As shown in the figure: 
The directory structure in GitLab: 

6.jpg

Each directory under templates is one application.
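Since the screenshot may not reproduce well, here is an illustrative layout of such a catalog repository (the application and file names are examples; the real repository follows the same templates/<app>/<version>/ convention that Rancher expects):

```
templates/
├── tidb/
│   ├── config.yml             # app metadata: name, description, version, category
│   ├── catalogIcon-tidb.svg   # icon shown in the store UI
│   ├── 0/                     # version directory
│   │   ├── docker-compose.yml
│   │   └── rancher-compose.yml
│   └── 1/
│       ├── docker-compose.yml
│       └── rancher-compose.yml
└── mssql-2017/
    └── 0/
        ├── docker-compose.yml
        └── rancher-compose.yml
```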


It is also possible to use privately deployed GitLab or to use GitHub directly.


Finally, the main topic: the app store


As mentioned earlier, Rancher's Catalog is very similar to Kubernetes' Helm. Helm's Charts, Values, and Templates let users capture an application's customization in a reusable form, and Rancher's Catalog follows the same design idea. The difference is that Helm revolves around Kubernetes, while Rancher's Catalog is built around Cattle.


Let's use one application in the Catalog to explain. This is the directory structure of the TiDB application shown earlier:


7.png

The 0 and 1 directories are version directories. Rancher keeps track of application versions and can upgrade a running instance from an old version to a new one; the upgrade is also triggered manually through the application store.


Entering the 0 directory, we see the files that make up Rancher's service definition.

8.png

Here we see the familiar docker-compose.yml and the less familiar rancher-compose.yml. The relationship between these two files starts with Cattle.
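Since the file contents in the screenshot are hard to read here, a minimal, hypothetical pair (a stand-in service, not the actual TiDB orchestration) gives a sense of how the two files divide the work. First the docker-compose.yml:

```yaml
# docker-compose.yml -- WHAT to run: image, ports, volumes (standard Compose v2 syntax)
version: '2'
services:
  demo-web:
    image: nginx:1.13
    ports:
      - "8080:80"
```

And the accompanying rancher-compose.yml adds what Compose alone cannot express, such as replica count and health checks:

```yaml
# rancher-compose.yml -- HOW to run it: scale, health checks, catalog questions
version: '2'
services:
  demo-web:
    scale: 2                    # number of container replicas Cattle keeps running
    health_check:
      port: 80
      interval: 2000            # milliseconds between checks
      response_timeout: 2000
      healthy_threshold: 2
      unhealthy_threshold: 3
```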


Rancher's Cattle orchestration engine is cleverly designed. Cattle uses docker-compose directly to define services and the relationships between them: which images a service uses, which ports it exposes, its volume definitions, and so on. But docker-compose is really a single-host orchestration tool; used as a cluster orchestration file, it lacks several important elements:


  1. No replica (scale) definition for a service.

  2. No affiliation between containers (something like a Kubernetes Pod): depends_on can control the startup order of services, but the relationship between multiple containers belonging to one service is lost.

  3. No orchestration control over services, such as which host a service should be scheduled onto.

  4. No lifecycle control over containers, for example a container that should run only once when the service is first started and should not be run again when the service restarts.


These missing pieces are supplied by rancher-compose.yml. In essence, Cattle takes the container definitions from the docker-compose file and hands them, according to the orchestration rules, directly to the Docker daemon on the chosen host to create the containers, using Docker's label feature to mark them. Once a container is up, Cattle reads its state from the labels and compares it against the orchestration rules to see whether the requirements are met. With these two files, Cattle can schedule and create containers across hosts. Here are a few scheduling examples:


a: If you want to schedule a container to a certain host, you only need to label the container as follows:

9.png


Cattle will send the container definition to the host with the cn.xdf.edge=true label, and then start the container.
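Written out as a service label in docker-compose.yml, the scheduling rule from the screenshot looks roughly like this (the service name and image are illustrative; the cn.xdf.edge host label comes from the text above):

```yaml
version: '2'
services:
  edge-proxy:
    image: nginx:1.13
    labels:
      # place containers of this service only on hosts labeled cn.xdf.edge=true
      io.rancher.scheduler.affinity:host_label: cn.xdf.edge=true
```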


b: A more complex scheduling requirement: starting exactly one container of a service on each host (a singleton per host), similar to a Kubernetes DaemonSet. How is this done in Rancher?

10.png


This label starts the container on every host, and only one per host.
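For reference, the global scheduling label written out in docker-compose.yml looks roughly like this (the service name and image are illustrative):

```yaml
version: '2'
services:
  node-agent:
    image: busybox
    command: ["sleep", "86400"]
    labels:
      # run exactly one container of this service on every host, DaemonSet-style
      io.rancher.scheduler.global: 'true'
```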


c: If I don't want to occupy every host, but instead want to schedule onto only as many hosts as the Scale value, with each host running only one instance:

11.png


This means:


  • At startup, the target host must not already have a container matching the expression after the colon; this is a soft constraint, and removing soft turns it into a hard constraint.

  • The name in the expression is this service's own stack name and service name.


In other words, once a host is already running an instance of this service, another instance will not be scheduled onto it.
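Spelled out in docker-compose.yml, the soft anti-affinity label looks roughly like this (the service name is illustrative; $${stack_name} and $${service_name} are Rancher catalog template variables that expand at deploy time):

```yaml
version: '2'
services:
  worker:
    image: busybox
    labels:
      # avoid hosts already running an instance of this same stack/service;
      # removing "_soft" turns the soft constraint into a hard one
      io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
```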


Some other useful labels:


  • io.rancher.sidekicks: container-name, container-name, ...

    # Defines companion (sidecar) containers: for example, a first container starts to provide a storage volume, a second runs the service, and a third receives and forwards logs.

  • io.rancher.container.pull_image: always

    # Always pull the container image when the service starts.
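Putting these labels together, a hypothetical sketch of a service with a sidekick data container might look like this (names are illustrative; the start_once label is one way to get the run-once behavior mentioned earlier):

```yaml
version: '2'
services:
  app:
    image: nginx:1.13
    labels:
      io.rancher.sidekicks: app-data            # deploy app-data alongside every app container
      io.rancher.container.pull_image: always   # always pull the image when the service starts
    volumes_from:
      - app-data
  app-data:
    image: busybox
    labels:
      io.rancher.container.start_once: 'true'   # runs once to provide the volume, not restarted with the service
    volumes:
      - /usr/share/nginx/html
```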


For detailed scheduling and labeling instructions, please refer to: http://rancher.com/docs/rancher/latest/en/cattle/labels/.


Besides scheduling definitions, rancher-compose also provides the questions mechanism.


questions defines a set of questions that the user answers at launch time; the answers are saved in answer.txt. The design is similar to Helm's values file, except that the values file is not predefined but generated from the user's input at launch.

12.jpg

Based on this definition, the Rancher UI renders an appropriate input control for each question type: an enum becomes a drop-down list, an int becomes an input box that accepts only integers, and Rancher performs basic input validation.


The code in the image above will be automatically converted into the UI in the image below.

13.png
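As a rough, hypothetical sketch (variable names and values are illustrative, following the Rancher catalog conventions), the questions block lives in rancher-compose.yml's .catalog section:

```yaml
.catalog:
  name: "TiDB"
  version: "0.1.0"
  questions:
    - variable: tidb_version
      label: "TiDB version"
      type: enum                 # rendered as a drop-down list in the UI
      options:
        - v1.0.0
        - v1.0.1
      default: v1.0.1
      required: true
    - variable: tikv_scale
      label: "Number of TiKV nodes"
      type: int                  # rendered as an integer-only input box
      default: 3
      required: true
```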


By now the principle of the app store should be clear:


  • A set of service and container definitions (docker-compose)

  • A set of orchestration settings (rancher-compose)

  • A UI where the user can enter values

  • The user fills in the values in the UI


The orchestration engine then hands everything to Docker on the hosts for execution.


App store usage:


At present, the app store is used mainly within the operations team. As a next step, we plan to use Rancher as the management backend and integrate it with the New Oriental cloud platform by calling Rancher's API. This will keep the experience consistent with New Oriental Cloud's existing request-and-provision workflow, simplify user input, and ultimately form a self-service PaaS platform.


I will be writing a series of summaries of our practical work with Rancher 1.X; for specific technical details, follow my work log at http://jiangjiang.space.

