How to move applications to Kubernetes

Ben Sears

Kubernetes is the most popular container management and orchestration tool available today. It provides a configuration-driven framework that lets us define and manage networking, storage, and applications in a scalable, easy-to-operate way.

If the application has not yet been containerized, moving it to Kubernetes will take significant effort. The purpose of this article is to introduce a method for integrating an application with Kubernetes.

Step 1 — Containerize the application

Containers are self-contained units of execution. Unlike traditional virtual machines, which rely on a full guest operating system, containers use kernel features such as namespaces and cgroups to provide an environment isolated from the host.

For experienced engineers, the containerization process itself is not complicated: using Docker, define a Dockerfile containing the installation steps and configuration (downloading packages, dependencies, and so on), then build an image that developers can use.
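As a minimal sketch of such a Dockerfile (assuming a hypothetical Node.js web application; the base image, port, and entry point would differ for your stack):

```dockerfile
# Hypothetical Dockerfile for a Node.js web application
FROM node:18-slim
WORKDIR /app
# Copy dependency manifests first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application source and define the start command
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Building it with `docker build -t myapp:1.0 .` produces the image that the rest of the migration builds on.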

Step 2 — Adopt a multi-instance architecture

Before migrating the application to Kubernetes, we need to decide how it will be delivered to end users.

Traditional web applications use a multi-tenant architecture: all users share a single database instance and a single application instance. This works fine in Kubernetes, but we recommend considering a move to a multi-instance architecture to take full advantage of Kubernetes and containerized applications.

The benefits of using a multi-instance architecture include:

Stability - a failure is isolated to a single instance and does not affect the other instances;

Scalability - with a multi-instance architecture, scaling is simply a matter of adding computing resources, whereas with a multi-tenant architecture, deploying a clustered application architecture can be cumbersome;

Security - when all users share a single database, a breach puts every user's data at risk; with a separate database per instance, only a single user's data is exposed.

Step 3 — Determine the resource consumption of the application

To be cost-effective, we need to determine the amount of CPU, memory, and storage required to run a single application instance.

By setting resource limits, we can fine-tune how much capacity a Kubernetes node needs, ensure that nodes do not become overloaded or unavailable, and more.

Getting this right takes repeated experimentation, and there are tools that can help:

  • After determining the resource allocation, we can calculate the optimal resource size for Kubernetes nodes;

  • Multiply the memory and CPU required by each instance by 110 (the default maximum number of Pods per node) to roughly estimate how much memory and CPU a node should have;

  • Stress test the application to make sure it runs smoothly with full nodes.
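Once the per-instance footprint is known, it can be encoded as resource requests and limits in the Pod spec. A sketch with placeholder values (the image name and amounts are assumptions to be replaced with your measured numbers):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myapp:1.0        # placeholder image
      resources:
        requests:             # what the scheduler reserves on the node
          cpu: "250m"
          memory: "256Mi"
        limits:               # hard cap enforced at runtime
          cpu: "500m"
          memory: "512Mi"
```

The scheduler packs Pods onto nodes based on requests, while limits keep a misbehaving instance from starving its neighbors.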

Step 4 — Integrate with Kubernetes

Once the Kubernetes cluster is up and running, we will find that many DevOps practices follow naturally:

Autoscale Kubernetes nodes

When nodes fill up, more nodes usually have to be provisioned so that everything keeps running smoothly, and this is where autoscaling of Kubernetes nodes comes in handy.

Autoscale applications

Depending on usage, some applications need to scale up and down, and Kubernetes does this with triggers that automatically scale deployments. For example:

kubectl autoscale deployment myapp --cpu-percent=50 --min=1 --max=10

This configures the myapp deployment to scale between 1 and 10 replicas, adding Pods when average CPU utilization exceeds 50%.
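The same autoscaling policy can also be expressed declaratively as a HorizontalPodAutoscaler manifest (shown here for the same assumed myapp deployment), which is easier to keep in version control:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```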

Automatically configure instances on user action

In a multi-instance architecture, each end user gets their own deployment of the application in Kubernetes. To achieve this, we should consider integrating the application with the Kubernetes API, or using a third-party solution that provides a portal for requesting instances.
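One way to sketch that integration (a hypothetical helper, not part of any official client) is a function that renders a per-user Deployment manifest, which could then be submitted through the Kubernetes API or piped to `kubectl apply -f -`:

```python
import json

def instance_manifest(user: str, image: str = "myapp:1.0") -> dict:
    """Build a Deployment manifest for one user's application instance.

    The user name becomes part of the resource name and labels, so each
    instance is isolated and individually addressable.
    """
    name = f"myapp-{user}"
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": "myapp", "tenant": user}},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"tenant": user}},
            "template": {
                "metadata": {"labels": {"app": "myapp", "tenant": user}},
                "spec": {"containers": [{"name": "myapp", "image": image}]},
            },
        },
    }

# Serialize for submission to the API server or to `kubectl apply -f -`
print(json.dumps(instance_manifest("alice"), indent=2))
```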

Define a custom hostname via user action

More and more end users want to attach their own domain names to applications these days, and Kubernetes provides tools that make this process easier, even to the point of self-service (the user pushes a button to point a domain at the application). We can do this with a system like the NGINX Ingress controller.
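With the NGINX Ingress controller installed, attaching a user's domain comes down to creating an Ingress resource per hostname. A sketch (the host, Service name, and port are placeholders for one user's instance):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-custom-domain
spec:
  ingressClassName: nginx
  rules:
    - host: customer.example.com   # the user-supplied domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp        # assumed Service in front of the instance
                port:
                  number: 80
```

A self-service flow would generate and apply one such resource whenever a user registers a domain.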

One last ad

The conceptual abstractions Kubernetes introduces map well onto an ideal distributed scheduling system. However, its large number of difficult technical concepts also creates a steep learning curve, which directly raises the barrier to adopting Kubernetes.

Rainbond, an open-source PaaS from Goodrain, packages these technical concepts into production-ready applications and can be used as a Kubernetes panel that developers can operate without specialized training.

In addition, Kubernetes itself is a container orchestration tool and does not provide complete management workflows, whereas Rainbond ships with ready-made workflows, including DevOps, automated operations, microservice architecture, and an application market, all usable out of the box.

Learn more: https://www.goodrain.com/scene/k8s-docker
