Let’s talk about Kubernetes from a software engineer’s perspective

As software engineers, we should be familiar with K8s. Although it leans toward DevOps territory, knowing it gives us a better understanding of what happens behind the scenes, making us more relevant to and accountable for deployment work. This article discusses Kubernetes (K8s) from a software engineer's perspective. We will introduce its motivation, principles, and core components to help developers deepen their professional knowledge of Kubernetes and embrace this cutting-edge technology with more confidence!
 

Background

Before talking about Kubernetes, let us first understand what a container is.
 

The concept of containers becomes clear when we consider a scenario like this: after a developer has finished writing code that meets a specific need, the next step is to package it and install it seamlessly on another host, so that our customers can easily install it and enjoy its benefits. How do we package and install it on another host? Usually we have many dependencies, such as binaries, libraries, and operating-system requirements, and we need to bundle them all into one package, the so-called "container".
 

In other words, we can put our code into a container along with all its dependencies and then easily run it on a remote machine, or in engineering terms, "deploy our service".
 

Deployment challenges

Now that we know our services are shipped in containers, a few key questions arise:

  • How do we know that our container service won't crash? We want to make sure that if one container goes down, another container will start.

  • How to ensure that this container has enough resources to run? Maybe it takes up more resources than it actually needs.

  • How do we manage version deployment, meaning that when we upgrade our code, it can be done without downtime? We want to ensure high availability of the service.

  • How do we get our containers to talk to each other?

  • How do we scale up or down as our requests increase or decrease?
     

AppsFlyer encountered these issues before adopting K8s, and as a company with a strong platform team, we solved them with in-house implementations. For example, to manage the life cycle of a service, we created a process called "Medic" that continuously sent GET requests to a health-check API to ensure that our service was always running normally.
 

Another example: most of our services were deployed as Docker containers on dedicated EC2 instances, using an in-house tool ("Santa") for deploying and managing them. Each instance was not shared with any other service, which wasted resources, time, and, more importantly, money.
 

K8s solution

As you can tell from the above, Kubernetes was created to solve the challenges I mentioned.
 

The definition of Kubernetes is: "an open source system for automating deployment, scaling, and management of containerized applications." In other words, Kubernetes provides a container orchestration system for properly managing our clusters, allowing us to deploy applications, manage resources, and scale. K8s wraps our containers and steers the ship for us.
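To give a taste of that declarative model, here is a minimal Deployment manifest. This is a sketch; the name, labels, and image are placeholders, not anything from the original article:

```yaml
# A minimal Deployment: we declare the desired state (3 replicas of our
# container image) and Kubernetes works to keep the cluster in that state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service            # placeholder name
spec:
  replicas: 3                 # desired number of identical pods
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: example.com/my-service:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
```

Applying a manifest like this with `kubectl apply -f deployment.yaml` hands the desired state to the cluster, which then keeps three replicas running.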
 

Here are some of the benefits we gained from using K8s and solving the above challenges:

  • Self-healing of containers in case of crashes – Kubernetes provides a health-check mechanism. This means we no longer need to implement our own health-check polling (like Medic) to monitor our services.

  • Automatic distribution and scheduling of application containers gives us efficient utilization of node resources: multiple applications share the same node instances, so resources are used wisely and efficiently.

  • Automatic rollouts and rollbacks without downtime.

  • Service discovery and load balancing help containers communicate with each other.

  • Horizontal scaling improves application performance by adjusting the number of replicas, so the application stays responsive whether the load is low or high.
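That last point can be made concrete with a HorizontalPodAutoscaler. This is a sketch: the target Deployment name and the scaling thresholds are assumptions for illustration:

```yaml
# Scale the (hypothetical) my-service Deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service      # placeholder Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

When average CPU usage rises above the target, Kubernetes adds replicas; when it falls, replicas are removed, down to the minimum.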
     

In summary, Kubernetes is the best solution for managing containerized applications at scale. With its powerful components and automation capabilities, Kubernetes simplifies deployment, scaling, and application life-cycle management. Compared to running Docker directly on an EC2 instance, Kubernetes saves time and effort, and it provides the essential functionality for managing applications in production.
 

Most importantly, Kubernetes saves companies money. By automating infrastructure management, Kubernetes reduces the need for manual intervention and in-house tooling, which, as mentioned above, can lead to significant operational cost savings. Additionally, Kubernetes helps optimize resource utilization, making it possible to run more applications on the same hardware and thereby save costs.
 

The basic components of K8s that every developer should know


 

The core components of Kubernetes fall into two broad categories: control plane components and node components. Let's take a look at these high-level components:
 

API server

The API server is the core component of the control plane and is responsible for exposing the Kubernetes API and processing API requests. It is the primary way that other components in the cluster, such as the kubectl command-line tool or the Kubernetes Dashboard, interact with the cluster.
 

Scheduler

The scheduler is responsible for scheduling pods to nodes in the cluster based on available resources and specified limits and rules. It ensures that pods are placed on nodes in a way that maximizes resource utilization and reduces resource contention.
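The resources and limits the scheduler considers are declared per container. Here is a sketch of a pod spec fragment; the name, image, and numbers are assumed values for illustration:

```yaml
# Resource requests tell the scheduler how much CPU/memory to reserve for
# the container; limits cap what it may consume at runtime.
apiVersion: v1
kind: Pod
metadata:
  name: my-service-pod      # placeholder name
spec:
  containers:
    - name: my-service
      image: example.com/my-service:1.0.0   # placeholder image
      resources:
        requests:
          cpu: "250m"       # a quarter of a CPU core
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
```

The scheduler will only place this pod on a node with at least the requested CPU and memory still unreserved, which is how it avoids overcommitting nodes.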
 

Controller manager

The controller manager is a process that runs on the control plane and is responsible for managing the state of the cluster, ensuring that it conforms to the desired state. It consists of several controllers, each responsible for a specific aspect of cluster management, such as the Deployment controller, which manages the deployment of applications in the cluster.
 

Cloud controller manager

The cloud controller manager is a special component used when running Kubernetes on a cloud platform. It is responsible for integrating the Kubernetes control plane with the cloud provider's API, allowing the cluster to use the cloud's specific features and resources.
 

etcd

etcd is a distributed key-value store used to store the configuration data of a Kubernetes cluster, including the current state and desired state of the cluster. It is used to store data that needs to be persisted across all nodes in the cluster, such as information about pods, services, and other objects in the cluster.
 

Kubelet

Kubelet is a daemon that runs on each node in the cluster and is responsible for managing the pods on that node. Kubelet is responsible for tasks such as starting and stopping pods, monitoring the health of pods, and restarting pods when necessary. It communicates with the Kubernetes control plane to receive instructions on which pods to run and how to manage them, and with container runtimes such as Docker to actually execute the containers.
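The health monitoring mentioned above is typically configured with probes in the pod spec, which the kubelet executes. A sketch, with a hypothetical HTTP health endpoint:

```yaml
# The kubelet calls GET /healthz every 10 seconds; after 3 consecutive
# failures it restarts the container -- the self-healing described earlier.
apiVersion: v1
kind: Pod
metadata:
  name: my-service-pod      # placeholder name
spec:
  containers:
    - name: my-service
      image: example.com/my-service:1.0.0   # placeholder image
      livenessProbe:
        httpGet:
          path: /healthz    # assumed health endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
        failureThreshold: 3
```

Compare this to the Medic process described earlier: the same GET-the-health-endpoint pattern, but built into the platform instead of maintained in-house.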
 

Kube-proxy

Kube-proxy is a daemon that runs on each node in the cluster and is responsible for implementing the virtual network infrastructure for the cluster. Kube-proxy uses packet-forwarding rules (typically iptables or IPVS) to route network traffic to the appropriate pod or service based on the rules defined in the cluster network configuration. Some of the main tasks performed by kube-proxy include load balancing, service discovery, and network policy enforcement.
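Service discovery and load balancing revolve around the Service object, whose rules kube-proxy programs on every node. A minimal sketch, with placeholder names and ports:

```yaml
# Pods labeled app=my-service become endpoints of this Service; kube-proxy
# load-balances traffic sent to the Service's port 80 across them, and
# cluster DNS makes it reachable by the name "my-service".
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
    - port: 80          # port clients connect to
      targetPort: 8080  # container port that receives the traffic
```

Other pods in the cluster can then call `http://my-service` without knowing which pods back it or where they run.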
 

Summary

As developers, it is crucial to understand the technologies we encounter, whether they fall under our direct responsibilities or are managed by separate DevOps teams. This article can serve as a starting point for a deeper dive into the K8s world.
 

K8s has a steep learning curve and can feel clunky for developers. Walrus, a new-generation application management platform built on platform-engineering principles, separates the concerns of development and operations. By providing flexible, powerful deployment management for applications and environments, and by abstracting away the underlying infrastructure, it lets developers build, deploy, and run applications self-service without digging into infrastructure details, reducing their cognitive load. Walrus extends cloud-native capabilities and best practices to non-containerized environments, supports unified orchestration and deployment of any form of application, lowers the complexity of using infrastructure, and gives development and operations teams an easy-to-use, consistent application management and deployment experience for a seamless, collaborative software delivery process. Copy the project link below into your browser and try Walrus now.
 

Open source address: github.com/seal-io/walrus
 

Reference link:
https://medium.com/appsflyerengineering/hi-developer-meet-kubernetes-8652bdc210d9

Origin: blog.csdn.net/SEAL_Security/article/details/132754441