[My Linux, I call the shots!] Overview of DevOps Culture and Kubernetes

(1) DevOps Culture
At its simplest, DevOps is the automation of operations work. Today, Internet giants such as Google, Amazon, Facebook, LinkedIn, Netflix, and Airbnb, traditional software companies such as Adobe, IBM, Microsoft, and SAP, and even companies whose core business is not the Internet, such as Apple, Wal-Mart, Coca-Cola, and Starbucks, have all adopted DevOps or offer products that support it. So what exactly is DevOps?
The term DevOps is a combination of Development and Operations. It emphasizes communication and cooperation between software developers and operations staff, and uses automated processes to make building, testing, and releasing software faster, more frequent, and more reliable. The concept first gained traction in Europe in 2009, born out of the pain of the traditional operations model. DevOps aims to close the information gap between development and operations and to improve collaboration between teams. One thing worth clarifying: there is also a testing stage between development and operations, so DevOps really consists of three parts: development, testing, and operations. In other words, DevOps seeks to open up the IT tool chain across the software delivery process so that every team loses less time and works together more efficiently.
Application architecture was monolithic in the early days. While applications were simple, a monolith could still cope, but people eventually found that monolithic applications struggle to carry increasingly complex systems: even with horizontal scaling, the internal business complexity of a monolith means the scaling quickly hits a ceiling. The era after the monolith was the layered architecture, which let the tiers of an application be developed separately. Then came microservices, which go beyond simple layering: each application is decomposed into tiny services that each do one thing, so a traditional three-tier application may be broken into hundreds of microservices cooperating with one another. Microservices are therefore a natural fit for containers: because containers make distribution, building, and deployment so convenient, pairing each service with a container quickly became a practical implementation approach. The same goes for the DevOps concept itself: in early DevOps practice, the integration and deployment stages were hard to automate because of heterogeneous environments, and the emergence of Docker filled exactly this gap, making DevOps much easier to implement.
There are three key practices in DevOps. The first, CI (Continuous Integration), is an automated process for developers: with successful CI, application code changes are regularly built, tested, and merged into a shared repository, which solves the conflicts that arise when too many branches of an application are developed at once. The second, CD (Continuous Delivery), usually means that developers' changes are automatically tested for errors and uploaded to a repository, from which the operations team deploys them to the live production environment. This addresses poor visibility and communication between development and operations teams; the goal of continuous delivery is to minimize the effort required to deploy new code. The third, CD (Continuous Deployment), means automatically releasing developers' changes from the repository all the way to production for customers to use. It tackles the manual processes that slow down application delivery and overload the operations team. Continuous deployment builds on the advantages of continuous delivery by automating the subsequent stages of the pipeline.
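To make the three practices concrete, here is a minimal, hypothetical pipeline sketch in GitLab CI syntax. The job names, `make` targets, and deploy script are illustrative assumptions, not taken from any real project:

```yaml
# Minimal CI/CD pipeline sketch (GitLab CI syntax); all commands are placeholders.
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - make build               # hypothetical build command (continuous integration)

test-job:
  stage: test
  script:
    - make test                # automated error tests run on every change

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh production   # hypothetical deployment script
  when: manual                 # continuous delivery: a human triggers the release;
                               # removing this line would make it continuous deployment
```

Note that in this sketch the only difference between continuous delivery and continuous deployment is the manual gate on the deploy job.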
From the above, you should have a basic understanding of DevOps. But beyond the tool chain, DevOps is a methodology of both culture and technology, and it also requires changes to a company's organizational culture. Looking back at the R&D models of the software industry, there have been roughly three stages: waterfall development, agile development, and DevOps. DevOps was proposed more than ten years ago, so why has it only recently begun to receive attention and practice from more and more enterprises? Because a single tree does not make a forest: DevOps needed the technical ecosystem that has only recently grown up around it. Microservice architecture and container technology make DevOps easier to implement, and advances in computing power and cloud environments allow rapidly developed products to be put to wider use immediately.
So what are the benefits of DevOps? One huge benefit is efficient delivery, which is exactly its original intent. Puppet and DevOps Research and Assessment (DORA) published the 2016 State of DevOps report. Based on statistics submitted by 4,600 technical workers at IT companies around the world, it concluded that high-performing companies complete an average of 1,460 deployments per year. Compared with low-performing organizations, high performers deploy 200 times more frequently, put products into use 2,555 times faster, and recover from service failures 24 times faster. In the allocation of working time, low performers spend 22% more time on planning or repetitive work, while high performers can spend 29% more time on new work. So "efficiency" here refers not only to improved company output but also to better quality of work for employees. Another benefit of DevOps is that it improves the company's organizational culture and increases employee engagement: employees become more efficient and more fulfilled. Surveys show that employees in high-performing organizations have a higher employee Net Promoter Score, meaning they identify more strongly with their company.
So why did DevOps arise? First, the conditions matured: advances in technology made DevOps practical. In the early days everyone was aware of the problem, but without complete and abundant tooling, DevOps remained an ideal. Today an implementation can be based on emerging container technology; it can extend automated operations tools such as Puppet, SaltStack, and Ansible; or it can be built on traditional PaaS vendors' platforms such as Cloud Foundry and OpenShift. Second, there is external demand from the market. The IT industry is ever more closely tied to the market economy, and experts believe IT is changing from a support center into a profit-driving center. In fact, this change has already begun, reflected not only in large companies such as Google and Apple but also in traditional industries: Uber in the taxi business, Airbnb in the hospitality industry, Amazon in retail, and so on. Whether a company's IT can keep up with market demand in a timely manner is critical today. Third, engineers themselves are beneficiaries of DevOps. An open tool chain lets developers take software through building, testing, and running in production themselves, as in the Amazon CTO's memorable line: "You build it, you run it."
(2) Kubernetes Overview
Kubernetes is an open source container orchestration engine from Google that supports automated deployment and the management of containerized applications at scale. When an application is deployed to a production environment, multiple instances of it are usually run so that application requests can be load balanced.
Kubernetes, also known as K8s or "kube" for short, is an open source platform for automating Linux container operations. It can eliminate many of the manual deployment and scaling steps involved in containerizing applications. In other words, you can group hosts running Linux containers into clusters, and Kubernetes helps you manage those clusters easily and efficiently; the hosts in these clusters can span public, private, or hybrid clouds. For cloud-native applications that need to scale rapidly (such as real-time data stream processing with Apache Kafka), Kubernetes is an ideal hosting platform.
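As a small illustration of the "multiple load-balanced instances" idea above, a Kubernetes Deployment declares the desired number of replicas and lets the platform maintain it. The manifest below is a minimal sketch; the name and image are chosen purely for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical application name
spec:
  replicas: 3               # run three instances; requests are load balanced across them
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # illustrative image
```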
Kubernetes was originally developed and designed by Google engineers. Google was one of the earliest developers of Linux container technology (the cgroups component) and has publicly shared how it runs everything in containers (this is the technology behind Google's cloud services). Google launches more than two billion containers every week, all powered by its internal platform Borg. Borg is the predecessor of Kubernetes, and the lessons learned over years of developing Borg became a major influence on much of the technology in Kubernetes.
Why do we need Kubernetes? A real production application involves multiple containers, which must be deployed across multiple server hosts, and securing containers adds further layers, so things can get complicated. Kubernetes helps solve this problem: it provides the orchestration and management capabilities needed to deploy containers at scale for these workloads. With Kubernetes orchestration, you can build application services that span multiple containers, schedule them across the cluster, scale them, and continuously manage their health over time. With Kubernetes you can also take concrete measures to improve IT security. Kubernetes must also integrate with networking, storage, security, telemetry, and other services to provide a comprehensive container infrastructure.
Red Hat has invested heavily in container orchestration tools, which is reflected in the fact that Kubernetes has become the foundation of the PaaS in Red Hat's product line. As everyone knows, Kubernetes is an open source technology, so there is no formal support organization backing your commercial business. If Kubernetes runs into trouble in production, you will worry, and so may your customers. This is where an enterprise Kubernetes container platform comes in. OpenShift is an enterprise distribution of Kubernetes with additional features: it layers advanced capabilities onto Kubernetes, including a registry, networking, telemetry, security, automation, and services, to make it a powerful platform for enterprise use. With OpenShift's scalability, control, and orchestration capabilities, your developers can build new containerized applications, host them, and deploy them in the cloud, quickly turning all kinds of ideas into new business. OpenShift is developed by Red Hat, a leader in the open source field, and comes with comprehensive support.
The Kubernetes code is hosted on GitHub at https://github.com/kubernetes, where you can see how quickly K8s versions iterate. Meanwhile, large cloud vendors such as Amazon's AWS, Microsoft's Azure, and Alibaba Cloud have all announced native support for K8s.
(3) Features of Kubernetes
The main features of Kubernetes are as follows. First, automatic bin packing: Kubernetes can place containers automatically based on resource requirements and other constraints without sacrificing availability. Second, self-healing: once a container crashes, a replacement can be started within seconds, and failed services are killed and relaunched; with a container orchestration platform, we care about the group of containers rather than any individual one. Third, automatic horizontal scaling: if one container is not enough, another can be started, and scaling can continue as long as the physical platform has resources; Kubernetes likewise provides automatic service discovery and load balancing, as well as automated rollouts and rollbacks. Fourth, management of configuration and secrets: if we start a container and want it to run with a different configuration, we can define an ENTRYPOINT script that accepts variables the user passes to the container and converts their values into configuration the application inside can read, completing the configuration of the containerized application. This matters because early applications were not developed for the cloud and read their configuration from files, whereas cloud-native applications ideally obtain configuration from environment variables. When the orchestration platform starts containers automatically, passing environment variable values by hand every time is tedious, so an external component is needed to hold this configuration centrally; when an image starts as a container, it only has to load its configuration from the configuration center (see the sketch after this paragraph). Fifth, storage orchestration: storage volumes can be dynamically provisioned, meaning that when a container needs a volume, one that satisfies the container's own requirements can be created on demand. Sixth, batch execution of tasks.
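Kubernetes's built-in realization of the "configuration center" described above is the ConfigMap object. The sketch below (the names and values are illustrative) stores a setting outside the image and injects it into a container as an environment variable at startup:

```yaml
apiVersion: v1
kind: ConfigMap             # holds configuration externally, outside the image
metadata:
  name: app-config          # hypothetical name
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: nginx:1.25       # illustrative image
    env:
    - name: LOG_LEVEL       # injected as an environment variable at startup
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: LOG_LEVEL
```

Changing the behavior of the container then means editing the ConfigMap, not rebuilding the image.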
(4) Kubernetes architecture
Kubernetes is, in fact, a cluster: it combines the resources of multiple hosts into one large pool that provides compute, storage, and other capabilities. We install Kubernetes-related components on many hosts, and through their collaboration the many hosts behave like a single host. Within a Kubernetes cluster, however, hosts have roles. Cluster models generally come in two types. One is peer-to-peer, such as Redis, which has no central node: every node can directly accept and route service requests. The other has a central node, such as MySQL master-slave replication, where one node is the master and the others synchronize from it. K8s is a cluster system with a central-node architecture, the master/nodes model. Not many masters are needed; three redundant master nodes are generally sufficient, and the worker nodes relate to them much as worker bees relate to the queen bee. A client's request first goes to the master, where a scheduler analyzes the available resources on each node, finds the node best suited to run the container the user requested, and schedules it there; that node's local container engine (such as Docker) then starts the container. When starting a container, the node checks whether the image exists locally and, if not, pulls it from a registry. Of course, we can also build a private registry, and since the registry itself can run as a container, we can even host the private registry on Kubernetes itself.
The first component of Kubernetes is the API Server, which is responsible for receiving, parsing, and processing requests. The second is the scheduler: if a user's request creates a container, that container should run not on the master but on a node, and it is the scheduler on the master that decides which node fits best. It observes the total compute resources on every node and, based on the lower bound of resources required by the container the user wants to create, evaluates which node is the most suitable. We cannot judge a container's health merely by whether the application inside it is running; instead, we can probe service availability through an additionally defined availability check (see the sketch below). If the application inside a container dies but we need the container to keep running, an agent on the node handles it: the third component, the kubelet, ensures that containers keep running as intended. Kubernetes also runs many controllers, which monitor whether each container they manage is healthy. Once a controller finds an unhealthy container, it sends a request to the master's API Server, and the scheduler picks a suitable node and starts a new container there. But if the controller that monitors container health is itself unhealthy, container health can no longer be guaranteed. Hence the fourth component on the master, the Controller Manager, which monitors whether each controller is healthy and restores any that is not; the Controller Manager in turn guarantees its own availability through redundancy. From this perspective, the master is the brain of the cluster, built around these core components.
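A minimal sketch of how a Pod manifest expresses both ideas above: the resource lower bound the scheduler evaluates, and the additionally defined availability check. The names are illustrative, and the probe assumes the container serves HTTP on port 80:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-app           # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25       # illustrative image
    resources:
      requests:             # the lower bound the scheduler uses when picking a node
        cpu: "250m"
        memory: "128Mi"
    livenessProbe:          # the additionally defined availability check;
      httpGet:              # the kubelet restarts the container if it keeps failing
        path: /
        port: 80
```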
The smallest unit running on K8s is no longer the container: the Pod is the smallest logical unit K8s schedules, and containers run inside Pods. A Pod can contain multiple containers, which share three namespaces: UTS, Network, and IPC, while the other three (User, Mount, and PID) remain isolated from one another. Containers in the same Pod also share a second kind of resource, storage volumes: a volume belongs to the Pod, not to any container. Generally, a Pod contains only one container, unless several containers are so closely related that they need to be placed in the same Pod. When a Pod does hold multiple containers, one is usually the main container and the others exist to assist the application in the main container with additional functions. For example, if the main container runs Nginx and generates a lot of logs, a log collection program such as one from the ELK stack can run in an auxiliary container (a sidecar). So the scheduler schedules Pods, nodes run Pods, and the Pod is the atomic unit.
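A minimal sketch of such a Pod, with an Nginx main container and a log-collecting sidecar sharing a Pod-level volume. The names are illustrative, the busybox command stands in for a real log shipper, and it assumes Nginx is configured to write log files to the shared path:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar    # hypothetical name
spec:
  volumes:
  - name: logs
    emptyDir: {}            # the volume belongs to the Pod, not to either container
  containers:
  - name: nginx             # main container
    image: nginx:1.25
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-collector     # sidecar assisting the main container
    image: busybox:1.36     # placeholder for a real log collection program
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
```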
In theory, a node can be any form of computing device: as long as it has conventional CPU and memory and can run the Kubernetes cluster agent, it can join Kubernetes as a member. Taken together, all the nodes can be viewed as one large compute pool, with x CPUs and y amount of memory, managed as a single cluster. The master holds this unified view, so when a user asks the master to create resources, scheduling and evaluation happen against the unified resource pool, and end users no longer need to care which node their resources run on. This is cloud computing put into practice.
If we later want to categorize and manage a certain class of Pods, how do we select all the Pods that share the same function? To make Pods identifiable, we attach metadata to them: key-value labels. When a Pod is created, it can be tagged so that it can later be identified by its label values. For example, if we create four Nginx Pods and give each one a label with key app and value nginx, we can later pick out that whole class of Pods by this key and value. The filtering mechanism that does this is the label selector (Label Selector).
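A minimal sketch of labels and a label selector at work (the names are illustrative): the Pod carries the label, and the Service's selector picks out every Pod whose labels match it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-1
  labels:
    app: nginx              # the key/value a selector will match on
spec:
  containers:
  - name: nginx
    image: nginx:1.25
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc           # hypothetical name
spec:
  selector:
    app: nginx              # selects every Pod carrying app=nginx
  ports:
  - port: 80
```

The same selector syntax works on the command line, e.g. `kubectl get pods -l app=nginx`.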

—————— This concludes this article, thanks for reading——————
