A New Look at Application Management from KubeCon EU 2019

Author | Deng Hongchao, Technical Expert, Alibaba Cloud Intelligence Business Group

KubeCon EU 2019 in Barcelona has just come to a close. Speakers from Alibaba Group shared at the conference the lessons learned from running large-scale Kubernetes clusters in Internet-scale scenarios. As the saying goes, "alone we go fast, together we go far" — from the community's continued growth and expansion, we see more and more people embracing open source and evolving toward standards, catching this high-speed train bound for cloud native.


As we all know, the centerpiece of cloud-native architecture is Kubernetes, and Kubernetes revolves around the "application". Making applications easier to deploy and developers more efficient is what brings tangible benefits to teams and organizations, and what lets the cloud-native transformation play its full role.


The momentum of change is both a flood engulfing old, closed internal systems and a spring rain nurturing new developer tools. At this KubeCon, a great deal of new knowledge about application management and deployment emerged. Which of these ideas are worth learning, so that we can avoid detours? And what direction of technical evolution do they point to?


In this article, we invited Deng Hongchao — technical expert on Alibaba Cloud's container platform, former CoreOS engineer, and one of the core authors of the Kubernetes Operator project — to analyze and comment, one by one, on the highlights of this conference in the field of application management.


The Config Changed


Applications deployed on Kubernetes generally store their configuration in a ConfigMap, which is then mounted into the Pod's file system. When the ConfigMap changes, only the files mounted in the Pod are updated automatically. This is fine for applications that hot-reload their configuration automatically (such as nginx). However, most application developers prefer to treat a configuration change as a new gray (canary) release: the containers associated with the ConfigMap should go through a gray upgrade.


A gray upgrade not only simplifies user code and improves safety and stability, it also embodies the idea of immutable infrastructure: once an application is deployed, it is never modified in place. When an upgrade is needed, deploy a new version, verify it, and then destroy the old version; if verification fails, rolling back to the old version is easy.


Based on this idea, engineers from Pusher developed Wave, a tool that automatically watches the ConfigMaps/Secrets associated with a Deployment and triggers an upgrade of the Deployment when they change. Its distinctive feature is that it automatically finds the ConfigMaps/Secrets referenced in the Deployment's PodTemplate, computes a hash over all of their data, and stores it in the PodTemplate's annotations; when the data changes, the hash is recomputed and the PodTemplate annotation is updated, which triggers a Deployment upgrade.
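The hash-annotation idea above can be sketched in a few lines. This is a minimal illustration, not Wave's actual code: the annotation key `example.com/config-hash` and the data shapes are assumptions for the example.

```python
import hashlib
import json

def config_hash(configmaps: dict) -> str:
    """Compute a deterministic hash over all referenced ConfigMap data.

    `configmaps` maps ConfigMap name -> its data dict, mimicking what a
    tool like Wave collects from the PodTemplate's references.
    """
    # Serialize with sorted keys so the hash is stable across runs.
    payload = json.dumps(configmaps, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def annotate_pod_template(pod_template: dict, configmaps: dict) -> bool:
    """Store the hash in a PodTemplate annotation; return True if it
    changed, i.e. a rolling upgrade of the Deployment should fire."""
    annotations = (pod_template.setdefault("metadata", {})
                               .setdefault("annotations", {}))
    new_hash = config_hash(configmaps)
    changed = annotations.get("example.com/config-hash") != new_hash
    annotations["example.com/config-hash"] = new_hash
    return changed
```

The trick is that the annotation lives inside the PodTemplate: changing it changes the Pod spec itself, which is exactly what makes Kubernetes perform a rolling upgrade of the Deployment.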


Coincidentally, another open-source tool, Reloader, implements similar functionality — the difference is that Reloader also lets users choose which ConfigMaps/Secrets to watch.

Analysis and Comments


Skip the gray release, and you will be the one holding the bag with tears in your eyes. Whether you are upgrading the application image or changing its configuration, remember to roll it out as a new gray release and verify it.


In addition, we see that immutable infrastructure brings a new perspective to building on the cloud. Moving in this direction not only makes the architecture safer and more reliable, but also combines well with other major tools, fully leveraging the cloud-native community and letting application services "overtake on the curve" compared with the traditional approach. For example, combining the Wave project above with Istio's weighted routing, one can direct a small amount of traffic to the site to verify the new configuration.


Video link: https://www.youtube.com/watch?v=8P7-C44Gjj8

Server-side Apply




Kubernetes is a declarative resource management system. The user defines the desired state locally, and then uses kubectl apply to update the part of the cluster's current state that the user specifies. However, it is far from as simple as it sounds…


The original kubectl apply is implemented on the client side. Apply cannot simply replace a resource's overall state, because other parties also change the resource: controllers, admission plugins, webhooks. So how do you change a resource without overwriting others' changes, and without them overwriting yours?


Hence the existing 3-way merge: the user's last-applied state is stored in the object's annotations; on the next apply, a 3-way diff is computed among (the live state, the last-applied state, and the state the user now specifies), and a patch is generated and sent to the APIServer. But this is still problematic! Apply's intent is to let each party specify which fields of a resource are under its management.
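To make the 3-way merge concrete, here is a deliberately simplified sketch for flat dicts (the real apply handles nested objects, lists, and strategic merge semantics). The rules it encodes: fields in the desired state are set; fields that were in last-applied but are no longer desired are deleted; fields set only by others (controllers, etc.) are left alone.

```python
def three_way_patch(last_applied: dict, live: dict, desired: dict) -> dict:
    """Simplified 3-way diff for flat dicts; returns a patch for the
    APIServer, with None acting as a deletion marker (as in a JSON
    merge patch)."""
    patch = {}
    # Fields the user wants: include them if the live value differs.
    for key, value in desired.items():
        if live.get(key) != value:
            patch[key] = value
    # Fields the user previously applied but dropped: delete them.
    for key in last_applied:
        if key not in desired and key in live:
            patch[key] = None
    return patch
```

Note what the sketch does *not* do: if some other party changed a field the user also manages, the user's apply silently wins. That silent overwrite is exactly the class of problem described next.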


But the original apply cannot prevent different parties from tampering with each other's fields, and it does not notify users when conflicts need resolving. For example, back when I worked at CoreOS, both the controller shipped with the product and users would modify certain special labels on Node objects; the resulting conflicts caused cluster failures that could only be fixed by sending someone out to repair them.


This Cthulhu-grade dread has hung over every Kubernetes user, and now we finally see the dawn of victory: server-side apply. The APIServer performs the diff and merge operations, and many of the previously fragile behaviors are fixed. More importantly, compared with the old last-applied annotation, server-side apply provides a declarative API (called ManagedFields) to explicitly specify who manages which fields of a resource. When a conflict occurs — for example, kubectl and a controller both change the same field — the non-admin request is rejected with an error prompting the user to resolve the conflict.

Analysis and Comments


Mom no longer needs to worry about my kubectl apply. Although still in Alpha, server-side apply replacing the client-side implementation is only a matter of time. With it, different components modifying the same resource concurrently becomes much safer and more reliable.


In addition, we see that as the system evolves — especially with the widespread use of declarative APIs — logic is moving from the client to the server. The server side has natural advantages: many operations, such as kubectl dry-run and diff, are simpler to implement there; by exposing HTTP endpoints, features like apply become much easier to build into other tools; and putting complex logic on the server side makes its release easier to control, letting users enjoy a safe, consistent, high-quality service.


Video link: https://www.youtube.com/watch?v=1DWWlcDUxtA

GitOps


A discussion session at the conference covered the benefits of GitOps; here is a summary for everyone.


First, GitOps makes the whole team more "democratic". Everything is written down, and anyone who wants to look can. Any change goes through a pull request before release, so you not only know about it clearly, you can also comment and take part in the review. All changes and all discussions are recorded in tools like GitHub, and you can always go back through the history. All of this makes teamwork smoother and more professional.


Second, GitOps makes releases safer and more stable. Code can no longer be released at will; it requires review by one or more responsible persons. When a rollback is needed, the old version is right there in Git. Who released what code and when is all auditable. All of this makes the release process more professional and the release results more reliable.

Analysis and Comments


GitOps is not just the solution to a technical problem; it mainly uses the version history, audit, and permission features of tools like GitHub to make teamwork and the release process more professional and streamlined.


If GitOps becomes widespread, its impact on the whole industry will be enormous. For example, no matter which company you join, anyone could quickly get up to speed releasing code.


GitOps reflects the ideas of "configuration as code" and "Git as the source of truth", and is well worth studying and practicing.


Video link: https://www.youtube.com/watch?v=uvbaxC1Dexc

Automated Canary Rollout


A canary rollout means that during a release, a small portion of traffic is first directed to the new version, and its online behavior is analyzed and verified. If everything is normal, traffic continues to be gradually shifted to the new version, until the old version receives no traffic and is destroyed. As we know, tools like Spinnaker include a manual verify-and-approve step. In fact, this step can be replaced by automated tooling, since the checks are mostly mechanical — for example, checking the p99 latency and the success rate.


Based on this idea, engineers from Amadeus and Datadog shared how to use Kubernetes, Operator, Istio, Prometheus, and other tools to do canary releases. The idea is to abstract the entire automated canary release process into a CRD, so that performing a canary release becomes a matter of writing a declarative YAML file; the Operator receives the user-created YAML file and carries out the complex operational steps. The main steps are:

  1. Deploy the new version of the service (Deployment + Service);
  2. Modify the Istio VirtualService configuration to switch part of the traffic to the new version;
  3. Check the Istio metrics to see whether the new version's success rate and p99 response time meet the conditions;
  4. If they do, upgrade the entire application to the new version; otherwise, roll back.
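Step 3's check is the part that replaces the human approval, and it really is mechanical. A minimal sketch of the decision logic, assuming the Operator has already queried metrics like these from Prometheus via Istio telemetry (the metric names and thresholds here are illustrative):

```python
def canary_decision(metrics: dict,
                    min_success_rate: float = 0.99,
                    max_p99_ms: float = 500.0) -> str:
    """Decide from the canary's observed metrics whether to promote
    the new version or roll back.

    `metrics` is assumed to look like
    {"success_rate": 0.995, "p99_latency_ms": 230.0}.
    """
    healthy = (metrics["success_rate"] >= min_success_rate
               and metrics["p99_latency_ms"] <= max_p99_ms)
    return "promote" if healthy else "rollback"
```

Encoding the acceptance criteria as data like this is what lets the whole release become a declarative YAML file: thresholds go in the CRD spec, and the Operator evaluates them on a loop.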


Coincidentally, Weaveworks also open-sourced an automated canary release tool, Flagger. The difference is that Flagger shifts traffic to the new version step by step — for example, shifting 5% more traffic over at each step — until all traffic has been shifted, and then destroys the old version directly.
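The stepwise shifting can be sketched as a simple loop. This is an illustration of the idea under assumptions, not Flagger's code: `check_canary(weight)` stands in for whatever metric check runs at each traffic weight.

```python
def stepwise_rollout(check_canary, step: int = 5, max_weight: int = 100):
    """Shift traffic to the new version in `step`-percent increments,
    rolling back as soon as any check fails.

    Returns (weights tried, final outcome)."""
    history = []
    weight = 0
    while weight < max_weight:
        weight = min(weight + step, max_weight)
        history.append(weight)
        if not check_canary(weight):
            return history, "rollback"  # restore 100% to the old version
    return history, "promote"           # all checks passed: destroy old version
```

Compared with the one-shot approach above, the incremental version limits the blast radius: a regression that only shows up under real load is caught while it still affects a small slice of traffic.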

Analysis and Comments


Canary once and you feel good once; canary always and you feel good always. Canary releases help improve release success rates and system stability, and are an important part of application management.


In addition, we see that in the cloud-native era, complex operational processes like these are being simplified and standardized. Through CRD abstraction, a lengthy multi-step process becomes a few short API objects exposed to the user. With an Operator performing the automated operations, any user on the standard Kubernetes platform can consume these features directly. Istio and Kubernetes, as top-level standardized platforms, provide powerful foundational capabilities that make it easier for users to get started.
Video link: https://www.youtube.com/watch?v=mmvSzDEw-JI

Final Thoughts


In this article, we took stock of several new pieces of knowledge from this KubeCon related to application management and deployment:

  1. When a configuration file changes, why and how to roll it out as a new application release.
  2. Client-side kubectl apply has many problems, an important one being that different parties can tamper with each other's resource fields. These are solved in server-side apply.
  3. GitOps is not just the solution to a technical problem; it mainly makes teamwork and the release process more professional and streamlined.
  4. Using top-level standardized platforms such as Kubernetes, Operator, Istio, and Prometheus, we can simplify the operational work of canary releases and lower the barrier for developers.


These new ideas fill us with emotion. In the past, we always envied "someone else's infrastructure" — always so good, yet out of reach. Now, open-source projects and technical standards are lowering the barrier to these technologies, so that every developer can use them.


On the other hand, a subtle change is taking place: building basic software "in-house" now faces the law of diminishing marginal returns, which is why more and more companies, Twitter among them, are joining the cloud-native camp. Embracing the open-source ecosystem and technical standards has become both an opportunity and a challenge for Internet companies today. Build an application-centric, cloud-native architecture, harness the power of the cloud and open source, and be fully prepared to set sail in this cloud transformation.


Source: yq.aliyun.com/articles/704408