[Reprint] Kubernetes deployment strategy

Which Kubernetes deployment strategy do you use most often?

https://www.sohu.com/a/318731931_100159565?spm=smpc.author.fd-d.78.1574127778732l07n8g2

 

Learn which Kubernetes deployment method is best suited for introducing continuous updates without affecting your users. One of the greatest challenges of cloud-native application development today is speeding up deployment. With a microservices approach, developers are already working with, and designing, fully modular applications that allow several teams to write and deploy changes to an application at the same time.

Shorter and more frequent deployment offers the following advantages:

  • Shorter time to market.
  • Customers get to take advantage of new features sooner.
  • Customer feedback flows back to the product team faster, which means the team can iterate on features and fix problems more quickly.
  • Developer morale improves, because more of their work makes it into production.

But with more frequent releases, the chances of hurting application reliability or the customer experience also increase. This is why DevOps teams must develop processes and deployment management strategies that minimize risk to the product and to customers.

The following discusses Kubernetes deployment strategies, including rolling deployments and more advanced methods such as canary releases and their variants.

Deployment Strategy

Depending on your goals, you can use several different types of deployment strategies. For example, you may need to promote a change to a specific environment for further testing, or roll it out to a subset of users or customers, or you may want to do some user testing before making a feature "generally available".

Rolling deployment

A rolling deployment is the default deployment strategy in Kubernetes. It works by slowly replacing the pods of the previous version of the application with pods of the new version, one by one, without any cluster downtime.

A rolling update waits for new pods to become ready, as reported by their readiness probes, before it starts scaling down the old ones. If a problem arises, the rolling update can be paused or rolled back without taking down the entire cluster. In the YAML definition file for this kind of deployment, the new image simply replaces the old image.

By adjusting the parameters in the manifest file, you can further tune the rolling update:
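The standard knobs are maxSurge and maxUnavailable. A minimal sketch of such a Deployment (the name my-app, the image tag, and the parameter values are illustrative, not taken from the original article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod above the desired count during the update
      maxUnavailable: 0    # never remove a ready pod before its replacement is ready
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:2.0.0        # the new image replaces the old one here
          readinessProbe:            # the rollout waits for this probe before scaling down old pods
            httpGet:
              path: /healthz
              port: 8080
```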

Recreate deployment

In this very simple type of deployment, all of the old pods are killed at once and then replaced, all at once, with new pods.

The manifest looks like this:
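The original manifest is not reproduced in this reprint; a minimal sketch of a Recreate-style Deployment (names and image tag are illustrative) would be:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: Recreate        # terminate all old pods first, then create the new ones
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:2.0.0
```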

Blue/green (or red/black) deployment

In a blue/green deployment strategy (sometimes called red/black), the old version of the application (green) and the new version (blue) are deployed at the same time. While both are deployed, users only have access to the green version, whereas the blue version is available to the QA team for test automation on a separate service or via direct port forwarding.

After the new version has been tested and signed off for release, the service is switched over to the blue version, and the old green version is scaled down:
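With plain Kubernetes objects, one common way to do the switch is to repoint the Service's label selector from the old version to the new one. A sketch, assuming the two versions are distinguished by a version label (names and labels are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue      # was "green"; changing this selector sends all traffic to the new pods
  ports:
    - port: 80
      targetPort: 8080
```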

Canary deployment

A canary deployment is somewhat like a blue/green deployment, but it is more controlled and uses a more phased, "progressive delivery" approach. A number of strategies fall under the canary umbrella, including dark launches and A/B testing.

Canaries are typically used on the backend of an application when you want to test some new functionality. Traditionally, you might have had two almost identical servers: one serving all users, and another with the new features that is rolled out to only a subset of users, so the two can be compared. If no errors are reported, the new version is gradually rolled out to the rest of the infrastructure.

While this strategy can be implemented with plain Kubernetes resources by swapping old and new pods, it is far more convenient and easier to implement with a service mesh such as Istio.

For example, you could have two different manifests checked into Git: a GA version tagged 0.1.0 and a canary tagged 0.2.0. By changing the weights in the Istio virtual gateway manifest, the percentage of traffic routed to each of the two deployments can be managed.
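As a sketch of what that weighting looks like with an Istio VirtualService (the host, subset names, and weights are illustrative; the subsets themselves would be defined in a matching DestinationRule):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app
  http:
    - route:
        - destination:
            host: my-app
            subset: v1      # GA release 0.1.0
          weight: 90
        - destination:
            host: my-app
            subset: v2      # canary release 0.2.0
          weight: 10
```

Shifting the two weights (which must sum to 100) moves traffic gradually from the GA release to the canary.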

Canary deployments with Weaveworks Flagger

A simple and effective way to manage canary deployments is to use Weaveworks Flagger.

With Flagger, the promotion of a canary deployment is automated. It uses Istio or App Mesh for routing and shifting traffic, and Prometheus metrics for canary analysis. The canary analysis can also be extended with webhooks to run acceptance tests, load tests, or any other type of custom validation.

Flagger takes a Kubernetes Deployment and, optionally, a horizontal pod autoscaler (HPA), and creates a series of objects (Kubernetes Deployments, ClusterIP Services, and Istio or App Mesh virtual services) to drive the canary analysis and promotion.

By implementing a control loop, Flagger gradually shifts traffic to the canary while measuring key performance indicators such as the HTTP request success rate, the average request duration, and pod health. Based on the analysis of these KPIs, the canary is either promoted or aborted, and the analysis results are published to Slack.
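To make this concrete, here is a rough sketch of a Flagger Canary resource (the names and thresholds are illustrative, and exact field names can vary between Flagger versions):

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-app
spec:
  targetRef:                       # the Deployment that Flagger controls
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  autoscalerRef:                   # optional HPA
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    name: my-app
  service:
    port: 80
  analysis:
    interval: 1m                   # how often the KPIs are checked
    threshold: 5                   # failed checks tolerated before rollback
    maxWeight: 50                  # maximum traffic share shifted to the canary
    stepWeight: 10                 # traffic increase per iteration
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99                  # at least 99% of requests must succeed
        interval: 1m
      - name: request-duration
        thresholdRange:
          max: 500                 # mean request duration must stay under 500 ms
        interval: 1m
```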

Dark deployments or A/B deployments

A dark deployment is another variation on the canary (and one that, incidentally, can also be handled by Flagger). The difference between a dark deployment and a canary is that dark deployments deal with features in the frontend rather than the backend.

Another name for a dark deployment is A/B testing. Rather than launching a new feature for all users, you release it to a small subset of them. The users are typically unaware that they are being used as testers for the new feature, hence the term "dark" deployment.

With the use of feature toggles and other tools, you can monitor how users interact with the new feature, whether it is converting users, whether they find the new UI confusing, and other kinds of metrics.

Flagger and A/B deployments

Besides weighted routing, Flagger can also route traffic to the canary based on HTTP match conditions. In an A/B testing scenario, you would use HTTP headers or cookies to target a specific segment of your users. This is particularly useful for frontend applications that require session affinity.
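A sketch of the analysis section of a Canary resource for such an A/B scenario (the header name and cookie pattern are illustrative, following the style of Flagger's examples) replaces the weight stepping with a fixed number of iterations and HTTP match conditions:

```yaml
analysis:
  interval: 1m
  threshold: 5
  iterations: 10                 # fixed number of analysis rounds instead of weight stepping
  match:
    - headers:
        x-canary:                # users sending this header are routed to the new version
          exact: "insider"
    - headers:
        cookie:                  # or users carrying this cookie
          regex: "^(.*?;)?(canary=always)(;.*)?$"
```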

 

So, which Kubernetes deployment strategy do you use most often? Tell us about it.


Source: www.cnblogs.com/jinanxiaolaohu/p/11888294.html