039. [Repost] Microservices Practice Based on Kubernetes and Spring Cloud

http://dockone.io/article/2967


 

Editor's Note

The NetEase Cloud container platform aims to provide teams implementing microservice architectures with a complete solution and a closed-loop user experience. To that end, starting in early 2016, our in-house container service team was the first to dogfood the platform, testing whether the container cloud platform could support the container service's own microservice architecture. This has been a very interesting experiment.

Once you decide to adopt a microservice architecture, many practical problems confront you: technology selection, service decomposition, high availability, inter-service communication, service discovery and governance, cluster fault tolerance, configuration management, data consistency, Conway's Law, distributed tracing, CI/CD, microservice testing, scheduling and deployment, and so on. None of these can be solved with a few simple tricks. There are countless ways to practice microservice architecture; we explore and practice one of the possibilities, hoping to offer you a reference. This is the first article in the series "Microservices Practice on the NetEase Cloud Container Platform".

Docker container technology has moved past its early hype and is gradually being adopted by major companies and engineering teams. Although by now most of us accept the idea that "the image is the standard unit of application delivery, and the container is the standard application runtime environment", a considerable number of people are still unsure how to put container technology into practice: how to run large-scale online applications on it, and how to use it to genuinely liberate productivity and improve the efficiency and quality of software delivery. The answer lies in the application architecture.

Microservice architecture was not born of Docker container technology, but it owes much of its popularity to it. Container technology provides a consistent means of distribution and isolated runtime environments, so a microservice architecture delivers its maximum value only when services are delivered as containers. Conversely, microservice architecture introduces great complexity, and only by applying containers together with container orchestration and scheduling can a team avoid a collapse in operational efficiency. Container technology and microservice architecture are thus mutually complementary.

The predecessor of the NetEase Cloud container platform was NetEase's automated application deployment platform (OMAD), which used the infrastructure provided by IaaS to manage the entire application lifecycle, including building and deployment. In 2014, as container technology represented by Docker entered public view, we were pleasantly surprised to find that container technology was the most important missing piece in the evolution of the automated deployment platform from a tool into a platform. Originally, users had to initialize hosts themselves before building and deploying applications with the platform. After container technology was introduced, the entire flow from feature development through testing to one-click deployment no longer requires users to care about host initialization, inter-host communication, instance scheduling, or anything else beyond application delivery itself. For believers in DevOps, this is simply the gospel.

Starting in 2015, we explored best practices for container technology. The product form evolved from the original "fat container" and container clusters, through the later distinction between stateful and stateless services, to today's new computing and high-performance computing offerings; throughout, we have kept rethinking and enriching the application scenarios of container technology. However the product form has changed, the core concept of the container cloud platform has always been "microservices": providing a high-performance container cluster management solution through the microservice abstraction, with support for elastic scaling, vertical scaling, gray (rolling) upgrades, service discovery, service orchestration, failure recovery, performance monitoring, and other capabilities, so that users can improve application delivery efficiency and respond quickly to changing business needs. The NetEase Cloud container platform aims to provide teams implementing microservice architectures with a complete solution and a closed-loop user experience. For this purpose, from the beginning of 2016 our in-house container service team was the first to dogfood the platform: on the one hand to test whether the container cloud platform could support the container service's own microservice architecture, and on the other hand to feed the experience of practicing microservices back into the product design of the container cloud platform. This has been a very interesting attempt, and it is also why we want to share the container cloud platform's microservice architecture practice.

Before discussing the microservice architecture practice of the container service, it is worth giving a general introduction to the NetEase Cloud container service. The NetEase Cloud container service team manages 30+ microservices in a DevOps manner, with 400+ builds and deployments per week. Logically, the NetEase Cloud container service architecture consists of four layers; from bottom to top they are the infrastructure layer, the Docker container engine layer, the Kubernetes (hereafter K8s) container orchestration layer, and the DevOps and automation tooling layer:

[Figure: the four-layer architecture of the NetEase Cloud container service]


The overall business architecture of the container cloud platform is as follows:

[Figure: overall business architecture of the container cloud platform]


Setting aside the specifics of each business and looking only at service characteristics, the container service can be divided into the following types (microservice examples in parentheses):

  1. User-facing (OpenAPI service gateway) vs. service-facing (bare-metal service)
  2. Synchronous communication (user center) vs. asynchronous communication (build service)
  3. Strong data consistency required (etcd sync service) vs. eventual consistency acceptable (resource reclamation service)
  4. Throughput-sensitive (log service) vs. latency-sensitive (real-time service)
  5. CPU-intensive (signing and authentication center) vs. network-I/O-intensive (image registry)
  6. Online business (web service) vs. offline business (image check)
  7. Batch jobs (billing log push) vs. scheduled tasks (distributed scheduled task service)
  8. Long-lived connections (WebSocket gateway service) vs. short-lived connections (hook service)
  9. ……


Once you decide to adopt a microservice architecture, many practical problems confront you: technology selection, service decomposition, high availability, inter-service communication, service discovery and governance, cluster fault tolerance, configuration management, data consistency, Conway's Law, distributed tracing, CI/CD, microservice testing, scheduling and deployment, and so on. None of these can be solved with a few simple tricks.

For a container service whose primary programming language is Java, pairing Spring Cloud with K8s is a natural choice. Both Spring Cloud and K8s are excellent frameworks for developing and running microservices. From the perspective of the application lifecycle, K8s covers a broader scope, especially resource management, application orchestration, deployment, and scheduling, for which Spring Cloud offers nothing. Functionally, the two overlap to some degree, for example in service discovery, load balancing, configuration management, and cluster fault tolerance, but their approaches to these problems are completely different. Spring Cloud targets developers, who must consider every aspect of the microservice architecture at the code level; K8s targets DevOps engineers and provides general-purpose solutions, trying to solve microservice-related problems at the platform layer and shield developers from the complexity. Take service discovery as a simple example: Spring Cloud offers the traditional registry-based solution, Eureka, which requires developers to operate the Eureka servers while modifying both service consumers and providers to integrate with the registry; developers must care about every detail of Eureka-based service discovery. K8s, by contrast, provides a decentralized solution: it abstracts the Service and solves service exposure and discovery through DNS + ClusterIP + iptables, which is completely non-intrusive for both service providers and consumers.
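To make the contrast concrete, here is a minimal sketch of the K8s approach (all names are illustrative, not the platform's real services): a Service selects provider Pods by label and gives consumers a stable DNS name and ClusterIP, with no registry client code on either side.

```yaml
# Hypothetical user-service: consumers simply call http://user-service
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service     # matches the labels of the provider's Pods
  ports:
    - port: 80            # stable virtual port on the ClusterIP
      targetPort: 8080    # container port of the application
```

Consumers resolve `user-service` via cluster DNS; kube-proxy translates the ClusterIP to a healthy Pod via iptables, so neither side links any discovery library.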

For technology selection we had our own considerations, preferring the more stable solution in each case, since stability is the lifeline of cloud computing. We are not "K8s fundamentalists": for the aspects of microservice architecture mentioned above, some we chose to implement on K8s, such as service discovery, load balancing, high availability, cluster fault tolerance, and scheduling and deployment; some use the solutions provided by Spring Cloud, such as synchronous inter-service communication; some combine the strengths of both, such as service fault isolation and circuit breaking; and of course some are based on mature third-party solutions or systems we built ourselves, such as configuration management, log collection, distributed tracing, and flow control.

The biggest improvement from using K8s to manage microservices shows up in scheduling and deployment efficiency. In our current situation, different services must be deployed into different machine rooms and clusters (integration, testing, staging, production, etc.) with different hardware and software requirements (memory, SSD, security, overseas access acceleration, etc.); such requirements are hard to satisfy with traditional automation tools. With K8s, Node hosts are managed via Labels: we only specify the service's attributes (Pod labels), and the K8s scheduler automatically places the service onto Nodes whose labels match, which is simple and efficient. The built-in rolling-update strategy, combined with health checks (liveness and readiness probes) and lifecycle hooks, achieves zero-downtime updates and rollbacks; with appropriate parameters, blue-green and canary deployments can also be implemented. For cluster fault tolerance, K8s maintains the desired number of service replicas through replication controllers: whether a service instance fails (process exits abnormally, is OOM-killed, etc.) or a Node host fails (system, hardware, or network failure), the replica count is always restored to the specified number.
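The mechanisms above can be sketched in a single Deployment manifest (a hedged illustration; the service name, labels, image, and probe paths are assumptions, not the platform's real configuration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 3                      # K8s restores 3 replicas after any failure
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0            # zero-downtime rolling update
      maxSurge: 1
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service          # the Pod label the scheduler matches
    spec:
      nodeSelector:
        disktype: ssd              # schedule only onto Nodes labeled disktype=ssd
      containers:
        - name: user-service
          image: registry.example.com/user-service:1.0.0
          readinessProbe:          # traffic is routed only after this succeeds
            httpGet:
              path: /health
              port: 8080
          livenessProbe:           # the container is restarted if this fails
            httpGet:
              path: /health
              port: 8080
```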

[Figure: K8s label-based scheduling and deployment]


Docker's layered images creatively solved the consistency problem between an application and its runtime environment, but in general a service's configuration differs across environments. These differences mean that an image built in the development environment cannot be used directly in the test environment, an image QA verified in testing cannot be deployed directly to production, and so on, so the Docker image for each environment has to be rebuilt. The way to solve this is to extract the configuration and inject it as environment variables when the Docker container starts. K8s offers ConfigMap for exactly this, but that approach has a problem: configuration changes do not take effect in real time. We instead use Disconf, a centralized configuration service. With configuration centrally managed, a container image built in the development environment can be submitted directly to the test environment; after QA verifies it, it moves on to the drill, staging, and production environments. On one hand this avoids repeated application packaging and image building; on the other, it truly achieves consistency between online and offline applications.
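A minimal sketch of the env-var injection idea using ConfigMap (names and values are illustrative); note that, as described above, this is the approach whose changes do not take effect in real time, which is why we moved to a configuration center:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-service-config
data:
  DB_URL: "jdbc:mysql://db.test.internal:3306/users"   # differs per environment
---
apiVersion: v1
kind: Pod
metadata:
  name: user-service
spec:
  containers:
    - name: user-service
      image: registry.example.com/user-service:1.0.0    # same image in every env
      envFrom:
        - configMapRef:
            name: user-service-config   # injected as env vars at container start
```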

Spring Cloud Hystrix plays an important role in our microservice governance. We have extended it to provide more flexible fault-isolation, degradation, and circuit-breaking policies, meeting the special business needs of services such as the API gateway. In-process fault isolation is only one side of service governance; on a host where multiple applications are co-located, the applications should also be isolated from one another, so that processes do not fight over resources and hurt the business SLA. For example, a runaway offline application must never be allowed to hog CPU and affect online applications on the same host. We use K8s to limit the resource quota of containers at runtime (chiefly CPU and memory limits), achieving fault and anomaly isolation between processes. The cluster fault tolerance, high availability, and process isolation provided by K8s, combined with the fault isolation and circuit breaking of Spring Cloud Hystrix, put the "Design for Failure" philosophy nicely into practice.
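The per-container quota idea can be sketched as follows (a fragment of a Pod spec; the service name and values are illustrative assumptions):

```yaml
# Resource quotas keep a co-located offline job from starving online services
containers:
  - name: offline-image-check
    image: registry.example.com/image-check:1.0.0
    resources:
      requests:
        cpu: "500m"       # guaranteed share, used for scheduling decisions
        memory: "512Mi"
      limits:
        cpu: "1"          # CPU is throttled above 1 core
        memory: "1Gi"     # the container is OOM-killed above 1 GiB
```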

How well services are split directly determines how much benefit a microservice architecture delivers. The difficulties of service splitting usually lie in unclear business boundaries, hard-to-refactor legacy systems, data consistency, Conway's Law, and the like. In our experience, the first two problems share the same solution: 1) only split out businesses with a definite boundary that can stand alone; 2) service splitting is essentially the splitting of the data model; the upper application layers can withstand churn, but the underlying data model cannot. For a business with a fuzzy boundary, even if it must be split, split only the application and not the database.

The following are the example steps by which we smoothly split the user service out of the main project:

[Figure: splitting the user service out of the main project]

 

  1. Move the user-related UserService and UserDAO out of the main project, add UserController, UserDTO, and so on, to form the user service, which exposes an HTTP RESTful API.
  2. In the main project, replace the user-related UserService class with a UserFacade class that calls the user-service API via Spring Cloud Feign annotations.
  3. Change every place in the main project that depends on the UserService interface to depend on the UserFacade interface instead, for a smooth transition.


After these three steps, the user service stands alone as a microservice, while the complexity of the overall system code barely increases.
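Steps 2 and 3 can be sketched in plain Java (a hedged illustration: in the real project the remote implementation would be a Spring Cloud Feign client; here the HTTP call is represented by a pluggable function so the sketch stays self-contained, and the method names are assumptions):

```java
import java.util.function.Function;

class UserFacadeDemo {

    // Callers in the main project depend only on this interface (step 3).
    public interface UserFacade {
        String getUserName(long userId);
    }

    // Before the split: in-process implementation backed by UserDAO.
    public static class LocalUserService implements UserFacade {
        public String getUserName(long userId) {
            return "local-user-" + userId;
        }
    }

    // After the split: delegates to the user service's HTTP RESTful API (step 2).
    public static class RemoteUserFacade implements UserFacade {
        private final Function<Long, String> httpGet; // stands in for the Feign call
        public RemoteUserFacade(Function<Long, String> httpGet) {
            this.httpGet = httpGet;
        }
        public String getUserName(long userId) {
            return httpGet.apply(userId); // e.g. GET /users/{id}/name
        }
    }

    // Caller code is unchanged when the implementation is swapped.
    public static String greet(UserFacade users, long userId) {
        return "hello, " + users.getUserName(userId);
    }
}
```

Because callers see only the interface, swapping the in-process implementation for the remote one is invisible to them, which is what keeps the split smooth.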

Data consistency problems exist in any distributed system, and a microservice architecture amplifies them, which underlines from another angle the importance of splitting the business sensibly. Most of the data-consistency scenarios we encounter can accept eventual consistency. "Scheduled-task retry + idempotence" is a Swiss Army knife for this class of problems, so we developed a business-agnostic "distributed scheduled task + reliable event" framework: any operation whose data must eventually be consistent is defined as an event, covering scenarios such as user initialization, instance rebuilding, resource reclamation, and log indexing. Take user initialization as an example. After a user registers, the account must be initialized; initialization is a time-consuming asynchronous operation that includes tenant initialization, network initialization, quota initialization, and so on, coordinated across different systems. We define it as an initTenant event, store the event and its context in a reliable-event table, and let the distributed scheduled task trigger its execution; on success the event record is deleted, and on failure the scheduled-task system triggers it again. For scenarios with higher real-time requirements, the event can be processed once immediately and then stored in the reliable-event table. Every event handler must be implemented to support idempotent execution; there are several ways to achieve this, and we have used boolean status flags, deduplication by UUID, and version-number-based CAS. We will not expand on these here.
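The retry-plus-idempotence loop can be sketched in plain Java under simplifying assumptions: the reliable-event table is an in-memory map, the distributed scheduled task is a plain `tick()` call, and idempotence is guarded by a processed-ID set (the names, including initTenant, are illustrative):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class ReliableEvents {

    public interface Handler { boolean handle(String context); } // true = success

    private final Map<String, String> pending = new ConcurrentHashMap<>(); // the "reliable-event table"
    private final Set<String> processed = ConcurrentHashMap.newKeySet();   // idempotence guard
    private final Handler handler;

    public ReliableEvents(Handler handler) { this.handler = handler; }

    // Store the event and its context durably before (or right after) the first attempt.
    public void submit(String eventId, String context) {
        pending.put(eventId, context);
    }

    // One tick of the scheduled task: retry every pending event; delete on success.
    public void tick() {
        for (Map.Entry<String, String> e : pending.entrySet()) {
            if (processed.contains(e.getKey())        // already done: skip, stay idempotent
                    || handler.handle(e.getValue())) {
                processed.add(e.getKey());
                pending.remove(e.getKey());            // clear the reliable-event record
            }                                          // on failure, keep it for the next tick
        }
    }

    public boolean isPending(String eventId) { return pending.containsKey(eventId); }
}
```

A handler that fails transiently is simply retried on the next tick, and the processed-set (in practice, a status flag, UUID dedup, or version-based CAS) makes a duplicate trigger harmless.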

[Figure: the "distributed scheduled task + reliable event" framework]


When business boundaries conflict with the organization, our practical experience is to prefer service boundaries that match the organizational structure. This also accords with Conway's Law, which says that the architecture of a system mirrors the communication structure of the organization that builds it; the shape of the organization inevitably constrains the architecture of the software it produces. Designs that go against Conway's Law easily produce blind spots and "nobody's responsibility" situations in which teams shirk problems onto each other; we have run into this both between teams and within a single team.

This is the first article in the series "Microservices Practice on the NetEase Cloud Container Platform". It has described the relationship between container technology and microservice architecture, our goal in building the container cloud platform, and briefly introduced NetEase Cloud's practical experience with microservices based on containers, Kubernetes, and Spring Cloud. Due to space limitations, some aspects of microservice architecture were not expanded here, such as inter-service communication, service discovery and governance, and configuration management; others were not mentioned at all, such as distributed tracing, CI/CD, and microservice testing. We will share practical experience in these areas in subsequent articles of the series. There are countless ways to practice microservice architecture; we explore and practice one of the possibilities, hoping to offer you a reference.


Origin www.cnblogs.com/badboyh2o/p/11495226.html