A basic introduction to cloud-native Istio


1 What is Istio


Website: https://istio.io/

A service mesh is an independent infrastructure layer that handles communication between services. Modern cloud-native applications are service systems built from a variety of complex technologies, and the service mesh is responsible for reliable request delivery among these components. A typical service mesh today provides a set of lightweight network proxies that are deployed and run alongside the application, without the application needing to be aware of them.

As mentioned earlier, Istio has a distinguished pedigree: it was jointly launched by Google, IBM, and Lyft in May 2017. Its initial design goal was to provide, in a non-intrusive way on top of Kubernetes, traffic management, security hardening, service monitoring, policy management, and other functions for microservices running in clusters.

Istio helps reduce deployment complexity and relieves pressure on development teams. It is a fully open-source service mesh that layers transparently onto existing distributed applications. It is also a platform, with APIs that let it integrate with any logging, telemetry, or policy system. Istio's diverse feature set lets us run distributed microservice architectures successfully and efficiently, and provides a unified way to secure, connect, and monitor microservices.

**Traditional Spring Cloud microservice project**


**Microservice project based on Istio architecture**


Istio is built on the Sidecar model, with a separate data plane and control plane, and is the mainstream Service Mesh solution.

2 Istio features

Chinese site: https://istio.io/zh/

  • Connection: intelligently manage the traffic generated by calls between services within the mesh, and on that basis provide strong guarantees for deploying, testing, and upgrading microservices.

  • Security: provide authentication, encryption, and authorization support for calls between services within the mesh, hardening existing services and improving their security without touching their code.

  • Policies: define policies on the control plane and enforce them in the services.

  • Observability: trace and measure calls between services to obtain service status information.

These features are described in detail below.

2.1 Connection

Microservices are intricately interconnected, and to achieve their business goals, connectivity is the first problem to solve. Connections exist throughout the entire lifecycle of every service and keep the services running, which makes them a top priority.
Compared with traditional monolithic applications, the number of endpoints in a microservice system grows dramatically. Modern application systems run different versions of the same service during part or all of their lifecycle, serving different customers, scenarios, or lines of business. Different versions of the same service may also have different access requirements, which has even spawned new methodologies such as testing in production. These intricate service relationships are a serious challenge for developers.

For the common business scenarios of today, here is a simple schematic diagram describing the connection functions of a Service Mesh:

[Figure: connection scenarios in a service mesh]

From the point of view of different external users, they all access the same service port, but based on user identity they are actually routed to different versions of service A. Inside the mesh, version 1 of service A may access both versions of service B, while version 2 of service A only accesses version 1 of service B; version 1 of service B needs to access an external cloud service, while version 2 does not.
This simplified model contains the following requirements (a configuration sketch follows the list):
◎ calls inside the mesh (service A → service B);
◎ outbound connections (service B → external cloud service);
◎ inbound connections (user → service A);
◎ traffic splitting (services A and B each handle only the traffic requests relevant to themselves);
◎ routing by the caller's service version (version 1 of service A calls both version 1 and version 2 of service B);
◎ routing by user identity.
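
To make the last requirement concrete, here is a minimal sketch of how routing by user identity could be expressed with an Istio VirtualService. This is an illustration, not configuration from the original diagram: the host `service-a`, the subsets `v1`/`v2` (assumed to be defined in a matching DestinationRule), and the `end-user` header are all assumptions.

```yaml
# Hypothetical rule: requests carrying the header end-user=tester go to
# v2 of service A; all other requests keep going to v1.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-a
spec:
  hosts:
  - service-a
  http:
  - match:
    - headers:
        end-user:            # identify the caller by a request header
          exact: tester
    route:
    - destination:
        host: service-a
        subset: v2
  - route:                   # default route for everyone else
    - destination:
        host: service-a
        subset: v1
```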

Beyond these requirements, there are some latent needs, as follows.
(1) How to route calls between services inside the mesh according to actual needs; the routing conditions may include:
    ◎ the source and destination services of the call;
    ◎ the content of the call;
    ◎ the authenticated identity.
(2) How to handle network failures or service failures.
(3) How to handle the relationships between different versions of different services.
(4) How to control outbound connections.
(5) How to accept inbound connections to start the whole downstream service chain.

These are of course not all of the problems; among them, the traffic-related issues raise several key functional requirements, as follows.
(1) Service registration and discovery: different services and versions in the mesh must be accurately identified, and services must be able to find each other through the same registry in a commonly agreed way.
(2) Load-balancing policies: different types of services should use different policies to meet different needs.
(3) Service traffic characteristics: on top of service registration and discovery, calls are distinguished by the identities of both parties and by the traffic characteristics of the service.
(4) Dynamic traffic distribution: based on the identified traffic characteristics, traffic is steered between different services and versions.
A sketch of such configuration follows.
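
As a hedged sketch of requirements (2) and (4), the following Istio resources declare a load-balancing policy and a weighted traffic split between two versions of a service; the host `service-b`, the version labels, and the 90/10 split are illustrative assumptions.

```yaml
# DestinationRule: names the versions (subsets) of service B and sets a
# per-service load-balancing policy.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-b
spec:
  host: service-b
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
# VirtualService: dynamic traffic distribution between the two versions.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-b
spec:
  hosts:
  - service-b
  http:
  - route:
    - destination:
        host: service-b
        subset: v1
      weight: 90
    - destination:
        host: service-b
        subset: v2
      weight: 10
```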

Connection is the most fundamental step, from zero to one, in adopting a service mesh.

2.2 Security

Security is an evergreen topic. In the old world of private infrastructure plus monolithic applications the problem was not prominent, but in the container-cloud era the following issues emerge.
(1) A large number of containers float around in the container cloud, and traditional network policies struggle to cope with such floating applications.
(2) Enforcing consistent access control across microservices implemented in different languages and on different platforms is often difficult because the implementations are inconsistent.
(3) In a shared cluster, service authentication and encryption become especially important, for example:
    ◎ communication between services must be protected from eavesdropping by other services;
    ◎ only clients that present a valid identity may access a given service;
    ◎ access between services should be controlled at a finer granularity.


In short, providing security inside the mesh requires encrypted service communication, service identity authentication, and service access control (authorization and authentication).
These capabilities usually require digital certificates, which implies the need for a CA: certificates must be issued, distributed, and rotated.
Beyond these core requirements, there are supporting concerns such as handling authentication failures and integrating external certificates (a unified CA).
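
As an illustration of these security capabilities, here is roughly how Istio releases from 1.5 onward (slightly newer than the Mixer-era architecture described elsewhere in this article) declare mesh-wide mTLS plus a fine-grained access rule; the namespace, service-account, and label names are assumptions.

```yaml
# Require mutual TLS for all workloads in the mesh: traffic encryption
# plus service identity authentication.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
---
# Fine-grained access control: only the (hypothetical) service account
# of service A may issue GET requests to service B.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: service-b-viewer
  namespace: default
spec:
  selector:
    matchLabels:
      app: service-b
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/service-a"]
    to:
    - operation:
        methods: ["GET"]
```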

2.3 Policies

Through dynamically pluggable, extensible policies, Istio implements access control, rate limiting, quota management, and other functions, so that resources are fairly distributed among consumers.

Istio uses Mixer as the policy enforcer. Logically, every call made through Envoy goes through Mixer for a precondition check beforehand and a report afterwards, which gives Mixer partial control over the traffic. Istio also ships numerous in-process and out-of-process adapters that work with external software to define and enforce policies.

A brief introduction to the components follows; they are described in detail later.

Mixer: Mixer performs access control and policy enforcement across the service mesh and collects telemetry data from Envoy proxies and other services.

Envoy: Envoy is the proxy used in the Istio framework. It is written in C++ and mediates all inbound and outbound traffic for every service in the mesh; it is the only component that works on the data plane.
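
For illustration, here is a Mixer-era policy sketch (the `config.istio.io/v1alpha2` resources below existed in Istio 1.1 through 1.4; Mixer was deprecated in 1.5, and the app labels here are hypothetical): a denier adapter rejects calls from one workload to another.

```yaml
# Handler: a denier adapter that returns PERMISSION_DENIED (gRPC code 7).
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: denyhandler
spec:
  compiledAdapter: denier
  params:
    status:
      code: 7
      message: Not allowed
---
# Instance: the denier needs no input, so use the checknothing template.
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: denyrequest
spec:
  compiledTemplate: checknothing
---
# Rule: apply the denial when (hypothetical) service-a calls service-b.
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: deny-service-a
spec:
  match: destination.labels["app"] == "service-b" && source.labels["app"] == "service-a"
  actions:
  - handler: denyhandler
    instances: [ denyrequest ]
```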

2.4 Observability

As the number of services grows, the demand for monitoring and tracing naturally rises with it. In many cases observability guarantees are an important part of system functionality and an important safeguard for operations.
As cheap servers (compared with traditional minicomputers) become more numerous, server failures become more frequent, and a debate has arisen: should we treat servers as cattle or as pets? Cattle are functional, interchangeable, and have no individual identity; pets are functional but hard to replace precisely because of their individual identity.
We increasingly treat servers as identity-less, replaceable infrastructure: if a host fails, we simply replace it, and what we care about more is the overall quality of the service. Therefore, besides traditional host monitoring, microservice monitoring pays much more attention to high-level service health monitoring.
Service health is usually not a black-or-white discrete value but a series of continuous states; for example, we often need to watch a service's call success rate, response time, call volume, and transfer volume.
Moreover, with so many services, we should be able to sample, collect, and aggregate metrics at various levels and layers to obtain precise, detailed operating data, and finally summarize and display them by appropriate means.
At the same time, the service mesh should also provide distributed tracing to follow the call chains between services.

Observability: dynamically obtain service operating data and output, providing powerful capabilities for call-chain tracing, monitoring, and call-log collection. Combined with visualization tools, this makes it easy for operators to understand the state of services and to find and fix problems.
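
As a sketch of how such metrics were wired up in Mixer-era Istio (1.1 through 1.4; newer releases generate the standard metrics directly in the Envoy proxy, and the names below are illustrative), an instance describes what to measure, a handler exports it to Prometheus, and a rule binds the two:

```yaml
# Instance: count each request, labeled by source, destination and code.
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcount
  namespace: istio-system
spec:
  compiledTemplate: metric
  params:
    value: "1"
    dimensions:
      source: source.workload.name | "unknown"
      destination: destination.workload.name | "unknown"
      response_code: response.code | 200
    monitored_resource_type: '"UNSPECIFIED"'
---
# Handler: expose the metric through the Prometheus adapter.
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: promhandler
  namespace: istio-system
spec:
  compiledAdapter: prometheus
  params:
    metrics:
    - name: request_count
      instance_name: requestcount.instance.istio-system
      kind: COUNTER
      label_names:
      - source
      - destination
      - response_code
---
# Rule: send every request through the Prometheus handler.
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: promrule
  namespace: istio-system
spec:
  actions:
  - handler: promhandler
    instances: [ requestcount ]
```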

3 Istio and Service Governance

Istio is a service governance platform. What it governs is access between services: as long as there is access, it can be governed. It does not care whether the callee is a so-called microservice, nor does it require the code to be written in a microservice style; governing a monolithic application with Istio works perfectly well. Still, when people hear "service governance", the first thing that comes to mind is "service governance for microservices", so let us start there.

3.1 Three Forms of Service Governance

The evolution of service governance has gone through at least the following three forms.

Form 1: governance logic embedded in the application

In the process of adopting microservices, once services are split apart you discover a pile of trouble: even basic business connectivity becomes a problem. Governance logic, such as finding the instances of the peer service and choosing one instance to send the request to, must be implemented in your own code. This approach is simple and has few external dependencies, but it leads to a lot of duplicated code. The more microservices there are, the more code is duplicated and the harder maintenance becomes; moreover, business code and governance logic are coupled, so whether you want to upgrade the governance logic globally or upgrade the business, you have to change the same code.

As shown below

[Figure: governance logic embedded in each application]

Form 2: governance logic in an independent library (SDK)

To solve the problems of the first form, it is natural to extract the common governance logic into a shared library used by all microservices. Once these governance capabilities are included in a development framework, any code written with that framework gets them. The classic service governance framework of this kind is Spring Cloud, and tools of this form have been very widely used for quite a while.
Although the SDK model decouples business and governance logic at the code level, the business code still has to be compiled together with the SDK, and both still run in the same process. This causes several problems. The business code must use the same language as the SDK, i.e. language lock-in; for example, Spring Cloud and most similar governance frameworks are based on Java and therefore apply only to services written in Java, and customers frequently complain that their services written in other languages have no corresponding framework. In addition, upgrading the governance logic requires upgrading the user's entire service even when the business logic has not changed, which is very inconvenient for users.

As shown below

[Figure: governance logic in a shared development framework (SDK)]

Form 3: governance logic in an independent process

The SDK model still intrudes into the user's code, so we decouple one more layer and strip the governance logic out of the business code entirely: this is the Sidecar model mentioned earlier. In this form, the user's business code and the governance logic run as separate processes, with no coupling in either code or runtime, which makes the approach independent of the development language and lets the two be upgraded independently. To govern an existing system, you only need to attach a Sidecar; the original services need no modification at all, and an old system can be upgraded progressively, converting only part of its services to microservices at first.

As shown below

[Figure: governance logic in an independent Sidecar process]

Summary

Comparing these three forms of service governance, we can see that the governance components keep sinking lower in the stack, and the intrusion into the application keeps decreasing.

Microservices are an architectural style and, even more, an agile software-engineering practice; at bottom they are a methodology. Service meshes such as Istio, by contrast, are a complete implementation of that methodology; Istio in particular is a well-designed, well-integrated, extensible, production-ready service governance tool and platform.
In short, microservices are a theory, and Istio is a practice.

4 Istio and Kubernetes

4.1 Introduction to Kubernetes

Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services, with a large and fast-growing ecosystem. It is infrastructure-oriented and tightly integrates compute, network, storage, and other resources. It provides the best runtime environment for containers, offers applications encapsulated, easy-to-use workload and service orchestration interfaces, and gives operators configuration-management interfaces for resource specifications, elasticity, runtime parameters, and scheduling. It is the infrastructure platform of the new cloud-native generation.
From the perspective of platform architecture, Kubernetes is designed as a platform: it emphasizes pluggable design and easy extensibility, which is one of its biggest differences from similar systems and guarantees its adaptability to all kinds of customer scenarios. Another significant difference from other container orchestration systems is that Kubernetes does not treat properties such as statelessness or a microservice structure as constraints on the workloads it can run.
Today, container technology has entered the stage of industrial adoption, and Kubernetes is widely used as the de facto container platform standard.


4.2 Istio is a good helper for Kubernetes

In terms of scenarios, Kubernetes already provides very strong capabilities for deploying, upgrading, and scaling application workloads. The Service mechanism in Kubernetes already covers service registration, service discovery, and load balancing, allowing service instances to be reached by service name.
From the microservice toolkit point of view, Kubernetes itself supports a microservice architecture: deploying microservices in Pods is a natural fit, and inter-service connectivity is already solved. However, managing the access between services, such as circuit breaking, rate limiting, dynamic routing, and call-chain tracing, is beyond what Kubernetes offers. So how do we provide an end-to-end solution, from workload deployment and operation at the bottom to service access governance at the top?
At present, the best answer is to layer Istio, the good helper, on top of Kubernetes.


4.3 Kubernetes is a good base for Istio

Istio makes maximum use of the Kubernetes infrastructure and layers on top of it to form a more powerful infrastructure for running and governing services. It takes full advantage of Kubernetes to implement Istio's own functions, for example:

1. Data plane

The data-plane Sidecar runs inside a Kubernetes Pod, deployed as a proxy alongside the business container. The definition of a service mesh requires that the application be unaware of the Sidecar at runtime, and the excellent multi-container-per-Pod design of Kubernetes makes deployment and operation transparent to users, who may not even notice the Sidecar being deployed. Users keep creating workloads the way they always have, and Istio's automatic injection service injects the proxy into the designated workloads. Deploying and using the proxy in any other environment would not be this convenient.

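For example, automatic injection is typically switched on per namespace with a single label, after which every newly created Pod in that namespace gets an Envoy sidecar; the namespace name `demo` is hypothetical.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo                    # hypothetical namespace
  labels:
    istio-injection: enabled    # tells Istio's injector webhook to add the sidecar
```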

2. Unified service discovery
Istio's service discovery is built directly on the Kubernetes DNS-based access mechanism, which saves the trouble of running a separate registry such as Eureka and avoids the problem of inconsistent service-discovery data when running on Kubernetes.


3. Rules described with Kubernetes CRDs
All of Istio's routing rules and control policies are implemented as Kubernetes CRDs, so the data behind every rule and policy is also stored in kube-apiserver; no separate API server or backend configuration store is needed. One can say that Istio's API server is the Kubernetes API server, and its data naturally lives in Kubernetes's etcd.

Istio cleverly builds on the good base of Kubernetes, constructing its own functionality on top of existing Kubernetes capabilities. Whatever already exists in Kubernetes, Istio never reinvents on its own, thereby avoiding data inconsistencies and a fractured user experience.

The relationship between the Istio and Kubernetes architectures shows that Istio not only runs its data-plane Envoy in Kubernetes Pods; its control plane also runs in the Kubernetes cluster. The control-plane components themselves exist as Kubernetes Deployments and Services, built as extensions of Kubernetes.

A review of the Kubernetes components mentioned above:

  • APIServer
The API Server exposes HTTP REST interfaces for creating, reading, updating, deleting, and watching all kinds of Kubernetes resource objects (Pod, RC, Service, and so on); it is the data bus and data hub of the entire system.

Functions of the Kubernetes API Server:
- provides the REST API for cluster management (including authentication and authorization, data validation, and cluster state changes);
- serves as the hub for data exchange and communication between the other modules (other modules query or modify data through the API Server, and only the API Server operates on etcd directly);
- is the entry point for resource quota control;
- provides a complete cluster security mechanism.
  • Deployment
Once a Kubernetes cluster is running, you can deploy containerized applications on it. To do so, you create a Kubernetes Deployment configuration. A Deployment tells Kubernetes how to create and update instances of your application.
  • Service
A Service can be seen as the external access interface of a group of Pods providing the same service. With a Service, applications can easily achieve service discovery and load balancing; a minimal example follows this list.
  • Ingress
Ingress is a kind of Kubernetes resource that allows external requests to reach resources inside the Kubernetes cluster.
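
To tie these components together, here is a minimal, hypothetical Deployment plus Service for the `service-a` workload used in the earlier examples (the image name and ports are assumptions; the `version` label is what Istio subsets select on, and naming the port `http` helps older Istio releases detect the protocol).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: service-a
      version: v1
  template:
    metadata:
      labels:
        app: service-a
        version: v1          # version label used by Istio subsets
    spec:
      containers:
      - name: service-a
        image: example/service-a:1.0   # hypothetical image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  selector:
    app: service-a           # selects the Pods of all versions
  ports:
  - name: http               # named port for protocol detection
    port: 80
    targetPort: 8080
```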

Summary

Kubernetes has become the undisputed de facto standard in container orchestration. Microservices match containers in lightness, agility, and fast deployment and operation, and running such services in containers is increasingly popular. As Istio matures and service mesh technology spreads, using Istio for service governance is becoming more and more common and is turning into the trend in service governance. Istio's natural fusion with, and construction on top of, Kubernetes fills in the governance capabilities Kubernetes lacks and delivers an end-to-end platform for running and governing services. All of this makes Istio, microservices, containers, and Kubernetes a perfect closed loop.

Cloud-native applications that use Kubernetes for application orchestration and Istio for service governance will gradually become the standard configuration for enterprise technology transformation.

5 Istio and Service Mesh

5.1 The era chooses Service Mesh

In the cloud-native era, as services developed in all kinds of languages rapidly multiply, the access topology between applications grows more complex and governance requirements keep increasing. Governance functions embedded in the applications themselves can no longer meet the demands in form, dynamism, or extensibility; an application governance infrastructure with cloud-native dynamism and elasticity is urgently needed.


Decoupling the Sidecar proxy from the application process makes the approach completely non-intrusive to the application and hides the differences between development languages, removing language constraints and thus greatly reducing costs for application developers.

This approach is often called an application infrastructure layer, by analogy with the TCP/IP network stack: applications use this general-purpose proxy the way they use TCP/IP. TCP/IP is responsible for reliably transferring bytes between network nodes, while the Sidecar is responsible for reliably passing requests between services. TCP/IP operates on raw data streams, whereas the Sidecar understands a variety of higher-level protocols (HTTP, gRPC, HTTPS, etc.) and can exercise advanced control over services at runtime, making them monitorable and manageable.

From a global perspective, service governance is needed wherever multiple services access each other in complex ways. In other words, what we focus on is the mesh formed by these Sidecars: access between services is managed inside the mesh, while the applications keep calling each other in the original way. All ingress and egress traffic of every application passes through its Sidecar proxy, and governance actions are performed on the Sidecar.

Finally, the Sidecar is the executor of mesh actions; global governance rules and mesh metadata are maintained through a unified control plane.
The Sidecar intercepts application traffic and performs governance actions on it, which introduces two problems:
◎ it adds two extra hops of latency and two possible points of failure;
◎ these extra hops create new challenges for access performance, overall reliability, and the complexity of the whole system.


Therefore, for users considering a service mesh, the question becomes a simpler trade-off: are they willing to spend extra resources on this infrastructure in exchange for flexibility in development and operations, non-intrusiveness to the business, and extensibility?

At present, cloud providers such as Huawei, Google, and Amazon offer this capability as a cloud service, combined with their underlying infrastructure to form a complete service governance solution, which is more convenient and friendly for most application developers.

5.2 Service Mesh chooses Istio

Among the many service mesh projects and products, the most striking is the latecomer Istio, which is expected to become another heavyweight product after Kubernetes.

Istio solves the performance, resource-usage, and reliability problems of large-scale production clusters, provides many new features that have been proven in production, and has reached enterprise-grade availability.

First, on the control plane, Istio is a brand-new design whose functionality, form, architecture, and extensibility go far beyond a basic service mesh. It provides a set of standard control-plane specifications for pushing service information and governance rules to the data plane.

Istio uses the Envoy v2 API over the gRPC protocol; this standard control-plane API decouples the control plane from the data plane.

Finally, Istio has the backing of major vendors, having been launched jointly by Google and IBM. From the analysis and planning of application scenarios to its own positioning, and from its architectural design to its integration with the surrounding ecosystem, everything was rigorously argued. When the Istio project started, it was already settled that containers are the core of the cloud-native ecosystem, with Kubernetes as the orchestration system that manages them; a system was needed to manage the interactions between services running on the container platform, including access control, security, and runtime data collection, and Istio was born for this. Istio is becoming a default part of the architecture, just as containers and Kubernetes have become default parts of the cloud-native architecture.

This positioning by the cloud-native community also coincides with the plans of multiple cloud vendors. Huawei Cloud took the lead by building Istio into its Cloud Container Engine (CCE) in August 2018; Google's GKE announced built-in Istio in December 2018; and more and more cloud vendors have chosen to offer Istio as part of their container platforms, providing an out-of-the-box full stack of services for running and governing container applications. Precisely because they see Istio's huge technical and product potential, major vendors, including Google, IBM, Huawei, Cisco, and Red Hat, keep increasing their investment in the community.

[Figure: schematic diagram of Istio]

Summary

The era chose the service mesh because of the evolution of architecture.
The service mesh chose Istio because it provides an out-of-the-box full stack of services for running and governing container applications.

Source: blog.csdn.net/ZGL_cyy/article/details/130467090