Table of contents
1 The era of stand-alone minicomputers
The first computer network, ARPANET, was born in 1969 as a US military project. ARPANET allowed computers to operate with one another online, but in its early years it was used only for military purposes. At the beginning of 2000, China had roughly 8.9 million Internet users,
and many people did not yet know what the Internet was. Most service businesses were therefore single and simple, built in a typical stand-alone + database mode: all functionality was written in one application and deployed centrally.
Forum, chat-room, and mailbox features were all coupled together on one minicomputer, and all business data was stored in a single database.
2 Vertical split
As applications became more complex and diverse, developers demanded more from systems in terms of disaster recovery, scalability, and responsiveness to business change. If any one of the minicomputers or databases fails, the entire system collapses; if any function needs updating, the entire system must be re-released. Obviously, this is unacceptable in an era of rapidly developing business and the Internet of Everything.
To respond quickly to business changes while ensuring availability, the system must be split: the monolithic application above is divided into multiple sub-applications.
Advantages: applications are decoupled from one another, fault tolerance improves, and each application can be released independently.
Vertical splitting solves the release problem, but as the number of users grows, the computing power of a single machine is still a drop in the bucket.
3 Cluster load balancing architecture
More users means more machines are needed, but minicomputers are expensive and costly to operate and maintain.
A better choice is to deploy the same application on multiple commodity PCs. This requires load balancing in front of those applications, because the client does not know which back-end instance a request should land on.
Load balancing can be done at the hardware level or the software level.
Hardware level: F5
Software level: LVS, Nginx, HAProxy
The idea of load balancing: expose a unified entry point to the outside world and forward user requests according to configured rules; a load balancer can also be used for rate limiting and similar concerns.
With load balancing in place, back-end applications can be scaled dynamically according to traffic, which we call "horizontal scaling".
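The forwarding rule described above can be sketched in a few lines. Below is a minimal round-robin selector in Go; the `RoundRobin` type and the backend addresses are illustrative, not taken from any particular load balancer:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// RoundRobin cycles through a fixed list of backend addresses.
type RoundRobin struct {
	backends []string
	counter  atomic.Uint64 // atomic so concurrent requests pick distinct backends
}

func NewRoundRobin(backends []string) *RoundRobin {
	return &RoundRobin{backends: backends}
}

// Next returns the backend the next request should be forwarded to.
func (r *RoundRobin) Next() string {
	n := r.counter.Add(1) - 1
	return r.backends[n%uint64(len(r.backends))]
}

func main() {
	// Hypothetical backend instances of the same application.
	lb := NewRoundRobin([]string{"10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"})
	for i := 0; i < 4; i++ {
		fmt.Println(lb.Next()) // requests spread evenly, wrapping back to the first backend
	}
}
```

Horizontal scaling then amounts to appending a new address to the backend list: the entry point stays the same while capacity grows behind it.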
In 2008 Alibaba proposed removing "IOE" (IBM minicomputers, Oracle databases, and EMC storage), replacing them all with a clustered load-balancing architecture. In 2013, Alipay's last IBM minicomputer went offline.
Advantages: applications are decoupled, fault tolerance is improved, releases are independent, and the system can now scale horizontally to increase application concurrency.
4 Service-oriented transformation architecture
Although the system has been split vertically, the split reveals duplicated functionality across the forum and chat-room applications, such as user registration and sending emails. Once the project grows and more cluster instances are deployed, this duplication undoubtedly wastes resources, so the duplicated functionality is extracted under the name "XX Service".
To let services call one another, a communication protocol between programs is needed. This is remote procedure call (RPC), which makes a call between services look as simple as a local call.
Advantages: on top of the previous architecture, the problem of business reuse is solved.
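The "as simple as a local call" idea can be sketched with Go's standard `net/rpc` package; the `UserService` and `Register` names below are hypothetical examples for illustration, not part of any framework mentioned above. An in-memory pipe stands in for the real network:

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// Args and UserService are hypothetical names for illustration.
type Args struct{ Name string }

type UserService struct{}

// Register is an exported RPC method; net/rpc requires the
// (args, *reply) error signature so it can be called remotely.
func (u *UserService) Register(args Args, reply *string) error {
	*reply = "registered: " + args.Name
	return nil
}

func main() {
	rpc.Register(new(UserService))

	// net.Pipe gives an in-memory connection, standing in for TCP.
	serverConn, clientConn := net.Pipe()
	go rpc.ServeConn(serverConn)

	client := rpc.NewClient(clientConn)
	defer client.Close()

	var reply string
	// To the caller this reads almost like a local method call.
	if err := client.Call("UserService.Register", Args{Name: "alice"}, &reply); err != nil {
		panic(err)
	}
	fmt.Println(reply) // registered: alice
}
```

The stub/marshalling machinery that `net/rpc` hides here (encoding arguments, sending them over the wire, decoding the reply) is exactly what frameworks like Dubbo automate at larger scale.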
5 Service Governance
As business grows, there are more and more basic services, and the call graph grows from a handful of edges to dozens or hundreds, producing intricate call links. The services now need governance.
Service governance requirements:
1. With tens or hundreds of service nodes, we need dynamic awareness of services, which means introducing a registration center.
2. When a call chain is long, how do we monitor the link?
3. When an individual service fails, how do we keep the anomaly from cascading through the entire link (an avalanche)? This requires circuit breaking, degradation, and rate limiting.
4. High availability of services: load balancing.
A typical framework is Dubbo, which uses ZooKeeper as its registration center by default.
6 Microservice Era
Microservices were proposed as a concept in 2012. The hope of microservices is that each service is responsible for exactly one independent function.
Under this splitting principle, releasing or maintaining any one service never affects unrelated services; everything can be deployed and maintained independently.
For example, a traditional "user center" service would, under microservices, be split again by business, perhaps into a "buyer service", "seller service", "merchant service", and so on.
A typical representative is Spring Cloud. Compared with traditional distributed architectures, Spring Cloud uses HTTP for its RPC remote calls. With the Eureka registration center and the Zuul API gateway, it can subdivide internal services while exposing a unified interface to the outside, so that outsiders have no awareness of the system's internal structure. In addition, Spring Cloud's Config component provides unified configuration management.
Martin Fowler's definition of microservices: https://martinfowler.com/articles/microservices.html
Microservices were formally defined in 2014:
The term "Microservice Architecture" has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. While there is no precise definition of this architectural style, there are certain common characteristics around organization around business capability, automated deployment, intelligence in the endpoints, and decentralized control of languages and data.
Rough meaning: services can be deployed independently, and services become ever more fine-grained.
Spring Cloud address: https://spring.io/projects/spring-cloud
7 The new era of the service mesh
7.1 Background
The early days
At first, we wrote business code with Spring + Spring MVC + MyBatis.
The microservice era
Is Spring Cloud perfect for the microservice era? It is worth thinking about what problems remain.
(1) At first, code was written for the business, such as login and payment features. Later we find that we must also solve network communication. Spring Cloud has components that solve it for us, but think about how they do it: we add Spring Cloud Maven dependencies to the business code, add Spring Cloud component annotations, write configuration, and when packaging the jar we must bundle this non-business code in as well. This is what is called an "intrusive framework";
(2) Microservices may be developed in different languages, which adds the cost of maintaining the non-business code in each of those languages;
(3) Business developers should devote their energy to the business domain rather than to non-business concerns. Although Spring Cloud solves many problems in the microservice field, its learning cost is still considerable;
(4) Internet companies upgrade product versions very frequently. Because Spring Cloud is a code-intrusive framework, every version upgrade inevitably drags the non-business code along in order to maintain compatibility, permissions, traffic rules, and so on. Once something goes wrong, especially across calls between multiple languages, engineers suffer;
(5) By now we should feel it: the finer the service split, the more lightweight and decoupled it seems, but the higher the maintenance cost. So what should we do?
This is not to say that Spring Cloud is bad; Spring Cloud microservices are still relatively mainstream. We point out its shortcomings only to introduce the service mesh and highlight its advantages.
Problem-solving ideas
In essence, the problem to solve is communication between services, and non-business code should not be mixed into business code.
That is, a request sent from a client must reach the corresponding service smoothly, while the network communication in between has as little to do with the business code as possible.
Service communication boils down to service discovery, load balancing, version control, and so on.
- In the monolithic architecture of long ago, communication also had to be written somewhere. How was it solved then?
Solution: network communication and traffic forwarding were pushed down into the TCP/UDP layers of the network model. In other words, the non-business code sinks into the network stack itself. (The seven-layer OSI model: application, presentation, session, transport, network, data link, physical.)
Consider:
Can we likewise configure a proxy for each service and hand all communication problems over to that proxy? Just as the familiar Nginx and HAProxy act as reverse proxies and forward requests to other servers, this idea provided the solution from which Service Mesh was born.
7.2 SideCar
The sidecar reduces the complexity of a microservice architecture and provides features such as load balancing, service discovery, traffic management, circuit breaking, telemetry, fault injection, and more.
The sidecar pattern separates application functionality from the application itself into a separate process. It lets us add various capabilities to an application non-intrusively, with no extra configuration code in the application to satisfy third-party components.
Many companies have borrowed this proxy model and launched sidecar products, such as Netflix's Prana and Ant Financial's SofaMesh.
The service's business code is bound together with a sidecar: each service is configured with a sidecar proxy, and all of the service's traffic passes through it. The sidecar shields us from the details of communication, so business developers only need to focus on the business while the sidecar handles communication.
Summary: a sidecar can be understood as a proxy that controls the traffic into and out of a service. It is designed as general infrastructure and can remain non-invasive with respect to the company's framework and technology choices.
The exploration of the sidecar continues
Many companies borrowed the proxy model and released sidecar products: in 2014 Netflix released Prana; in 2015 Vipshop released a local proxy; and in 2016 an infrastructure engineer from Twitter released the first Service Mesh project, Linkerd (introduced below).
7.3 Linkerd
In January 2016, an infrastructure engineer who had left Twitter created the first Service Mesh project, Linkerd, born to solve the generality problem.
Linkerd combines well with the capabilities Kubernetes provides: a Linkerd instance is deployed and run on each Kubernetes node, and the communication of Pods that join the mesh is proxied through Linkerd, so that Linkerd can control and monitor communication within the communication link.
Linkerd Design Ideas
Linkerd's idea is very similar to the sidecar's: the goal is to shield the details of network communication.
In addition to coining the name Service Mesh and implementing its main functions, Linkerd made the following important innovations:
- Direct communication monitoring and management without intruding into the workload code;
- Provides a unified configuration method for managing communication between services and edge communication;
- In addition to supporting Kubernetes, it also supports a variety of underlying platforms.
Summary:
- This is very similar to the sidecar described above. Where services used to call each other directly, Linkerd routes all traffic through the sidecar. Linkerd shields business staff from the communication details, and communication no longer intrudes into the business code, so business developers can focus on business development itself.
- After Linkerd came out, it quickly attracted users and was successfully deployed and operated in multiple production environments. In 2017 Linkerd joined the CNCF, then announced that it had processed hundreds of billions of production requests, released version 1.0, and gained a number of commercial users.
Problem: in the early days, deploying both services and sidecars was difficult for operations staff, so Linkerd did not develop as well as it might have. The main issue is that Linkerd solved only the data-plane problems; it did not manage that plane well.
Data plane: solves data-handling problems through the sidecar.
Open GitHub and search for linkerd: it is written in Scala.
7.4 Istio
Istio is an open-source project co-sponsored by Google, IBM, and Lyft.
Open GitHub and search for istio: it is written in Go.
What is Istio?
Address: https://istio.io/docs/concepts/what-is-istio/#why-use-istio
Istio makes it easy to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, with few or no code changes in service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, then configure and manage Istio using its control plane functionality.
Note this sentence:
"with few or no code changes in service code"
This description is very important. When we used Spring Cloud's communication features, didn't we have to add dependencies, add annotations, and change configuration?
What is the control plane?
The control plane manages the data plane, that is, it manages the sidecars.
So Istio has both a data plane and a control plane.
What can Istio do?
- Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
- Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
- A pluggable policy layer and configuration API supporting access controls, rate limits, and quotas.
- Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
- Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.
**Summary:** Clearly, Istio has not only a data plane but also a control plane; that is, it has both data-takeover and centralized-control capabilities.
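The control-plane/data-plane split can be sketched as follows. This is a minimal illustration, not Istio's actual API: the `ControlPlane`, `Sidecar`, and `RouteConfig` names are invented for the example. The control plane holds the desired routing configuration centrally and pushes it to every sidecar; each sidecar (the data plane) applies the current config to the traffic it forwards.

```go
package main

import "fmt"

// RouteConfig maps a logical service name to a concrete backend.
type RouteConfig struct {
	Service string
	Backend string
}

// Sidecar is the data plane: it forwards traffic using pushed config.
type Sidecar struct {
	name   string
	routes map[string]string
}

// Apply is invoked by the control plane to push new configuration.
func (s *Sidecar) Apply(cfg RouteConfig) {
	s.routes[cfg.Service] = cfg.Backend
}

// Forward decides where a request goes based on the pushed config.
func (s *Sidecar) Forward(service string) string {
	return s.routes[service]
}

// ControlPlane is the centralized manager of every sidecar in the mesh.
type ControlPlane struct {
	sidecars []*Sidecar
}

// Push distributes one config change to the whole data plane.
func (c *ControlPlane) Push(cfg RouteConfig) {
	for _, s := range c.sidecars {
		s.Apply(cfg)
	}
}

func main() {
	a := &Sidecar{name: "svc-a", routes: map[string]string{}}
	b := &Sidecar{name: "svc-b", routes: map[string]string{}}
	cp := &ControlPlane{sidecars: []*Sidecar{a, b}}

	// Centralized control: one push reconfigures every sidecar at once,
	// with no change to any service's business code.
	cp.Push(RouteConfig{Service: "user", Backend: "user-v2:8080"})
	fmt.Println(a.Forward("user"), b.Forward("user"))
}
```

This is the capability Linkerd's early versions lacked: the data plane existed, but there was no centralized way to manage it.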
7.5 What is a service mesh
Service mesh: refers to the network of interactions between an application's microservices. As scale and complexity grow, service-to-service calls become intricate.
If each cell of the grid is a sidecar data plane, with the sidecars communicating with one another, then the service mesh is the control plane that manages every cell. Architecturally it looks very much like a grid, hence the name.
Characteristics:
- Infrastructure: a service mesh is an infrastructure layer that handles communication between services.
- Cloud-native support: a service mesh is especially suited to helping applications deliver requests reliably between services in complex cloud-native scenarios.
- Network proxies: in practice, a service mesh usually executes its governance logic through a set of lightweight network proxies.
- Transparent to applications: the lightweight proxies are deployed alongside the application, but the application is unaware of them and keeps working as before.
7.6 What is Service Mesh
The Istio official website also gives a definition of a service mesh.
Address: https://istio.io/docs/concepts/what-is-istio/#what-is-a-service-mesh
Istio addresses the challenges developers and operators face as monolithic applications transition towards a distributed microservice architecture. To see how, it helps to take a more detailed look at Istio's service mesh.
The term service mesh is used to describe the network of microservices that make up such applications and the interactions between them. As a service mesh grows in size and complexity, it can become harder to understand and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.
7.7 Development and introduction of the CNCF cloud-native organization
Cloud-native development history timeline
- Microservices
Martin Fowler defined microservices in 2014.
- Kubernetes
Announced as open source by Google in June 2014, Kubernetes released version 1.0 in July 2015 and entered the CNCF, then formally graduated from the CNCF in March 2018. It quickly became the standard for container orchestration and is one of the fastest-developing projects in open-source history.
- Linkerd
Written in Scala and running on the JVM; the creator of the term Service Mesh.
On January 15, 2016, version 0.0.7 was released.
On January 23, 2017, it joined the CNCF.
On April 25, 2017, version 1.0 was released.
- Envoy
Envoy is an open-source service proxy designed for cloud-native programs, written in C++ [Lyft].
On September 13, 2016, version 1.0 was released.
On September 14, 2017, it joined the CNCF.
- Istio
Google, IBM, and Lyft released version 0.1.
Istio is an open-source framework for managing, securing, and monitoring microservices. "Istio" is Greek for "sail".
Introduction to the CNCF
The CNCF is an open-source software foundation dedicated to making cloud-native computing ubiquitous and sustainable. Cloud-native computing uses an open-source software stack to deploy applications as microservices, packaging each part into its own container and dynamically orchestrating those containers to optimize resource utilization. Cloud-native technology enables software developers to build great products faster.
What problems does the CNCF solve?
A unified base platform: Kubernetes
If we need log monitoring: Prometheus
If we need a proxy: Envoy
If we need distributed tracing: Jaeger
…
Address: https://www.cncf.io/
Here are several commonly used cloud-native projects that have graduated.
- Kubernetes
Kubernetes is the world's most popular container-orchestration platform and the first CNCF project. It helps users build, scale, and manage applications and their dynamic lifecycles.
- Prometheus
Prometheus provides real-time monitoring and alerting for cloud-native applications, including powerful query and visualization capabilities, and integrates with many popular open-source data import and export tools.
- Jaeger
Jaeger is a distributed tracing system developed by Uber to monitor its large microservice environment. Designed for high scalability and availability, it has a modern UI and is built to integrate with cloud-native systems such as OpenTracing, Kubernetes, and Prometheus.
- Containerd
Containerd is an industry-standard container runtime component developed by Docker and based on the Docker Engine runtime. As the container ecosystem's choice, Containerd provides a runtime that lets Docker and OCI container images be managed as part of a new platform or product.
- Envoy
Envoy is a service mesh proxy originally created at Lyft and now used internally at companies such as Google, Apple, and Netflix. Written in C++, Envoy is designed to minimize memory and CPU footprint while providing features such as load balancing, deep network observability, tracing in microservice environments, and database activity monitoring.
- Fluentd
Fluentd is a unified logging tool that collects data from any source (including databases, application servers, and end-user devices) and works with numerous alerting, analytics, and storage tools. It helps users better understand their environments by providing a unified layer for collecting and filtering log data and routing it to many popular sources and destinations.
Incubating projects:
- OpenTracing
OpenTracing provides vendor-neutral APIs across platforms so that developers can easily apply distributed tracing.
- gRPC
gRPC is a high-performance, open-source, general-purpose RPC framework; it is language-neutral and supports many languages.
- CNI
CNI is a standard that aims to standardize networking for container platforms, so that different container platforms can invoke different network components through the same interface.
- Helm
Helm is the package manager for Kubernetes. A package manager is similar to yum on CentOS: it lets you quickly find, download, and install packages.
- etcd
etcd is a highly available distributed key-value database. Internally it uses the Raft protocol as its consensus algorithm and is implemented in Go. Its most common use is as a registration center.
7.8 Service meshes emerging in China
As mentioned earlier, before the Service Mesh concept was concretely defined, many vendors had already begun new experiments with microservices, which inevitably generated strong demand for microservice governance. After the Service Mesh concept became widespread, some vendors realized that their own products already had Service Mesh characteristics, while others, inspired by it, refined and transformed their own service-governance platforms into Service Mesh products. For example, Ant Financial, Tencent, and Huawei have all launched mesh products, and Huawei's has even been deployed on its public cloud for commercial use.
- Ant Financial SofaMesh
Proxy architecture. Formerly known as SOFA RPC, it was officially open-sourced in July 2018.
- Tencent Service Mesh
Proxy architecture.
- Huawei CSE Mesher
Proxy architecture.
Summary: they basically all borrow design ideas from the sidecar, Envoy, and Istio.