HCIP Study Notes - Cloud Native Application Architecture - 8

1. Introduction to Cloud Native Applications and Microservices

1.1 Architecture Evolution


  • In the evolution and choice of architecture, enterprises have to balance rapid business development against a "beautiful" application architecture. Microservices are the future trend; the capabilities they bring include fault tolerance, rapid rollout, the ability to handle growing functional complexity, high availability, responsiveness to demand, manageability, independent release of modules, etc.
  • Features of the monolithic architecture: all functions are concentrated in one project. The structure is simple, the initial development cost is low, and the cycle is short, making it the first choice for small projects. However, because everything lives in one project, development, expansion, and maintenance become difficult as the project grows.
  • Features of the vertical architecture: the project is split vertically once a monolithic project grows too large. The architecture is still simple, the initial development cost is low, and the cycle is short, so it also suits small projects. By splitting vertically, the original single project no longer expands without limit.
  • Features of SOA (Service-Oriented Architecture): based on the SOA architecture idea, repeated common functions are extracted as components and provided to each system in the form of services. Projects (systems) and services communicate with each other via Web Services, RPC, etc. This improves development efficiency, system reusability, and maintainability, and clustering and optimization schemes can be formulated according to the characteristics of different services.
  • Disadvantages of SOA architecture: The boundaries between systems and services are blurred, which is not conducive to development and maintenance. The granularity of the extracted service is too large, and the coupling between the system and the service is high.
  • Features of the microservice architecture: the microservice architectural style develops a single application as a suite of small services. Each small service runs in its own process and communicates with the others through lightweight mechanisms, typically HTTP resource APIs. The finer granularity of service splitting favors resource reuse and improves development efficiency, optimization schemes can be formulated more precisely for each service to improve maintainability, and the shorter product iteration cycles suit the Internet era.

1.2 Monolithic architecture


  • High complexity: because it is a single body, all modules of the system are coupled together, module boundaries are blurred, and dependencies are intricate. Adjusting a function is likely to bring unpredictable effects and potential bug risks.
  • Service performance issues: when a monolithic system hits a performance bottleneck, it can only scale horizontally by adding whole service instances behind a load balancer to share the pressure; it cannot be scaled module by module unless it is split up.
  • Limited scaling capability: a monolithic application can only be scaled as a whole, which has a large blast radius, and individual modules cannot be scaled according to the needs of the business.
  • No fault isolation: when all business function modules are packed into one assembly, a problem in a small module (such as a blocked request) may bring down the entire system, so the blast radius is large. Every release is a release of the whole system, and the full restart it causes is a considerable challenge for a large, comprehensive system. If the modules were split, only the modified module would need to be released.
  • Deployments are progressively slower, and build and deploy times increase as code grows.
  • It hinders technological innovation: a monolithic application usually relies on a single technical platform or solution to solve every problem, every team member must use the same development language and architecture, and introducing a new framework or technical platform is very difficult.
  • As the name suggests, a monolithic application is a single archive package, such as a war or jar package, that contains all of the application's functions. This is a relatively traditional architectural style. In the early days of software development it was the default model because deployment was simple, the technology stack was uniform, and labor costs were low. However, with the advent of the Internet era, ever more complex business requirements, and ever faster delivery frequency, the traditional monolithic architecture has found it increasingly difficult to meet developers' requirements because of the defects above.

1.3 SOA Architecture


  • The SOA approach modularizes the application solution, building each function as an independent unit that provides services.
  • It can be understood that an SOA architecture contains multiple services that depend on each other or communicate through a communication mechanism, and together provide a series of functions. A service usually runs as an independent operating system process, and services call each other over the network.
  • SOA mainly solves the following problems
    • System integration: from the system perspective, solve the communication problems between enterprise systems and reshape the originally scattered, unplanned mesh of systems into a regular, manageable star structure. This step often requires introducing products such as an ESB together with technical specifications and service management specifications.
    • Servitization of systems: from a functional point of view, business logic is abstracted into reusable, composable services so that new business can be built rapidly through service orchestration. The goal is to turn formerly fixed business functions into general business services and achieve rapid reuse of business logic.
    • Servitization of business: from the enterprise perspective, abstract the enterprise's functions into reusable, composable services, transforming a function-oriented enterprise structure into a service-oriented one and further enhancing the enterprise's ability to serve external parties. The first two steps solve inter-system calls and reuse of system functions at the technical level.

1.4 ESB Enterprise Service Bus


  • The term bus refers to the physical bus that transports bits between the devices inside a computer; an ESB provides similar functionality at a higher level of abstraction. In an enterprise architecture that uses an ESB, applications interact through the bus, and the bus dispatches information between them. The main advantage of this approach is that it reduces the number of point-to-point connections required for applications to interact, which in turn makes analyzing the impact of major software changes simpler and more intuitive. By reducing the number of connection points, retrofitting a component in the system becomes simpler.
  • The enterprise service bus provides reliable message transmission, service access, protocol conversion, data format conversion, content-based routing and other functions, shielding the physical location, protocol and data format of the service.
  • The future evolution of enterprise integration architecture will break through the boundaries of enterprise integration, integrating application APIs, messages, device data, cross-cloud scenarios and more, and building connections for all enterprise applications, big data, cloud services, devices, and partners. The traditional "integration factory" model controlled by the IT team will be transformed into a self-service integration model used by business lines, subsidiaries, application development teams and end business users, which is what we call a "unified hybrid integration platform".

1.5 Microservice Introduction


  • The origin of microservices goes back to the Micro-Web-Service proposed by Dr. Peter Rodgers at the 2005 Cloud Computing Expo. Juval Lowy had a similar idea of turning classes into granular services; the core idea is to let services be used in a way similar to Unix pipelines. In 2014, Martin Fowler and James Lewis jointly proposed the concept of microservices, defining it as a single application composed of a suite of small services. Each service runs in its own process and stays lightweight, services are designed around business capabilities, deployed in a fully automated manner, and communicate with other services through HTTP APIs. Services require only a bare minimum of centralized management (for example via Docker), and each service can be implemented with different programming languages, databases and other components.
  • Microservices is an architectural and organizational approach to developing software consisting of small independent services that communicate through well-defined APIs.
  • The microservice architecture emerged because in a monolithic architecture a small change affects other modules; especially when releasing on the cloud, even small changes must be compiled and released as a whole, and scaling one module means scaling the whole application. Instead, the application is built as a series of microservices, each of which can be deployed and scaled independently, provides clear module boundaries, and can even be developed in a different language.

1.6 Microservice Architecture


  • The Microservice architecture pattern organizes an entire web application as a series of small web services. These small services can be compiled and deployed independently and communicate with each other through the APIs they expose; they cooperate to provide functionality to users as a whole, yet each can be scaled independently.
  • Microservices is a software architecture style based on small building blocks that each focus on a single responsibility and function; complex, large-scale applications are composed from these blocks in a modular way, and the functional blocks communicate with each other through language-independent APIs.
  • Microservices are designed around business capabilities. When designing an application, it is first divided by business function or process, and each business function is implemented as an individually executable service; these services are then combined over a common protocol to form the application. If a specific business function needs to scale, only that service has to be scaled rather than the whole application, without interfering with other services; microservice administrators can place microservices on different computing resources according to their resource needs, or deploy and configure new computing resources for them.
  • The API gateway generally sits on the execution path of every API request. It is part of the data plane: it receives requests from clients, reverse-proxies them to the underlying APIs, and enforces traffic control and user policies along the way.
  • Before proxying the response back to the original client, it can also act on instructions from the underlying API and apply policies again. A RESTful API is a REST-style API; REST is a design style and development approach for web applications, and resources can be defined in XML or JSON format. A minimal controller sketch follows below.
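
To make the REST-style API concrete, the following minimal sketch shows a hypothetical order service exposing a single JSON endpoint. It assumes Spring Boot is on the classpath; the class, path, and field names are illustrative and not part of the original notes.

```java
// Hypothetical Spring Boot controller illustrating a REST-style microservice API.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class OrderServiceApplication {

    // GET /orders/{id} returns a JSON representation of an order.
    @GetMapping("/orders/{id}")
    public Order getOrder(@PathVariable("id") String id) {
        return new Order(id, "CREATED"); // placeholder data for the sketch
    }

    // Simple immutable payload serialized to JSON by the framework.
    public record Order(String id, String status) {}

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}
```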

1.7 Characteristics of Microservices


  • Complexity is tamed by decomposing the monolithic application into multiple services. With overall functionality unchanged, the application is broken into multiple manageable branches or services, and each service boundary is defined through an API.
  • The microservice architectural pattern provides a modular solution to functions that are difficult to implement with monolithic coding, whereby individual services are easy to develop, understand, and maintain.
  • Since microservices are implemented and deployed independently, i.e. run in separate processes, they can be monitored and scaled independently
  • The microservice architectural pattern is the independent deployment of each microservice. Developers no longer need to coordinate the impact of other service deployments on this service.
  • The microservice architecture enables each service to be developed by a dedicated development team. Developers can freely choose development technologies and provide API services.

2. Mainstream Frameworks for Cloud Native Applications

2.1 Introduction to the Evolution of Architecture Development


  • In the early days, when two or more computers communicated, the communication had to be handled at the physical layer that transmits bytes and electrical signals. Before the TCP protocol appeared, each service had to handle packet loss, out-of-order delivery, retries and a series of similar problems by itself, so besides business logic the service also had to deal with network transmission issues.
  • In the 1980s, the emergence of the TCP protocol solved the general flow-control problems of network transmission; this capability moved down the technology stack, was separated from service implementations, and became part of the operating system's network stack.
  • In the 1990s, after TCP appeared, network communication between machines was no longer a problem, and distributed systems represented by GFS/BigTable/MapReduce flourished. At this point, communication semantics unique to distributed systems appeared, such as circuit-breaking strategies, load balancing, service discovery, authentication and authorization, quota limits, and monitoring, so each service implemented part of the required communication semantics according to its business requirements.
  • To avoid every service having to implement its own set of distributed communication semantics, development frameworks for the microservice architecture began to appear, such as Twitter's Finagle, Facebook's Proxygen, and Spring Cloud. These frameworks implement the various general semantic features that distributed system communication requires.
  • Although such a framework shields developers from some general implementation details of distributed communication, developers must spend extra energy mastering and managing the complex framework itself, and tracking down and fixing problems inside the framework is not easy. A development framework usually supports only one or a few specific languages, so services written in unsupported languages are hard to integrate into the microservice architecture; complex project dependencies make version compatibility difficult, and framework library upgrades cannot be made transparent to services. Therefore the proxy mode (sidecar mode) represented by Linkerd and Envoy came into being; this is the first generation of Service Mesh.
  • The first generation of Service Mesh is composed of a set of independently running stand-alone proxies. To provide a unified operations entry point, a centralized control plane evolved: all stand-alone proxies interact with the control plane to update network topology policies and report their data. In this model each service has a sidecar proxy, and services communicate only through their sidecar proxies. This is the second-generation Service Mesh represented by Istio (Istio is driven by companies such as Google, IBM, and Lyft).

2.2 Introduction to Spring Cloud


  • Spring Cloud is known as the "whole family bucket" (full suite) for building distributed microservice systems. It is not a single technology but an ordered collection of microservice solutions and frameworks. It integrates mature, proven microservice frameworks from the market and repackages them in the Spring Boot style, shielding their complex configuration and implementation principles and giving developers a distributed-system development toolkit that is simple to understand, easy to deploy, and easy to maintain.
  • The sub-projects of Spring Cloud fall roughly into two categories. The first, and by far the larger, is Spring Boot-style encapsulation and abstraction of existing mature frameworks; the second implements part of the distributed-system infrastructure itself, for example Spring Cloud Stream, which plays a role similar to Kafka/ActiveMQ.
  • Spring Boot is a new framework provided by the Pivotal team, which is designed to simplify the initial construction and development process of new Spring applications. The framework uses a specific approach to configuration, so that developers no longer need to define boilerplate configuration. In this way, Spring Boot aims to be a leader in the burgeoning field of rapid application development.
  • Features of Spring Cloud:
    • Backed by the strong Spring community, Netflix and other companies, with a very active open source community, Spring Cloud standardizes and combines mature microservice products and frameworks to provide a complete microservice solution with low development cost and low risk (a minimal consumer sketch follows this feature list).
    • Based on Spring Boot, it has the characteristics of simple configuration, rapid development, easy deployment, and convenient testing
    • Supports REST service calls, which are more lightweight and flexible than RPC (services only rely on a paper contract, and there is no strong dependency at the code level), which is conducive to the realization of cross-language services and the release and deployment of services.
    • Provides Docker and Kubernetes microservice orchestration support
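
As a rough illustration of the points above, the sketch below shows a hypothetical Spring Cloud consumer that calls another service through the registry by its logical name instead of a fixed address. It assumes a Spring Cloud discovery client (for example Eureka or a ServiceComb registry) is on the classpath; the service name "order-service" and the endpoints are invented for the example.

```java
// Sketch of a Spring Cloud consumer resolving another service by its registered name.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@RestController
public class StorefrontApplication {

    private final RestTemplate restTemplate;

    public StorefrontApplication(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // @LoadBalanced lets the RestTemplate translate the logical service name
    // into a concrete instance chosen by client-side load balancing.
    @Bean
    @LoadBalanced
    static RestTemplate loadBalancedRestTemplate() {
        return new RestTemplate();
    }

    @GetMapping("/checkout")
    public String checkout() {
        // The host part is the registered service name, not a fixed IP address.
        return restTemplate.getForObject("http://order-service/orders/1", String.class);
    }

    public static void main(String[] args) {
        SpringApplication.run(StorefrontApplication.class, args);
    }
}
```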

2.3 Features of Service Mesh


  • The concept of Service Mesh was first proposed by Buoyant and used publicly for the first time in 2016. In 2017 the company released the first Service Mesh product, Linkerd; the article "What's a service mesh? And why do I need one?" published at the same time is recognized by the industry as the authoritative definition of Service Mesh.
  • Without a service mesh layer, the logic that manages communication can be coded into each service, but as communication between microservices grows more complex, so does that management logic. A service mesh consolidates this large number of scattered concerns into a dedicated layer whose job is to handle the complexity of service-to-service communication.
  • If one sentence had to explain what a Service Mesh is: it is the TCP/IP between applications or microservices, responsible for network calls, rate limiting, circuit breaking, and monitoring between services. Just as you generally do not have to care about the TCP/IP layer when developing an application, with a Service Mesh you no longer have to care about the inter-service concerns previously handled by service frameworks and middleware such as Spring Cloud or Netflix OSS; you simply hand them over to the Service Mesh.
  • Without a service mesh, each microservice must contain logic to manage inter-service communication, which distracts from business development, and communication failures are hard to diagnose because that management logic is hidden inside every service.
  • Service Mesh is generally used to describe the network of microservices that make up an application and the interactions between them. Its requirements include service discovery, load balancing, fault recovery, metrics collection and monitoring, and usually more complex operational needs such as blue-green release, canary release, rate limiting, access control, and end-to-end authentication.

2.4 Service Mesh Architecture


  • Communication between application microservices needs rules that describe how to get from service A to service B. Instead of defining these rules in the management logic of each service, the service mesh extracts the rules of inter-service communication and abstracts them into an infrastructure layer, without adding new functionality to each microservice's runtime. When microservices communicate, requests are routed between them through the service mesh's proxies. Each proxy that makes up the mesh is also called a sidecar, because it runs alongside its service and operates separately and independently from it; together these sidecar proxies form a mesh network. Sidecar proxies work side by side with the microservices and route requests to and from other proxies.
  • Sidecar is a design pattern that separates supporting functionality from the application itself into a separate process, which allows functionality to be added non-intrusively and avoids adding extra code to the application to satisfy third-party requirements. In the architecture, the sidecar is attached to the main (parent) application to extend and enhance its features while remaining loosely coupled to it.
  • In a service mesh, the service instances together with their sidecar proxies are said to constitute the data plane, which handles not only data but also request processing and responses. The service mesh also includes a control plane that manages the interactions between services, which are mediated by their sidecar proxies.
  • Workflow of the service mesh: the control plane pushes the service configuration of the entire mesh to the sidecar proxies on all nodes; routing information can be configured dynamically, either globally or individually per service. After the sidecar determines the destination, it sends the traffic to the corresponding service discovery endpoint (a Service in Kubernetes), which forwards it to a backend instance. The sidecar chooses the fastest-responding instance based on the latency it has observed for recent requests, sends the request to that instance, and records the response type and latency. If the instance is down or its process is not working, the sidecar retries the request on another instance; if an instance keeps returning errors, the sidecar removes it from the load-balancing pool and retries it periodically later. If the request's deadline has passed, the sidecar proactively fails the request instead of retrying and adding more load. The sidecar captures all of this behavior as metrics and distributed traces and sends them to a centralized metrics system. A simplified sketch of this retry and ejection behavior follows.
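
The following is a deliberately simplified, conceptual sketch of the retry and outlier-ejection behavior described above. It is not Envoy or Linkerd code; the class and method names are invented purely to illustrate the idea.

```java
// Conceptual sketch: pick an instance, retry on failure, and eject instances
// that fail, as a sidecar proxy conceptually does for each request.
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class SidecarSketch {

    private final Set<String> ejected = ConcurrentHashMap.newKeySet();

    // Try instances in order, skipping any that are currently ejected from the
    // load-balancing pool, and give up after maxAttempts failed attempts.
    public String forward(List<String> instances,
                          Function<String, String> call,
                          int maxAttempts) {
        int attempts = 0;
        for (String instance : instances) {
            if (ejected.contains(instance)) {
                continue; // outlier ejection: skip instances marked unhealthy
            }
            attempts++;
            try {
                return call.apply(instance); // proxy the request to this instance
            } catch (RuntimeException e) {
                ejected.add(instance); // record the failure and eject the instance
                if (attempts >= maxAttempts) {
                    break; // deadline/attempt budget exhausted: fail fast
                }
            }
        }
        throw new IllegalStateException("no healthy instance answered");
    }
}
```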

2.5 ServiceComb


  • The ServiceComb project originated from Huawei's microservice engine CSE. CSE draws on and inherits the strengths of mainstream microservice frameworks and focuses on solving the problems faced by microservices in the following areas:
    • Microservice communication performance
    • Microservice operations and governance: performance monitoring, flow control, isolation and fault tolerance, grayscale release, etc.
    • Legacy System Retrofit
    • DevOps support: a full-lifecycle DevOps toolset centered on the development framework, including service interface compatibility management, contract testing, development pipelines, unified operation and maintenance, etc.
  • ServiceComb was donated to Apache by Huawei in November 2017 and entered incubation; it was then incubated under the guidance of Apache mentors and incubator PMC members. It is the industry's first microservice project to be incubated at Apache and graduate as a top-level project.

2.6 Introduction to Istio


  • Istio is a completely open source service mesh that attaches to existing distributed applications as a transparent layer. It is also a platform with APIs that can integrate with any logging, telemetry and policy system. Istio's diverse features let users run distributed microservice architectures successfully and efficiently, and provide a unified approach to securing, connecting, and monitoring microservices.
  • The early sidecar-mode Service Mesh put all communication and communication-link management functions into the proxy service. Because the proxy carried so many features, updates and modifications to the data plane proxy became very frequent, affecting its stability; at the same time, in Service Mesh mode the data plane proxy carries all microservice communication traffic and therefore demands extremely high stability. To solve these problems, the policy and configuration decision logic was separated out of the proxy into an independent control plane, which is the second generation of Service Mesh.
  • Istio consists of two parts: the control plane and the data plane
    • The data plane is the plane over which services communicate. Without a service mesh, the network cannot understand the traffic being sent and cannot make decisions based on what type of traffic it is, or where it is coming from or going to.
    • The control plane takes the desired configuration and view of services and dynamically programs the proxy servers, updating them as rules or circumstances change.

3. HUAWEI CLOUD Cloud Native Application Solutions

3.1 Application Service Mesh (ASM)


  • Covers all application forms: supports smooth access and unified management of containers, traditional microservices, third-party services and other application types. Supports complex multi-cloud and hybrid-cloud scenarios, managing cross-cluster service traffic under various network conditions in large-scale meshes, and provides intelligent operation and maintenance and intelligent scaling to help users manage application access automatically and transparently.
  • High-performance, low-overhead, lightweight mesh data plane in multiple forms, supporting per-Pod and per-Node sidecar deployment and accelerating sidecar forwarding. Flexible topology learning optimizes configuration delivery and the resource usage of the mesh control plane.
  • HUAWEI CLOUD ASM addresses cloud-native application management, network connectivity, application traffic governance, and security management.
  • HUAWEI CLOUD ASM is deeply integrated with the CCE Cloud Container Engine to provide a non-intrusive application lifecycle management solution with intelligent traffic governance. It strengthens the full-stack capabilities of HUAWEI CLOUD container services, with a series of enhancements in usability, reliability, and visualization that give customers an out-of-the-box experience.

3.1.1 The main problems solved by ASM


3.1.2 ASM Service Product Architecture


  • Hybrid deployment: will support unified governance of hybrid deployments of virtual machine applications and container applications.
  • Observability: out of the box, HUAWEI CLOUD provides end-to-end intelligent global monitoring, logs, topology, and call chains.
  • Globally unified service governance for multi-cloud and hybrid-cloud scenarios: supports unified service governance across multiple infrastructures (multiple container clusters / containers and virtual machines / virtual machines and physical machines), as well as cross-cluster grayscale release, topology, and call chains.
  • Protocol extension: Provide a combination solution with the SpringCloud microservice SDK.
  • Community and open source: Huawei ranks third worldwide in Istio community contributions, can quickly resolve community version issues and requirements, can rapidly release versions needed by major customers, and contributes changes back to the community mainline to stay compatible with the community.

3.1.3 Gray scale release based on ASM


  • Supported grayscale publishing strategies:
    • Grayscale rules based on request content: support grayscale rules based on request content, and various request information such as Header and Cookie can be configured.
    • Grayscale rules based on traffic ratio: supports grayscale rules based on traffic ratio, distributing traffic according to weight.
    • Canary grayscale process: Provides a wizard to guide users through the canary grayscale process, including launching the grayscale version, observing the operation of the grayscale version, configuring grayscale rules, observing access conditions, and segmenting traffic.
    • Blue-green process: provides a wizard that guides users through the blue-green release process, including launching the new version, observing its operation and access status, switching versions, etc.

3.1.4 Service discovery and multi-cluster management


  • Provides an O&M-free hosting control plane, provides multi-cloud and multi-cluster global unified service governance, grayscale, security, and service operation monitoring capabilities, and supports unified service discovery and management of multiple infrastructures such as containers and VMs.
  • Mesh sharing across multiple clusters: set a root certificate, distribute key and certificate pairs to service instances on the data plane, rotate keys and certificates periodically, and revoke them as needed. When services access each other, the mesh data plane proxies act on behalf of the local service and its peer to perform mutual authentication and channel encryption; the two mutually authenticating services can come from two different clusters, achieving transparent end-to-end mutual authentication across clusters.
  • Supports governance rules such as load balancing, service routing, fault injection, and circuit-breaking fault tolerance configured on the application topology, and, combined with a one-stop governance system, provides real-time, visualized microservice traffic management. Traffic governance is non-intrusive and intelligent: applications can obtain dynamic intelligent routing and elastic traffic management without any modification.
    • Routing rules based on weight, content, TCP/IP, etc., enabling flexible grayscale release of applications
    • HTTP session affinity to meet the continuity needs of business processing
    • Rate limiting and circuit breaking to keep inter-service links stable and reliable
    • Persistent network connection management to reduce resource consumption and improve network throughput
    • Service security: authentication, authorization, and auditing, providing the cornerstone of service security assurance

3.1.5 Unified control and governance strategy, real-time traffic monitoring


  • Supports governance rules for services such as load balancing, service routing, fault injection, and fault tolerance configured on the application topology, and, combined with a one-stop governance system, provides real-time, visualized microservice traffic management. Traffic governance is non-intrusive and intelligent: applications can obtain dynamic intelligent routing and elastic traffic management without any modification.
    • Routing rules based on weight, content, TCP/IP, etc., enabling flexible grayscale release of applications.
    • HTTP session affinity to meet the continuity needs of business processing.
    • Rate limiting and circuit breaking to keep inter-service links stable and reliable.
    • Persistent network connection management to reduce resource consumption and improve network throughput.
    • Service security: authentication, authorization, and auditing, providing the cornerstone of service security assurance.
  • Supports distribution based on request content/browser/OS
  • Support distribution based on traffic ratio

3.1.6 Traffic governance


  • The ASM application service mesh currently supports traffic management capabilities such as retries, timeouts, connection pools, circuit breakers, load balancing, HTTP header manipulation, and fault injection, which meet the governance needs of most business scenarios.
  • Traffic management:
    • Supports routing rules based on weight, content, TCP/IP, etc., enabling flexible grayscale release of applications.
    • Supports HTTP session affinity to meet the continuity needs of business processing.
    • Supports rate limiting and circuit breaking to keep inter-service links stable and reliable.
    • Supports persistent network connection management to reduce resource consumption and improve network throughput.
    • Supports service security: authentication, authorization, and auditing, providing the cornerstone of service security assurance.

3.1.7 Applicable scenarios


  • Operating a containerized infrastructure presents a new set of challenges: containers need to be hardened, the performance of API endpoints evaluated, and unhealthy parts of the infrastructure identified. The Istio service mesh enables such API-level enhancements without code modification and without introducing service delays.
  • Usually, product optimization iterates by releasing a version directly to all users. Once an online incident (or bug) occurs, the impact on users is large, the time to resolve the problem is long, and sometimes a rollback to the previous version is required, which seriously affects the user experience. Grayscale release is a way to upgrade versions smoothly: during the upgrade, some users run the new version while others continue to run the old one. After the new version is stable, the scope is gradually expanded until all user traffic has migrated to the new version.

3.2 Application middleware introduction

3.2.1 Distributed message service


  • Main features:
    • Easy to use: out-of-the-box, visual operation, on-demand self-service creation, automatic deployment, instance creation in minutes, immediate use, real-time viewing and management of message instances.
    • Stable, reliable, worry-free operation and maintenance: supports cross-AZ deployment, solves open source availability problems (such as Kafka split-brain with multiple controllers), automatically detects faults, raises alarms and fails over, ensuring reliable operation of users' key businesses.
    • Proven in practice: tested at large scale in events such as Huawei VMALL Double 11, widely deployed in customers' cloud business systems worldwide, and closely tracking the community mainstream, the service has earned the full trust of cloud customers.

3.2.1.1 Distributed message service Kafka


  • Zookeeper: Distributed coordination application, storing Kafka metadata
  • Client:
    • Producer: a client application that publishes messages to topics and can continuously send messages to multiple topics (a minimal producer sketch follows this list).
    • Consumer: a client that subscribes to topics and can subscribe to multiple topics at the same time.
  • Server: Consists of service processes called Brokers, and a Kafka cluster consists of multiple brokers.
  • Kafka: distributed message stream processing middleware
  • Broker: Responsible for receiving and processing requests sent by clients, and persisting messages
  • Topic: the object of publish and subscribe in Kafka. A dedicated topic can be created for each business, each application, or even each type of data.
  • Topics are stored by partition.
  • Kafka high availability mechanism:
    • Brokers run on different machines. When one fails, other brokers can still provide external services.
    • Backup mechanism (Replication), copy the same data to multiple machines as a copy.
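
The sketch below shows a minimal producer using the standard Apache Kafka Java client; the broker address, topic name, and key are placeholders.

```java
// Minimal Kafka producer sketch using the standard Java client.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");            // Kafka broker list (placeholder)
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Messages with the same key land in the same partition,
            // which preserves per-key ordering inside the partitioned topic.
            producer.send(new ProducerRecord<>("order-events", "order-1001", "CREATED"));
            producer.flush();
        }
    }
}
```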

3.2.1.2 Distributed message service RabbitMQ


  • Out-of-the-box: The distributed message service RabbitMQ version provides stand-alone and cluster message instances, with rich memory specifications, and can be purchased and created directly through the console without separately preparing server resources.
  • Rich message features: supports the AMQP protocol and features such as normal messages, broadcast messages, dead-letter messages, and delayed messages.
  • Flexible routing: in RabbitMQ, the producer sends the message to an exchange, and the exchange routes the message to queues. Exchanges support four routing types: direct, topic, headers and fanout; exchange combinations and customization are also supported (a routing sketch follows this list).
  • High availability: RabbitMQ clusters provide mirror queues, which can synchronize data on other nodes through mirroring. When a single node goes down, services can still be provided externally through a unique access address without data loss.
  • Monitoring and alarming: Support monitoring the status of RabbitMQ instances, and support monitoring the memory, CPU, network traffic, etc. of each agent in the cluster. If the cluster or node status is abnormal, an alarm will be triggered.
  • AMQP, or Advanced Message Queuing Protocol, is an application layer standard advanced message queuing protocol that provides unified messaging services. It is an open standard for application layer protocols and is designed for message-oriented middleware. The client and message middleware based on this protocol can transmit messages, and are not limited by different products of the client/middleware, different development languages ​​and other conditions.
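
As a sketch of the exchange-based routing described above, the example below uses the RabbitMQ Java client to declare a direct exchange, bind a queue with a routing key, and publish a message; the host, exchange, queue and routing-key names are placeholders.

```java
// Sketch of RabbitMQ exchange-based routing with the Java client.
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PaymentPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq.example.com"); // broker address (placeholder)

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // The producer publishes to an exchange; the exchange routes to queues.
            channel.exchangeDeclare("payments", "direct", true);
            channel.queueDeclare("payments.success", true, false, false, null);
            channel.queueBind("payments.success", "payments", "success");

            // Only messages whose routing key is "success" reach this queue.
            channel.basicPublish("payments", "success", null,
                    "order-1001 paid".getBytes("UTF-8"));
        }
    }
}
```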

3.2.1.3 Distributed Message Service RocketMQ


  • Supported message types
    • Normal messages: messages without special features, as opposed to delayed, ordered, and transactional messages.
    • Delayed/scheduled messages: after the producer sends a message to RocketMQ, the message is not consumed immediately but is only delivered to the consumer after a specified delay (a sending sketch follows this list).
    • Sequential messages: Consumers consume messages in the order in which they are sent
    • Transaction message: Provides a distributed transaction function similar to X/Open XA, and can achieve the final consistency of distributed transactions through transaction messages.
  • Producer: The message producer is the program that delivers the message.
  • Consumer: The message consumer is the program that receives the message
  • Namesrv: Save topic routing information. Clients need to access namesrv to obtain topic routing information for production and consumption.
  • Master: Receive client production and consumption requests.
  • slave: Equivalent to a replica node, receiving replicated data from the master
  • The Raft consensus algorithm is used between the master and the slaves to ensure data consistency, and automatic failover between the same group of masters and slaves.
  • Broker: Responsible for receiving and processing the requests sent by the client, and persisting the messages. The three nodes inside the Broker are mutually active and standby.
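
The following sketch uses the RocketMQ Java client to send a delayed message as described above; the name server address, topic, producer group and delay level are placeholders, and the actual delay behind each level is broker configuration.

```java
// Sketch of sending a delayed message with the RocketMQ Java client.
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;

public class DelayedOrderCheck {
    public static void main(String[] args) throws Exception {
        DefaultMQProducer producer = new DefaultMQProducer("order-check-group");
        producer.setNamesrvAddr("namesrv.example.com:9876"); // routing info comes from Namesrv
        producer.start();

        Message msg = new Message("order-timeout-check",
                "order-1001 created".getBytes("UTF-8"));
        // Delay level 3 maps to a broker-configured delay; the consumer only
        // sees the message after that delay elapses.
        msg.setDelayTimeLevel(3);

        SendResult result = producer.send(msg);
        System.out.println(result.getSendStatus());

        producer.shutdown();
    }
}
```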

3.2.1.4 Comparison of Distributed Message Service Features


  • Remarks: in RabbitMQ, the Firehose or the rabbitmq_tracing plug-in can be used to trace messages, but enabling rabbitmq_tracing affects performance, so it is recommended to enable it only while locating problems.
  • Performance difference between Kafka and RabbitMQ: message middleware performance is mainly measured by throughput. Kafka's throughput is one to two orders of magnitude higher than RabbitMQ's: RabbitMQ's single-node QPS is on the order of tens of thousands, while Kafka's single-node QPS can reach the million level. If Kafka enables features such as idempotence and transactions, its performance also drops.

3.2.1.5 Case: Building a real-time transaction analysis platform through Kafka


3.2.2 Microservice Engine CSE


  • The microservice architecture pattern usually includes the following: RPC communication between microservices; distributed microservice instances and service discovery; externalized and dynamic configuration with centralized configuration management; microservice governance capabilities (circuit breaking, isolation, rate limiting, load balancing, etc.); distributed transaction management; and call chains with centralized log collection and retrieval.
  • Microservice architecture patterns usually include the following:
    • RPC communication between microservices. The microservice architecture pattern requires microservices to communicate through RPC instead of other traditional communication methods, such as shared memory and pipelines. Common RPC communication protocols include REST, gRPC, etc. Using RPC communication can reduce the coupling between microservices, improve the openness of the system, and reduce the restrictions on technology selection.
    • Distributed microservice instances and service discovery. The microservice architecture particularly emphasizes the flexibility of the architecture. Microservice design generally follows the stateless design principle; microservice expansion instances that conform to this principle can bring about a linear improvement in processing performance. When there are a large number of instances, a middleware that supports service registration and discovery is required for call addressing between microservices.
    • Externalized and dynamic configuration, with centralized configuration management. As the number of microservices and instances grows, managing their configuration becomes increasingly complex. Configuration management middleware provides a unified configuration view for all microservices and effectively reduces the complexity of configuration management (see the configuration sketch after this list). There are also some common failure modes in a microservice architecture; governing them reduces their impact on the overall business.
    • Distributed transaction management capabilities. Common distributed transaction processing modes include Saga, TCC, non-intrusive, etc. Distributed transaction management can reduce the difficulty of dealing with distributed transaction consistency problems.
    • Call chain, centralized log collection and retrieval. Viewing logs is still the most common means of analyzing system failures. Call chain information can help define failures and analyze performance bottlenecks.
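
As a small illustration of externalized, dynamically refreshable configuration, the sketch below uses Spring Cloud's @RefreshScope; the property name order.discount is invented for the example, and in practice its value would come from a configuration center or another external source rather than the code.

```java
// Sketch of externalized, dynamically refreshable configuration in Spring Cloud.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RefreshScope // bean is re-created when the configuration source pushes a new value
public class DiscountController {

    // Value comes from an external configuration source (file, environment
    // variable, or a configuration center), not from the code itself.
    @Value("${order.discount:0.0}")
    private double discount;

    @GetMapping("/discount")
    public double discount() {
        return discount;
    }
}
```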

3.2.3 Applicable scenarios


  • The purpose of planning the development environment is to ensure that developers can work better in parallel, reduce dependencies, reduce the workload of setting up the environment, and reduce the risk of going online in the production environment:
    • Build a local development environment on the intranet. The advantage of the local development environment is that each business/developer can build a minimum function collection environment that meets their own needs, which is convenient for viewing logs and debugging codes. The local development environment can greatly improve code development efficiency and reduce deployment and debugging time. The disadvantage of the local development environment is that the integration level is not high, and it is difficult to ensure a stable environment when integration and joint debugging are required.
    • The cloud test environment is a relatively stable integration test environment. After local development and testing are complete, each business domain deploys its services to the cloud test environment and can call services from other domains for integration testing. Depending on the scale and complexity of the business, the cloud test environment can be further divided into alpha test, beta test, and other test environments with increasing levels of integration; the most integrated test environment generally requires the same management as the production environment to ensure a stable environment.
    • The production environment is a formal business environment. The production environment needs to support the grayscale upgrade function, support online joint debugging and drainage, and ensure that the impact of upgrade failures on services is minimized.
    • The cloud test environment can expose the public IPs of CSE and middleware, or achieve network interoperability, so that cloud middleware can replace locally installed middleware and reduce the time each developer spends setting up an environment. This still counts as the intranet local development environment, with the microservices running on local development machines. Microservices deployed in containers on the cloud and microservices deployed on local development machines cannot access each other, so, to avoid conflicts, the cloud environment used in this way serves only as a local development environment.

3.2.4 Case: Huawei Consumer Cloud Service Microservice Base


  • Microservices and componentization provide a technical basis for large-scale collaborative development and a unified framework for sharing capabilities internally. By the beginning of 2021, the application market had developed more than 300 microservices with more than 10,000 instances deployed on the live network; the client side had developed more than 500 dynamic layout cards and split out more than 100 components through componentization.
  • AppGallery Connect: Provide developers with mobile application lifecycle services, covering all terminals and scenarios to reduce development costs, improve operational efficiency, and help business success.

3.2.5 API Gateway


  • As an API provider, you can expose mature business capabilities (such as services and data) as backend services, open them as APIs on the API gateway, and provide them to API callers offline or publish them to the API marketplace, thereby monetizing your business capabilities.
  • As an API caller, you can obtain and call the API opened by the API provider on the API gateway, reducing development time and cost.
  • API Gateway helps users monetize their service capabilities while reducing enterprise R&D investment, letting users focus on their core business and improving operational efficiency. For example, enterprise A opens a phone-number attribution query API on the API gateway and publishes it to the API marketplace; enterprise B calls this API through the marketplace and pays the fees its calls generate. Enterprise A monetizes its business capability by opening it up, while enterprise B reduces development time and cost by calling A's API directly, achieving a win-win between the enterprises.

3.2.6 Applicable scenario: efficient control of the full API R&D process


  • Swagger is a standardized, complete framework for generating, describing, invoking and visualizing RESTful web services. Its goal is to define a standard, language-agnostic interface to REST APIs so that both people and computers can discover and understand a service without access to source code, documentation, or network traffic monitoring. A sketch of annotating an endpoint follows.
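
As a brief sketch of what this looks like in code, the hypothetical controller below describes the phone-number attribution endpoint from the earlier API gateway example using OpenAPI (Swagger) v3 annotations, so documentation can be generated without reading the source; all names and the response value are illustrative.

```java
// Hypothetical REST endpoint described with OpenAPI (Swagger) v3 annotations.
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Tag(name = "numbers", description = "Phone number attribution queries")
public class AttributionController {

    @Operation(summary = "Query the attribution (home location) of a phone number")
    @GetMapping("/numbers/{number}/attribution")
    public String attribution(
            @Parameter(description = "Phone number to look up")
            @PathVariable("number") String number) {
        return "Shenzhen, Guangdong"; // placeholder response for the sketch
    }
}
```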

3.3 Software development platform DevCloud


  • The software development platform consists of the following main services:
    • Project management: provides agile project management and collaboration for software development teams, supporting multi-project management, agile iteration management, milestone management, requirements management, defect tracking, multi-dimensional statistical reports, and more.
    • Code hosting: a Git-based online code hosting service for software developers; a cloud code repository with functions such as access control, member/permission management, branch protection and merging, online editing, and statistics.
    • Pipeline: Provide a visual and customizable delivery pipeline to shorten the delivery cycle and improve delivery efficiency
    • Code inspection: cloud-based code quality management. After coding, developers can run multi-language static code checks and security checks, view defects by group, and receive improvement suggestions.
    • Compilation and build: provides developers with an easy-to-configure, mixed-language build platform that moves compilation and building to the cloud, supports continuous delivery, and improves delivery efficiency. Build tasks can be created, configured and executed with one click, code fetching, building, packaging and other activities are automated, and the build status is monitored in real time.
    • Deployment: Provides visual and one-click deployment services, supports deployment to virtual machines or containers, provides templates such as Tomcat and SpringBoot, or freely assembles and arranges atomic steps for deployment, supports parallel deployment and seamless integration of pipelines, and realizes deployment environment standardization and deployment process automation.
    • Cloud Test: Provides a one-stop cloud test platform for software developers, covering functional testing and interface testing and integrating the DevOps agile testing concept to achieve efficient management testing and ensure product quality.
    • Release: Provide software development teams with the ability to manage the software release process, ensuring the standardization, visualization, and traceability of the software release process.
    • CloudIDE: Cloud development environment. Provide developers with a development environment on demand, support operations such as environment configuration writing, building, running, and debugging, and support docking with various code warehouses.
    • Open source mirror site: The open source component, open source operating system, and open source DevOps tool mirror site provided by HUAWEI CLOUD is committed to providing users with comprehensive, high-speed, and reliable open source component/OS/tool ​​download services.

3.3.1 Application Lifecycle Management


  • Whole process: one platform covers the common functions of software development, with the various development functions integrated end to end and connected to management and operations.
  • Rich programming languages and technology stacks: more than 20 mainstream programming languages, development frameworks and runtime environments are supported, and applications can be migrated to the cloud seamlessly.
  • Safe and reliable: security testing, trusted build, high security standards, 7000+ code inspection rules.

3.3.2 CI/CD whole process realization


  • From the initial waterfall model, to later agile development, to today's DevOps, this is the technical route for modern developers to build great products. With the rise of DevOps came new approaches to continuous integration, continuous delivery (CI/CD ) and continuous deployment, while traditional ways of software development and delivery are rapidly becoming obsolete. In the past agile era, the software release cycle of most companies was monthly, quarterly or even annual, but in the current DevOps era, weekly, daily or even multiple times per day is the norm. This is especially true as SaaS becomes mainstream in the industry, making it easy to dynamically update applications without forcing users to download updated components. Many times, users won't even notice that a change is happening.
  • Continuous integration focuses on integrating the work of individual developers into a shared code repository, usually several times a day; its main purpose is to find integration errors as early as possible so that the team collaborates more closely. Continuous delivery aims to minimize the friction inherent in deployment or release, and its implementation typically automates every step of build and deployment so that code can, ideally, be released safely at any time. Continuous deployment is a higher level of automation: whenever a significant code change is made, it is automatically built and deployed.

3.3.3 Project Management Services


  • A project is composed of a series of coordinated and controlled activities carried out through a defined process; its goal is to meet specific needs under time, cost and resource constraints. Project management achieves the project's goals by managing its process and results. Kanban projects are a special type of project that visually display work item levels, work item types, and individual work items on a kanban board.
  • Professional agile project management: agile program (project set) management, single-project Scrum, and lean Kanban. Professional product planning: Gantt charts, mind maps, and product panorama planning.
  • Multi-dimensional professional reports: multi-project management Kanban, dashboards, reports
  • R&D knowledge management: structured knowledge, precipitated innovation.
  • Trusted audit log: 1000+ audit events, comprehensive traceability, safe and reliable.
  • Typical Applicable Scenarios
    • Product, development, and test collaborative operations
    • Requirements management
    • Project health (schedule, quality, risk, people) management
    • defect management

3.3.4 Code hosting service


  • Access security control: provides controls such as branch protection and IP whitelists to ensure that only accounts with specific permissions, from specific IPs, can access the code repository.
  • Support remote backup: support authorized users to back up the warehouse to other areas of HUAWEI CLOUD, other physical hosts and cloud hosts with one click.
  • Repository Locking: A repository can be locked manually so that no changes and commits can be made to prevent breaking of upcoming stable releases.
  • SSH deployment key: use SSH keys to control read and write permissions on the repository, and use deployment keys to grant read-only access.
  • Misoperation traceability and recovery: code, branches and other items deleted by mistake can be precisely rolled back or retrieved; deleted repositories are kept as backups in physical storage for a retention period.
  • Operation log: every operation leaves a trace, and key operations are audited and recorded. Audit logs are persisted and retained long enough to support accurate 5W1H backtracking.
  • Rule setting: Provide submission rule setting, merge review & access control setting, etc., so that the code quality is highly controllable.
  • Notification settings: When important changes occur in the warehouse, notifications such as emails and text messages can be sent to pre-set roles.

3.3.5 Code inspection service and compilation and construction


  • Independent research and development:
    • A self-developed code inspection engine based on syntax trees and CFG cross-procedure analysis, supporting code inspection in 10 languages including C, C++, Java, Python and Go.
  • A high-quality code inspection rule set based on Huawei's 30 years of R&D experience:
    • 3000+ code inspection rules covering 20+ rule scenarios such as programming style, coding security, memory management, input validation, unsafe functions, thread synchronization, and code duplication rate.
    • Compatible with 5+ security coding standards such as CWE/OWASP TOP 10/SANS TOP 25/MISRA/CERT.
  • Automated Assisted Defect Fixing:
    • Intelligent repair suggestions: intelligent refactoring technology for defects found by static inspection automatically or semi-automatically fixes software defects and improves repair efficiency.
    • Provide Java programming specification defect repair capability; C/C++ programming standard defect repair capability; Go code automatic repair capability.

3.3.6 Automated deployment and publishing services


3.3.7 Applicable Scenario: Software and Solution Operation Enterprises


  • Recommended collocation: project management, code hosting, code inspection, compilation and construction, deployment, cloud testing, release.

3.4 Application management and operation and maintenance platform ServiceStage


  • For enterprise developers, testers, operation and maintenance personnel, and project managers, it provides application hosting, monitoring, alarming, and log analysis capabilities. The platform is highly open and compatible with mainstream application technology stacks, including multiple languages, multiple microservice frameworks, and multiple runtime environments, helping enterprises greatly improve the management and O&M efficiency of traditional, web, and microservice applications so that they can focus on industry-oriented application innovation and enhance their competitiveness.
  • Spring Cloud: The mainstream open source microservice development framework in the industry
  • Spring-Cloud-huawei: Spring Cloud applications can be hosted on Huawei Cloud by using spring-cloud-huawei.
  • ServiceComb: An open source microservice framework contributed to Apache led by Huawei

3.4.1 Application Management and Microservice Application Access


  • ServiceStage combines basic resources (such as CCE clusters and ECS) and optional resources (such as ELB, RDS, and DCS) within the same VPC into an environment, for example a development, test, pre-production, or production environment. The network within an environment is interconnected, so resources can be managed and services deployed by environment, reducing the complexity of operating and maintaining the underlying infrastructure.
  • Dubbo is an open source high-performance, lightweight Java RPC service framework open sourced by Alibaba, which can be seamlessly integrated with the Spring framework.

3.4.2 Application O&M


  • Real-time graphical display of application monitoring indicators: CPU usage, alarms, node exceptions, operation logs, and real-time grasp of key events.
  • Microservice Governance: Support microservice interface-level SLA indicators (throughput, delay, success rate) real-time (second-level) monitoring and governance to ensure continuous service of application operation.

3.4.3 ServiceStage Hybrid Cloud Microservice Solution


  • Solution & Value:
    • Application hosting: Full lifecycle hosting of traditional applications, web applications, and microservice applications, enabling application grayscale release and automatic elasticity.
    • Application monitoring: The running status of the application can be observed, monitored and controlled, and the operation and maintenance are worry-free.
    • Application alarm: alarm information is delivered in real time through multiple channels to respond to system failures in a timely manner.
    • Application logs: massive log storage with second-level search, assisting problem location and operational analysis.
    • Distributed transaction processing: non-intrusive and TCC dual-mode support to ensure transaction consistency.

3.4.4 Relationship between ServiceStage and other cloud services


  • ServiceStage realizes the connection with the source code warehouse (such as DevCloud, GitHub, Gitee, GitLab, Bitbucket). After binding the source code warehouse, you can directly pull the source code from the source code warehouse for construction
  • The software center is integrated, and the completed software package (or image package) can be archived to the corresponding warehouse and organization.
  • Integrates relevant infrastructure (such as VPC, CCE, ECS, EIP, ELB), and can directly use existing or newly-built required infrastructure when deploying applications.
  • The microservice engine is integrated, and operations related to microservice governance can be performed by entering the ServiceStage console.
  • It integrates application operation and maintenance management and application performance management services, and can perform operations related to application operation and maintenance and performance monitoring.
  • It integrates services such as storage, database, and cache, and can realize persistent data storage through simple configuration.

3.4.5 Applicable scenario: microservice construction through ServiceStage


  • As the business grows, services will encounter various unexpected situations, such as sudden large-scale concurrent access, service errors, and intrusions. A microservice architecture allows fine-grained control of services to support these business needs.
  • ServiceStage provides an industry-leading microservice application solution with the following advantages
    • Supports multiple microservice frameworks, including native ServiceComb, Spring Cloud, Dubbo, and Service Mesh, as well as a dual-stack mode (SDK and service mesh interworking), so services can be hosted on the cloud without changing business code.
    • Support Swagger-based API management.
    • Supports multi-language microservices, such as Java, Go, Node.js, PHP, Python, etc.
    • Provides functions such as service center, configuration center, dashboard, and grayscale publishing.
    • Provides a complete set of microservice governance policies such as fault tolerance, rate limiting, degradation, circuit breaking, fault injection, and black/white lists. Governance can be operated through the console for specific business scenarios, which greatly improves its usability.

Thinking Questions



Origin blog.csdn.net/GoNewWay/article/details/131217260