Kubernetes Cloud Native Gateway

1. Definition of Cloud Native

The CNCF definition of cloud native highlights several key points:

        1. It emphasizes the dynamic nature of the application environment: dynamic environments such as public cloud, private cloud, and hybrid cloud have become the first choice for most applications;

        2. It emphasizes avoiding lock-in to any single cloud platform when deploying applications across multiple clouds;

        3. It emphasizes the importance of elastic scaling and of rapid, automated deployment and startup.

2. Cloud Native Technology Solution

Digital transformation has two background drivers:

        1. The number of applications is large, and their complexity grows accordingly;

        2. Coping with this change and complexity requires more agile support and faster response.

3. Development overview       

4. Cloud Native Era

        In the cloud-native era, in addition to the security, traffic scheduling, and traffic control capabilities of a traditional API gateway, the gateway also needs the following characteristics:

        1. Containerization: supports containerized deployment and can run outside the container cluster, at the cluster entrance, or inside the cluster. As the cluster entry gateway, it needs to implement the Ingress and Gateway API model specifications;

        2. Microservices: supports service discovery within container clusters and serves the microservices running in them;

        3. Service mesh: supports deployment at the edge of container clusters, acting as the ingress and egress proxy of the service mesh;

        4. Elastic scaling: scales elastically on top of containers;

        5. Dynamic application environment: supports multi-cloud deployment and stays independent of any single cloud platform;

        6. Declarative API: configuration and operations are done through declarative interfaces that can be integrated into CI/CD pipelines for automation (see the sketch after this list);

        7. Observability: integrates with the cloud-native monitoring stack for logs, metrics, and traces;

        8. Multiple roles: different user roles such as DevOps, NetOps, SecOps, and AppDev can collaborate on top of K8s with self-service capabilities.
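To make the Ingress specification and declarative API points above concrete, here is a minimal sketch in Go (the language the article later associates with Kubernetes-led gateways) that builds a standard Kubernetes Ingress object with the official API types and prints it as YAML, so the same declarative manifest could be applied with kubectl or from a CI/CD pipeline. The hostname, backend Service, and the nginx IngressClass are illustrative assumptions rather than details from the article.

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pathType := networkingv1.PathTypePrefix
	ingressClass := "nginx" // assumes an NGINX-based Ingress Controller is installed (illustrative)

	// A declarative Ingress: route all traffic for api.example.com to the demo-service backend.
	ing := networkingv1.Ingress{
		TypeMeta:   metav1.TypeMeta{APIVersion: "networking.k8s.io/v1", Kind: "Ingress"},
		ObjectMeta: metav1.ObjectMeta{Name: "demo-ingress", Namespace: "default"},
		Spec: networkingv1.IngressSpec{
			IngressClassName: &ingressClass,
			Rules: []networkingv1.IngressRule{{
				Host: "api.example.com", // hypothetical hostname
				IngressRuleValue: networkingv1.IngressRuleValue{
					HTTP: &networkingv1.HTTPIngressRuleValue{
						Paths: []networkingv1.HTTPIngressPath{{
							Path:     "/",
							PathType: &pathType,
							Backend: networkingv1.IngressBackend{
								Service: &networkingv1.IngressServiceBackend{
									Name: "demo-service", // hypothetical backend Service
									Port: networkingv1.ServiceBackendPort{Number: 80},
								},
							},
						}},
					},
				},
			}},
		},
	}

	// Serialize to YAML; the output is the manifest a CI/CD pipeline would apply.
	out, err := yaml.Marshal(&ing)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```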

5. Gateway technology selection

Technology selection depends on the enterprise's size and organizational structure, its application business architecture, its application technology architecture, and its common scenarios:

        1. Enterprise architecture: the organizational structure of the enterprise determines the architecture of its application systems, and whether the API gateway serves the whole enterprise or only a single department;

        2. Technical architecture: where the API gateway is deployed and in which scenarios it is used directly affect the selection: whether it sits outside the container cluster, at the cluster edge, or inside the cluster; whether it serves a single K8s cluster, spans multiple K8s clusters, or bridges K8s clusters and legacy microservice clusters; and whether API management, unified API documentation, and an external developer portal are required;

        3. Business scenarios: different business scenarios impose different requirements on application protocols, performance, security, and so on;

        4. Performance: what are the service levels and business characteristics of the back-end services the API gateway proxies, and can its performance meet those needs;

        5. Scalability: whether the API gateway can scale horizontally and vertically;

        6. Security: the security capabilities of the API gateway, including zero trust (access control, authentication, TLS/mTLS, audit logs) and WAAP (WAF, bot protection, DDoS mitigation, API security);

        7. User roles: who builds the gateway, who operates and maintains it, and who uses it;

        8. Cost: what is the total investment, and whether to build entirely on open source or purchase an enterprise-grade API gateway.

6. Cloud Native Gateway Technical Route

        1. The technical route depends on the scenario. When an enterprise adopts the Kubernetes cloud-native stack across the board, the traffic entrance of each Kubernetes cluster can be implemented with an Ingress Controller or the Gateway API (a sketch of weighted routing with the Gateway API follows this list); in scenarios where Kubernetes clusters are used more like virtual machines, edge gateways need to provide load balancing across clusters;

        2. Gateways play their roles in their respective scenarios, providing basic capabilities such as traffic distribution, scheduling, flow control, security protection, and traffic monitoring;

        3. Core capabilities include standard requirements such as performance, security, stability, protocol support, rate limiting, and throttling.
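As a concrete illustration of using the Gateway API as the cluster traffic entrance and for traffic distribution, here is a minimal sketch that declares a weighted HTTPRoute splitting traffic 90/10 between a stable and a canary Service. It assumes the Gateway API v1 Go types from sigs.k8s.io/gateway-api; the Gateway name, hostname, Service names, and weights are illustrative assumptions.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	gatewayv1 "sigs.k8s.io/gateway-api/apis/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	port := gatewayv1.PortNumber(80)
	stableWeight, canaryWeight := int32(90), int32(10)

	// A declarative HTTPRoute: 90% of requests go to the stable Service, 10% to the canary.
	route := gatewayv1.HTTPRoute{
		TypeMeta:   metav1.TypeMeta{APIVersion: "gateway.networking.k8s.io/v1", Kind: "HTTPRoute"},
		ObjectMeta: metav1.ObjectMeta{Name: "demo-route", Namespace: "default"},
		Spec: gatewayv1.HTTPRouteSpec{
			CommonRouteSpec: gatewayv1.CommonRouteSpec{
				// Attach the route to a Gateway named "demo-gateway" (hypothetical).
				ParentRefs: []gatewayv1.ParentReference{{Name: "demo-gateway"}},
			},
			Hostnames: []gatewayv1.Hostname{"api.example.com"}, // hypothetical hostname
			Rules: []gatewayv1.HTTPRouteRule{{
				BackendRefs: []gatewayv1.HTTPBackendRef{
					{BackendRef: gatewayv1.BackendRef{
						BackendObjectReference: gatewayv1.BackendObjectReference{Name: "svc-stable", Port: &port},
						Weight:                 &stableWeight,
					}},
					{BackendRef: gatewayv1.BackendRef{
						BackendObjectReference: gatewayv1.BackendObjectReference{Name: "svc-canary", Port: &port},
						Weight:                 &canaryWeight,
					}},
				},
			}},
		},
	}

	// Print as YAML so it can be applied with kubectl or from a CI/CD pipeline.
	out, err := yaml.Marshal(&route)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Shifting the weights step by step is how the grayscale (canary) and blue-green releases mentioned later in the scenario list are typically rolled out through the gateway.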

7. The impact of language ecology on gateways

        Looking at today's popular gateway components: the core of NGINX is written in C, the core of Envoy in C++, Cloudflare's proxy in Rust, Tyk and Traefik in Go, Spring Cloud Gateway in Java, and Ocelot in .NET. The logic behind the language choices is roughly as follows:

        1. Gateways led by a microservice framework generally follow the language of that framework, for example Spring Cloud Gateway and Ocelot; they focus more on integration with the framework;

        2. Gateways led by a PaaS or container platform generally adopt Go, because Kubernetes itself makes extensive use of Go, and they value consistency of the platform technology stack;

        3. General-purpose gateways, such as software load balancers, reverse proxies, and data-center edge gateways that can be deployed in many locations, usually use C, C++, or Rust, focusing more on performance, security, and reliability.

8. Cloud Native Gateway Products

        A cloud-native gateway usually refers to the gateway at the Kubernetes entrance. There are three mainstream implementation forms:

                1. Gateways based on NGINX or NGINX derivatives that follow the Ingress API specification and serve as the Kubernetes cluster entrance;

                2. Gateways built on Envoy Proxy;

                3. Self-developed products that implement the Kubernetes specifications, such as Ingress or the Gateway API.

9. Choosing a gateway product

 1. Clarify the concept. The core value of a gateway is to bridge clients and servers. Then analyze and understand the gateway concept using the 5W2H method:

  • What: what is an API gateway / traffic gateway / business gateway / microservice gateway / cloud-native gateway / BFF, how do they differ from an ADC or load balancer, and what problem do I need the gateway to solve?

  • When: when do you need to add a gateway layer, and when do you need to change the gateway configuration?

  • Where: where is the gateway deployed? Generally it sits at the edge of some "system", such as the data center entrance, availability zone entrance, VPC entrance, enterprise/department entrance, microservice cluster entrance, or K8s cluster entrance;

  • Why: why is this gateway layer needed, and what value and drawbacks does it bring?

  • Who: who builds the gateway, who operates and maintains it, and who runs it day to day?

  • How: how is the gateway used? Is it configured automatically or manually? How are the gateway's own high availability and security ensured? Is it self-built or a cloud product?

  • How many: how many layers of gateways need to be deployed, what are the performance targets of each gateway cluster, and how much do you intend to spend?

2. Sort out the architecture and adapt to the enterprise's organizational structure and application technology architecture:

  • Enterprise organizational structure: according to Conway's law, an enterprise's organizational structure determines its application architecture, and gateways often sit at the boundaries between organizations, connecting them.

  • Application technology architecture: clarify the main business and traffic scenarios of your applications and the application protocols they use; the traffic characteristics of a live video application and an e-commerce platform, for example, are clearly different. Also consider the deployment environment: whether it is a traditional microservice architecture (Spring Cloud), whether K8s clusters are used, whether there are multiple K8s clusters and whether they run active/standby, active-active, or multi-active high availability, whether historical systems must be supported and integrated, and whether there is a unified PaaS platform.

3. Sort out the scenarios. The gateway connects applications and clients; generally speaking, its core value is as follows:

  • Hide the internal technical architecture of back-end applications, expose tailor-made APIs externally, reduce complexity, and accelerate application releases;

  • Provide a unified infrastructure layer for applications, offloading the common, non-functional cross-cutting concerns to this layer;

  • Simplify application troubleshooting through unified traffic monitoring.

  As for the specific categories of scenarios, the common ones are:

  • Application protocols: layer-4 protocols (TCP, UDP) and layer-7 protocols (HTTP/1, HTTP/2, HTTP/3, HTTPS, WebSocket, gRPC, Dubbo, SOAP, MQTT, HLS, RTMP, etc.);

  • Traffic routing: conditional routing based on request context, blue-green deployment, grayscale (canary) release, A/B testing, load balancing, session persistence;

  • Flow control: rate limiting, concurrency limiting, bandwidth limiting, request/response rewriting, request redirection, protocol conversion, cluster-level rate limiting and throttling (see the sketch after this list);

  • Traffic security: access control, authentication and authorization, TLS/mTLS, audit logs, WAF, bot defense, DDoS mitigation, API security, CORS;

  • Service governance: automatic service discovery, circuit breaking and degradation, active health checks;

  • Service optimization: response caching, response compression;

  • Traffic visibility: metrics monitoring, log monitoring, trace monitoring, security monitoring;

  • API management: API lifecycle, version control, security policies, traffic policies, API documentation, developer portal;

  • High availability: active/standby and multi-active deployment, stateful gateway clusters (configuration synchronization, session synchronization, in-memory counter synchronization);

  • High performance: network throughput, request throughput, response latency, SSL offload acceleration with crypto cards, DPDK-based network optimization;

  • Extensibility: sufficient extension points and script- or plug-in-based extension capabilities;

  • Automated operations: declarative APIs, dynamic configuration loading, automatic monitoring and metric collection, containerized deployment;

  • Localization support: support for domestic servers and operating systems, SM national cryptographic algorithms, and so on.
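To make the flow-control item above concrete, here is a minimal sketch of a gateway-style reverse proxy in Go that applies a token-bucket rate limit before forwarding requests to a single back-end. The listen address, back-end URL, and limits are illustrative assumptions, and a real gateway would enforce limits at cluster level rather than per instance.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"

	"golang.org/x/time/rate"
)

func main() {
	// Hypothetical upstream service hidden behind the gateway.
	backend, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// Token bucket: roughly 100 requests/second with a burst of 200 (illustrative values).
	limiter := rate.NewLimiter(rate.Limit(100), 200)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Reject the request immediately when the bucket is empty.
		if !limiter.Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		// The proxy hides the back-end address and architecture from the client.
		proxy.ServeHTTP(w, r)
	})

	log.Println("gateway listening on :9000")
	log.Fatal(http.ListenAndServe(":9000", handler))
}
```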

10. Future gateway market

IDC released "IDC Market Glance: Integration and API Management, 2022" in October 2022, and Gartner published the market report "Market Guide for API Gateways" in 2022.

 NGINX architecture

 Application Scenario

  Cloud Native Extraction System

First, equip K8s with high-performance, highly secure, and highly observable network connectivity, extended both horizontally and vertically: horizontal extension (east-west traffic) is addressed by a Service Mesh, while vertical extension (north-south traffic) is handled by an Ingress Controller or the Gateway API. On the security side, NGINX-related security modules build relatively comprehensive capabilities on zero-trust and WAAP foundations.

Second, manage the APIs inside and outside the K8s cluster securely and with high performance: once the security and high-performance networking problems are solved, provide API management capabilities on top.

Third, improve the reliability and resilience of the K8s system to achieve cross-cluster and cross-cloud scaling.

Source:

A New Cloud Native Story of the Veteran Gateway NGINX 


Origin blog.csdn.net/ejinxian/article/details/130911406