[Cloud Native Gateway] Detailed Explanation of Kong Usage

Table of contents

1. Introduction

2. Introduction to Kong

3. Kong Core Components

3.1 Kong component introduction

3.1.1 Kong Server

3.1.2 Apache Cassandra/PostgreSQL

3.1.3 Kong dashboard

3.2 Comparison between traditional gateway and Kong working mode

4. Kong gateway features and architecture

4.1 Kong Gateway Features

4.1.1 Scalability

4.1.2 Modularity

4.1.3 Runs on any infrastructure

4.2 Kong gateway architecture

5. Kong environment construction

5.1 Build pg environment

5.1.1 Constructing Kong's container network

5.1.2 Install pg database

5.1.3 Initialize or migrate the database

5.2 Start Kong container

6. Install the Kong management UI

6.1 Introduction to Konga

6.2 konga installation process

6.2.1 Define mount volume

6.2.2 Mount the database

6.2.3 Initialize the PostgreSQL database

6.2.4 Start the konga container

7. Basic use of Kong Admin Api

7.1 Pre-preparation

7.2 Kong Admin API core configuration modules

7.3 Configuring a service proxy with the Kong Admin API

7.3.1 Create service

7.3.2 create route

7.3.3 Test verification

7.4 Configuring load balancing with the Kong Admin API

7.4.1 Create upstream

7.4.2 create target

7.4.3 Modify service

7.4.4 Test verification

8. Kong plugin usage

8.1 Summary of kong plugins

8.1.1 Authentication plugins

8.1.2 Security control plugins

8.1.3 Traffic control plugins

8.1.4 Analysis and monitoring plugins

8.1.5 Protocol transformation plugins

8.1.6 Logging plugins

8.2 Configuring the key-auth plugin 

8.2.1 Add routing plugin

8.2.2 Configuring apikeys for visitors

9. Kong rate limiting usage

9.1 Introduction to Kong rate limiting

9.2 Configuring the Rate Limiting plugin

9.2.1 Enabling the rate-limiting plugin at the service level

9.2.2 Route-level rate-limiting configuration

9.2.3 Enabling the plugin on a consumer

10. Using Kong blacklists and whitelists

10.1 Configuring a whitelist on a service

10.2 Configuring a whitelist on a route

11. Closing thoughts


1. Introduction

In the previous article, I introduced the cloud native gateway APISIX in detail. Kong is another high-performance, cloud-native API gateway similar to APISIX; it has attracted a lot of attention in recent years and has many successful production deployments. This article introduces the use of Kong in detail.

2. Introduction to Kong

Kong is a highly available, easily extensible API gateway project open-sourced by Mashape and built on OpenResty (Nginx + Lua modules).

Kong is built on NGINX and Apache Cassandra or PostgreSQL, and provides an easy-to-use RESTful API to operate and configure the gateway. It can scale horizontally to multiple Kong servers, with a front load balancer distributing requests evenly across them to handle large volumes of network requests. See the official Kong website for details.

3. Kong Core Components

Kong has three core components, each introduced in detail below.

3.1 Kong component introduction

3.1.1 Kong Server

An Nginx-based server used to receive API requests.

3.1.2 Apache Cassandra/PostgreSQL

Used to store operational data, such as the data generated when clients call Kong's APIs.

3.1.3 Kong dashboard

Kong Dashboard is the officially recommended UI management tool; alternatively, the Admin API can be used to manage Kong in a RESTful way.

Kong customizes functionality through a plugin mechanism: a set of plugins (zero or more) is executed during the lifecycle of the API request-response loop. Plugins are written in Lua and currently cover functions such as HTTP Basic authentication, key authentication, CORS (Cross-Origin Resource Sharing), TCP/UDP and file logging, API rate limiting, request forwarding, and Nginx monitoring.

3.2 Comparison between traditional gateway and Kong working mode

As the figure below shows, compared with a traditional gateway, Kong is designed to be more extensible; in particular, its powerful plugin mechanism can cover a much richer set of production scenarios.

4. Kong gateway features and architecture

4.1 Kong Gateway Features

4.1.1 Scalability

Easily scale horizontally by simply adding more servers, which means the platform can handle a large volume of requests while keeping per-node load low;

4.1.2 Modularity

Can be extended by adding new plugins which are easily configurable via the RESTful Admin API;

4.1.3 Runs on any infrastructure

Kong gateways can run anywhere. You can deploy Kong in cloud or internal network environments, including single or multiple data center setups, and with public, private or invite-only APIs.

4.2 Kong gateway architecture

The core components in the kong architecture diagram are supplemented as follows:

  • The core of Kong is built based on OpenResty, which realizes Lua processing of request/response;
  • Kong plugin to intercept requests/responses;
  • The Kong Restful management API provides API/API consumer/plugin management;
  • The data center is used to store Kong cluster node information, API, consumers, plug-ins and other information. Currently, PostgreSQL and Cassandra support are provided. Cassandra is recommended if high availability is required;
  • Nodes in a Kong cluster automatically discover each other through the gossip protocol; when changes are made through one node's management API, the other nodes are notified. Each Kong node caches configuration information (plugins, for example), so when plugin configuration is modified on one node, the other nodes must be notified of the change;

5. Kong environment construction

After gaining some understanding of the theory behind Kong, the next step is to quickly deploy a Kong environment. Refer to the Kong installation documentation.

Note: Prepare the docker environment in advance;

5.1 Build pg environment

Kong's operation depends on the PostgreSQL database, so before installing Kong you need to install the PG database first. Refer to the following steps.

5.1.1 Constructing Kong's container network

First, create a custom Docker network so that the containers can discover and communicate with each other. In the command below, kong-net is the name of the Docker network we create.

docker network create kong-net
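To confirm the network exists and, later, to see which containers have joined it, it can be inspected (a quick sanity check, not a required step):

```shell
# Verify the network was created; inspect shows its subnet and, once
# containers are started with --network=kong-net, which ones are attached.
docker network ls --filter name=kong-net
docker network inspect kong-net
```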

5.1.2 Install pg database

Kong currently supports Cassandra (Facebook's open-source distributed NoSQL database) or PostgreSQL; choose one and execute the corresponding command.

Note: remember to use the custom network created above: --network=kong-net

One small caveat: if you use PostgreSQL and want to persist data to the host, mounting a host path with -v can be awkward. It is recommended to create a named volume with the docker volume create command instead.

docker volume create kong-volume

The complete commands are as follows:

docker pull postgres:9.6

docker run -d --name kong-database \
--network=kong-net \
-p 5432:5432 \
-v kong-volume:/var/lib/postgresql/data \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
-e "POSTGRES_PASSWORD=kong" \
postgres:9.6

The following supplementary explanations about this command:

  • The PG database version is 9.6;
  • The externally exposed port is 5432;
  • When the container is created, a database named kong is created at the same time;
  • A user named kong is also created, with access to the kong database;

5.1.3 Initialize or migrate the database

We use docker run --rm to initialize the database; after the command finishes, the container exits but the named volume is retained. Make sure this command is consistent with the network, database type, and host name declared above, and pay attention to the Kong version number. (Note: the latest Kong is 2.x, but kong-dashboard, a Kong admin UI, does not yet support Kong 2.x; for the convenience of later demonstrations, the latest 1.x version of Kong is used here.)

Execute the following command

docker run --rm \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PG_PASSWORD=kong" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
kong:latest kong migrations bootstrap

After it completes, connect to the PG database; if you see the initialized tables shown below, the database environment Kong needs is ready.
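One way to check without an external client is to run psql inside the database container (a sketch; the exact table list varies slightly by Kong version):

```shell
# List the tables Kong's migrations created in the kong database.
# Expect entries such as services, routes, upstreams, targets,
# plugins and consumers.
docker exec -it kong-database psql -U kong -d kong -c '\dt'
```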

5.2 Start Kong container

Start the kong running container with the following command

docker run -d --name kong \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PG_PASSWORD=kong" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 8444:8444 \
kong:latest

Note that the Kong version here must match the one used above. Supplementary notes on this startup command:

  • Kong binds 4 ports by default;
  • 8000: Used to receive the client's HTTP request and forward it to upstream;
  • 8443: Used to receive HTTPS requests from clients and forward them to upstream;
  • 8001: API management interface for HTTP monitoring;
  • 8444: API management interface for HTTPS monitoring;
  • KONG_DATABASE=postgres specifies the database type Kong uses at runtime;
  • KONG_PG_HOST=kong-database specifies the PG host;
  • KONG_PG_PASSWORD=kong specifies the PG database password;

At this point Kong is installed. The docker ps command lists the currently running containers; normally you will see both the Kong and PostgreSQL containers;

You can use curl -i http://IP:8001/ or open http://IP:8001/ in a browser to verify that the Kong Admin API is reachable;
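A scripted version of the same check might look like this (a sketch; it only asserts that the Admin API answers with HTTP 200):

```shell
# Probe the Admin API; a healthy node answers 200 with a JSON body
# describing the node's configuration.
status=$(curl -s -o /dev/null -w '%{http_code}' http://IP:8001/)
if [ "$status" = "200" ]; then
  echo "Kong Admin API is up"
else
  echo "Kong Admin API not reachable (HTTP $status)"
fi
```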

6. Install the Kong management UI

Kong Enterprise Edition provides a management UI, but the open-source edition does not. There are, however, many open-source management UIs, of which Kong Dashboard and Konga are the most popular. The latest Kong Dashboard (3.6.x) does not support the latest Kong, and its last update was more than a year ago, so Konga is the better choice. The following describes how to install Konga.

6.1 Introduction to Konga

Konga (official website: pantsel.github.io/konga/, GitHub: github.com/pantsel/kon… ) displays all of Kong's current configuration through its UI, and supports viewing and managing the status of Kong nodes, monitoring, and alerting. Konga is written mainly in AngularJS and runs on a Node.js server. It has the following features:

  • Manage all Kong Admin API objects;
  • Support for importing consumers from remote sources (databases, files, APIs, etc.);
  • Manage multiple Kong nodes. Backup, restore and migrate Kong nodes using snapshots;
  • Monitor node and API status with health checks;
  • Supports email and inactivity notifications;
  • Support for multiple users;
  • Easy database integration (MySQL, PostgreSQL, MongoDB, SQL Server);

6.2 Konga installation process

6.2.1 Define mount volume

Konga supports the PostgreSQL database. Define the mount volume konga-postgresql:

docker volume create konga-postgresql

6.2.2 Mount the database

docker run -d --name konga-database \
--network=kong-net \
-p 5433:5432 \
-v konga-postgresql:/var/lib/postgresql/data \
-e "POSTGRES_USER=konga" \
-e "POSTGRES_DB=konga" \
-e "POSTGRES_PASSWORD=konga" \
postgres:9.6

6.2.3 Initialize the PostgreSQL database

docker run --rm --network=kong-net pantsel/konga:latest -c prepare -a postgres -u postgres://konga:konga@konga-database:5432/konga

At this point the database environment for Konga is ready, and the Konga database and its tables can be viewed through Navicat.

6.2.4 Start the konga container

docker run -d -p 1337:1337 \
--network kong-net \
-e "DB_ADAPTER=postgres" \
-e "DB_URI=postgres://konga:konga@konga-database:5432/konga" \
-e "NODE_ENV=production" \
-e "DB_PASSWORD=konga" \
--name konga \
pantsel/konga

If it starts successfully, open http://IP:1337/ in a browser to access the management interface.

Register and log in, then add Kong's Admin API address http://IP:8001 in the dashboard panel.

After the configuration is complete, you can see the full function menu of konga;

7. Basic use of Kong Admin Api

Kong provides a rich Admin API to meet the day-to-day gateway configuration needs of developers and operators, such as service routing, reverse proxying, load balancing, and ACLs. The following introduces the use of the Admin API in detail.

7.1 Pre-preparation

To support the following tests, create two Spring Boot services in advance on ports 8083 and 8085, each exposing an interface. After starting the services, the interface can be accessed through a browser, as follows

7.2 Kong Admin API core configuration modules

In Kong, this is done through the Admin API it provides (official configuration manual: configuration document). The core objects involved are shown below; simply put, these objects work together to achieve a configuration effect similar to load balancing in Nginx;
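The core objects used in the rest of this section — services, routes, upstreams, and targets — can each be listed through their own Admin API endpoint. A read-only sketch (assuming the Admin API listens on IP:8001):

```shell
# Each core object has its own collection endpoint under the Admin API.
curl -s http://IP:8001/services   # backend services (upstream URLs)
curl -s http://IP:8001/routes     # rules mapping incoming paths/hosts to services
curl -s http://IP:8001/upstreams  # virtual hostnames used for load balancing
curl -s http://IP:8001/upstreams/{upstream}/targets  # real backend addresses
```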

7.3 Configuring a service proxy with the Kong Admin API

Use Kong as a service proxy to simulate route forwarding in Nginx; that is, use the Admin API provided by Kong to achieve the equivalent of the following configuration;

server {
  listen 8000;
  location /product {
    proxy_pass http://PUBLIC_IP:8083/product;
  }
}

7.3.1 Create service

Create a service using the following command

curl -i -X POST http://PUBLIC_IP:8001/services --data name=product-service --data url='http://PUBLIC_IP:8083/product'

After the creation is complete, you can view the creation information of the service through the following command;

curl http://PUBLIC_IP:8001/services/product-service

You can also see the name of the current service on the console interface

7.3.2 create route

Execute the following command to create a route

curl -i -X POST --url http://PUBLIC_IP:8001/services/product-service/routes --data 'paths[]=/product' --data name=product-route

After the execution is successful, you can see the route information through the interface

7.3.3 Test verification

The two steps above complete a simulated Nginx route-proxy configuration; the effect can be seen by visiting the address in a browser
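Instead of a browser, the same verification can be scripted (a sketch; /product/1 is assumed to be an endpoint exposed by the demo Spring Boot service):

```shell
# A request to Kong's proxy port (8000) with the /product prefix should be
# forwarded to the Spring Boot service on port 8083.
curl -i http://IP:8000/product/1
```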

7.4 Configuring load balancing with the Kong Admin API

Nginx usage often involves load-balancing configuration. The following simulates how to use Kong to achieve an effect similar to Nginx load balancing;

7.4.1 Create upstream

curl -i -X POST http://PUBLIC_IP:8001/upstreams --data name=product-upstream

7.4.2 create target

Use the following command to create a target and bind it to the upstream above

curl -i -X POST http://PUBLIC_IP:8001/upstreams/product-upstream/targets --data target="PUBLIC_IP:8083"

7.4.3 Modify service

Use the following command to establish the binding relationship between service and upstream;

curl -i -X PATCH http://PUBLIC_IP:8001/services/product-service --data url='http://product-upstream/product'

7.4.4 Test verification

The above configuration is equivalent to the nginx.conf configuration in Nginx:

upstream product-upstream {
  server PUBLIC_IP:8083;
}

server {
  listen 8000;
  location /product {
    proxy_pass http://product-upstream/product;
  }
}

Then enter the following address in a browser to test. When you see the effect below, the configuration has taken effect;

These two simple configuration demonstrations show that, whether configuring a route proxy or load balancing, there is no need to restart the Kong service: configuration changes take effect dynamically.
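If a second target (for example the demo service on port 8085) is also registered on product-upstream, the round-robin behavior can be observed with a simple loop (a sketch):

```shell
# Issue several requests through Kong's proxy port; with more than one
# target registered on product-upstream, responses should alternate
# between the backends.
for i in 1 2 3 4 5 6; do
  curl -s http://IP:8000/product/1
  echo
done
```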

8. Kong plugin usage

Kong implements functions such as logging, security checks, performance monitoring, and load balancing through plugins. The plugin mechanism greatly improves Kong's ability to handle personalized and customized business scenarios.

8.1 Summary of kong plugins

Commonly used Kong gateway plugins are summarized as follows:

8.1.1 Authentication plugins

Kong provides implementations of Basic authentication, key authentication, OAuth 2.0, HMAC, JWT, and LDAP authentication.

8.1.2 Security control plugins

ACL (access control), CORS (Cross-Origin Resource Sharing), dynamic SSL, IP restriction, and bot detection.

8.1.3 Traffic control plugins

Request rate limiting (based on request count), response rate limiting (based on upstream response count), and request size limiting. Rate limiting supports local, Redis, and cluster modes.

8.1.4 Analysis and monitoring plugins

Galileo (records request and response data for API analytics), Datadog (records API metrics such as request count, request size, response status, and latency, and visualizes them), and Runscope (records request and response data for API performance testing and monitoring).

8.1.5 Protocol transformation plugins

Request transformation (modify the request before forwarding it upstream) and response transformation (modify the upstream response before it is returned to the client).

8.1.6 Logging plugins

TCP, UDP, HTTP, File, Syslog, StatsD, Loggly, etc.

8.2 Configuring the key-auth plugin 

The following example implements a simple gateway security check by enabling an API key.

8.2.1 Add routing plugin

Use the following command to add a plugin to the route created above (product-route), so that requests on this route must be authenticated

curl -i -X POST http://PUBLIC_IP:8001/routes/product-route/plugins --data name=key-auth

This plugin accepts a config.key_names parameter, which defaults to ['apikey']. A request must carry the apikey parameter in its header or query parameters, and the value must be a key registered with Kong; the gateway verifies the key before the request is allowed through to the backing service.
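Once the plugin is active and a key has been issued (a key of 123456 is created in 8.2.2 below), the key can be sent either as a header or as a query parameter (a sketch):

```shell
# Without a key: rejected with 401 "No API key found in request".
curl -i http://IP:8000/product/1

# Key in a header (the default key name is apikey):
curl -i http://IP:8000/product/1 --header 'apikey: 123456'

# Key as a query parameter:
curl -i 'http://IP:8000/product/1?apikey=123456'
```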

Now verify with curl -i http://IP:8000/product/. If access fails as shown below (HTTP/1.1 401 Unauthorized, "No API key found in request"), the Kong security mechanism has taken effect.

8.2.2 Configuring apikeys for visitors

Define a consumer and an API key, giving the consumer access to product-service.

Create the consumer Hidden:

curl -i -X POST http://PUBLIC_IP:8001/consumers/ --data username=Hidden

Then create an API key for the consumer Hidden

curl -i -X POST http://PUBLIC_IP:8001/consumers/Hidden/key-auth/ --data key=123456

To verify the effect, access with the following command; the result is now returned normally;

curl -i http://PUBLIC_IP:8000/product/1 --header "apikey:123456"

9. Kong rate limiting usage

9.1 Introduction to Kong rate limiting

Kong provides the Rate Limiting plugin to limit request rates and prevent excessive requests from overwhelming backend services. Rate Limiting supports limits in multiple time dimensions — second/minute/hour/day/month/year — which can be combined. For example: allow at most 100 requests per second and at most 1000 requests per minute.

Rate Limiting supports three basic limiting dimensions: consumer, credential, and ip, defaulting to consumer. For example: set the number of requests per second allowed for each IP. Counters can be stored in local, cluster, or redis storage, defaulting to cluster:

  • local: stored locally in Nginx, for single-instance limiting;
  • cluster: stored in the Cassandra or PostgreSQL database, for cluster-wide limiting;
  • redis: stored in the Redis database, for cluster-wide limiting;

Rate Limiting uses a fixed-window counter algorithm, so it cannot provide the smooth limiting of a token-bucket algorithm.
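The difference matters at window boundaries. The sketch below (plain POSIX shell, an illustration only — not Kong's actual code) simulates a fixed-window counter with a limit of 5 requests/second and shows that a burst straddling the boundary can pass twice the limit in a fraction of a second:

```shell
#!/bin/sh
# Illustration: why a fixed-window counter is not "smooth".
LIMIT=5
window=0        # current window id (second)
count=0         # requests counted in the current window
accepted=0
rejected=0

request() {     # $1 = timestamp in milliseconds
  w=$(( $1 / 1000 ))
  if [ "$w" -ne "$window" ]; then
    window=$w
    count=0     # the counter resets at the window boundary
  fi
  if [ "$count" -lt "$LIMIT" ]; then
    count=$((count + 1)); accepted=$((accepted + 1))
  else
    rejected=$((rejected + 1))
  fi
}

# 5 requests at the end of window 0 (t=900..980ms), then 5 more right
# after the boundary (t=1000..1080ms).
for t in 900 920 940 960 980 1000 1020 1040 1060 1080; do
  request "$t"
done
echo "accepted=$accepted rejected=$rejected"
# All 10 pass: 10 requests within ~180ms despite a 5/sec limit.
```

A token bucket would instead refill capacity gradually, so the second burst would be partially rejected.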

9.2 Configuring the Rate Limiting plugin

Call Kong Admin API services/${service}/plugins to create the configuration of the Rate Limiting plugin:

9.2.1 Enabling the rate-limiting plugin at the service level

curl -X POST http://PUBLIC_IP:8001/services/product-service/plugins --data "name=rate-limiting" --data "config.second=1" --data "config.limit_by=ip"

After execution succeeds, simulate some calls; when requests exceed 1 per second, you will see the following message, indicating the request was rate limited
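The simulation can be scripted (a sketch): fire a few requests in quick succession and watch the status codes; requests beyond the limit should be rejected with HTTP 429.

```shell
# With config.second=1, additional requests inside the same second
# should return HTTP 429 (Too Many Requests).
for i in 1 2 3; do
  curl -s -o /dev/null -w '%{http_code}\n' http://IP:8000/product/1
done
```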

9.2.2 Route-level rate-limiting configuration

curl -X POST http://127.0.0.1:8001/routes/{route_id}/plugins --data "name=rate-limiting" --data "config.second=5" --data "config.hour=10000"

9.2.3 Enabling the plugin on a consumer

curl -X POST http://127.0.0.1:8001/plugins \
--data "name=rate-limiting" \
--data "consumer_id={consumer_id}" \
--data "config.second=5" \
--data "config.hour=10000"

Supplementary notes on the parameters in the configuration above:

  • The name parameter set to rate-limiting selects the Rate Limiting plugin;
  • The config.second parameter set to 1 allows 1 request per second;
  • The config.limit_by parameter set to ip limits by the IP dimension;
  • The rate-limiting plugin can also be added through the Konga UI;

Note: besides the Admin API, all of the rate-limit configuration above can also be set through the Konga interface, with the same effect;

10. Using Kong blacklists and whitelists

Blacklists and whitelists are another common gateway scenario. In Kong they are also configured through a plugin; the configuration can take effect at the service level or at the route level;

10.1 Configuring a whitelist on a service

To enable the plugin on the service, execute the following command

curl -X POST http://PUBLIC_IP:8001/services/{service}/plugins --data "name=ip-restriction" --data "config.whitelist=192.168.3.18"

Where:

  • {service} is the specific service name;
  • config.whitelist: whitelist, comma-separated IPs or CIDR ranges;
  • config.blacklist: blacklist, comma-separated IPs or CIDR ranges;

For example, apply a whitelist restriction to the service product-service above, allowing only the IP of the local Windows machine; the execution effect is as follows;
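Verification from the command line (a sketch): a request from a whitelisted address is proxied normally, while any other source is rejected with HTTP 403 and a message such as "Your IP address is not allowed":

```shell
# From the whitelisted machine (192.168.3.18): normal response.
# From any other address: HTTP 403 Forbidden.
curl -i http://PUBLIC_IP:8000/product/1
```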

10.2 Configuring a whitelist on a route

The plugin can also be configured on a specific route: get your route-id from http://PUBLIC_IP:8001/routes, then fill it in

curl -X POST http://PUBLIC_IP:8001/routes/{route-id}/plugins --data "name=ip-restriction" --data "config.whitelist=192.168.3.18"

After executing the command above, the configured IP-whitelist plugin is visible in the interface;

11. Closing thoughts

As cloud-native applications become more widespread, unified cloud-native solutions represented by Kubernetes have gradually been adopted by Internet companies. As the traffic entry point of a product, the gateway plays an extremely important role. I believe that in the near future, cloud-native gateways will become an indispensable component of microservice governance architectures, so it is worth studying and exploring them in depth.


Reprinted from: blog.csdn.net/zhangcongyi420/article/details/130349738