Introduction to Redis, Amazon ElastiCache for Redis, and Amazon MemoryDB for Redis
Redis is a key-value storage system, a cross-platform non-relational database, and one of the most popular NoSQL databases today. It is open source, written in ANSI C, released under the BSD license, and provides an in-memory, optionally persistent, network-accessible key-value store that can be distributed, with APIs in many languages. Redis is often called a data structure server because a value can be a string (String), hash (Hash), list (List), set (Set), sorted set (Sorted Set), or another type.
Amazon ElastiCache for Redis is an ultra-fast in-memory data store that delivers sub-millisecond latency to support real-time, Internet-scale applications. ElastiCache for Redis is built on open-source Redis, is compatible with the Redis API, works with standard Redis clients, and uses the open Redis data format to store data. Amazon ElastiCache is a fully managed service: you don't need to perform administrative tasks such as hardware provisioning, software patching, setup, configuration, monitoring, failover, and backups. ElastiCache continuously monitors your cluster to keep Redis up and running, freeing you to focus on developing higher-value applications. It supports both cluster and non-cluster modes and provides high availability with automatic failover.
Amazon MemoryDB for Redis is a durable in-memory database service that delivers blazing-fast performance. It is purpose-built for modern applications with microservices architectures. MemoryDB is Redis-compatible, so you can quickly build applications using the same flexible and friendly Redis data structures, APIs, and commands you already use today. With MemoryDB, all your data is stored in memory, which enables microsecond read latency, single-digit-millisecond write latency, and high throughput. MemoryDB also uses a Multi-AZ transaction log to persist data across multiple Availability Zones (AZs) for fast failover, database recovery, and node restarts. With both in-memory performance and Multi-AZ durability, MemoryDB can be used as a high-performance primary database for microservices applications, eliminating the need to manage a cache and a durable database separately.
Why Redis Cluster needs Proxy
In production environments, many customers choose the Redis Cluster deployment for its capacity, performance, and scaling flexibility. But Redis Cluster also brings additional development and usage costs. To simplify development and use, in many scenarios we need to put a proxy in front of Redis Cluster. The main advantages of a proxy are as follows:
Language flexibility: At present, only Jedis works well with Redis Cluster mode, so developing reliably against Redis Cluster effectively ties programmers to Java. Behind a proxy, however, the cluster looks like a single Redis instance, which many clients in many languages handle well, so programmers can choose languages according to their own preferences.
Ease of development: For customers with a large user base, especially those targeting the Chinese market, Redis Cluster deployments are often quite large in order to cope with heavy traffic. A large Redis Cluster brings additional development and operations costs. To simplify development work, it often pays to place a middleware layer between the cluster and the client, and a Redis proxy is exactly that middleware.
Flexible connections: As traffic changes, the number of Redis Cluster nodes will grow and shrink. With a proxy, these node-count changes are transparent to the client.
Cross-slot access: Redis Cluster mode does not support cross-slot access. When performing multi-slot data operations, the client must usually split one logical access into multiple single-slot accesses. A Redis proxy supports some cross-slot operations, such as MSET/MGET/DEL, which reduces data-access development work.
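Whether a multi-key command such as MSET crosses slots is determined by the hash slot computed for each key. A minimal sketch of that calculation in Python (CRC16/XMODEM modulo 16384, plus Redis's hash-tag rule) — the keys used are purely illustrative:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of Redis Cluster's 16384 hash slots.
    If the key contains a non-empty {...} hash tag, only the tag is hashed."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # a non-empty tag was found
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys without a common hash tag usually land in different slots, so
# "MSET a 1 b 2" is typically a cross-slot command on a bare cluster.
print(key_slot("a"), key_slot("b"))
# With a shared hash tag, both keys are guaranteed to map to the same slot.
print(key_slot("{user}a") == key_slot("{user}b"))  # True
```

This is why clients either split such commands per slot or rely on keys sharing a hash tag; a proxy performs the splitting on the application's behalf.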
Driven by these needs, a variety of proxy products have emerged, such as Redis-Cluster-Proxy, Overlord Proxy, and Envoy Proxy. Among them, Redis-Cluster-Proxy and Overlord Proxy are mainly deployed on hosts, while Envoy Proxy can be deployed on hosts or in containers; containerized deployment in particular is friendly to customers whose applications are already containerized.
Introduction to Envoy Proxy
Envoy is an open-source service proxy designed for cloud-native applications. Originally built by Lyft, Envoy is a high-performance C++ distributed proxy designed for individual services and applications, as well as a communication bus and "common data plane" designed for large-scale microservice "service mesh" architectures.
Main features:
Out-of-process architecture: Envoy is a standalone high-performance server with a small memory footprint. It can run with any application language or framework.
HTTP/2 and gRPC support: Envoy has good support for HTTP/2 and gRPC connections. It is a transparent HTTP/1.1 to HTTP/2 proxy.
Advanced load balancing: Envoy supports advanced load balancing features, including automatic retries, circuit breaking, global rate limiting, request shadowing, region-local load balancing, and more.
API for configuration management: Envoy provides a powerful API to manage its configuration dynamically.
Observability: In-depth observability of Layer 7 traffic, native support for distributed tracing, and wire-level observability of databases such as MongoDB and DynamoDB.
Envoy Proxy has many application scenarios. This article uses Envoy Proxy as a connection proxy for Redis Cluster.
Deploy Envoy Proxy
1) Create an Envoy Proxy image and upload it to ECR
a. Edit envoy.yaml, specifying the service port and the upstream endpoint. In this document, the proxy's service port is 7480, and the upstream endpoint is the address and port of the Redis cluster.
envoy.yaml:
static_resources:
  listeners:
  - name: redis_listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 7480
    filter_chains:
    - filters:
      - name: envoy.filters.network.redis_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
          stat_prefix: egress_redis
          settings:
            op_timeout: 5s
          prefix_routes:
            catch_all_route:
              cluster: redis_cluster
  clusters:
  - name: redis_cluster
    cluster_type:
      name: envoy.clusters.redis
      typed_config:
        "@type": type.googleapis.com/google.protobuf.Struct
        value:
          cluster_refresh_rate: 10s
          cluster_refresh_timeout: 4s
    connect_timeout: 4s
    dns_lookup_family: V4_ONLY
    lb_policy: CLUSTER_PROVIDED
    load_assignment:
      cluster_name: redis_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: envoy-k8s1.xxxxxx.clustercfg.memorydb.us-west-2.amazonaws.com, port_value: 6379 }
admin:
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
b. Build the proxy image from the Dockerfile and upload it to ECR
Dockerfile:
FROM envoyproxy/envoy-dev:latest
COPY ./envoy.yaml /etc/envoy.yaml
RUN chmod go+r /etc/envoy.yaml
#CMD ["/usr/local/bin/envoy", "-c", "/etc/envoy.yaml", "--service-cluster", "proxy"]
# Enable logging at debug level
CMD ["/usr/local/bin/envoy", "-c", "/etc/envoy.yaml", "--service-cluster", "proxy", "--log-path", "/tmp/envoy_log.txt", "--log-level", "debug"]
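The build-and-push commands are not shown in the original; a sketch of what they might look like follows. The account ID (xxxxxxx), region, and repository name are placeholders matching the image reference used in the Deployment manifest below:

```shell
# Build the proxy image (run in the directory containing the Dockerfile and envoy.yaml)
docker build -t envoy:proxy-redis .

# Authenticate to ECR, then tag and push the image
aws ecr get-login-password --region us-west-2 | \
  docker login --username AWS --password-stdin xxxxxxx.dkr.ecr.us-west-2.amazonaws.com
docker tag envoy:proxy-redis xxxxxxx.dkr.ecr.us-west-2.amazonaws.com/envoy:proxy-redis
docker push xxxxxxx.dkr.ecr.us-west-2.amazonaws.com/envoy:proxy-redis
```

The ECR repository must already exist (or be created with `aws ecr create-repository`) before the push.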
2) Deploy Envoy Proxy on EKS and expose services.
In this document, we will deploy 2 Envoy Proxy pods and set CPU/memory requests and limits. The container image is the proxy image uploaded to ECR in the previous step.
eks-redis-deployment-envoy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "envoy-redis-proxy"
  labels:
    ec: "redis-pod"
    app: envoy
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      ec: "redis-pod"
      app: envoy
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        ec: "redis-pod"
        app: envoy
    spec:
      containers:
      - name: envoy
        image: "xxxxxxx.dkr.ecr.us-west-2.amazonaws.com/envoy:proxy-redis"
        ports:
        - containerPort: 7480
          name: envoy
        resources:
          limits:
            cpu: 200m
            memory: 400Mi
          requests:
            cpu: 100m
            memory: 200Mi
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
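The Service manifest that exposes the proxy is not shown in the original. A minimal sketch, assuming a ClusterIP service whose name matches the one in the service status output below and whose selector matches the Deployment's pod labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: envoy-blue-proxy
spec:
  type: ClusterIP
  selector:
    ec: "redis-pod"
    app: envoy
  ports:
  - port: 7480
    targetPort: 7480
    protocol: TCP
```

Clients inside the cluster can then reach the proxy through the service's ClusterIP (or DNS name) on port 7480.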
Check pod status:
NAME READY STATUS RESTARTS AGE
centos-pod-test 1/1 Running 0 2m4s
envoy-blue-proxy-55c6fd6988-4rdbb 1/1 Running 0 2m8s
envoy-blue-proxy-55c6fd6988-bs99w 1/1 Running 0 2m8s
Check service status:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
envoy-blue-proxy ClusterIP 10.100.95.27 <none> 7480/TCP 2m13s
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 36m
Function Test
Connection test:
kubectl exec -it envoy-blue-centos -- /bin/bash
[root@centos-pod-test redis-stable]# redis-cli -h 10.100.95.27 -p 7480
10.100.95.27:7480> ping
PONG
10.100.95.27:7480> set a 1
OK
10.100.95.27:7480> exit
[root@centos-pod-test redis-stable]# exit
exit
Cross-slot MGET/MSET test:
Through the Envoy proxy, cross-slot MGET/MSET operations succeed.
By contrast, when connecting directly to the Redis cluster, the same cross-slot MGET/MSET operations fail.
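For illustration, the contrast might look like the following transcript. The first address is the proxy service IP from above; the direct cluster endpoint is a placeholder:

```
# Via the Envoy proxy: the keys hash to different slots, but the proxy splits the command
10.100.95.27:7480> mset a 1 b 2
OK
10.100.95.27:7480> mget a b
1) "1"
2) "2"

# Connecting directly to the cluster, the same command is rejected
redis-cluster-endpoint:6379> mset a 1 b 2
(error) CROSSSLOT Keys in request don't hash to the same slot
```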
Performance Testing
We use redis-benchmark to compare the proxy mode and the direct-connection mode, testing the non-pipeline and pipeline scenarios separately.
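The exact benchmark invocations are not given in the original; they might look like the following, run once against the proxy service and once against the cluster endpoint (host, port, request count, and client count are illustrative):

```shell
# Non-pipeline scenario: SET/GET/MSET, 100k requests, 50 clients
redis-benchmark -h 10.100.95.27 -p 7480 -t set,get,mset -n 100000 -c 50

# Pipeline scenario: same tests with 16 commands batched per request
redis-benchmark -h 10.100.95.27 -p 7480 -t set,get,mset -n 100000 -c 50 -P 16
```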
Proxy mode:
Direct connection mode:
Result analysis:
1. Non-pipeline scenario
For the SET, GET, and MSET operations, the proxy mode and the direct-connection mode perform essentially the same; the proxy introduces almost no performance loss.
2. Pipeline scenario
For SET and GET operations, the proxy mode shows roughly 0.5 ms of additional latency compared to the direct-connection mode.
The MSET test in proxy mode shows noticeably higher latency. Beyond the proxy's own overhead, the key distribution of the actual business must also be considered, so real-workload tests should be carried out case by case.
Summary
Deploying Envoy proxy in an EKS cluster in a containerized manner offers high deployment flexibility and is especially friendly to customers whose applications are already containerized. At the same time, Kubernetes' own capabilities reduce the operations work of running Envoy proxy; compared with host-based deployment, operational convenience is greatly improved. As for Envoy proxy's performance in real business, we recommend sufficient testing in actual scenarios so that the proxy's resource configuration can be tuned to match the corresponding workload.
The author of this article
Fu Xiaofei
Senior Solutions Architect at Amazon Cloud Technology, responsible for consulting on and architecture design of Amazon-based cloud computing solutions. Focusing on the game industry, he helps customers leverage Amazon Cloud Technology's global infrastructure and strong technical capabilities to create hit games and reduce game operating costs.