1. Envoy HTTP Traffic Management
Route matching
- Basic matching: prefix, path, and safe_regex
- Advanced matching: headers and query_parameters
Routing actions
- route
- redirect
- direct_response
1.1 Envoy Thread Model
Envoy uses a single-process, multi-threaded architecture: a main thread handles various management tasks, while worker threads perform the core proxy functions of listening, filtering, and forwarding.
- Main thread: responsible for starting and stopping Envoy, xDS API handling (including DNS, health checking, and cluster management), runtime configuration, statistics flushing, admin interface maintenance, and other process management (signals, hot restart, etc.); all of this work is performed in an asynchronous, non-blocking fashion.
- Worker threads: by default Envoy creates one worker thread per CPU core on the host, although the number can be set explicitly with the --concurrency option. Each worker runs a non-blocking event loop that listens on the sockets assigned to each listener, accepts new requests, instantiates a filter stack for each connection, and processes all events for the lifetime of that connection.
- File flusher threads: every file Envoy writes has a dedicated, independent blocking flusher thread. When a worker thread needs to write to a file, the data is actually moved into an in-memory buffer and eventually synced to the file by the flusher thread.
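The file-flusher pattern described above can be sketched as a toy model. This is an illustrative Python sketch, not Envoy code; names such as `log_buffer` and `flusher` are invented for the example. Workers never block on disk I/O: they append log lines to an in-memory buffer, and only one dedicated thread blocks on the file.

```python
# Toy sketch (assumption: not Envoy internals) of the file-flusher pattern:
# workers enqueue log lines non-blockingly; a dedicated thread does the
# blocking file I/O.
import threading, queue, tempfile, os

log_buffer = queue.Queue()          # stands in for the in-memory buffer

def flusher(path, stop):
    with open(path, "a") as f:
        # Keep draining until shutdown is requested AND the buffer is empty.
        while not (stop.is_set() and log_buffer.empty()):
            try:
                line = log_buffer.get(timeout=0.1)
            except queue.Empty:
                continue
            f.write(line)           # only this thread ever blocks on the file
            f.flush()

def worker(i):
    # One "event loop" iteration: handle a request, then log without blocking.
    log_buffer.put(f"worker-{i} handled a request\n")

path = os.path.join(tempfile.gettempdir(), "access.log")
open(path, "w").close()
stop = threading.Event()
t = threading.Thread(target=flusher, args=(path, stop))
t.start()
workers = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for th in workers: th.start()
for th in workers: th.join()
stop.set(); t.join()
print(sum(1 for _ in open(path)))   # all 4 lines end up in the file
```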
1.2 Envoy Advanced Routing
- Mapping domains to virtual hosts
- Path matching by prefix, exact path, or regular expression
- TLS redirection at the virtual-host level
- path/host redirection at the path level
- Responses generated directly by Envoy
- Explicit host rewriting
- Prefix rewriting
- Request retries and request timeouts, driven by HTTP headers or by configuration
- Traffic shifting based on runtime parameters
- Traffic splitting across clusters by weight or percentage
- Routing rules based on arbitrary header matches
- Priority-based routing
- Hash-policy-based routing
1.3 HTTP Route Configuration Framework
The top-level element of a route configuration is the virtual host.
- Each virtual host has a logical name and a set of domains; the Host header of the request is routed against these domains.
- Once a virtual host has been selected by domain, the configured routing mechanism routes or redirects the request.
---
listeners:
- name: ...
  address: { ... }
  filter_chains:
  - filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        stat_prefix: ingress_http
        codec_type: AUTO
        route_config:
          name: ...
          virtual_hosts:
          - name: ...
            domains: []            # the virtual host's domains; route matching checks the request's Host header against this list
            routes:                # route entries; requests matched to this virtual host are checked against the conditions in each route's match
            - name: ...
              match: { ... }           # common embedded fields prefix|path|safe_regex|connect_matcher: define the match condition as exactly one of path prefix, exact path, regular expression, or CONNECT matcher
              route: { ... }           # common embedded fields cluster|cluster_header|weighted_clusters: define the routing target as a cluster, a cluster named in a request header, or weighted clusters (traffic splitting)
              redirect: { ... }        # redirect the request; cannot be used together with route or direct_response
              direct_response: { ... } # respond directly; cannot be used together with route or redirect
            virtual_clusters: []   # virtual clusters defined for this virtual host, used to collect statistics
...
1.4 Mapping Domains to Virtual Hosts
Domain search order
- The request's Host header is compared in turn against the domains attribute of each virtual host defined in the route table, and the search ends at the first match:
  - Exact domain names: www.linux.io
  - Prefix domain wildcards: *.linux.io or *-envoy.linux.io
  - Suffix domain wildcards: linux.* or linux-*
  - Special wildcard * matching any domain
Exact match -> prefix wildcard -> suffix wildcard -> "*" matching all domains
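The search order above can be sketched in a few lines. This is an illustrative Python model, not Envoy's implementation (it also ignores the longest-wildcard tiebreak); `select_virtual_host` and `rank` are invented names for the example.

```python
# Minimal sketch (assumption: not Envoy code) of the virtual-host domain
# search order: exact names, then prefix wildcards (*.linux.io), then
# suffix wildcards (linux.*), then the catch-all "*".
def select_virtual_host(host, virtual_hosts):
    """virtual_hosts: list of (name, [domain_pattern, ...]) in table order."""
    def rank(pattern):
        if pattern == "*":
            return 3                      # special wildcard, lowest priority
        if pattern.startswith("*"):
            return 1                      # prefix wildcard, e.g. *.linux.io
        if pattern.endswith("*"):
            return 2                      # suffix wildcard, e.g. linux.*
        return 0                          # exact domain name, highest priority

    def matches(pattern, host):
        if pattern == "*":
            return True
        if pattern.startswith("*"):
            return host.endswith(pattern[1:])
        if pattern.endswith("*"):
            return host.startswith(pattern[:-1])
        return host == pattern

    best = None                           # (rank, vhost name) of best match
    for name, domains in virtual_hosts:
        for d in domains:
            if matches(d, host) and (best is None or rank(d) < best[0]):
                best = (rank(d), name)
    return best[1] if best else None

vhosts = [("vh_001", ["ilinux.io", "*.ilinux.io", "ilinux.*"]),
          ("vh_002", ["*"])]
print(select_virtual_host("www.ilinux.io", vhosts))  # vh_001 (prefix wildcard)
print(select_virtual_host("www.k8s.io", vhosts))     # vh_002 (catch-all)
```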
1.5 Route Configuration Basics
match
- Matches the URL by exactly one of prefix, path, safe_regex, or connect_matcher
- Can additionally match the request on headers and query_parameters
- A matched request can be handled by one of three routing mechanisms
  - redirect
  - direct_response
  - route
route
- Defines the traffic target as one of cluster, weighted_clusters, or cluster_header
- The URL can be rewritten during forwarding with prefix_rewrite and host_rewrite
- Additional traffic-management mechanisms can also be configured
  - Resilience: timeout, retry_policy
  - Testing: request_mirror_policy
  - Flow control: rate_limits
  - Access control: cors
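Routes inside a virtual host are evaluated in listed order, and the first match wins. The sketch below illustrates that first-match semantics in Python; it is not Envoy code, and `select_route` and the route dicts are invented for the example.

```python
# Illustrative first-match-wins route table (assumption: not Envoy code).
# Each route carries one match condition and one action
# (route / redirect / direct_response).
import re

routes = [
    {"match": {"path": "/service/blue"},            "action": ("route", "blue")},
    {"match": {"safe_regex": r"^/service/.*blue$"}, "action": ("redirect", "/service/blue")},
    {"match": {"prefix": "/service/yellow"},        "action": ("direct_response", 200)},
    {"match": {"prefix": "/"},                      "action": ("route", "red")},
]

def select_route(path):
    for r in routes:
        m = r["match"]
        if ("path" in m and path == m["path"]) \
           or ("safe_regex" in m and re.match(m["safe_regex"], path)) \
           or ("prefix" in m and path.startswith(m["prefix"])):
            return r["action"]           # first match wins
    return None

print(select_route("/service/blue"))      # ('route', 'blue')
print(select_route("/service/darkblue"))  # ('redirect', '/service/blue')
print(select_route("/service/cat"))       # ('route', 'red')
```

Note how the order matters: if the catch-all prefix "/" route came first, nothing below it could ever match.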
1.6 Routing by Headers
{
  "name": "...",
  "exact_match": "...",        # exact value match
  "safe_regex_match": "{...}", # regular-expression match
  "range_match": "{...}",      # range match: checks whether the header value falls within the specified range
  "present_match": "...",      # presence match: checks whether the header exists at all
  "prefix_match": "...",       # value prefix match
  "suffix_match": "...",       # value suffix match
  "contains_match": "...",     # checks whether the header value contains the specified string
  "string_match": "{...}"      # checks whether the header value matches the specified string
}
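The matcher types above can be made concrete with a small evaluator. This is an illustrative Python sketch, not Envoy's implementation; `header_matches` is an invented name, and the range semantics assume Envoy's half-open [start, end) integer range.

```python
# Illustrative evaluation of one header matcher against request headers
# (assumption: a simplified model, not Envoy code). Each branch mirrors a
# field of the header-matcher structure above.
import re

def header_matches(matcher, headers):
    value = headers.get(matcher["name"])
    if "present_match" in matcher:
        return (value is not None) == matcher["present_match"]
    if value is None:
        return False                      # all other matchers need a value
    if "exact_match" in matcher:
        return value == matcher["exact_match"]
    if "safe_regex_match" in matcher:
        return re.fullmatch(matcher["safe_regex_match"], value) is not None
    if "prefix_match" in matcher:
        return value.startswith(matcher["prefix_match"])
    if "suffix_match" in matcher:
        return value.endswith(matcher["suffix_match"])
    if "contains_match" in matcher:
        return matcher["contains_match"] in value
    if "range_match" in matcher:
        lo, hi = matcher["range_match"]   # half-open range on the int value
        return lo <= int(value) < hi
    return False

headers = {"X-Canary": "true", "X-Version": "12"}
print(header_matches({"name": "X-Canary", "exact_match": "true"}, headers))    # True
print(header_matches({"name": "X-Version", "range_match": (10, 20)}, headers)) # True
```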
2. A Simple Match-Condition Example
2.1 docker-compose
Eight services:
- envoy: the front proxy, at 172.31.50.10
- 7 backend services
  - light_blue and dark_blue: the blue cluster in Envoy
  - light_red and dark_red: the red cluster in Envoy
  - light_green and dark_green: the green cluster in Envoy
  - gray: the gray cluster in Envoy
version: '3'
services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.21.5
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.50.10
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9901"

  light_blue:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - light_blue
          - blue
    environment:
      - SERVICE_NAME=light_blue
    expose:
      - "80"

  dark_blue:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - dark_blue
          - blue
    environment:
      - SERVICE_NAME=dark_blue
    expose:
      - "80"

  light_green:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - light_green
          - green
    environment:
      - SERVICE_NAME=light_green
    expose:
      - "80"

  dark_green:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - dark_green
          - green
    environment:
      - SERVICE_NAME=dark_green
    expose:
      - "80"

  light_red:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - light_red
          - red
    environment:
      - SERVICE_NAME=light_red
    expose:
      - "80"

  dark_red:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - dark_red
          - red
    environment:
      - SERVICE_NAME=dark_red
    expose:
      - "80"

  gray:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - gray
          - grey
    environment:
      - SERVICE_NAME=gray
    expose:
      - "80"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.50.0/24
2.2 envoy.yaml
Two virtual hosts:
- vh_001
  - matches the three domain patterns ["ilinux.io", "*.ilinux.io", "ilinux.*"]
  - requests for /service/blue are forwarded to the blue cluster
  - requests whose path starts with /service/ and ends in blue are redirected to /service/blue, where the rule above then matches and responds
  - requests matching /service/yellow get a direct response: status 200 with the body "This page will be provided soon later.\n"
  - all other requests matching those three domain patterns go to the red cluster
- vh_002
  - matches every domain (i.e. whatever vh_001 did not match) and forwards to the gray cluster
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: vh_001
              domains: ["ilinux.io", "*.ilinux.io", "ilinux.*"]
              routes:
              - match:
                  path: "/service/blue"
                route:
                  cluster: blue
              - match:
                  safe_regex:
                    google_re2: {}
                    regex: "^/service/.*blue$"
                redirect:
                  path_redirect: "/service/blue"
              - match:
                  prefix: "/service/yellow"
                direct_response:
                  status: 200
                  body:
                    inline_string: "This page will be provided soon later.\n"
              - match:
                  prefix: "/"
                route:
                  cluster: red
            - name: vh_002
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: gray
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: blue
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: blue
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: blue
                port_value: 80
  - name: red
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: red
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: red
                port_value: 80
  - name: green
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: green
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: green
                port_value: 80
  - name: gray
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: gray
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: gray
                port_value: 80
2.3 Test Results
- Accessing http://172.31.50.10/service/blue returns responses from service dark_blue and service light_blue
- Accessing http://172.31.50.10/service/<anything>blue returns a 301 redirect to http://www.ilinux.io/service/blue
- Accessing http://172.31.50.10/service/yellow returns "This page will be provided soon later."
- Accessing http://172.31.50.10/service/<anything else> returns responses from service dark_red and service light_red
- Accessing with any other domain returns responses from service gray
# docker-compose up
## Test access to http://172.31.50.10/service/blue
# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/blue
Hello from App behind Envoy (service dark_blue)! hostname: cbc3d2fe1433 resolved hostname: 172.31.50.4
# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/blue
Hello from App behind Envoy (service light_blue)! hostname: 5d937ebfe60c resolved hostname: 172.31.50.3
## Test paths matching ^/service/.*blue$
root@k8s-node-1:~# curl -I -H "Host: www.ilinux.io" http://172.31.50.10/service/dadfkblue
HTTP/1.1 301 Moved Permanently
location: http://www.ilinux.io/service/blue
date: Sat, 01 Oct 2022 05:58:22 GMT
server: envoy
transfer-encoding: chunked
root@k8s-node-1:~# curl -I -H "Host: www.ilinux.io" http://172.31.50.10/service/dadfk/blue
HTTP/1.1 301 Moved Permanently
location: http://www.ilinux.io/service/blue
date: Sat, 01 Oct 2022 05:58:28 GMT
server: envoy
transfer-encoding: chunked
## Test access to http://172.31.50.10/service/yellow
root@k8s-node-1:~# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/yellow
This page will be provided soon later.
## Test the catch-all match
root@k8s-node-1:~# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/black
Hello from App behind Envoy (service dark_red)! hostname: fbdc522c237d resolved hostname: 172.31.50.2
root@k8s-node-1:~# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/cat
Hello from App behind Envoy (service light_red)! hostname: d6952b16260f resolved hostname: 172.31.50.7
root@k8s-node-1:~# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/duck
Hello from App behind Envoy (service dark_red)! hostname: fbdc522c237d resolved hostname: 172.31.50.2
## Test other domains
root@k8s-node-1:~# curl -H "Host: www.k8s.io" http://172.31.50.10/service/duck
Hello from App behind Envoy (service gray)! hostname: 9294dc3cf243 resolved hostname: 172.31.50.5
root@k8s-node-1:~# curl -H "Host: www.pana.io" http://172.31.50.10/service/duck
Hello from App behind Envoy (service gray)! hostname: 9294dc3cf243 resolved hostname: 172.31.50.5
3. A Header-Based Match Example
3.1 docker-compose
Six services:
- envoy: the front proxy, at 172.31.52.10
- 5 backend services
  - v1.0: two services, v1.0-1 and v1.0-2
  - v1.1: two services, v1.1-1 and v1.1-2
  - v1.2: one service, v1.2-1
version: '3'
services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.21.5
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.52.10
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9901"

  demoapp-v1.0-1:
    hostname: demoapp_v1_0_1
    #hostname: demoapp-v1.0-1
    image: ikubernetes/demoapp:v1.0
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.0-2:
    hostname: demoapp_v1_0_2
    #hostname: demoapp-v1.0-2
    image: ikubernetes/demoapp:v1.0
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.1-1:
    hostname: demoapp_v1_1_1
    #hostname: demoapp-v1.1-1
    image: ikubernetes/demoapp:v1.1
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

  demoapp-v1.1-2:
    hostname: demoapp_v1_1_2
    #hostname: demoapp-v1.1-2
    image: ikubernetes/demoapp:v1.1
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

  demoapp-v1_2-1:
    hostname: demoapp_v1_2_1
    #hostname: demoapp-v1_2-1
    image: ikubernetes/demoapp:v1.2
    networks:
      envoymesh:
        aliases:
          - demoappv12
    expose:
      - "80"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.52.0/24
3.2 envoy.yaml
- Requests carrying the specific header X-Canary: true are forwarded to demoappv12
- Requests whose username query parameter starts with vip_ are forwarded to demoappv11
- Requests matching neither condition are forwarded to demoappv10
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: vh_001
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                  headers:
                  - name: X-Canary
                    exact_match: "true"
                route:
                  cluster: demoappv12
              - match:
                  prefix: "/"
                  query_parameters:
                  - name: "username"
                    string_match:
                      prefix: "vip_"
                route:
                  cluster: demoappv11
              - match:
                  prefix: "/"
                route:
                  cluster: demoappv10
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: demoappv10
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv10
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv10
                port_value: 80
  - name: demoappv11
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv11
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv11
                port_value: 80
  - name: demoappv12
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv12
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv12
                port_value: 80
3.3 Test Results
3.3.1 Requests without any special attributes
root@k8s-node-1:~# curl 172.31.52.10/hostname
ServerName: demoapp_v1_0_2
root@k8s-node-1:~# curl 172.31.52.10/hostname
ServerName: demoapp_v1_0_1
root@k8s-node-1:~# curl 172.31.52.10/hostname
ServerName: demoapp_v1_0_2
root@k8s-node-1:~# curl 172.31.52.10/hostname
ServerName: demoapp_v1_0_1
3.3.2 Requests carrying the specific header X-Canary: true
root@k8s-node-1:~# curl -H "X-Canary: true" 172.31.52.10/hostname
ServerName: demoapp_v1_2_1
root@k8s-node-1:~# curl -H "X-Canary: true" 172.31.52.10/hostname
ServerName: demoapp_v1_2_1
root@k8s-node-1:~# curl -H "X-Canary: true" 172.31.52.10/hostname
ServerName: demoapp_v1_2_1
root@k8s-node-1:~# curl -H "X-Canary: true" 172.31.52.10/hostname
ServerName: demoapp_v1_2_1
3.3.3 Requests carrying the specific query parameter
root@k8s-node-1:~# curl 172.31.52.10/hostname?username=vip_qiu
ServerName: demoapp_v1_1_2
root@k8s-node-1:~# curl 172.31.52.10/hostname?username=vip_qiu
ServerName: demoapp_v1_1_1
root@k8s-node-1:~# curl 172.31.52.10/hostname?username=vip_pana
ServerName: demoapp_v1_1_2
root@k8s-node-1:~# curl 172.31.52.10/hostname?username=vip_pana
ServerName: demoapp_v1_1_1
root@k8s-node-1:~# curl 172.31.52.10/hostname?username=vip_pana
ServerName: demoapp_v1_1
3.3.4 Once more, without the /hostname path
root@k8s-node-1:~# curl -H "X-Canary: true" 172.31.52.10
iKubernetes demoapp v1.2 !! ClientIP: 172.31.52.10, ServerName: demoapp_v1_2_1, ServerIP: 172.31.52.6!
root@k8s-node-1:~# curl -H "X-Canary: true" 172.31.52.10
iKubernetes demoapp v1.2 !! ClientIP: 172.31.52.10, ServerName: demoapp_v1_2_1, ServerIP: 172.31.52.6!
root@k8s-node-1:~# curl 172.31.52.10
iKubernetes demoapp v1.0 !! ClientIP: 172.31.52.10, ServerName: demoapp_v1_0_1, ServerIP: 172.31.52.5!
root@k8s-node-1:~# curl 172.31.52.10
iKubernetes demoapp v1.0 !! ClientIP: 172.31.52.10, ServerName: demoapp_v1_0_1, ServerIP: 172.31.52.5!
root@k8s-node-1:~# curl 172.31.52.10
iKubernetes demoapp v1.0 !! ClientIP: 172.31.52.10, ServerName: demoapp_v1_0_2, ServerIP: 172.31.52.3!
root@k8s-node-1:~# curl 172.31.52.10
iKubernetes demoapp v1.0 !! ClientIP: 172.31.52.10, ServerName: demoapp_v1_0_2, ServerIP: 172.31.52.3!
root@k8s-node-1:~# curl 172.31.52.10
iKubernetes demoapp v1.0 !! ClientIP: 172.31.52.10, ServerName: demoapp_v1_0_1, ServerIP: 172.31.52.5!
root@k8s-node-1:~# curl 172.31.52.10
iKubernetes demoapp v1.0 !! ClientIP: 172.31.52.10, ServerName: demoapp_v1_0_2, ServerIP: 172.31.52.3!
root@k8s-node-1:~# curl 172.31.52.10/?username=vip_qiu
iKubernetes demoapp v1.1 !! ClientIP: 172.31.52.10, ServerName: demoapp_v1_1_2, ServerIP: 172.31.52.2!
root@k8s-node-1:~# curl 172.31.52.10/?username=vip_qiu
iKubernetes demoapp v1.1 !! ClientIP: 172.31.52.10, ServerName: demoapp_v1_1_1, ServerIP: 172.31.52.4!
root@k8s-node-1:~# curl 172.31.52.10/?username=vip_qiu
iKubernetes demoapp v1.1 !! ClientIP: 172.31.52.10, ServerName: demoapp_v1_1_2, ServerIP: 172.31.52.2!
root@k8s-node-1:~# curl 172.31.52.10/?username=vip_qiu
iKubernetes demoapp v1.1 !! ClientIP: 172.31.52.10, ServerName: demoapp_v1_1_2, ServerIP: 172.31.52.2!