Official website: https://istio.io/v1.11/zh/docs/concepts/what-is-istio/
Istio architecture
Istio is an implementation of the Service Mesh architecture. Communication between services (for example, Service A calling Service B) goes through a proxy (Envoy by default).
The protocol between proxies can be HTTP/1.1, HTTP/2, gRPC, or TCP, which covers the mainstream communication protocols. This layer of proxies is called the data plane.
The control plane is subdivided into Pilot, Citadel, and Galley. Their respective functions are as follows:
- Pilot: provides Envoy with service discovery, traffic management, and intelligent routing (A/B testing, canary releases, etc.), as well as error handling (timeouts, retries, circuit breaking).
- Citadel: provides service-to-service authentication and certificate management, and can automatically upgrade service traffic to TLS.
- Galley: Istio's configuration validation, ingestion, processing, and distribution component. It insulates the rest of the Istio components from the details of obtaining user configuration from the underlying platform (for example, Kubernetes).
The data plane communicates with the control plane: on the one hand it obtains the information services need about each other, and on the other it reports metrics about service calls.
Why use Istio?
Through load balancing, inter-service authentication, monitoring, and more, Istio makes it easy to create a network of deployed services with little or no change to their code. You add Istio support to services by deploying a special sidecar proxy throughout the environment. The proxy intercepts all network communication between microservices; you then configure and manage Istio using its control-plane functionality. This includes:
- Automatic load balancing for HTTP, gRPC, WebSocket and TCP traffic.
- Fine-grained control of traffic behavior with rich routing rules, retries, failover, and fault injection.
- Pluggable policy layer and configuration API supporting access control, rate limiting and quotas.
- Automated measurement, logging, and tracing of all traffic within the cluster, including ingress and egress to the cluster.
- Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.
Istio is designed for scalability to meet different deployment needs.
Traffic management
Istio's simple rule configuration and traffic routing let you control the flow of traffic and API calls between services. Istio simplifies the configuration of service-level properties (such as circuit breakers, timeouts, and retries) and makes it easy to set up important tasks such as A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits.
With better visibility into your traffic and out-of-the-box failure recovery features, you can catch problems before they cause issues, making calls more reliable and your network more robust no matter the conditions.
Istio version support status
Version | Currently supported | Release date | End of support | Supported Kubernetes versions | Not tested, possibly supported Kubernetes versions |
---|---|---|---|---|---|
master | No, development only | - | - | - | - |
1.15 | Yes | August 31, 2022 | ~ March 2023 (expected) | 1.22, 1.23, 1.24, 1.25 | 1.16, 1.17, 1.18, 1.19, 1.20, 1.21 |
1.14 | Yes | May 24, 2022 | ~ January 2023 (expected) | 1.21, 1.22, 1.23, 1.24 | 1.16, 1.17, 1.18, 1.19, 1.20 |
1.13 | Yes | February 11, 2022 | ~ October 2022 (expected) | 1.20, 1.21, 1.22, 1.23 | 1.16, 1.17, 1.18, 1.19 |
1.12 | Yes | November 18, 2021 | July 12, 2022 | 1.19, 1.20, 1.21, 1.22 | 1.16, 1.17, 1.18 |
1.11 | No | August 12, 2021 | March 25, 2022 | 1.18, 1.19, 1.20, 1.21, 1.22 | 1.16, 1.17 |
1.10 | No | May 18, 2021 | January 7, 2022 | 1.18, 1.19, 1.20, 1.21 | 1.16, 1.17, 1.22 |
1.9 | No | February 9, 2021 | October 8, 2021 | 1.17, 1.18, 1.19, 1.20 | 1.15, 1.16 |
1.8 | No | November 10, 2020 | May 12, 2021 | 1.16, 1.17, 1.18, 1.19 | 1.15 |
1.7 | No | August 21, 2020 | February 25, 2021 | 1.16, 1.17, 1.18 | 1.15 |
1.6 and earlier | No | - | - | - | - |
1. Download Istio
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.11.7 TARGET_ARCH=x86_64 sh -
# Move to the Istio package directory. For example, if the package is istio-1.11.7:
cd istio-1.11.7
# Add the istioctl client to your PATH
export PATH=$PWD/bin:$PATH
2. Install Istio
istioctl install --set profile=demo -y
Add a label to the namespace to instruct Istio to automatically inject the Envoy sidecar proxy when deploying the application:
kubectl label namespace default istio-injection=enabled
# Uninstall Istio
istioctl manifest generate --set profile=demo | kubectl delete -f -
3. Deploy the sample application
This example deploys an application composed of four separate microservices that demonstrates various Istio features. The application mimics a catalog entry in an online bookstore, displaying information about a single book: a description, book details (ISBN, number of pages, etc.), and a few book reviews.
The Bookinfo application is divided into four separate microservices:
productpage. This microservice will call the details and reviews microservices to generate pages.
details. This microservice contains book information.
reviews. This microservice contains book-related reviews. It also calls the ratings microservice.
ratings. This microservice contains book ranking information that accompanies book reviews.
There are 3 versions of the reviews microservice:
The v1 version does not call the ratings service.
The v2 version will call the ratings service and use 1 to 5 black star icons to display rating information.
The v3 version will call the ratings service and use 1 to 5 red star icons to display rating information.
The figure below shows the end-to-end architecture of this application.
Deploy the Bookinfo sample application :
bookinfo.yaml
##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: details
labels:
app: details
service: details
spec:
ports:
- port: 9080
name: http
selector:
app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: bookinfo-details
labels:
account: details
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: details-v1
labels:
app: details
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: details
version: v1
template:
metadata:
labels:
app: details
version: v1
spec:
serviceAccountName: bookinfo-details
containers:
- name: details
image: docker.io/istio/examples-bookinfo-details-v1:1.16.2
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
securityContext:
runAsUser: 1000
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: ratings
labels:
app: ratings
service: ratings
spec:
ports:
- port: 9080
name: http
selector:
app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: bookinfo-ratings
labels:
account: ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ratings-v1
labels:
app: ratings
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: ratings
version: v1
template:
metadata:
labels:
app: ratings
version: v1
spec:
serviceAccountName: bookinfo-ratings
containers:
- name: ratings
image: docker.io/istio/examples-bookinfo-ratings-v1:1.16.2
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
securityContext:
runAsUser: 1000
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: reviews
labels:
app: reviews
service: reviews
spec:
ports:
- port: 9080
name: http
selector:
app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: bookinfo-reviews
labels:
account: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: reviews-v1
labels:
app: reviews
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: reviews
version: v1
template:
metadata:
labels:
app: reviews
version: v1
spec:
serviceAccountName: bookinfo-reviews
containers:
- name: reviews
image: docker.io/istio/examples-bookinfo-reviews-v1:1.16.2
imagePullPolicy: IfNotPresent
env:
- name: LOG_DIR
value: "/tmp/logs"
ports:
- containerPort: 9080
volumeMounts:
- name: tmp
mountPath: /tmp
- name: wlp-output
mountPath: /opt/ibm/wlp/output
securityContext:
runAsUser: 1000
volumes:
- name: wlp-output
        emptyDir: {}
- name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: reviews-v2
labels:
app: reviews
version: v2
spec:
replicas: 1
selector:
matchLabels:
app: reviews
version: v2
template:
metadata:
labels:
app: reviews
version: v2
spec:
serviceAccountName: bookinfo-reviews
containers:
- name: reviews
image: docker.io/istio/examples-bookinfo-reviews-v2:1.16.2
imagePullPolicy: IfNotPresent
env:
- name: LOG_DIR
value: "/tmp/logs"
ports:
- containerPort: 9080
volumeMounts:
- name: tmp
mountPath: /tmp
- name: wlp-output
mountPath: /opt/ibm/wlp/output
securityContext:
runAsUser: 1000
volumes:
- name: wlp-output
        emptyDir: {}
- name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: reviews-v3
labels:
app: reviews
version: v3
spec:
replicas: 1
selector:
matchLabels:
app: reviews
version: v3
template:
metadata:
labels:
app: reviews
version: v3
spec:
serviceAccountName: bookinfo-reviews
containers:
- name: reviews
image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
imagePullPolicy: IfNotPresent
env:
- name: LOG_DIR
value: "/tmp/logs"
ports:
- containerPort: 9080
volumeMounts:
- name: tmp
mountPath: /tmp
- name: wlp-output
mountPath: /opt/ibm/wlp/output
securityContext:
runAsUser: 1000
volumes:
- name: wlp-output
        emptyDir: {}
- name: tmp
        emptyDir: {}
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: productpage
labels:
app: productpage
service: productpage
spec:
ports:
- port: 9080
name: http
selector:
app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: bookinfo-productpage
labels:
account: productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: productpage-v1
labels:
app: productpage
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: productpage
version: v1
template:
metadata:
labels:
app: productpage
version: v1
spec:
serviceAccountName: bookinfo-productpage
containers:
- name: productpage
image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
volumeMounts:
- name: tmp
mountPath: /tmp
securityContext:
runAsUser: 1000
volumes:
- name: tmp
        emptyDir: {}
---
Deploy bookinfo
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl get po
# Test the page
kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
4. Expose the application to the outside world
At this point, the Bookinfo application is deployed but cannot yet be accessed from outside the cluster. To make it accessible, you need to create an Istio Ingress Gateway, which maps a path to a route at the edge of the mesh.
- Associate the application to the Istio gateway:
If the cluster is running in an environment that does not support external load balancers (for example, minikube), the EXTERNAL-IP of istio-ingressgateway will show <pending>. In that case, use the service's NodePort or port forwarding to access the gateway.
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
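For reference, the file applied above defines a Gateway and a VirtualService along the following lines (abridged sketch; the actual sample in your Istio release also matches additional paths such as /login and /api/v1/products, so check samples/bookinfo/networking/bookinfo-gateway.yaml):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway  # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway  # bind this routing rule to the gateway above
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    route:
    - destination:
        host: productpage
        port:
          number: 9080
```

The Gateway opens port 80 at the mesh edge, and the VirtualService routes matching paths to the productpage service inside the mesh.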
kubectl get svc -n istio-system
# Change the service type to NodePort
kubectl -n istio-system edit svc istio-ingressgateway
kubectl get svc -n istio-system
Make sure there are no issues with the configuration file:
istioctl analyze
Determine inbound IP and port
If your environment does not have an external load balancer, use a node port instead.
Set inbound port
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
# ceph1 is the host IP of the node
export INGRESS_HOST=ceph1
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo "$GATEWAY_URL"
echo "http://$GATEWAY_URL/productpage"
Verify external access
Use a browser to view the product page of the Bookinfo application to verify that Bookinfo has enabled external access.
- Run the following command to obtain the external access address of the Bookinfo application.
kubectl get svc istio-ingressgateway -n istio-system
curl ceph1:31479/productpage
for i in `seq 1 100`; do curl -s -o /dev/null http://ceph1:31479/productpage; done
Copy and paste the output address of the above command into the browser and access it to confirm whether the product page of the Bookinfo application can be opened.
5. View the dashboard
Istio integrates with several telemetry applications. Telemetry can help you understand the structure of the service mesh, display the network topology, and analyze the health of the mesh.
Use the instructions below to deploy a Kiali dashboard, as well as Prometheus , Grafana , and Jaeger .
1. Install Kiali and the other addons, and wait for the deployment to complete.
kubectl apply -f samples/addons
kubectl rollout status deployment/kiali -n istio-system
2. Change the Kiali service to NodePort for external access
kubectl -n istio-system get svc
kubectl -n istio-system edit svc kiali
#type: NodePort
kubectl -n istio-system get svc |grep kiali
# Access the Kiali dashboard
http://ceph1:32514
In the left navigation menu, select Graph , and then in the Namespace drop-down list, select default .
To view tracking data, you must send a request to the service. The number of requests depends on Istio's sampling rate. The sampling rate is set when installing Istio, and the default sampling rate is 1%. You need to send at least 100 requests before the first trace is visible. Use the following command to send 100 requests to the productpage service:
Test URL:
for i in `seq 1 100`; do curl -s -o /dev/null http://ceph1:31479/productpage; done
Kiali settings
6. Test Istio
1. Apply default destination rules
Create a DestinationRule for each service. Before using Istio to control Bookinfo version routing, you need to define the available versions in destination rules and name them as subsets.
# Apply the rules
kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
# Query the rules
kubectl get destinationrules -o yaml
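For example, the rule for reviews in destination-rule-all.yaml defines three subsets keyed on the version pod label, roughly as follows (excerpt; see the sample file in your Istio release for the rules for the other services):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1  # matches pods labeled version=v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```

Virtual services can then refer to these subset names to pick a specific version of the service.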
At this point, Istio has taken over all service-to-service traffic, and the first example deployment is complete.
2. Request routing
Route by version
There are currently three versions of reviews. Visit /productpage of the Bookinfo application in your browser and refresh it several times. You will notice that sometimes the book review output includes star ratings and sometimes it does not. This is because there is no explicit default route to a service version.
What we need to do now is let Istio take over the routing, for example routing all traffic to the v1 version of each microservice. This is simple to do in Istio: just add a virtual service (VirtualService).
3. Example: Route all traffic to the v1 version of each microservice
# virtual-service-all-v1.yaml is a sample file provided with the official distribution
kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
Its content is as follows:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: productpage
spec:
hosts:
- productpage
http:
- route:
- destination:
host: productpage
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
        subset: v1  # all HTTP requests are routed to v1, which is defined in the default destination rules
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
spec:
hosts:
- ratings
http:
- route:
- destination:
host: ratings
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: details
spec:
hosts:
- details
http:
- route:
- destination:
host: details
subset: v1
---
After testing, it was found that reviews no longer switch styles.
4. Routing based on user identity
Next, you will change the route configuration so that all traffic from a specific user is routed to a specific service version. Here, all traffic from a user named Jason will be routed to the service reviews:v2.
Note that Istio has no special built-in mechanism for user identity. In this example, the productpage service adds a custom end-user header to all HTTP requests it makes to the reviews service, which is what makes the example work.
Remember, reviews:v2 is the version that includes star rating functionality.
1. Run the following command to enable user-based routing:
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
2. Confirm that the rule has been created
kubectl get virtualservice reviews -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
...
spec:
hosts:
- reviews
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: reviews
subset: v2
- route:
- destination:
host: reviews
subset: v1
3. On the /productpage of the Bookinfo application, log in as user jason.
Refresh the browser. What do you see? A star rating appears next to each review.
4. Log in as a different user (choose any name you want).
Refresh the browser. Now the stars are gone. This is because traffic for all users except Jason is routed to reviews:v1.
You have successfully configured Istio to route traffic based on user identity.
5. Traffic shifting and canary releases
You can also shift part of the reviews traffic to v3, which is the basis for canary releases, A/B testing, and similar techniques:
# Route all traffic to the v1 version of each service
kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
# Shift 50% of the reviews traffic to v3
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
The content is as follows:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
weight: 50
- destination:
host: reviews
subset: v3
weight: 50
Refresh the /productpage page in your browser; about 50% of the time you will see review content with red stars. This is because reviews:v3 calls the ratings service, while v1 does not.
If you think the reviews:v3 microservice is stable, you can route 100% of the traffic to reviews:v3 by applying this virtual service rule:
# Switch all reviews traffic to the v3 version
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-v3.yaml
In this way, all requests are forwarded to v3.
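The rule in virtual-service-reviews-v3.yaml has the same shape as the 50/50 rule above, just with a single destination (sketch; verify against the file shipped with your release):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3  # 100% of the traffic now goes to v3
```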
If you need to delete the virtual services for all of the services, run:
kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml
The routing rules for all of the services are configured in the virtual-service-all-v1.yaml file; deleting it removes all of the routing rules.
6. Timeouts
A timeout for HTTP requests can be specified using the timeout field of a routing rule. By default, timeouts are disabled.
Here we will experiment with request timeouts for the reviews service. Requests are routed to the v2 version of reviews, which in turn calls the ratings service. First we artificially introduce a 2s delay into the ratings service (fault injection), and then configure a timeout for the reviews service.
1. Create a configuration file in the /samples/bookinfo/networking directory
# Create the configuration file
cat > samples/bookinfo/networking/virtual-service-reviews-v2-timeout.yaml <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
spec:
hosts:
- ratings
http:
- fault:
delay:
percent: 100
fixedDelay: 2s
route:
- destination:
host: ratings
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v2
#timeout: 0.5s
EOF
This injects a 2s delay into the ratings service.
2. Apply the routing configuration by executing the following in the package directory:
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-v2-timeout.yaml
3. Visit the website. The Bookinfo application runs normally (the rating stars are displayed), but each page refresh now takes about 2 extra seconds.
4. Re-edit the file to add a half-second request timeout on calls to the reviews service (remove the timeout comment).
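After uncommenting, the reviews part of the file should look like this:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s  # fail calls to reviews that take longer than half a second
```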
5. Reapply the configuration (the command from step 2), then check whether it was updated with:
kubectl get virtualservice -o yaml
6. Refresh the web page again.
At this time, you should see the page return in about 1 second instead of the previous 2, but the reviews are unavailable (the page shows no review data).
Even though the timeout is configured as half a second, the response still takes 1 second. This is because the productpage service contains a hard-coded retry: it calls the reviews service twice, and both calls time out before it returns.
7. Retries
A retry policy describes the strategy to use when an HTTP request fails. For example, the following rule sets a maximum of 3 retries when calling the ratings:v1 service, with a 2-second timeout per retry attempt.
# Create the configuration file
cat > samples/bookinfo/networking/ratings-route-v1-request-timeout.yaml <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings-route
spec:
hosts:
- ratings
http:
- route:
- destination:
host: ratings
subset: v1
retries:
attempts: 3
perTryTimeout: 2s
retryOn: gateway-error,connect-failure,refused-stream
EOF
kubectl apply -f samples/bookinfo/networking/ratings-route-v1-request-timeout.yaml
8. Circuit breaking
Circuit breakers are a useful mechanism Istio provides for building resilient microservice applications. With a circuit breaker, you set limits on calls to an individual host in a service, such as the maximum number of concurrent connections or the number of failed calls to that host. Once a limit is reached, the circuit breaker "trips" and stops further connections to that host.
The circuit breaker pattern lets clients fail fast instead of trying to connect to an overloaded or failing host.
Deploy httpbin
httpbin is an open source project written in Python+Flask, which can be used to test various HTTP requests and responses. Official website: http://httpbin.org/
kubectl apply -f samples/httpbin/httpbin.yaml
The contents of this configuration file are:
##################################################################################################
# httpbin service
##################################################################################################
apiVersion: v1
kind: ServiceAccount
metadata:
name: httpbin
---
apiVersion: v1
kind: Service
metadata:
name: httpbin
labels:
app: httpbin
spec:
ports:
- name: http
port: 8000
targetPort: 80
selector:
app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: httpbin
spec:
replicas: 1
selector:
matchLabels:
app: httpbin
version: v1
template:
metadata:
labels:
app: httpbin
version: v1
spec:
serviceAccountName: httpbin
containers:
- image: docker.io/kennethreitz/httpbin
imagePullPolicy: IfNotPresent
name: httpbin
ports:
- containerPort: 80
8.1 Configure the circuit breaker.
Create a destination rule (DestinationRule) that applies circuit breaker settings when calling the httpbin service:
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1  # maximum number of connections
      http:
        http1MaxPendingRequests: 1  # maximum number of HTTP requests allowed to be pending
        maxRequestsPerConnection: 1  # maximum number of requests per connection to the backend
    outlierDetection:  # circuit breaking settings
      consecutiveErrors: 1  # consecutive failures before a host is ejected; over HTTP, response codes 502, 503 and 504 count as errors
      interval: 1s  # how often the ejection analysis runs; one consecutiveErrors failure within this interval trips the breaker (format 1h/1m/1s/1ms, must be >= 1ms; default 10s)
      baseEjectionTime: 3m  # minimum time an ejected host stays out of the load-balancing pool (format 1h/1m/1s/1ms, must be >= 1ms; default 30s)
      maxEjectionPercent: 100  # maximum percentage of hosts in the load-balancing pool that can be ejected (default 10%)
EOF
Verify that the target rule was created correctly:
kubectl get destinationrule httpbin -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: httpbin
...
spec:
host: httpbin
trafficPolicy:
connectionPool:
http:
http1MaxPendingRequests: 1
maxRequestsPerConnection: 1
tcp:
maxConnections: 1
outlierDetection:
baseEjectionTime: 180.000s
consecutiveErrors: 1
interval: 1.000s
maxEjectionPercent: 100
Client
Create a client to send traffic to the httpbin service. The client is a load-testing tool called Fortio, which lets you control the number of connections, the concurrency, and the delays of outgoing HTTP requests. Fortio can effectively trigger the circuit breaker policies set in the DestinationRule above.
kubectl apply -f samples/httpbin/sample-client/fortio-deploy.yaml
kubectl get po
Wait a moment for the client to deploy successfully.
Log in to the client Pod and use the Fortio tool to call the httpbin service. The -curl parameter means send a single call:
$ FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
$ kubectl exec -it $FORTIO_POD -c fortio -- /usr/bin/fortio load -curl http://httpbin:8000/get
HTTP/1.1 200 OK
server: envoy
date: Tue, 16 Jan 2018 23:47:00 GMT
content-type: application/json
access-control-allow-origin: *
access-control-allow-credentials: true
content-length: 445
x-envoy-upstream-service-time: 36
{
"args": {
},
"headers": {
"Content-Length": "0",
"Host": "httpbin:8000",
"User-Agent": "istio/fortio-0.6.2",
"X-B3-Sampled": "1",
"X-B3-Spanid": "824fbd828d809bf4",
"X-B3-Traceid": "824fbd828d809bf4",
"X-Ot-Span-Context": "824fbd828d809bf4;824fbd828d809bf4;0000000000000000",
"X-Request-Id": "1ad2de20-806e-9622-949a-bd1d9735a3f4"
},
"origin": "127.0.0.1",
"url": "http://httpbin:8000/get"
}
You can see that the call to the backend service succeeded! Next, you can test the circuit breaker.
Triggering the circuit breaker
The DestinationRule configuration defines maxConnections: 1 and http1MaxPendingRequests: 1. These rules mean that if the number of concurrent connections and requests exceeds one, subsequent requests and connections will fail as istio-proxy opens the circuit.
Send 20 requests (-n 20) with a concurrency of 2 (-c 2):
[root@node1 istio-1.6.5]# kubectl exec -it $FORTIO_POD -c fortio -- /usr/bin/fortio load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
...
Code 200 : 14 (70.0 %)
Code 503 : 6 (30.0 %)
...
Increase the number of concurrent connections to 3:
[root@node1 istio-1.6.5]# kubectl exec -it $FORTIO_POD -c fortio -- /usr/bin/fortio load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
...
Code 200 : 11 (36.7 %)
Code 503 : 19 (63.3 %)
...
Query the istio-proxy status to learn more circuit breaker details:
[root@ceph1 istio-1.11.7]# kubectl exec $FORTIO_POD -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep pending
cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.default.remaining_pending: 1
cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.default.rq_pending_open: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.high.rq_pending_open: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_active: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_overflow: 89
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_total: 114
You can see that upstream_rq_pending_overflow is 89, meaning 89 calls so far have been rejected by the circuit breaker.
9. Clean up httpbin
# Remove the rules:
kubectl delete destinationrule httpbin
# Take the httpbin service and client offline:
kubectl delete deploy httpbin fortio-deploy
kubectl delete svc httpbin