Table of contents
1. Example introduction
  1. Create a bookinfo instance
  2. Instance structure
  3. Access
2. istio microservice traffic takeover
  1. Ingress-gateway access productpage
  2. Weight routing
  3. Access path routing
  4. Path rewriting
  5. Matching priority
  6. DestinationRule forwarding strategy
  7. Configure the routing policy according to the header
  8. Traffic mirroring
  9. Circuit breaking
  10. Fault injection and timeout mechanism
  11. Display of Prometheus monitoring indicators
    1. Install integrated components
    2. Visual interface access
1. Example introduction
1. Create a bookinfo instance:
$ kubectl create namespace bookinfo
$ kubectl -n bookinfo create -f samples/bookinfo/platform/kube/bookinfo.yaml
$ kubectl -n bookinfo get po
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-5974b67c8-wclnd        1/1     Running   0          34s
productpage-v1-64794f5db4-jsdbg   1/1     Running   0          33s
ratings-v1-c6cdf8d98-jrfrn        1/1     Running   0          33s
reviews-v1-7f6558b974-kq6kj       1/1     Running   0          33s
reviews-v2-6cb6ccd848-qdg2k       1/1     Running   0          34s
reviews-v3-cc56b578-kppcx         1/1     Running   0          34s
2. Instance structure
The application consists of four separate microservices. This application mimics a category in an online bookstore, displaying information about a book. The page will show a description of a book, details of the book (ISBN, page count, etc.), and some comments about the book.
The Bookinfo application is divided into four separate microservices:

- productpage: calls the details and reviews microservices to generate the page.
- details: contains book information.
- reviews: contains book reviews; it also calls the ratings microservice.
- ratings: contains the rating information that accompanies book reviews.

The reviews microservice has 3 versions:

- v1 does not call the ratings service.
- v2 calls the ratings service and displays each rating as 1 to 5 black star icons.
- v3 calls the ratings service and displays each rating as 1 to 5 red star icons.

Bookinfo is a heterogeneous application: its microservices are written in different languages. These services have no dependencies on Istio, but together they constitute a representative service mesh example: multiple services, multiple languages, and multiple versions of the reviews service.
3. Access
Use ingress to access the productpage service:
ingress-productpage.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: productpage
  namespace: bookinfo
spec:
  rules:
  - host: productpage.bookinfo.com
    http:
      paths:
      - backend:
          serviceName: productpage
          servicePort: 9080
        path: /
status:
  loadBalancer: {}
2. istio microservice traffic takeover
1. Ingress-gateway access productpage
How do we achieve finer-grained traffic control? By injecting the sidecar container.

There are two ways to inject the sidecar container:

1. Use istioctl kube-inject:

$ kubectl -n bookinfo apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)

2. Label the namespace:

# Label the namespace so that services deployed in this namespace are automatically injected with the sidecar container
$ kubectl label namespace default istio-injection=enabled

Inject bookinfo:

$ kubectl -n bookinfo apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)

Traffic routing: this enables the proportional distribution of traffic, which a plain ingress cannot do.
ingress-gateway access productpage
productpage-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: productpage-gateway
  namespace: bookinfo
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - productpage.bookinfo.com
productpage-virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gateway-front-tomcat
  namespace: bookinfo
spec:
  gateways:
  - productpage-gateway
  hosts:
  - productpage.bookinfo.com
  http:
  - route:
    - destination:
        host: productpage
        port:
          number: 9080
Configure nginx and access it using port 80 of the domain name.
upstream bookinfo-productpage {
    server 192.168.0.121:32437;
}
server {
    listen 80;
    listen [::]:80;
    server_name productpage.bookinfo.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_pass http://bookinfo-productpage;
    }
}
$ nginx -s reload
Now, after adding the nginx domain-name resolution to the local Windows hosts file, the page can be accessed normally, indicating that the traffic has successfully entered the service mesh.
2. Weight routing
Requirement 1: only reviews-v3 should be visited; all traffic is scheduled to v3 (the red-star version), and nothing is scheduled to the other versions.
# Set a routing rule: all traffic for the reviews service is forwarded to the v3 backend
$ cat virtual-service-reviews-v3.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  namespace: bookinfo
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3
# Define the traffic destinations; subset v1 corresponds to the pods of the reviews service with label version=v1
$ cat destination-rule-reviews.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
  namespace: bookinfo
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
[root@k8s-master bookinfo]# kubectl get po -n bookinfo --show-labels | grep review
reviews-v1-d49966d6b-vn5tx 2/2 Running 2 15h app=reviews,istio.io/rev=,pod-template-hash=d49966d6b,security.istio.io/tlsMode=istio,version=v1
reviews-v2-644f5c9ddb-pjmnh 2/2 Running 2 15h app=reviews,istio.io/rev=,pod-template-hash=644f5c9ddb,security.istio.io/tlsMode=istio,version=v2
reviews-v3-d56b49bc8-brp5k 2/2 Running 2 15h app=reviews,istio.io/rev=,pod-template-hash=d56b49bc8,security.istio.io/tlsMode=istio,version=v3
$ kubectl apply -f virtual-service-reviews-v3.yaml
# Visit the productpage to test: the page now stays on the v3 version (red stars) and no longer alternates, so the requirement is met.
Requirement 2: Realize the following traffic distribution:
90% -> reviews-v1
10% -> reviews-v2
0%  -> reviews-v3
# In the same way: the v1, v2 and v3 destinations were already defined above, so we only need to add routing rules for v1 and v2
$ cat virtual-service-reviews-90-10.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  namespace: bookinfo
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
$ kubectl apply -f virtual-service-reviews-90-10.yaml
Now, when you access the page again, v3 no longer appears.
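Conceptually, the weight fields above behave like a weighted random choice over the subsets. A minimal Python sketch of the selection logic (a hypothetical helper for illustration only, not Istio/Envoy code):

```python
import random

def pick_subset(routes, rng=random.random):
    """Pick a destination subset according to VirtualService-style weights.

    routes: list of (subset_name, weight) pairs; weights should sum to 100.
    """
    total = sum(w for _, w in routes)
    point = rng() * total  # a point on the [0, total) interval
    for subset, weight in routes:
        if point < weight:
            return subset
        point -= weight
    return routes[-1][0]  # guard against floating-point edge cases

routes = [("v1", 90), ("v2", 10)]
counts = {"v1": 0, "v2": 0}
for _ in range(10_000):
    counts[pick_subset(routes)] += 1
# counts["v1"] is roughly 9000 and counts["v2"] roughly 1000; v3 never appears
```

Refreshing the productpage repeatedly is effectively sampling this distribution, which is why roughly one page in ten shows the v2 black stars.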
3. Access path routing
Requirement: route to different services according to the request path.
# Allow external traffic for bookinfo.com to enter the mesh by editing the existing gateway
$ kubectl -n bookinfo edit gw productpage-gateway
...
  - hosts:
    - productpage.bookinfo.com
    - bookinfo.com
...
# Add the routing configuration: traffic for bookinfo.com entering through the gateway productpage-gateway is matched against the rules below. This only reaches the service-discovery level; to configure routing policies for specific service versions you still need to add subsets.
$ cat bookinfo-routing-with-uri-path.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  gateways:
  - productpage-gateway
  hosts:
  - bookinfo.com
  http:
  - name: productpage-route
    match:
    - uri:
        prefix: /productpage
    route:
    - destination:
        host: productpage
  - name: reviews-route
    match:
    - uri:
        prefix: /reviews
    route:
    - destination:
        host: reviews
  - name: ratings-route
    match:
    - uri:
        prefix: /ratings
    route:
    - destination:
        host: ratings
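The rules in this VirtualService are evaluated top to bottom, and the first prefix match wins; a request that matches no rule gets a 404. A minimal Python sketch of that matching behavior (illustration only, not Envoy's implementation):

```python
# Each rule: (rule name, URI prefix, destination host), in declaration order.
ROUTES = [
    ("productpage-route", "/productpage", "productpage"),
    ("reviews-route", "/reviews", "reviews"),
    ("ratings-route", "/ratings", "ratings"),
]

def route(path):
    """Return the destination host for the first matching prefix rule."""
    for name, prefix, host in ROUTES:
        if path.startswith(prefix):
            return host
    return None  # no rule matched: the proxy would answer 404

assert route("/productpage") == "productpage"
assert route("/ratings/1") == "ratings"
assert route("/login") is None  # motivates the default rule added later
```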
New bookinfo.com configuration in nginx:
upstream bookinfo {
    server 192.168.0.121:32437;
}
server {
    listen 80;
    listen [::]:80;
    server_name bookinfo.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_pass http://bookinfo;
    }
}
Add bookinfo.com domain name resolution to local hosts
access:
http://bookinfo.com/productpage
http://bookinfo.com/ratings/1
The actual access corresponds to:
bookinfo.com/productpage -> productpage:9080/productpage
bookinfo.com/ratings     -> ratings:9080/ratings
bookinfo.com/reviews     -> reviews:9080/reviews
To actually render the productpage, static resources such as css and js must be referenced as well, so forwarding for the /static path needs to be added:

...
  http:
  - name: productpage-route
    match:
    - uri:
        prefix: /productpage
    - uri:
        prefix: /static
    route:
    - destination:
        host: productpage
...

Why does the forwarding take effect even though no service port is specified in the virtualservice configuration? Note that if the service exposes only one port, traffic is automatically forwarded to that port; the port number does not need to be specified explicitly.
4. Path rewriting
If you want to rewrite paths and hide the real URL path name, it is straightforward:

bookinfo.com/haha -> ratings:9080/ratings

...
  - name: ratings-route
    match:
    - uri:
        prefix: /haha
    rewrite:
      uri: "/ratings"
    route:
    - destination:
        host: ratings
...
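The rewrite simply replaces the matched prefix with the configured uri before forwarding. A tiny Python sketch of that substitution (hypothetical helper, for illustration):

```python
def rewrite(path, match_prefix, rewrite_uri):
    """Replace the matched prefix with the rewrite uri, keeping the remainder.

    Models a VirtualService prefix match + rewrite: e.g. the external
    /haha prefix is swapped for the internal /ratings path.
    """
    if not path.startswith(match_prefix):
        return path  # rule does not apply; path passes through unchanged
    return rewrite_uri + path[len(match_prefix):]

assert rewrite("/haha/1", "/haha", "/ratings") == "/ratings/1"
assert rewrite("/other", "/haha", "/ratings") == "/other"
```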
5. Matching priority
Requirement: we can already reach different backends according to the URL, but we cannot enumerate every possible path suffix. For example, clicking login hits /login, for which no match rule was added, and more such paths may appear later. We can therefore append a rule at the end of the rule list to act as the default forwarding rule.

$ kubectl -n bookinfo edit vs bookinfo
...
  - name: default-route
    route:
    - destination:
        host: productpage
6. DestinationRule forwarding strategy
The round robin strategy is used by default. The following load-balancing models are also supported and can be used in a DestinationRule to distribute requests to a specific service or service subset:

- Random: forward requests to a random instance
- Weighted: forward requests to instances according to specified percentages
- Least requests: forward requests to the instance with the fewest active requests
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-destination-rule
spec:
  host: my-svc
  trafficPolicy: # the default load-balancing policy model is random
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1 # subset1: forward traffic to the service backed by the deployment labeled version: v1
    labels:
      version: v1
  - name: v2 # subset2: forward traffic to the deployment labeled version: v2, overriding load balancing to round robin
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
  - name: v3 # subset3: forward traffic to the deployment labeled version: v3
    labels:
      version: v3
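To make the difference between these models concrete, here is a minimal Python sketch of round-robin and least-request selection (illustration only; Envoy's real least-request balancer uses power-of-two-choices sampling rather than a global minimum):

```python
import itertools

class RoundRobin:
    """Cycle through instances in order, like simple: ROUND_ROBIN."""
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def pick(self):
        return next(self._cycle)

class LeastRequest:
    """Pick the instance with the fewest in-flight requests (simplified)."""
    def __init__(self, instances):
        self.active = {i: 0 for i in instances}  # in-flight request counts

    def pick(self):
        return min(self.active, key=self.active.get)

rr = RoundRobin(["pod-a", "pod-b"])
assert [rr.pick() for _ in range(4)] == ["pod-a", "pod-b", "pod-a", "pod-b"]

lr = LeastRequest(["pod-a", "pod-b"])
lr.active["pod-a"] = 3  # pod-a is busy with 3 in-flight requests
assert lr.pick() == "pod-b"
```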
7. Configure the routing policy according to the header
By default, the project writes the user name into the request header on login (this is generally implemented in application code). We can take advantage of this: istio can match requests against their header information.

Requirement: v2 is the official version and v3 is the test version. The internal test user testuser should reach the v3 backend when logged in, while all other users (including those not logged in) reach the v2 backend.
$ cat virtual-service-reviews-header.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  namespace: bookinfo
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: testuser
    route:
    - destination:
        host: reviews
        subset: v3
  - route:
    - destination:
        host: reviews
        subset: v2
$ kubectl apply -f virtual-service-reviews-header.yaml
# Refresh http://bookinfo.com/productpage and observe the result
More supported match types can be found here.
https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest
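The pair of rules above is an exact header match followed by an unconditional fallback. A minimal Python sketch of the decision (illustration only; in Istio the end-user header is set by productpage on login and propagated by the sidecars):

```python
def route_reviews(headers):
    """Model the two VirtualService rules: exact end-user match, then fallback."""
    if headers.get("end-user") == "testuser":
        return "v3"   # internal test user sees the test version
    return "v2"       # everyone else, including anonymous users, sees v2

assert route_reviews({"end-user": "testuser"}) == "v3"
assert route_reviews({"end-user": "alice"}) == "v2"
assert route_reviews({}) == "v2"  # not logged in: no end-user header at all
```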
8. Traffic mirroring
Background:

In many cases, after we refactor a service or make major optimizations to a project, how can we be sure the service is still robust? In traditional setups we can only simulate the service's behavior under various conditions through extensive testing. Although manual testing, automated testing, and stress testing can all detect problems, testing is inherently a sampling activity: even with improved test samples, it cannot fully reproduce the real traffic patterns of an online service.

Traffic mirroring addresses this problem to the greatest extent possible. Instead of evaluating robustness with a small number of samples, it continuously mirrors online traffic to the pre-release environment without affecting the online environment, so the refactored service is exposed to real traffic before it goes live and risks surface before release. Because the test environment receives real traffic, it reflects the diversity, authenticity, and complexity of production requests, and the pre-release service demonstrates its true processing capability and exception handling.
Practice
# Prepare httpbin v1
$ cat httpbin-v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-v1
  namespace: bookinfo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
# Prepare httpbin v2
$ cat httpbin-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-v2
  namespace: bookinfo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v2
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
$ istioctl kube-inject -f httpbin-v2.yaml | kubectl create -f -
$ istioctl kube-inject -f httpbin-v1.yaml | kubectl create -f -
# Test that access works
$ curl $(kubectl -n bookinfo get po -l version=v1,app=httpbin -ojsonpath='{.items[0].status.podIP}')/headers
{
  "headers": {
    "Accept": "*/*",
    "Content-Length": "0",
    "Host": "10.244.0.88",
    "User-Agent": "curl/7.29.0",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "777c7af4458c5b81",
    "X-B3-Traceid": "6b98ea81618deb4f777c7af4458c5b81"
  }
}
# Service manifest
$ cat httpbin-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: bookinfo
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
$ kubectl apply -f httpbin-svc.yaml
# Create the gateway and virtualservice. Since these are all plain HTTP requests,
# we access via bookinfo.com/httpbin, so we can simply modify the existing bookinfo virtualservice.
$ kubectl -n bookinfo get vs
NAME GATEWAYS HOSTS
bookinfo [bookinfo-gateway] [bookinfo.com]
gateway-front-tomcat [productpage-gateway] [productpage.bookinfo.com]
reviews [reviews]
$ kubectl -n bookinfo edit vs bookinfo
# Add the httpbin rule
...
  - match:
    - uri:
        prefix: /httpbin
    name: httpbin-route
    rewrite:
      uri: /
    route:
    - destination:
        host: httpbin
        subset: v1
...
$ cat httpbin-destinationRule.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
  namespace: bookinfo
spec:
  host: httpbin
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
$ kubectl apply -f httpbin-destinationRule.yaml
# Visit http://bookinfo.com/httpbin/headers and check the logs
# When requests are made, httpbin-v1's log scrolls while v2 shows nothing, because the vs rule above
# only matches v1. Now we want traffic to also be forwarded to v2 while v1 is being accessed.
[root@k8s-master bookinfo]# kubectl -n bookinfo logs -f httpbin-v1-66c7d456fb-hhqjv -c httpbin
[2022-01-17 09:04:00 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2022-01-17 09:04:00 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2022-01-17 09:04:00 +0000] [1] [INFO] Using worker: sync
[2022-01-17 09:04:00 +0000] [9] [INFO] Booting worker with pid: 9
127.0.0.6 - - [17/Jan/2022:09:05:48 +0000] "GET /headers HTTP/1.1" 200 257 "-" "curl/7.29.0"
127.0.0.1 - - [17/Jan/2022:09:11:57 +0000] "GET //headers HTTP/1.1" 200 1133 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36"
127.0.0.1 - - [17/Jan/2022:09:13:07 +0000] "GET //headers HTTP/1.1" 200 1168 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36"
[root@k8s-master ~]# kubectl -n bookinfo logs -f httpbin-v2-69b898fddc-zv2vj -c httpbin
[2022-01-17 09:04:15 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2022-01-17 09:04:15 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2022-01-17 09:04:15 +0000] [1] [INFO] Using worker: sync
[2022-01-17 09:04:15 +0000] [9] [INFO] Booting worker with pid: 9
# Add a mirror setting for httpbin-v1: the mirror target is httpbin-v2, and mirror_percent: 100 mirrors 100% of the traffic
$ kubectl -n bookinfo edit vs bookinfo
...
  - match:
    - uri:
        prefix: /httpbin
    name: httpbin-route
    rewrite:
      uri: /
    route:
    - destination:
        host: httpbin
        subset: v1
    mirror:
      host: httpbin
      subset: v2
    mirror_percent: 100
...
At this point, the page is still served by the v1 service, but the v2 log is refreshed synchronously, indicating that the mirrored traffic has been received.
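The key property of mirroring is fire-and-forget: the caller only ever sees the primary (v1) response, while a copy of the request goes to the mirror (v2) out of band and its response is discarded. A minimal Python sketch of that behavior (illustration only, not Envoy code):

```python
import threading

def handle_with_mirror(request, primary, mirror):
    """Serve from primary; send a copy to mirror whose response is discarded."""
    t = threading.Thread(target=mirror, args=(request,), daemon=True)
    t.start()               # the mirrored call does not block the caller
    response = primary(request)
    t.join()                # joined here only so this sketch is deterministic
    return response         # the mirror's result is never returned

mirrored = []
resp = handle_with_mirror(
    {"path": "/headers"},
    primary=lambda r: ("v1", 200),          # stand-in for httpbin-v1
    mirror=lambda r: mirrored.append(r),    # stand-in for httpbin-v2
)
assert resp == ("v1", 200)                  # caller only sees v1's answer
assert mirrored == [{"path": "/headers"}]   # v2 received a copy anyway
```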
9. Circuit breaking
Introduction
Circuit breaking originally refers to the mechanism that disconnects an electrical circuit when the current exceeds a specified value, providing short-circuit or severe-overload protection. For microservice systems, circuit breaking is particularly important: when certain modules fail, the system can keep its core functions available through service degradation and similar techniques, and can cope with failures, potential traffic peaks, and other unknown network factors.
Prepare the environment
Istio implements circuit breaking through the Envoy proxy. Envoy enforces the circuit breaking policy at the network level, so there is no need to configure or re-program each application individually. The following example demonstrates how to configure the maximum number of connections and requests and anomaly detection for a service in the Istio mesh. In essence, istio's circuit breaking is a form of rate limiting.
- Create the httpbin service
- Create a test client
We will set a circuit breaker policy for the httpbin service, then create a Java client that sends requests to the backend service to see whether the circuit breaker policy is triggered. This client can control the number of connections, the concurrency, and the queue of pending requests, so it can effectively trigger the circuit breaker policy set in the destination rule. The client's deployment yaml is as follows:

# httpbin-client-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-client-v1
  namespace: bookinfo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin-client-v1
      version: v1
  template:
    metadata:
      labels:
        app: httpbin-client-v1
        version: v1
    spec:
      containers:
      - image: ceposta/http-envoy-client-standalone:latest
        imagePullPolicy: IfNotPresent
        name: httpbin-client
        command: ["/bin/sleep","infinity"]
Here we will also inject Sidecar to the client to ensure Istio's control over network interaction:
$ kubectl apply -f <(istioctl kube-inject -f httpbin-client-deploy.yaml)
Verify
First, create a connection with a single thread (NUM_THREADS=1) and make 5 calls per client (the default: NUM_CALLS_PER_CLIENT=5):

$ CLIENT_POD=$(kubectl get pod -n bookinfo | grep httpbin-client | awk '{ print $1 }')
$ kubectl -n bookinfo exec -it $CLIENT_POD -c httpbin-client -- sh -c 'export URL_UNDER_TEST=http://httpbin:8000/get export NUM_THREADS=1 && java -jar http-client.jar'
Let's try to increase the number of threads to 2:
$ kubectl -n bookinfo exec -it $CLIENT_POD -c httpbin-client -- sh -c 'export URL_UNDER_TEST=http://httpbin:8000/get export NUM_THREADS=2 && java -jar http-client.jar'
Create a DestinationRule to set a circuit breaker policy for the httpbin service:
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
  namespace: bookinfo
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
EOF
maxConnections: limits the number of HTTP/1.1 connections made to the backend service. If this limit is exceeded, the circuit breaker opens.
http1MaxPendingRequests: limits the length of the pending-request queue. If this limit is exceeded, the circuit breaker opens.
maxRequestsPerConnection: limits the number of requests sent over a single connection to the backend service. If this limit is exceeded, the circuit breaker opens.
You can view the set circuit breaker policy in the envoy configuration fragment:
$ istioctl pc cluster httpbin-client-v1-56b86fb85c-vg5pp.bookinfo --fqdn httpbin.bookinfo.svc.cluster.local -ojson
...
"connectTimeout": "10s",
"maxRequestsPerConnection": 1,
"circuitBreakers": {
    "thresholds": [
        {
            "maxConnections": 1,
            "maxPendingRequests": 1,
            "maxRequests": 4294967295,
            "maxRetries": 4294967295
        }
    ]
},
...
Istio's circuit breaking is essentially a remedy at the proxy level and does not intrude into the code layer. To truly avoid exceptions, the best approach is still to implement retries and exception handling in code; combining the two is more effective.
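The thresholds shown in the envoy config can be modeled as a tiny counter: one active connection allowed, one request allowed to queue, and everything beyond that rejected. A toy Python sketch (illustration only, not Envoy's implementation):

```python
class CircuitBreakerPool:
    """Toy model of the connectionPool limits above: 1 connection, 1 pending request."""

    def __init__(self, max_connections=1, max_pending=1):
        self.max_connections = max_connections
        self.max_pending = max_pending
        self.active = 0    # requests currently being served
        self.pending = 0   # requests waiting in the queue

    def try_request(self):
        if self.active < self.max_connections:
            self.active += 1
            return "sent"
        if self.pending < self.max_pending:
            self.pending += 1
            return "queued"
        # beyond both limits the breaker trips; Envoy answers 503 with the UO flag
        return "overflow"

pool = CircuitBreakerPool()
assert [pool.try_request() for _ in range(3)] == ["sent", "queued", "overflow"]
```

This matches what the test client observes: with NUM_THREADS=1 everything succeeds, and at NUM_THREADS=2 some calls start returning 503 because the third concurrent request overflows.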
10. Fault injection and timeout mechanism
In a microservice architecture, meeting high robustness requirements usually means testing the system with targeted, deliberate errors. For example, a failure of the order or payment system in e-commerce would be a very serious production incident. It is therefore necessary to consider a variety of abnormal faults early in system design and to define a complete recovery or graceful-fallback strategy for each one, so that the system can still operate when failures occur. In this process, simulating service failures has always been a very complicated task.
Why does fault injection exist? To improve program resilience: to test whether the system keeps running normally when there are large numbers of 404 and 502 responses. The system must not collapse because of some failed requests.
istio provides a non-intrusive fault injection mechanism, allowing developers and testers to simulate service exceptions through configuration without adjusting the service program. Currently, there are two categories:
abort: optional. An interrupt fault, configured as an Abort object. It injects request-exception faults: simply put, it simulates an upstream service returning a specified error code, to test whether the current service can handle it.
delay: optional. A delay fault, configured as a Delay object. It injects latency faults: it artificially inflates the response time of upstream services, to test whether the current service is fault tolerant and can recover under high-latency conditions.
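The two fault types can be sketched as a wrapper around an upstream call: delay sleeps for a percentage of requests before forwarding, abort short-circuits a percentage of requests with a fixed status code. A minimal Python sketch (hypothetical helper for illustration, not Istio code):

```python
import random
import time

def inject_fault(call, abort_percent=0, http_status=500,
                 delay_percent=0, fixed_delay=0.0, rng=random.random):
    """Wrap an upstream call with VirtualService-style abort/delay faults."""
    def wrapped():
        if rng() * 100 < delay_percent:
            time.sleep(fixed_delay)        # the injected fixedDelay
        if rng() * 100 < abort_percent:
            return http_status             # simulated upstream error, no call made
        return call()
    return wrapped

healthy = lambda: 200                       # stand-in upstream service
always_500 = inject_fault(healthy, abort_percent=100)
never_500 = inject_fault(healthy, abort_percent=0)
assert always_500() == 500
assert never_500() == 200
```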
1. Delay and timeout
Currently, for users logged in to luffy, the service call chain is as follows:

productpage --> reviews v2 --> ratings
            \-> details

A 2-second delay can be injected into the ratings service:
# Inject a 2s delay into requests for the ratings host
$ cat virtualservice-ratings-2s-delay.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
  namespace: bookinfo
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percentage:
          value: 100
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
$ kubectl apply -f virtualservice-ratings-2s-delay.yaml
# Visit http://bookinfo.com/productpage again: you can clearly feel the 2s delay, which is also visible in the browser's network panel
You can view the corresponding envoy configuration:
$ istioctl pc r ratings-v1-556cfbd589-89ml4.bookinfo --name 9080 -ojson

The call chain at this time is:

productpage --> reviews v2 -(2s delay)-> ratings
            \-> details
At this point, add a request timeout for the reviews service:
$ kubectl -n bookinfo edit vs reviews
...
  http:
  - match:
    - headers:
        end-user:
          exact: testuser
    route:
    - destination:
        host: reviews
        subset: v2
    timeout: 1s
  - route:
    - destination:
        host: reviews
        subset: v3
...
The calling relationship is now:

productpage -(1s timeout)-> reviews v2 -(2s delay)-> ratings
            \-> details
We have now injected a 2s delay into the reviews -> ratings call, plus a 1s timeout on the productpage -> reviews call for the testuser user. Logging in as testuser therefore produces an error, because the timeout fires before the delayed response arrives; for any other user there is only a delay and no failure.
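The interaction is pure arithmetic: a 1s caller timeout against a 2s injected delay always fails. A small Python sketch of the race between timeout and delay (illustration only; the hypothetical helper names are not Istio APIs):

```python
import concurrent.futures
import time

def call_with_timeout(fn, timeout):
    """Run fn, but give up after `timeout` seconds, like a VirtualService timeout."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as ex:
        try:
            return ex.submit(fn).result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            return "504 upstream request timeout"

def ratings_with_delay():
    time.sleep(2)          # the injected 2s fixedDelay
    return "ratings ok"

# testuser's path: 1s timeout < 2s delay -> the call fails
assert call_with_timeout(ratings_with_delay, timeout=1) == "504 upstream request timeout"
# with a generous timeout the same call merely feels slow
assert call_with_timeout(ratings_with_delay, timeout=3) == "ratings ok"
```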
Remove delay:
$ kubectl -n bookinfo delete vs ratings
2. Status code
# Inject a fault: requests to the details service have a 50% chance of returning a 500 status code
$ cat virtualservice-details-aborted.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: details
  namespace: bookinfo
spec:
  hosts:
  - details
  http:
  - fault:
      abort:
        percentage:
          value: 50
        httpStatus: 500
    route:
    - destination:
        host: details
$ kubectl apply -f virtualservice-details-aborted.yaml
# Refresh again to observe the state of details, and check the productpage logs
$ kubectl -n bookinfo logs -f $(kubectl -n bookinfo get po -l app=productpage -ojsonpath='{.items[0].metadata.name}') -c istio-proxy
[2020-11-09T09:00:16.020Z] "GET /details/0 HTTP/1.1" 500 FI "-" "-" 0 18 0 - "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36" "f0387bb6-a445-922c-89ab-689dfbf548f8" "details:9080" "-" - - 10.111.67.169:9080 10.244.0.52:56552 - -
11. Display of Prometheus monitoring indicators
1. Install integrated components
https://istio.io/latest/docs/ops/integrations
Grafana (chart display)
$ kubectl apply -f samples/addons/grafana.yaml
Jaeger (distributed tracing)
$ kubectl apply -f samples/addons/jaeger.yaml
Kiali (visualization component for istio)
# Fill in the addresses of the extension components:
grafana url: "http://grafana.istio.com"
tracing url: "http://jaeger.istio.com"
$ kubectl apply -f samples/addons/kiali.yaml
Prometheus (monitoring)
$ kubectl apply -f samples/addons/prometheus.yaml
Creating kiali reports an error:
unable to recognize "samples/addons/kiali.yaml": no matches for kind "MonitoringDashboard" in version "monitoring.kiali.io/v1alpha1"
Solution: the missing "CustomResourceDefinition" needs to be created:
$ cd istio-1.8.2/samples/addons/ && vim kiali-crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: monitoringdashboards.monitoring.kiali.io
spec:
  group: monitoring.kiali.io
  names:
    kind: MonitoringDashboard
    listKind: MonitoringDashboardList
    plural: monitoringdashboards
    singular: monitoringdashboard
  scope: Namespaced
  versions:
  - name: v1alpha1
    served: true
    storage: true
$ kubectl apply -f kiali-crd.yaml
Reference: https://blog.csdn.net/qq_41674452/article/details/113345163
2. Visual interface access
$ cat prometheus-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus
  namespace: istio-system
spec:
  rules:
  - host: prometheus.istio.com
    http:
      paths:
      - backend:
          serviceName: prometheus
          servicePort: 9090
        path: /
status:
  loadBalancer: {}
$ kubectl apply -f prometheus-ingress.yaml
[root@k8s-master warning]# kubectl get po -n istio-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
grafana-75b5cddb4d-m6rph 1/1 Running 0 25m 10.244.1.216 k8s-node2 <none> <none>
istio-egressgateway-66f8f6d69c-6mdkj 1/1 Running 3 46h 10.244.1.210 k8s-node2 <none> <none>
istio-ingressgateway-758d8b79bd-xvxt8 1/1 Running 3 46h 10.244.1.212 k8s-node2 <none> <none>
istiod-7556f7fddf-kjhpr 1/1 Running 3 46h 10.244.1.209 k8s-node2 <none> <none>
jaeger-5795c4cf99-w5lzg 1/1 Running 0 25m 10.244.1.217 k8s-node2 <none> <none>
kiali-6c49c7d566-n5hpr 1/1 Running 0 8m44s 10.244.1.219 k8s-node2 <none> <none>
prometheus-9d5676d95-67hvb 2/2 Running 0 25m 10.244.1.218 k8s-node2 <none> <none>
Configure the hosts domain name resolution of the nginx service machine; view the list of targets added by default, which have been integrated by default:
The core is the kubernetes-pods job: every service in the mesh is scraped as a target, and the traffic metrics are provided directly by the sidecar container.

$ kubectl -n bookinfo get po -owide
$ curl 10.244.0.53:15020/stats/prometheus
The data collected by these monitoring indicators can be viewed in grafana.
$ cat grafana-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: istio-system
spec:
  rules:
  - host: grafana.istio.com
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
        path: /
status:
  loadBalancer: {}
$ for i in $(seq 1 10000); do curl -s -o /dev/null "http://bookinfo.com/productpage"; done
After accessing the interface, you can view the Istio Mesh Dashboard and other related dashboards, because grafana's resource file mounts the dashboard JSON configuration for each Istio component as a ConfigMap:
$ kubectl -n istio-system get cm istio-services-grafana-dashboards
NAME                                DATA   AGE
istio-services-grafana-dashboards   3      7d1h
jaeger
$ cat jaeger-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jaeger
  namespace: istio-system
spec:
  rules:
  - host: jaeger.istio.com
    http:
      paths:
      - backend:
          serviceName: tracing
          servicePort: 80
        path: /
status:
  loadBalancer: {}
$ kubectl apply -f jaeger-ingress.yaml
kiali
kiali is an observability analysis service
$ cat kiali-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kiali
  namespace: istio-system
spec:
  rules:
  - host: kiali.istio.com
    http:
      paths:
      - backend:
          serviceName: kiali
          servicePort: 20001
        path: /
status:
  loadBalancer: {}
$ kubectl apply -f kiali-ingress.yaml
kiali integrates Prometheus, grafana, tracing, and logs into a single observability view.