1. Introduction
This article records installing Helm, Istio, and Kiali on a Kubernetes cluster, to help with learning the related concepts.
A working k8s cluster is a prerequisite; you can refer to: Arm64 architecture (MacBookPro M1) virtual machine installation of k8s 1.27.3: records and problem summary
Helm is the package manager for Kubernetes; we can install applications into a k8s cluster through Helm.
Istio is a powerful service mesh platform that provides a rich set of tools and features for microservice architectures to simplify and enhance communication, security, and observability between services.
The Kiali dashboard presents an overview of the mesh and the relationships between the services of the Bookinfo sample application. It also provides filters to visualize the flow of traffic.
Helm version support policy: https://helm.sh/zh/docs/topics/version_skew/
Istio version support policy: https://istio.io/latest/zh/docs/releases/supported-releases/
2. Deploy Helm
Official document: https://helm.sh/zh/docs/intro/quickstart/
Installing Helm is very simple: you only need to run one command, which fetches and executes the install script. If you need a specific version, every Helm release provides binaries for the common operating systems at https://github.com/helm/helm/releases. Download the archive manually, unpack it, and move the helm binary to the desired directory (e.g. mv linux-amd64/helm /usr/local/bin/helm).
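If you go the manual route, the steps for the same version the script installs below might look like this (a sketch assuming linux-arm64, matching this article's environment; adjust version and architecture for yours):

```shell
VER=v3.12.1
curl -fsSLO "https://get.helm.sh/helm-${VER}-linux-arm64.tar.gz"  # download the release archive
tar -zxvf "helm-${VER}-linux-arm64.tar.gz"                        # unpack it
mv linux-arm64/helm /usr/local/bin/helm                           # put helm on the PATH
helm version                                                      # verify the install
```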
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
[root@k8s-master ~]# curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 11345 100 11345 0 0 8068 0 0:00:01 0:00:01 --:--:-- 8063
[WARNING] Could not find git. It is required for plugin installation.
Downloading https://get.helm.sh/helm-v3.12.1-linux-arm64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
[root@k8s-master ~]# helm version
version.BuildInfo{Version:"v3.12.1", GitCommit:"f32a527a060157990e2aa86bf45010dfb3cc8b8d", GitTreeState:"clean", GoVersion:"go1.20.4"}
[root@k8s-master ~]# helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
[root@k8s-master ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
[root@k8s-master ~]#
[root@k8s-master ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
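With the repository added and updated, installing an application from it takes only a couple of commands. A minimal sketch (the bitnami/nginx chart is used as an example; any chart from the repo works the same way):

```shell
helm search repo bitnami/nginx        # look up the chart and its available versions
helm install my-nginx bitnami/nginx   # install a release named my-nginx
helm status my-nginx                  # inspect the release
helm uninstall my-nginx               # remove the release again
```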
3. Deploy Istio
Official document: https://istio.io/latest/zh/docs/setup/getting-started/
Istio can also be installed with a download script, but in my case the script failed because the network connection kept dropping:
[root@k8s-master ~]# curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.18.0 sh -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 102 100 102 0 0 359 0 --:--:-- --:--:-- --:--:-- 359
0 0 0 0 0 0 0 0 --:--:-- 0:01:36 --:--:-- 0
curl: (56) OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104
[root@k8s-master ~]# cat >> /etc/hosts << EOF
> 75.2.60.5 istio.io
> EOF
[root@k8s-master ~]# curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.18.0 sh -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 102 100 102 0 0 337 0 --:--:-- --:--:-- --:--:-- 337
0 0 0 0 0 0 0 0 --:--:-- 0:01:32 --:--:-- 0
curl: (56) OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104
Since the script could not complete, the only option is to download the binary package and upload it to the virtual machine for deployment.
Binary package download: https://github.com/istio/istio/releases/tag/1.18.0
Download the archive for your platform, decompress it with the tar -zxvf command, enter the directory, and add the istioctl binary to your PATH.
Use the demo configuration profile. It contains a set of features suitable for testing; other profiles are provided for production or performance testing.
cd istio-1.18.0
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y
Then add a label to the namespace to instruct Istio to automatically inject Envoy sidecar proxies when deploying applications:
kubectl label namespace default istio-injection=enabled
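You can confirm the label took effect, and later remove it again if needed (a quick sketch):

```shell
kubectl get namespace -L istio-injection           # show the istio-injection label per namespace
kubectl label namespace default istio-injection-   # (optional) remove the label to disable injection
```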
Deploy the official example Bookinfo
Then install the official Demo example:
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
[root@k8s-master ~]# ll | grep istio
-rw-r--r--. 1 root root 25307383 Jul 5 23:16 istio-1.18.0-linux-arm64.tar.gz
[root@k8s-master ~]# tar -zxvf istio-1.18.0-linux-arm64.tar.gz
[root@k8s-master ~]# ll | grep istio
drwxr-x---. 6 root root 115 Jun 7 16:01 istio-1.18.0
-rw-r--r--. 1 root root 25307383 Jul 5 23:16 istio-1.18.0-linux-arm64.tar.gz
[root@k8s-master ~]# cd istio-1.18.0
[root@k8s-master istio-1.18.0]# ls
bin LICENSE manifests manifest.yaml README.md samples tools
[root@k8s-master istio-1.18.0]# export PATH=$PWD/bin:$PATH
[root@k8s-master istio-1.18.0]# istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete Making this installation the default for injection and validation.
[root@k8s-master istio-1.18.0]# kubectl label namespace default istio-injection=enabled
namespace/default labeled
[root@k8s-master istio-1.18.0]# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
[root@k8s-master istio-1.18.0]#
The application will launch shortly. When each pod is ready, an Istio sidecar is deployed along with the application.
[root@k8s-master istio-1.18.0]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.111.93.162 <none> 9080/TCP 12m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 18h
productpage ClusterIP 10.97.94.189 <none> 9080/TCP 12m
ratings ClusterIP 10.106.155.115 <none> 9080/TCP 12m
reviews ClusterIP 10.106.49.5 <none> 9080/TCP 12m
[root@k8s-master istio-1.18.0]# kubectl get pods
NAME READY STATUS RESTARTS AGE
details-v1-7c7dbcb4b5-jx866 2/2 Running 0 12m
productpage-v1-664d44d68d-v722l 2/2 Running 0 12m
ratings-v1-844796bf85-kktgq 2/2 Running 0 12m
reviews-v1-5cf854487-gn6xv 2/2 Running 0 12m
reviews-v2-955b74755-rp9b5 2/2 Running 0 12m
reviews-v3-797fc48bc9-wspwt 2/2 Running 0 12m
[root@k8s-master istio-1.18.0]#
After confirming that the above steps succeeded, run the following command to verify that the application is running inside the cluster and serving web pages, by checking the title of the returned page:
[root@k8s-master istio-1.18.0]# kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
[root@k8s-master istio-1.18.0]#
At this point, the BookInfo application has been deployed, but it cannot be accessed by the outside world. To open access, an Istio Ingress Gateway needs to be created, which maps a path to a route at the edge of the mesh.
[root@k8s-master istio-1.18.0]# kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
# Make sure the configuration has no issues:
[root@k8s-master istio-1.18.0]# istioctl analyze
✔ No validation issues found when analyzing namespace: default.
[root@k8s-master istio-1.18.0]#
Execute the following command to determine whether your Kubernetes cluster environment supports external load balancing:
[root@k8s-master ~]# kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.103.45.216 <pending> 15021:31564/TCP,80:30704/TCP,443:30854/TCP,31400:30301/TCP,15443:30563/TCP 20h
[root@k8s-master ~]#
If the EXTERNAL-IP value is set, your environment has an external load balancer that can be used for the ingress gateway. If the EXTERNAL-IP value is <none> (or remains in the <pending> state, as above), your environment does not provide an external load balancer for inbound traffic. In that case, you can access the gateway through the service's NodePort instead.
Set the inbound IP address and port:
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
Set the environment variable GATEWAY_URL:
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo "$GATEWAY_URL"
echo "http://$GATEWAY_URL/productpage"
Copy the address printed by the last command into a browser to confirm that the Bookinfo product page opens.
[root@k8s-master ~]# export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
[root@k8s-master ~]# export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
[root@k8s-master ~]# export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
[root@k8s-master ~]# echo "$GATEWAY_URL"
192.168.153.102:30704
[root@k8s-master ~]# echo "http://$GATEWAY_URL/productpage"
http://192.168.153.102:30704/productpage
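Before switching to a browser, the page can also be checked from the host with curl (IP and port are the values printed above for this environment):

```shell
curl -s "http://192.168.153.102:30704/productpage" | grep -o "<title>.*</title>"
# expected: <title>Simple Bookstore App</title>
```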
Successful external (host) access:
4. Deploy Kiali
Istio integrates with several telemetry applications. Telemetry can help us understand the structure of the service mesh, display the topology of the network, and analyze the health of the mesh.
Use the instructions below to deploy the Kiali dashboard, along with Prometheus, Grafana, and Jaeger.
kubectl apply -f samples/addons
# Check the rollout status of the kiali deployment
kubectl rollout status deployment/kiali -n istio-system
To access Kiali's web page from outside the cluster, you need to create a NodePort Service:
kubectl -n istio-system expose service kiali --type=NodePort --name=kiali-external
kubectl get svc -n istio-system
kubectl -n istio-system get service kiali-external -o=jsonpath='{.spec.ports[0].nodePort}'
[root@k8s-master ~]# cd istio-1.18.0
[root@k8s-master istio-1.18.0]# kubectl apply -f samples/addons
serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
serviceaccount/loki created
configmap/loki created
configmap/loki-runtime created
service/loki-memberlist created
service/loki-headless created
service/loki created
statefulset.apps/loki created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created
[root@k8s-master istio-1.18.0]# kubectl rollout status deployment/kiali -n istio-system
deployment "kiali" successfully rolled out
[root@k8s-master istio-kiali]# kubectl -n istio-system expose service kiali --type=NodePort --name=kiali-external
service/kiali-external exposed
[root@k8s-master istio-kiali]# kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-egressgateway ClusterIP 10.111.39.235 <none> 80/TCP,443/TCP 21h
istio-ingressgateway LoadBalancer 10.103.45.216 <pending> 15021:31564/TCP,80:30704/TCP,443:30854/TCP,31400:30301/TCP,15443:30563/TCP 21h
istiod ClusterIP 10.109.218.54 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 21h
kiali ClusterIP 10.105.43.99 <none> 20001/TCP,9090/TCP 21h
kiali-external NodePort 10.110.49.251 <none> 20001:31430/TCP,9090:30588/TCP 9s
[root@k8s-master istio-kiali]# kubectl -n istio-system get service kiali-external -o=jsonpath='{.spec.ports[0].nodePort}'
31430[root@k8s-master istio-kiali]#
Access via the node IP plus the NodePort: http://192.168.153.102:31430/
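Instead of reading the values off kubectl output, the URL can also be composed on the command line (a sketch; it picks the first node's InternalIP):

```shell
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
KIALI_PORT=$(kubectl -n istio-system get service kiali-external -o=jsonpath='{.spec.ports[0].nodePort}')
echo "http://$NODE_IP:$KIALI_PORT/"
```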
To view trace data, you must send requests to the service. The number of requests needed depends on Istio's sampling rate, which is set when Istio is installed; the default is 1%, so you need to send at least 100 requests before the first trace becomes visible. Send 100 requests to the productpage service with the following command:
for i in `seq 1 100`; do curl -s -o /dev/null http://$GATEWAY_URL/productpage; done
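The arithmetic behind the 100 requests: at a 1% sampling rate roughly 1 in 100 requests is traced, so 100 requests yield about one trace on average.

```shell
SAMPLING_RATE_PERCENT=1   # default sampling rate mentioned above
REQUESTS=100
EXPECTED_TRACES=$(( REQUESTS * SAMPLING_RATE_PERCENT / 100 ))
echo "$EXPECTED_TRACES"   # prints 1
```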
The Kiali dashboard now shows an overview of the mesh and the relationships between the Bookinfo services, with filters to visualize the flow of traffic.
Bug record
Failed to deploy Bookinfo demo
When I deployed the demo with the kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml command, the Pods kept failing to start.
[root@k8s-master istio-1.18.0]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
details-v1-7c7dbcb4b5-lw8hr 0/2 Init:CrashLoopBackOff 5 (82s ago) 4m13s 172.16.85.212 k8s-node01 <none> <none>
productpage-v1-664d44d68d-lgc4k 0/2 Init:CrashLoopBackOff 5 (69s ago) 4m12s 172.16.58.203 k8s-node02 <none> <none>
ratings-v1-844796bf85-7s4zp 0/2 Init:CrashLoopBackOff 5 (87s ago) 4m13s 172.16.85.213 k8s-node01 <none> <none>
reviews-v1-5cf854487-ztl9l 0/2 Init:CrashLoopBackOff 5 (73s ago) 4m13s 172.16.58.202 k8s-node02 <none> <none>
reviews-v2-955b74755-tm6cj 0/2 Init:CrashLoopBackOff 5 (74s ago) 4m13s 172.16.85.214 k8s-node01 <none> <none>
reviews-v3-797fc48bc9-s29zm 0/2 Init:CrashLoopBackOff 5 (78s ago) 4m13s 172.16.85.215 k8s-node01 <none> <none>
At first I thought the images could not be downloaded, so I pulled them manually:
crictl pull docker.io/istio/examples-bookinfo-details-v1:1.17.0
crictl pull docker.io/istio/examples-bookinfo-productpage-v1:1.17.0
crictl pull docker.io/istio/examples-bookinfo-ratings-v1:1.17.0
crictl pull docker.io/istio/examples-bookinfo-reviews-v1:1.17.0
crictl pull docker.io/istio/examples-bookinfo-reviews-v2:1.17.0
crictl pull docker.io/istio/examples-bookinfo-reviews-v3:1.17.0
crictl pull docker.io/istio/proxyv2:1.18.0
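The six Bookinfo pulls above share one naming pattern, so the list can be generated with a loop and fed to crictl on each node (tags taken from the commands above):

```shell
# Print the full image references, one per line
for img in details-v1 productpage-v1 ratings-v1 reviews-v1 reviews-v2 reviews-v3; do
  echo "docker.io/istio/examples-bookinfo-${img}:1.17.0"
done
# pipe the list into the pull, e.g.: ... | xargs -n1 crictl pull
```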
[root@k8s-node02 istio-1.18.0]# crictl images
IMAGE TAG IMAGE ID SIZE
docker.io/calico/cni v3.25.0 0bb8d6f033a05 81.1MB
docker.io/calico/kube-controllers v3.25.0 2a83e28de3677 27.1MB
docker.io/calico/node v3.25.0 8a2dff14388de 82.2MB
docker.io/istio/examples-bookinfo-details-v1 1.17.0 8c7b34204cae9 59.8MB
docker.io/istio/examples-bookinfo-productpage-v1 1.17.0 348980125f0b0 64.7MB
docker.io/istio/examples-bookinfo-ratings-v1 1.17.0 18290de2e4a28 54.2MB
docker.io/istio/examples-bookinfo-reviews-v1 1.17.0 9dc1566776c17 412MB
docker.io/istio/examples-bookinfo-reviews-v2 1.17.0 5233615dc9972 412MB
docker.io/istio/examples-bookinfo-reviews-v3 1.17.0 fbb7b7ceabf34 412MB
docker.io/istio/proxyv2 1.18.0 c901fe029266e 90.4MB
docker.io/kubernetesui/dashboard v2.3.1 5bb89698273d8 65.4MB
registry.aliyuncs.com/google_containers/pause 3.8 4e42fb3c9d90e 268kB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns v1.10.1 97e04611ad434 14.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.27.3 fb73e92641fd5 21.4MB
[root@k8s-node02 istio-1.18.0]#
The images were present on all three nodes, but the containers still failed to start.
Later, I checked the Pod's log and found this error: xtables parameter problem: iptables-restore: unable to initialize table 'nat'
Finally I found the solution: load the missing iptables-related kernel modules and make them persistent:
cat <<EOT >> /etc/modules-load.d/k8s.conf
overlay
br_netfilter
nf_nat
xt_REDIRECT
xt_owner
iptable_nat
iptable_mangle
iptable_filter
EOT
modprobe br_netfilter ; modprobe nf_nat ; modprobe xt_REDIRECT ; modprobe xt_owner; modprobe iptable_nat; modprobe iptable_mangle; modprobe iptable_filter
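To confirm the modules actually loaded, check /proc/modules (a quick sketch; modules built into the kernel may not appear there even though they are available):

```shell
for m in br_netfilter nf_nat xt_REDIRECT xt_owner iptable_nat iptable_mangle iptable_filter; do
  if grep -qw "$m" /proc/modules; then echo "$m loaded"; else echo "$m MISSING"; fi
done
```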
https://stackoverflow.com/questions/73473680/service-deployed-with-istio-doesnt-start-minikube-docker-mac-m1
https://github.com/istio/istio/issues/36762
The container runs successfully:
[root@k8s-master istio-1.18.0]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
details-v1-7c7dbcb4b5-jx866 2/2 Running 0 12m 172.16.85.220 k8s-node01 <none> <none>
productpage-v1-664d44d68d-v722l 2/2 Running 0 12m 172.16.58.207 k8s-node02 <none> <none>
ratings-v1-844796bf85-kktgq 2/2 Running 0 12m 172.16.85.221 k8s-node01 <none> <none>
reviews-v1-5cf854487-gn6xv 2/2 Running 0 12m 172.16.58.206 k8s-node02 <none> <none>
reviews-v2-955b74755-rp9b5 2/2 Running 0 12m 172.16.85.222 k8s-node01 <none> <none>
reviews-v3-797fc48bc9-wspwt 2/2 Running 0 12m 172.16.85.223 k8s-node01 <none> <none>
[root@k8s-master istio-1.18.0]#