After Istio CNI is enabled, every automatically injected pod starts an istio-validation init container that checks whether traffic redirection is working. While setting up a test environment for another business line at our company, we found that the istio-validation container failed to start, with this log output:
Error connecting to 127.0.0.6:15002: dial tcp 127.0.0.1:0->127.0.0.6:15002: connect: connection refused
After various rounds of investigation, the system log (journalctl -ex) finally revealed the cause:
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: W1102 14:50:30.291177 1029 cni.go:202] Error validating CNI config list {
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "name": "cbr0",
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "cniVersion": "0.3.1",
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "plugins": [
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: {
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "type": "flannel",
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "delegate": {
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "hairpinMode": true,
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "isDefaultGateway": true
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: }
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: },
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: {
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "type": "portmap",
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "capabilities": {
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "portMappings": true
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: }
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: },
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: {
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "cniVersion": "0.3.1",
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "name": "istio-cni",
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "type": "istio-cni",
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "log_level": "info",
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "kubernetes": {
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "kubeconfig": "/etc/cni/net.d/ZZZ-istio-cni-kubeconfig",
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "cni_bin_dir": "/opt/cni/bin",
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "exclude_namespaces": [
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "istio-system",
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: "kube-system"
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: ]
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: }
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: }
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: ]
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: }
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: : [failed to find plugin "istio-cni" in path [/opt/kube/bin]]
Nov 02 14:50:30 k8s-worker-03 kubelet[1029]: W1102 14:50:30.291194 1029 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
The istio-cni configuration and the CNI binary path configured for Kubernetes are inconsistent. This is more likely to happen in clusters deployed with third-party tools: for example, a k8s cluster deployed with ansible puts the CNI binaries in /opt/kube/bin by default, while istio installs its plugin to /opt/cni/bin by default. You can confirm the configured path by checking the istio-cni ConfigMap or the istio-cni pod logs.
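Kubelet's complaint can be reproduced outside the cluster: for each plugin in the conflist it looks for a binary named after the plugin's "type" in its CNI bin dir, and fails validation if one is missing. A minimal sketch of that check (the sample conflist and paths here are illustrative, not taken from a real node):

```python
import json
import os

def find_missing_plugins(conflist_json: str, cni_bin_dir: str) -> list:
    """Return plugin types from a CNI conflist that have no matching
    binary in cni_bin_dir -- the same kind of check kubelet performs
    when validating /etc/cni/net.d."""
    config = json.loads(conflist_json)
    return [
        plugin["type"]
        for plugin in config.get("plugins", [])
        if not os.path.isfile(os.path.join(cni_bin_dir, plugin["type"]))
    ]

# Sample conflist mirroring the log above (trimmed to the relevant keys):
sample = '''
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {"type": "flannel"},
    {"type": "portmap"},
    {"type": "istio-cni"}
  ]
}
'''

# With a bin dir that contains no plugins, all three are reported missing:
print(find_missing_plugins(sample, "/nonexistent"))
# -> ['flannel', 'portmap', 'istio-cni']
```

In the failure above, flannel and portmap were present in kubelet's bin dir (/opt/kube/bin) but istio-cni was not, because istio had copied it to /opt/cni/bin.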
Solution:
Option one:
Modify the Istio deployment YAML and set cniBinDir to the path kubelet actually uses:
cni:
  excludeNamespaces:
    - istio-system
    - kube-system
  logLevel: info
  cniBinDir: /opt/kube/bin
  repair:
    enabled: true
    deletePods: false
Alternatively, pass the --set values.cni.cniBinDir=... and --set values.cni.cniConfDir=... options when installing from the command line.
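For example, with istioctl the same override looks like this (the paths are the ones from this environment; substitute whatever your kubelet is actually configured with):

```shell
istioctl install \
  --set values.cni.cniBinDir=/opt/kube/bin \
  --set values.cni.cniConfDir=/etc/cni/net.d
```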
Option two:
Modify the ConfigMap named istio-cni-config in the istio-system namespace: find cniBinDir, change it to the correct path, and then recreate the affected pods.
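Option two can be sketched with kubectl as follows. The DaemonSet is typically named istio-cni-node, but verify the name in your cluster first:

```shell
# Edit cniBinDir in the plugin's ConfigMap:
kubectl -n istio-system edit configmap istio-cni-config

# Recreate the CNI agent pods so they rewrite /etc/cni/net.d on each node:
kubectl -n istio-system rollout restart daemonset istio-cni-node
```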
The above only covers a wrong bin directory; depending on the environment, cniConfDir may be the wrong one instead. The fix is the same: set it to the correct path.