A kubernetes error that will trip you up!!! Unable to connect to the server: x509: certificate signed by unknown authority

Explanation

Tonight, while setting up a kubernetes cluster environment in a virtual machine, I hit a problem that took me nearly four hours to solve!!! I'm documenting the problem and its fix here so that others can avoid stepping into the same pit!!!

Environment

  • ubuntu 16.04 virtual machines
  • docker 18.09.1
  • kubernetes 1.14.3

Reproducing the problem

Yesterday I followed a tutorial to build a cluster. Today I wanted to redo the experiment, so I ran the kubeadm reset command to clear all of the cluster configuration.

Then I followed the usual process and ran kubeadm init --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=0.0.0.0 to create a cluster, and then executed the following commands:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then, when I ran kubectl get nodes or any similar command, every command printed the error: Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Adding the --insecure-skip-tls-verify flag to the kubectl command only changed the error to: error: You must be logged in to the server (Unauthorized)
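One way to confirm that the old kubeconfig is the culprit is to compare the CA certificate it embeds against the one the new cluster just generated. This is a diagnostic sketch, not from the original article; it assumes the standard kubeadm file locations and a kubeconfig with an inline certificate-authority-data field:

```shell
# Extract the base64-encoded CA certificate embedded in the old kubeconfig
grep 'certificate-authority-data' $HOME/.kube/config \
  | awk '{print $2}' | base64 -d > /tmp/old-ca.crt

# Compare it with the CA of the freshly created cluster
sudo diff /tmp/old-ca.crt /etc/kubernetes/pki/ca.crt \
  && echo "CA certificates match" \
  || echo "CA mismatch: the kubeconfig is stale"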

Problem-solving process

In the meantime I tried every piece of related information I could find, and none of it helped. I also confirmed that the kubeadm reset command completely erases the configuration of the cluster it created, so why would creating a new cluster after a clean reset still fail? Out of other ideas, I focused on these extra commands:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

These commands create a directory and copy several configuration files into it. After the cluster is re-created, this directory is still there with the old files, so before running those commands again I executed rm -rf $HOME/.kube to delete the directory, and that finally solved the problem!!!
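Putting it together, a sketch of the full tear-down-and-rebuild sequence, using the same paths and init flags as above (the -f flag on kubeadm reset, which skips the confirmation prompt, is my addition):

```shell
# Tear down the old cluster and, crucially, remove the stale kubectl config,
# which kubeadm reset does NOT delete
sudo kubeadm reset -f
rm -rf $HOME/.kube

# Re-create the cluster
sudo kubeadm init --kubernetes-version=v1.14.3 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=0.0.0.0

# Copy the freshly generated admin credentials into a clean directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```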

Summary

This problem is very deceptive. Deleting and re-creating a cluster is a routine operation, but kubeadm reset does not delete the $HOME/.kube directory that was created earlier, so a newly created cluster will run into this problem!


Origin blog.csdn.net/woay2008/article/details/93250137