Explanation
Tonight, while building a Kubernetes cluster environment in a virtual machine, I hit a problem that took me nearly four hours to solve!!! I'm documenting what the problem was and how to fix it, so that others can avoid stepping into the same pit!!!
Environment
- ubuntu 16.04 virtual machine
- docker 18.09.1
- kubernetes 1.14.3
Reproducing the problem
Yesterday I followed a tutorial to build a cluster. Today I wanted to redo the experiment, so I ran kubeadm reset
to clear all of the cluster configuration.
Then, following the usual process, I ran kubeadm init --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=0.0.0.0
to create a new cluster, and then executed the following commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then, when I ran kubectl get nodes
or any similar command, every command printed this error: Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Adding the --insecure-skip-tls-verify flag to the kubectl command just produced a different error: error: You must be logged in to the server (Unauthorized)
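The x509 error happens because the old kubeconfig still holds a client certificate signed by the previous cluster's CA, which the new cluster's CA cannot verify. A minimal sketch of that failure mode, using plain openssl (the CA and client names here are made up for illustration, not taken from kubeadm):

```shell
tmp=$(mktemp -d)
# "Old" CA plus a client cert it signed -- analogous to what the stale kubeconfig holds.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=old-kubernetes-ca" \
  -keyout "$tmp/old-ca.key" -out "$tmp/old-ca.crt" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -subj "/CN=kubernetes-admin" \
  -keyout "$tmp/client.key" -out "$tmp/client.csr" 2>/dev/null
openssl x509 -req -in "$tmp/client.csr" -CA "$tmp/old-ca.crt" -CAkey "$tmp/old-ca.key" \
  -CAcreateserial -days 1 -out "$tmp/client.crt" 2>/dev/null

# "New" CA -- analogous to the one generated by the fresh kubeadm init.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=new-kubernetes-ca" \
  -keyout "$tmp/new-ca.key" -out "$tmp/new-ca.crt" 2>/dev/null

openssl verify -CAfile "$tmp/old-ca.crt" "$tmp/client.crt"   # verifies fine
# Against the new CA the same client cert fails verification -- the mismatch
# behind the "certificate signed by unknown authority" error.
openssl verify -CAfile "$tmp/new-ca.crt" "$tmp/client.crt" 2>/dev/null || echo "stale cert rejected"
```

The second error makes sense too: with TLS verification skipped, the API server still rejects the stale client certificate as an unauthorized credential.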
Problem-solving process
Along the way I tried everything I could find by searching, and none of it worked. I also confirmed that kubeadm reset
really does wipe all of the cluster configuration it had created, so why would creating a new cluster after a clean reset still fail? Out of other leads, I focused on the extra commands I had run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
These commands create a directory and copy a configuration file into it. After recreating the cluster, that directory was still there from before, so before running these commands again I first ran rm -rf $HOME/.kube
to delete the directory, and that finally solved the problem!!!
Summary
This problem is very deceptive: deleting a cluster and then recreating it is a routine operation, but kubeadm reset
does not delete the previously created $HOME/.kube
directory, so the newly created cluster will run into this problem!
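Putting it all together, a reset-and-recreate sequence that avoids the pit might look like the sketch below. The kubeadm and kubectl steps are shown as comments because they need a real cluster node; the flags are the ones used earlier in this post.

```shell
# Tear down the old cluster, THEN remove the stale kubeconfig yourself:
# kubeadm reset does not touch $HOME/.kube.
# sudo kubeadm reset
rm -rf "$HOME/.kube"

# Re-create the cluster (flags as used above):
# sudo kubeadm init --kubernetes-version=v1.14.3 \
#   --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=0.0.0.0

# Install the fresh admin kubeconfig into a clean directory:
mkdir -p "$HOME/.kube"
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

The key point is simply that the rm -rf runs before the copy, so no certificate from the old cluster survives into the new one.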