k8s - error report

1. kubectl get csr returns nothing
[root@master kubeconfig]# kubectl get csr
No resources found.
Solution:
Check /var/log/messages on the master.
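A quick way to pull the relevant lines out of that log (the grep pattern below is a suggestion, not from the original report):

```shell
# Filter kubelet/apiserver/CSR-related entries from the master's system log:
grep -E 'kube-apiserver|kubelet|csr' /var/log/messages | tail -n 20
```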

--------------------------------------------------------------------------------------------------------------------

2.
kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "cluster-admin-binding" already exists
Leftover objects made it impossible to re-import the cluster, so run:
kubectl delete  clusterrolebinding kubelet-bootstrap

sudo kubectl delete clusterrolebindings cluster-admin-binding

When importing the cluster into Rancher, the leftover binding caused the import to fail. Inspect it first:
sudo kubectl get clusterrolebindings cluster-admin-binding -o yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2020-02-10T13:35:42Z"
  name: cluster-admin-binding
  resourceVersion: "35967"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin-binding
  uid: d3c207d2-4adc-4e3e-951d-48c5ad99eeaa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: lishikai
sudo kubectl delete clusterrolebindings cluster-admin-binding

clusterrolebinding.rbac.authorization.k8s.io "cluster-admin-binding" deleted
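Before re-importing into Rancher, it can help to check for leftovers from a previous attempt; a small sketch using the two binding names seen in this report:

```shell
# List bindings whose names would collide with a fresh import (silent if absent):
kubectl get clusterrolebinding cluster-admin-binding kubelet-bootstrap \
  -o name 2>/dev/null
```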

----------------------------------------------------------------------------------------------------

3. The master receives no CSR requests from the nodes (setenforce: SELinux is disabled)
[root@master kubeconfig]#  kubectl get csr
No resources found.
[root@master kubeconfig]#  kubectl get csr
No resources found.
[root@master kubeconfig]#  kubectl get csr
No resources found.

The node logs show:
Sep 29 18:05:39 node1 kubelet[39660]: I0929 18:05:39.569373   39660 bootstrap.go:235] Failed to connect to apiserver: the server has asked for the client to ...credentials
Sep 29 18:05:41 node1 kubelet[39660]: I0929 18:05:41.749264   39660 bootstrap.go:235] Failed to connect to apiserver: the server has asked for the client to ...credential

Fix:
The token configured in the kubeconfig generation script was wrong. Correct it as shown:
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=11403f512b6f0dcf9807cec2862cd32a \
  --kubeconfig=bootstrap.kubeconfig
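The token set here must match the first field of the apiserver's token.csv; a quick consistency check (the token.csv path below is the usual one for this style of binary install and may differ on your hosts):

```shell
# First field of token.csv is the bootstrap token the apiserver expects:
awk -F',' '{print $1}' /opt/kubernetes/cfg/token.csv
# Compare with the token embedded in the generated kubeconfig:
grep 'token:' bootstrap.kubeconfig
```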

Delete the old files and regenerate them:
[root@master kubeconfig]# rm -rf kube-proxy.kubeconfig
[root@master kubeconfig]# rm -rf bootstrap.kubeconfig
[root@master kubeconfig]# ls
kubeconfig
[root@master kubeconfig]# bash kubeconfig 192.168.100.3 /root/k8s/k8s-cert/
Cluster "kubernetes" set.

Copy the regenerated files to the node:
[root@master kubeconfig]# scp bootstrap.kubeconfig  kube-proxy.kubeconfig  [email protected]:/opt/kubernetes/cfg/
[email protected] s password:
bootstrap.kubeconfig                                                                                                                      100% 2167     1.7MB/s   00:00
kube-proxy.kubeconfig                                                                                                                     100% 6273     7.1MB/s   00:00
Restart the service on the node:

systemctl restart kubelet.service
On the master, query for the request again:

[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-D-6Qg-440uk6mAMVNkwmyAQbDSXH3r7GB9BjarecFvg   11s   kubelet-bootstrap   Pending
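A Pending CSR still has to be approved before the node can join; a sketch that approves everything the bootstrapping kubelet submitted (approving all CSRs blindly is fine in a lab, but approve by name in anything shared):

```shell
# Approve every pending certificate signing request, then watch the node register:
for csr in $(kubectl get csr -o name); do
  kubectl certificate approve "$csr"
done
kubectl get nodes
```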

-----------------------------------------------------------------
4. Error when joining the etcd cluster
request sent was ignored: peer 
Cause: the cluster IDs do not match.
Solution: generate server.pem and server-key.pem
and send them, together with the CA certificate, to the joining node 192.168.100.200.

The hosts list must cover every etcd member address: 192.168.100.170 (master), 192.168.100.180 (node1), 192.168.100.190 (node2), and 192.168.100.200 (the joining node).
cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
        "192.168.100.170",
        "192.168.100.180",
        "192.168.100.190",
        "192.168.100.200"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF
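The CSR file above only describes the certificate; signing it against the existing CA is what actually produces server.pem and server-key.pem. A sketch with the cfssl toolchain (the `www` profile name and the /opt/etcd/ssl/ target path are assumptions based on typical setups of this kind):

```shell
# Sign server-csr.json with the CA to produce server.pem / server-key.pem,
# then ship them plus the CA cert to the joining node:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=www server-csr.json | cfssljson -bare server
scp ca.pem server.pem server-key.pem [email protected]:/opt/etcd/ssl/
```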

Delete the etcd data directory on every node:
rm -rf /var/lib/etcd/default.etcd
Regenerate the configuration on the master:
[root@master k8s]# bash etcd.sh etcd01 192.168.100.170 etcd02=https://192.168.100.180:2380,etcd03=https://192.168.100.190:2380,etcd04=https://192.168.100.200:2380  '//this hangs waiting for the other members to join; check from another terminal'
Check the cluster status:
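With the v2-style etcdctl used by these binary installs, the check could look like this (certificate paths are assumptions; point them at the files generated above):

```shell
# Ask every member whether the cluster is healthy:
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379,https://192.168.100.200:2379" \
  cluster-health
```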
-----------------------------------------------------

# Expose the port for external access
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort

kubectl get svc   # list the services
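To confirm the service is reachable from outside, read the allocated NodePort and curl any node's address (the node IP below is one of the example hosts from this document):

```shell
# Look up the NodePort Kubernetes assigned, then hit it from outside:
port=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -s "http://192.168.100.180:${port}" | head -n 4
```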

To wipe everything and start over: kubeadm reset

----------------------------------------------------------
###### Problem description

Creating the bootstrap role binding (which grants permission to connect to the apiserver and request signing) fails, as shown:

[root@localhost kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "kubelet-bootstrap" already exists

Analysis

A binding with this name was created earlier, so the name is already taken; the existing binding has to be deleted first.

Resolution

1. Delete the old binding

kubectl delete clusterrolebindings kubelet-bootstrap

2. Recreate it, which now succeeds

[root@localhost kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

Origin: blog.51cto.com/14625831/2548609