Some pitfalls I ran into while learning Kubernetes

1. Permission problem

A pod started from an RC config file with privileged set to true never reaches the Running state. The pod details show:

    [root@docker tmp]# kubectl describe pods nfs-rc-acbo1
    Name:       nfs-rc-acbo1
    Namespace:  default
    Node:       duni-node2
    Labels:     role=nfs-server
    Status:     Pending
    IP:     
    Controllers:    ReplicationController/nfs-rc
    Containers:
      nfs-server:
        Image:          192.168.100.90:5000/nfs-data
        Port:           2049/TCP
        Volume Mounts:      <none>
        Environment Variables:  <none>
    Conditions:
      Type      Status
      PodScheduled  True 
    No volumes.
    QoS Class:  BestEffort
    Tolerations:    <none>
    Events:
      FirstSeen LastSeen    Count   From            SubobjectPath   Type        Reason          Message
      --------- --------    -----   ----            -------------   --------    ------          -------
      27s       27s     1   {default-scheduler }            Normal      Scheduled       Successfully assigned nfs-rc-acbo1 to duni-node2
      27s       27s     1   {kubelet duni-node2}            Warning     FailedValidation    Error validating pod nfs-rc-acbo1.default from api, ignoring: spec.containers[0].securityContext.privileged: Forbidden: disallowed by policy

Fix:

Edit the k8s config file on the master and on every node: vim /etc/kubernetes/config

KUBE_ALLOW_PRIV="--allow-privileged=true"

Then restart kube-apiserver on the master and kubelet on each node (both read this file):

$ systemctl restart kube-apiserver
$ systemctl restart kubelet
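For reference, the privileged request that triggers this error lives under the container's securityContext in the RC manifest. A minimal sketch reconstructed from the pod details above (the exact field values of the author's file are assumptions):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-rc
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: 192.168.100.90:5000/nfs-data
        ports:
        - containerPort: 2049
        securityContext:
          privileged: true   # rejected by kubelet unless --allow-privileged=true
```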

2. Failed to pull the k8s pause image

The pod fails to start; the pod details (kubectl describe pods podname) show:

    Events:
      FirstSeen LastSeen    Count   From            SubobjectPath   Type        Reason      Message
      --------- --------    -----   ----            -------------   --------    ------      -------
      56s       56s     1   {default-scheduler }            Normal      Scheduled   Successfully assigned nfs-rc-fc2w8 to duni-node1
      11s       11s     1   {kubelet duni-node1}            Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for gcr.io/google_containers/pause-amd64:3.0, this may be because there are no credentials on this request.  details: (Get https://gcr.io/v1/_ping: dial tcp 74.125.203.82:443: i/o timeout)"

Fix:

Method 1: if the server can reach the Internet, add --insecure-registry gcr.io to the docker daemon's startup options.
1. Edit the docker config file (vim /etc/sysconfig/docker)

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://olzwzeg2.mirror.aliyuncs.com --insecure-registry gcr.io'

2. Restart the docker service

$ systemctl restart docker

Method 2:

If the Kubernetes cluster sits on an internal network and cannot reach gcr.io, first pull the pause image on a machine that can reach it, export the image, import it into your internal private Docker registry, add --pod_infra_container_image to the kubelet startup options, and restart kubelet.

Google's official images are usually blocked and cannot be pulled directly; mirrored copies are available on Aliyun or DaoCloud (people there sync the Google images), so you can pull from a mirror and push to your own private registry.

Download the pause image from Docker Hub

$ docker pull kubernetes/pause

Assume your private image registry is already set up at 192.168.10.12:5000 (setting up a private registry is a separate topic).

Re-tag the pause image

$ docker tag docker.io/kubernetes/pause:latest 192.168.10.12:5000/google_containers/pause-amd64:3.0

Push the image to the private registry

$ docker push 192.168.10.12:5000/google_containers/pause-amd64:3.0

Set the following in vim /etc/kubernetes/kubelet:

KUBELET_ARGS="--pod_infra_container_image=192.168.10.12:5000/google_containers/pause-amd64:3.0"

Restart kubelet

$ systemctl restart kubelet
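The re-tagging above follows one simple pattern: strip the gcr.io/ prefix from the image reference and prepend your private registry. A minimal shell sketch of that pattern (rewrite_image is a hypothetical helper name, not part of docker or kubectl):

```shell
# Turn a gcr.io image reference into a private-registry reference
# by stripping the "gcr.io/" prefix and prepending the registry.
# Usage: rewrite_image <private-registry> <image-ref>
rewrite_image() {
  local registry=$1 image=$2
  echo "${registry}/${image#gcr.io/}"
}

rewrite_image 192.168.10.12:5000 gcr.io/google_containers/pause-amd64:3.0
# → 192.168.10.12:5000/google_containers/pause-amd64:3.0
```

The rewritten reference is what you pass to docker tag, docker push, and the kubelet flag, so computing it in one place avoids the kind of typo that silently breaks the kubelet config.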

3. Pods come back after being deleted

A container started with kubectl run test --image=test_image keeps being recreated after its pod is deleted.

Fix:

kubectl run does not create a bare pod; it creates a controller (a Deployment, or a ReplicationController on older versions) that keeps the desired replica count. Deleting the pod only makes the controller spawn a replacement, so delete the controlling object instead:

$ kubectl delete deployment test

Details: http://dockone.io/question/1076

4. docker build fails because of insufficient disk space

When the image being built is large and the build machine's disk is too small, docker build fails.

Check disk space

df -h

Check memory and cache

free -h

Drop the page cache

echo 3 > /proc/sys/vm/drop_caches

List docker images

docker images -a

Check how much space docker's data directory uses

du -h /var/lib/docker

Remove unneeded images to free that space

docker rmi <image-id>
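Rather than eyeballing df before every large build, the free-space check can be scripted. A minimal sketch (the 5 GiB threshold is an arbitrary example, not a value from the original post):

```shell
# Free space (in KiB) on the filesystem holding docker's data directory;
# fall back to / if /var/lib/docker does not exist on this machine.
dir=/var/lib/docker
[ -d "$dir" ] || dir=/
avail_kb=$(df -kP "$dir" | awk 'NR==2 {print $4}')

# Warn when less than 5 GiB is free (arbitrary example threshold).
if [ "$avail_kb" -lt $((5 * 1024 * 1024)) ]; then
  echo "low disk space: ${avail_kb} KiB free under $dir" >&2
fi
```

Using df -P keeps the output on one line per filesystem even for long device names, so the awk field extraction stays reliable.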


Reposted from blog.csdn.net/yangxiaobo118/article/details/80632407