kubeadm installation walkthrough -- CentOS 7

Create a virtual machine first

kubeadm is fairly resource-hungry; running it directly on the rented server fails
with errors about insufficient CPU and memory.
So create a new VM locally in Hyper-V instead,
import a CentOS 7 image, and just use DHCP to start with.

DHCP configuration file

vi  /etc/sysconfig/network-scripts/ifcfg-enp0s10f0

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="dhcp"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp0s10f0"
UUID="d9415879-08e7-4c64-9c78-13a3c552f1d9"
DEVICE="enp0s10f0"
ONBOOT="yes"
IPV6_PRIVACY="no"
#IPADDR=192.168.0.105
#GATEWAY=192.168.0.1
#DNS1=192.168.1.1,192.168.0.1

Static IP

Change BOOTPROTO="static"
and specify the IP address yourself:

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp0s10f0"
UUID="d9415879-08e7-4c64-9c78-13a3c552f1d9"
DEVICE="enp0s10f0"
ONBOOT="yes"
IPV6_PRIVACY="no"
IPADDR=192.168.0.105
GATEWAY=192.168.0.1
DNS1=192.168.1.1
DNS2=192.168.0.1
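
After editing the file, restart networking so the new static IP takes effect (a quick sketch; use whichever applies depending on whether NetworkManager manages the interface):

systemctl restart network                          # classic network service on CentOS 7
# or, when NetworkManager owns the interface:
nmcli connection reload && nmcli connection up enp0s10f0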

Modify the hostname

vi /etc/hostname
change it to your hostname
vi /etc/hosts
add a line: 127.0.0.1   your-hostname

reboot for the change to take effect
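
Alternatively, on CentOS 7 the hostname can be set without a reboot via hostnamectl (a sketch; "k8s1" is just an example name matching the node shown later):

hostnamectl set-hostname k8s1
echo "127.0.0.1   k8s1" >> /etc/hosts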

Quick single-node install

https://blog.csdn.net/u013355826/article/details/82801482

# Required images
images=(
    kube-proxy-amd64:v1.10.0
    kube-scheduler-amd64:v1.10.0
    kube-controller-manager-amd64:v1.10.0
    kube-apiserver-amd64:v1.10.0
    etcd-amd64:3.1.12
    pause-amd64:3.1
    kubernetes-dashboard-amd64:v1.8.3
    k8s-dns-sidecar-amd64:1.14.8
    k8s-dns-kube-dns-amd64:1.14.8
    k8s-dns-dnsmasq-nanny-amd64:1.14.8)
# Pull each image from the keveon mirror, re-tag it as k8s.gcr.io, then drop the mirror tag
for imageName in "${images[@]}" ; do
  docker pull keveon/$imageName
  docker tag keveon/$imageName k8s.gcr.io/$imageName
  docker rmi keveon/$imageName
done

The root cause was that the cgroup driver kubelet starts with did not match Docker's.

Per the official documentation, change it:

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
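
To confirm which driver Docker is actually using, and to pick up the kubelet change, the following standard commands can be used:

docker info 2>/dev/null | grep -i "cgroup driver"   # prints "Cgroup Driver: cgroupfs" or "systemd"
systemctl daemon-reload
systemctl restart kubelet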


Original link: https://blog.csdn.net/u012570862/article/details/80150988

yum makecache fast && yum install -y kubelet-1.10.0 kubeadm-1.10.0 kubectl-1.10.0 kubernetes-cni-0.6.0

Install minikube (optional single-node alternative)

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-1.8.1-0.x86_64.rpm \
 && sudo rpm -ivh minikube-1.8.1-0.x86_64.rpm

curl -Lo minikube https://github.com/kubernetes/minikube/releases/download/v1.17.4/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
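
With minikube installed, a throwaway single-node cluster can be started directly on the host (a sketch; the none driver needs root and a running Docker, and this flag has been renamed to --driver in newer minikube releases):

minikube start --vm-driver=none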

The Kubernetes package mirror (Huawei Cloud) is: https://repo.huaweicloud.com/kubernetes/
CentOS/RHEL/Fedora
1. Back up the /etc/yum.repos.d/kubernetes.repo file:

cp /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/kubernetes.repo.bak

2. Modify the /etc/yum.repos.d/kubernetes.repo file:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://repo.huaweicloud.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://repo.huaweicloud.com/kubernetes/yum/doc/yum-key.gpg https://repo.huaweicloud.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3. Switch SELinux to permissive mode

setenforce 0
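
setenforce 0 only lasts until the next reboot; to keep SELinux permissive across reboots, the config file can be updated as well (the standard approach from the kubeadm install docs):

sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config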

4. Install kubelet, kubeadm and kubectl, then enable kubelet

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
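
A quick sanity check that everything landed (standard commands; exact versions depend on what yum installed):

kubeadm version -o short
kubectl version --client --short
systemctl status kubelet    # kubelet keeps restarting until kubeadm init runs; that is expected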

Related links
Kubernetes官网:https://kubernetes.io/
Kubernetes官方指南:https://kubernetes.io/docs/setup/independent/install-kubeadm/

Basic kubeadm usage

systemctl enable kubelet.service
kubeadm init

This fails with the following errors:
exec: "docker": executable file not found in $PATH
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
        [ERROR Swap]: running with swap on is not supported. Please disable swap

Fix them one by one.

Problem: FileContent--proc-sys-net-bridge-bridge-nf-call-iptables: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
# Fix
modprobe br_netfilter

Problem: /proc/sys/net/bridge/bridge-nf-call-iptables is not set to 1
Fix:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

Problem: /proc/sys/net/ipv4/ip_forward contents are not set to 1
Fix:
echo 1 > /proc/sys/net/ipv4/ip_forward
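
The two echo settings above do not survive a reboot; they can be made persistent with a sysctl drop-in plus a modules-load entry (a standard sketch):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system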

Problem: running with swap on is not supported. Please disable swap
Fix:
swapoff -a

Problem:
error execution phase preflight: docker is required for container runtime: exec: "docker": executable file not found in $PATH
Fix:
install Docker

Install docker-ce on CentOS 7

yum install -y yum-utils device-mapper-persistent-data lvm2


yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install docker-ce -y

Run kubeadm init again.

This time new errors appear:
 [ERROR Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
        [ERROR IsDockerSystemdCheck]: cannot execute 'docker info': exit status 1
        [ERROR SystemVerification]: failed to get docker info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Fix: systemctl start docker.service (and systemctl enable docker so it starts automatically on boot)


 [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/

 docker pull gcr.azk8s.cn/google-containers/kube-proxy:v1.14.2
 docker tag gcr.azk8s.cn/google-containers/kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2

// Use the Azure China mirror
-> [root@kube0.vm] [~] docker pull quay.azk8s.cn/coreos/flannel:v0.11.0-amd64
-> [root@kube0.vm] [~] docker tag quay.azk8s.cn/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64

docker.io image acceleration (docker tag works the same way)
// Use the Azure China mirror

// Instead of: docker pull nginx:latest
-> [root@kube0.vm] [~] docker pull dockerhub.azk8s.cn/library/nginx:latest

// Instead of: docker pull aaa/bbb:ccc
-> [root@kube0.vm] [~] docker pull dockerhub.azk8s.cn/aaa/bbb:ccc
--apiserver-advertise-address string
    The IP address the API server advertises that it is listening on. If not set, the default network interface is used.
--apiserver-bind-port int32     Default: 6443
    The port the API server binds to.
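
A minimal sketch of passing these flags to kubeadm init (the address is the static IP assumed earlier in this walkthrough; --kubernetes-version pins the release so the image versions match the ones pulled below):

kubeadm init \
  --apiserver-advertise-address=192.168.0.105 \
  --apiserver-bind-port=6443 \
  --kubernetes-version=v1.17.4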
Required images (for v1.17.4):
k8s.gcr.io/kube-apiserver:v1.17.4
k8s.gcr.io/kube-controller-manager:v1.17.4
k8s.gcr.io/kube-scheduler:v1.17.4
k8s.gcr.io/kube-proxy:v1.17.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
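
The authoritative list for the kubeadm you installed can be printed with:

kubeadm config images list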

docker pull gcr.azk8s.cn/google-containers/kube-apiserver:v1.17.4
docker pull gcr.azk8s.cn/google-containers/kube-controller-manager:v1.17.4
docker pull gcr.azk8s.cn/google-containers/kube-scheduler:v1.17.4
docker pull gcr.azk8s.cn/google-containers/kube-proxy:v1.17.4
docker pull gcr.azk8s.cn/google-containers/pause:3.1
docker pull gcr.azk8s.cn/google-containers/etcd:3.4.3-0 
docker pull gcr.azk8s.cn/google-containers/coredns:1.6.5

docker tag gcr.azk8s.cn/google-containers/kube-apiserver:v1.17.4  k8s.gcr.io/kube-apiserver:v1.17.4
docker tag gcr.azk8s.cn/google-containers/kube-controller-manager:v1.17.4 k8s.gcr.io/kube-controller-manager:v1.17.4
docker tag gcr.azk8s.cn/google-containers/kube-scheduler:v1.17.4 k8s.gcr.io/kube-scheduler:v1.17.4
docker tag gcr.azk8s.cn/google-containers/kube-proxy:v1.17.4 k8s.gcr.io/kube-proxy:v1.17.4
docker tag gcr.azk8s.cn/google-containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag gcr.azk8s.cn/google-containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag gcr.azk8s.cn/google-containers/coredns:1.6.5 k8s.gcr.io/coredns:1.6.5
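
The same pull-and-retag sequence can be written as a loop (a sketch equivalent to the commands above; image names and versions are unchanged):

KUBE_VERSION=v1.17.4
images=(
    kube-apiserver:${KUBE_VERSION}
    kube-controller-manager:${KUBE_VERSION}
    kube-scheduler:${KUBE_VERSION}
    kube-proxy:${KUBE_VERSION}
    pause:3.1
    etcd:3.4.3-0
    coredns:1.6.5)
for img in "${images[@]}"; do
  docker pull gcr.azk8s.cn/google-containers/${img}
  docker tag  gcr.azk8s.cn/google-containers/${img} k8s.gcr.io/${img}
  docker rmi  gcr.azk8s.cn/google-containers/${img}   # drop the mirror tag, keep only k8s.gcr.io
done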

Install a network plugin

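A minimal sketch using flannel (whose image was pulled above): once kubeadm init has succeeded, apply its manifest. Flannel's defaults expect the cluster to have been initialized with --pod-network-cidr=10.244.0.0/16, and the manifest path below is the historical coreos/flannel location, which may have moved in newer releases, so treat it as an assumption:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods -n kube-system    # the flannel and coredns pods should reach Running
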
Run the initialization again:
kubeadm init


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.103:6443 --token l40zcw.cwzoaebwylp219gp \
    --discovery-token-ca-cert-hash sha256:ca09966540506589cdb67ad10a055ec92e792ed46221c3b5703351493ba1bc1a 
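
The join token printed above expires (24 hours by default); a fresh join command can be generated on the control-plane node at any time:

kubeadm token create --print-join-command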

Configure the Kubernetes yum repository (Aliyun mirror)

# Configure the Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Deploy the dashboard web UI

cd /root/k8s
touch kubernetes-dashboard.yaml

The contents of kubernetes-dashboard.yaml are as follows:

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

Modify the permissions (bind the dashboard ServiceAccount to cluster-admin)

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
cd /root/k8s

$ curl -O https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml



# Check the node's Kubernetes version
kubectl get nodes
k8s1   NotReady   master   23m   v1.14.2

docker pull gcr.azk8s.cn/google-containers/kubernetes-dashboard-amd64:v1.10.0

docker tag gcr.azk8s.cn/google-containers/kubernetes-dashboard-amd64:v1.10.0   k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

#kubectl apply -f kubernetes-dashboard.yaml
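
Once the manifest has been applied (kubectl apply -f, or kubectl create -f as below), the dashboard pod and a login token can be checked; a sketch, assuming the auto-generated secret name starts with kubernetes-dashboard-token as it does for dashboard v1.10:

kubectl get pods -n kube-system | grep kubernetes-dashboard
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/{print $1}')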

kubeadm init — initialize the cluster
Deploy the application:
kubectl create -f kubernetes-dashboard.yaml

Delete the application:
kubectl delete -f kubernetes-dashboard.yaml

List running pods (all namespaces):
kubectl get pods --all-namespaces 

List services:
kubectl get svc 
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   80m

kubectl expose deployment nginx --type=LoadBalancer --port=80 --target-port=80

The service type is LoadBalancer; --port=80 is the port this Service exposes to the outside, and --target-port=80 is the container port that traffic is forwarded to.
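
The expose command assumes a Deployment named nginx already exists; the full flow would look like this (standard kubectl commands):

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=LoadBalancer --port=80 --target-port=80
kubectl get svc nginx    # without a cloud load balancer the EXTERNAL-IP stays <pending>; the NodePort still works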

Keep swap disabled across reboots
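
swapoff -a only disables swap until the next boot; commenting out the swap entry in /etc/fstab keeps it disabled (a standard sketch):

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out every swap line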

Installing Kubernetes on CentOS 7 (single-node, from the distribution's yum packages)

1. Turn off the firewall service that ships with CentOS

#  systemctl disable firewalld  

# systemctl  stop firewalld  

2. Install etcd and the Kubernetes packages (Docker is installed automatically as a dependency)

#   yum  install  -y  etcd  kubernetes  

3. After the packages are installed, modify two configuration files (the other configuration files can keep their default parameters)

The Docker configuration file is /etc/sysconfig/docker; set OPTIONS to:

  OPTIONS='--selinux-enabled=false --insecure-registry gcr.io'  

In the Kubernetes apiserver configuration file /etc/kubernetes/apiserver, remove ServiceAccount from the --admission_control parameter (a sed sketch follows).
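
A sketch of that edit, assuming the stock KUBE_ADMISSION_CONTROL line shipped by the CentOS kubernetes package (check the file before and after):

sed -i 's/,ServiceAccount//' /etc/kubernetes/apiserver
grep ADMISSION_CONTROL /etc/kubernetes/apiserver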

4. Start all the services, in order

#  systemctl start etcd  

#  systemctl start docker  

#  systemctl start kube-apiserver  

#  systemctl start kube-controller-manager  

#  systemctl start kube-scheduler  

#  systemctl start kubelet   

#  systemctl start kube-proxy   
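
Equivalently, as a loop that also enables each service at boot (a convenience sketch):

for svc in etcd docker kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
  systemctl start  $svc
  systemctl enable $svc
done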

Common commands

Start the services
sh /root/k8s/init01.sh

Stop the services
sh /root/k8s/stop01.sh

Remember to delete the related configuration files
rm -rf /root/.kube


# Check the node versions
kubectl get nodes

kubeadm init — initialize the cluster
Deploy the application:
kubectl create -f kubernetes-dashboard.yaml

Delete the application:
kubectl delete -f kubernetes-dashboard.yaml

List running pods (all namespaces):
kubectl get pods --all-namespaces 

List all services:
kubectl get svc 

kubectl expose deployment kubernetes-dashboard --type=LoadBalancer --port=8000 --target-port=443

The service type is LoadBalancer; --port=8000 is the port this Service exposes to the outside, and --target-port=443 is the dashboard container's HTTPS port.


Reprinted from blog.csdn.net/qq_43373608/article/details/105057942