Notes on Deploying a K8S Cluster

1. Environment Preparation

The K8S cluster environment is configured as follows:

Machine          Role      Required software
192.168.88.14    master    docker, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
192.168.88.15    node      docker, kubelet, kube-proxy
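
The paths used throughout (for example /usr/bin/kube-apiserver and /etc/kubernetes/apiserver) match the layout of the CentOS 7 kubernetes packages. Assuming CentOS 7 with the extras repository enabled, the components listed above could be installed roughly as follows; adjust package names to your distribution.

# On the master (192.168.88.14)
yum install -y docker etcd kubernetes-master

# On the node (192.168.88.15)
yum install -y docker kubernetes-node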

2. Service Configuration

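The master and node units below all load a shared /etc/kubernetes/config environment file (see the EnvironmentFile entries). A minimal sketch of that file, assuming the defaults shipped with the CentOS kubernetes packages plus this cluster's master address, might look like this:

vim /etc/kubernetes/config

# Log to stderr instead of log files
KUBE_LOGTOSTDERR="--logtostderr=true"
# Log verbosity
KUBE_LOG_LEVEL="--v=0"
# Whether privileged containers are allowed
KUBE_ALLOW_PRIV="--allow-privileged=false"
# API server address used by controller-manager, scheduler and proxy
KUBE_MASTER="--master=http://192.168.88.14:8080"
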
2.2 Master Node Configuration

2.2.1 Configuring etcd

vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

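The unit above reads ${ETCD_NAME}, ${ETCD_DATA_DIR} and ${ETCD_LISTEN_CLIENT_URLS} from /etc/etcd/etcd.conf. A minimal sketch of that file, assuming a single-member etcd that only serves local clients, might be:

vim /etc/etcd/etcd.conf

# Member name and data directory
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# Client URLs etcd listens on and advertises
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379"
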
2.2.2 Configuring kube-apiserver

vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Edit the /etc/kubernetes/apiserver file to specify the startup parameters for the kube-apiserver service.

vim /etc/kubernetes/apiserver

# Local address to listen on; 0.0.0.0 means all interfaces
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# Local port to listen on
KUBE_API_PORT="--port=8080"

# etcd server address(es); separate multiple addresses with commas
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Cluster IP range used for Kubernetes Services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=169.169.0.0/16"

# Default admission control plugins
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Additional custom arguments
KUBE_API_ARGS="--storage-backend=etcd3 --service-node-port-range=1-65535 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
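
The master components run as the kube user and point --log-dir at /var/log/kubernetes. If --logtostderr is later switched to false so that log files are actually written there, the directory has to exist and be writable, for example:

mkdir -p /var/log/kubernetes
chown kube:kube /var/log/kubernetes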

2.2.3 Configuring kube-controller-manager

vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=kube
ExecStart=/usr/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Edit the /etc/kubernetes/controller-manager file to specify the startup parameters for the kube-controller-manager service:

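vim /etc/kubernetes/controller-manager
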
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://192.168.88.14:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

2.2.4 Configuring kube-scheduler

vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=kube
ExecStart=/usr/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Edit the /etc/kubernetes/scheduler file to specify the startup parameters for the kube-scheduler service.

vim /etc/kubernetes/scheduler

# Add your own!
KUBE_SCHEDULER_ARGS="--master=http://192.168.88.14:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

2.2.5 Starting the Master Services

systemctl start docker etcd
systemctl status docker etcd
systemctl start kube-apiserver kube-controller-manager kube-scheduler
systemctl status kube-apiserver kube-controller-manager kube-scheduler
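
Optionally, the same services can be enabled so that they start automatically after a reboot:

systemctl enable docker etcd kube-apiserver kube-controller-manager kube-scheduler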

2.3 Node Configuration

2.3.1 Configuring kubelet

vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Edit the /etc/kubernetes/kubelet file to specify the startup parameters for the kubelet service.

# Address for the kubelet API to serve on
KUBELET_ADDRESS="--address=192.168.88.15"

# Port the kubelet API serves on
# KUBELET_PORT="--port=10250"

# Hostname this kubelet registers the node as
KUBELET_HOSTNAME="--hostname-override=192.168.88.15"

# kube-apiserver address
KUBELET_API_SERVER="--api-servers=http://192.168.88.14:8080"

# Pod infrastructure (pause) container image
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.88.14:5000/pod-infrastructure:latest"

# Additional custom arguments
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=2"

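The kubelet unit above sets WorkingDirectory=/var/lib/kubelet; systemd will refuse to start the service if that directory is missing, so create it if the package did not already do so:

mkdir -p /var/lib/kubelet
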
Edit /etc/kubernetes/kubeconfig to specify the configuration used to connect to the kube-apiserver service.

vim /etc/kubernetes/kubeconfig

apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: http://192.168.88.14:8080
    name: local
contexts:
  - context:
      cluster: local
    name: mycontext
current-context: mycontext
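
To sanity-check the file, kubectl can print the parsed configuration:

kubectl config view --kubeconfig=/etc/kubernetes/kubeconfig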

2.3.2 Configuring kube-proxy

vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Edit the /etc/kubernetes/proxy file to specify the startup parameters for the kube-proxy service.

vim /etc/kubernetes/proxy

# Add your own!
KUBE_PROXY_ARGS="--master=http://192.168.88.14:8080 --hostname-override=192.168.88.15 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

2.3.3 Starting the Node Services

systemctl start kubelet kube-proxy
systemctl status kubelet kube-proxy
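
As on the master, the node services can be enabled so that they start on boot:

systemctl enable kubelet kube-proxy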

3. Testing

3.1 Checking the Services

  • Check that the cluster components are healthy:
[root@control k8s]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
etcd-0               Healthy   {"health":"true"}
scheduler            Healthy   ok
controller-manager   Healthy   ok
  • Check the Node status:
[root@control k8s]# kubectl get nodes
NAME            STATUS    AGE
192.168.88.15   Ready     9d

3.2 Deploying an Application

If the checks pass, create the following two files in a directory on the Master node.

  • mytomcat-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mytomcat
spec:
  replicas: 2
  selector:
    app: mytomcat
  template:
    metadata:
      labels:
        app: mytomcat
    spec:
      containers:
      - name: mytomcat
        image: tomcat:7-jre7
        ports:
        - containerPort: 8080
  • mytomcat-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mytomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
  selector:
    app: mytomcat

Once the files are created, run kubectl create to deploy them.

[root@control k8s]# kubectl create -f mytomcat-rc.yaml         
replicationcontroller "mytomcat" created

[root@control k8s]# kubectl create -f mytomcat-svc.yaml   
service "mytomcat" created

After the commands complete, check the deployment status.

[root@control k8s]# kubectl get all
NAME          DESIRED   CURRENT   READY     AGE
rc/mytomcat   2         2         2         40m

NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   169.169.0.1       <none>        443/TCP          9d
svc/mytomcat     169.169.101.129   <nodes>       8080:30001/TCP   40m

NAME                READY     STATUS    RESTARTS   AGE
po/mytomcat-4cbx7   1/1       Running   0          40m
po/mytomcat-d7s3n   1/1       Running   0          40m

Once the application has started, the result can also be viewed in a browser.
Note that the IP address to use is the Node's address, not the Master's.
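
For example, with the NodePort defined above, the Tomcat welcome page should be reachable from any machine that can reach the node:

curl http://192.168.88.15:30001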

4. Common Commands

Function                                 Command
List all pods                            kubectl get po
Show node labels                         kubectl get no --show-labels
List all pod names                       kubectl get pod | awk '{print $1}'
Show pod details                         kubectl describe po [pod_name]
List all services                        kubectl get svc
Show service details                     kubectl describe svc [svc_name]
List deployments                         kubectl get deploy
Show cluster info                        kubectl cluster-info
Show pod resource usage                  kubectl top pod [pod_name] (requires the metrics-server add-on)
List all nodes                           kubectl get no
View pod logs                            kubectl logs [pod_name]
Check deployment rollout status          kubectl rollout status deploy [deployment_name]
View rollout history                     kubectl rollout history deploy [deployment_name]
Create resources from a manifest         kubectl create -f [manifest_file]
Update resources from a manifest         kubectl apply -f [manifest_file]
Scale up or down                         kubectl scale deploy [deployment_name] --replicas=[count]
Stop a service                           kubectl scale deploy [deployment_name] --replicas=0, or kubectl delete deploy [deployment_name]
Roll back a deployment                   kubectl rollout undo deploy [deployment_name] --to-revision=[revision]
Open a shell in a pod                    kubectl exec -it [pod_name] bash
Run a shell command in a pod             kubectl exec [pod_name] -- [shell command]
Copy files between local host and a pod  kubectl cp [local_file] [pod_name]:[remote_file]
Add a label to a node                    kubectl label nodes [node] [label_name=label_value]
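
As a quick illustration against the mytomcat example above (the pod names are taken from the earlier kubectl get all output and will differ in your cluster, and mytomcat is a ReplicationController, so it is scaled with "rc" rather than "deploy"):

kubectl get po
kubectl describe po mytomcat-4cbx7
kubectl logs mytomcat-4cbx7
kubectl exec -it mytomcat-4cbx7 bash
kubectl scale rc mytomcat --replicas=3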

Reprinted from blog.csdn.net/zhongliwen1981/article/details/118805204