Beginner Series: k8s — Deploying a k8s Cluster (2)

k8s Cluster Deployment

1. Role Assignment

Role         IP            Installed components
k8s-master   10.0.0.170    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node0    10.0.0.113    kubelet, kube-proxy, docker, flannel, etcd
k8s-node1    10.0.0.56     kubelet, kube-proxy, docker, flannel, etcd

2. Environment Preparation

#System update (CentOS)
sudo yum install -y epel-release && sudo yum update -y
#System update (Debian/Ubuntu; epel-release is CentOS-only and is not needed here)
sudo apt-get update && sudo apt-get upgrade -y

3. Download the Components

Link: https://pan.baidu.com/s/1cOUJyQ8tuT_lT4a1ofY1Jw  Extraction code: 6n31

4. Install cfssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
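
A quick way to confirm the binaries are installed and on the PATH:

cfssl version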

5. Deploy the etcd Cluster

etcd is a critical component of a Kubernetes cluster: it stores all of the cluster's network configuration and object state information.

5.1 Generate the etcd Certificates

  • etcd CA configuration
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
  • etcd CA certificate
cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
  • etcd server certificate
cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "10.0.0.170",
    "10.0.0.113",
    "10.0.0.56"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

Generate the certificates:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca 
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server

[centos@jiliguo ssl]$ ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
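
As an optional sanity check, the freshly issued server certificate can be inspected with cfssl-certinfo (installed in step 4); the hosts list should contain all three node IPs:

cfssl-certinfo -cert server.pem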

5.2 Install etcd

  • Unpack
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
mkdir -p /k8s/etcd/{bin,cfg,ssl}   # create the target directories if they do not exist yet
cp etcd etcdctl /k8s/etcd/bin/
  • Create the etcd configuration file
vim /k8s/etcd/cfg/etcd.conf   
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data1/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.170:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.170:2379,https://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.170:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.170:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.0.0.170:2380,etcd02=https://10.0.0.113:2380,etcd03=https://10.0.0.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
  • Create the etcd systemd unit file
mkdir -p /data1/etcd
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/data1/etcd/
EnvironmentFile=-/k8s/etcd/cfg/etcd.conf
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /k8s/etcd/bin/etcd"
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
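
Before the cluster can be tested, etcd must be started on all three nodes; the commands below are a sketch assuming the unit file above (first adjust ETCD_NAME and the IP addresses in etcd.conf for each node, e.g. etcd02/10.0.0.113 and etcd03/10.0.0.56):

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd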

5.3 Test the etcd Cluster

[centos@jiliguo etcd]$ sudo /k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.0.170:2379,https://10.0.0.113:2379,https://10.0.0.56:2379" cluster-health
member 5369b6e78dc66d92 is healthy: got healthy result from https://10.0.0.113:2379
member aa55e29e6ec202ab is healthy: got healthy result from https://10.0.0.170:2379
member b5010a06fcd898a8 is healthy: got healthy result from https://10.0.0.56:2379

6. Kubernetes Certificates and Private Keys

  • Kubernetes CA certificate
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF


cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
  • apiserver certificate
cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.254.0.1",
      "127.0.0.1",
      "10.0.0.170",
      "10.0.0.113",
      "10.0.0.56",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
  • kube-proxy certificate
cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Steps 4, 5, and 6 only need to be run on one machine; the resulting certificates and config files are then copied to the other nodes.
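
For example, distributing the etcd certificates with scp might look like this (illustrative commands; adjust the user and target directories to your layout):

scp ca.pem ca-key.pem server.pem server-key.pem 10.0.0.113:/k8s/etcd/ssl/
scp ca.pem ca-key.pem server.pem server-key.pem 10.0.0.56:/k8s/etcd/ssl/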

7. Deploy the Master Node

tar -zxvf kubernetes-server-linux-amd64.tar.gz 
cd kubernetes/server/bin/
mkdir -p /k8s/kubernetes/{bin,cfg,ssl}   # create the target directories if they do not exist yet
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/

7.1 Deploy the kube-apiserver Component

  • Create the TLS bootstrapping token
[centos@jiliguo bin]$ head -c 16 /dev/urandom | od -An -t x | tr -d ' '
c3daa4aea38e73da480df317c0fd63f9
vim /k8s/kubernetes/cfg/token.csv
c3daa4aea38e73da480df317c0fd63f9,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
  • Create the apiserver configuration file
vim /k8s/kubernetes/cfg/kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.0.0.170:2379,https://10.0.0.113:2379,https://10.0.0.56:2379 \
--bind-address=10.0.0.170 \
--secure-port=6443 \
--advertise-address=10.0.0.170 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem  \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
  • Create the apiserver systemd unit file
vim /usr/lib/systemd/system/kube-apiserver.service 

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
  • Start the service (a quick check follows the commands)
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
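
A quick check that the apiserver came up and is listening on the secure port 6443 and the local insecure port 8080 (ss is part of the iproute package on CentOS 7):

sudo systemctl status kube-apiserver
sudo ss -tlnp | grep -E '6443|8080'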

7.2 Deploy the kube-scheduler Component

  • Create the kube-scheduler configuration file
vim  /k8s/kubernetes/cfg/kube-scheduler 

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
  • Create the kube-scheduler systemd unit file
vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
  • Start the service
systemctl daemon-reload
systemctl enable kube-scheduler.service 
systemctl start kube-scheduler.service

7.3 Deploy the kube-controller-manager Component

  • Create the kube-controller-manager configuration file
vim /k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
  • Create the kube-controller-manager systemd unit file
vim /usr/lib/systemd/system/kube-controller-manager.service 
 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
  • Start the service (the original repeated the kube-scheduler commands here; the service being started is kube-controller-manager)
systemctl daemon-reload
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service

7.4 Verify the Master Services

vim /etc/profile
export PATH=/k8s/kubernetes/bin:$PATH
source /etc/profile

[centos@jiliguo ~]$ kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/scheduler            Healthy   ok                  
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"}  

8. Deploy the Nodes

8.1 Deploy the kubelet Component

The following two steps are executed on the master node.

  • Bind the kubelet-bootstrap user to the system cluster role
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
  • Create the kubelet bootstrap kubeconfig file (saved as environment.sh, which is executed at the end)
vim /k8s/kubernetes/cfg/environment.sh

#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=c3daa4aea38e73da480df317c0fd63f9
KUBE_APISERVER="https://10.0.0.170:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# A different path is used here: as a regular user the original files could not be read, and with sudo the kubectl command was not found
kubectl config set-credentials kube-proxy \
  --client-certificate=/home/centos/k8s/kubernetes/ssl/kube-proxy.pem \
  --client-key=/home/centos/k8s/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
sh /k8s/kubernetes/cfg/environment.sh 

Copy the generated kubeconfig files to /k8s/kubernetes/cfg on each node, for example as shown below.
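
These scp commands are illustrative; adjust the user and source paths to your environment:

scp bootstrap.kubeconfig kube-proxy.kubeconfig 10.0.0.113:/k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 10.0.0.56:/k8s/kubernetes/cfg/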

On each node:

  • Install the binaries
tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
mkdir -p /k8s/kubernetes/{bin,cfg,ssl}   # create the target directories if they do not exist yet
cp kube-proxy kubelet kubectl /k8s/kubernetes/bin/
  • Create the kubelet parameter configuration template
vim /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.0.113
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
  • Create the kubelet configuration file
vim /k8s/kubernetes/cfg/kubelet
 
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.113 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
  • Create the kubelet systemd unit file
vim /usr/lib/systemd/system/kubelet.service 
 
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
 
[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target
  • Start the service
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
sudo systemctl status kubelet
  • On the master node: the master receives the kubelet's CSR request. CSRs can be approved manually or automatically; the automatic approach is recommended, since from v1.8 onward the certificates generated after CSR approval can be rotated automatically. The manual approval procedure is shown below. List the CSRs:
[centos@jiliguo k8s]$ kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-uOJ3ElpCCtzi4E1JKCGzf6zm8B8769kKX4OHRtYuPVM   67s   kubelet-bootstrap   Pending

[centos@jiliguo k8s]$ kubectl certificate approve node-csr-uOJ3ElpCCtzi4E1JKCGzf6zm8B8769kKX4OHRtYuPVM
certificatesigningrequest.certificates.k8s.io/node-csr-uOJ3ElpCCtzi4E1JKCGzf6zm8B8769kKX4OHRtYuPVM

[centos@jiliguo k8s]$ kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-uOJ3ElpCCtzi4E1JKCGzf6zm8B8769kKX4OHRtYuPVM   3m20s   kubelet-bootstrap   Approved,Issued
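
Once the CSR is approved, the kubelet stores its issued certificate under /k8s/kubernetes/ssl and the node registers itself with the cluster; it should now be listed (possibly NotReady until the pod network is set up in step 9):

kubectl get nodes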

8.2 Deploy the kube-proxy Component

kube-proxy runs on every node. It watches the apiserver for changes to Services and Endpoints and installs routing rules to load-balance traffic across services.

  • Create the kube-proxy configuration file
vim /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.113 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
  • Create the kube-proxy systemd unit file
vim /usr/lib/systemd/system/kube-proxy.service 
 
[Unit]
Description=Kubernetes Proxy
After=network.target
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
  • Start the service (a verification sketch follows the commands)
sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy
sudo systemctl status kube-proxy
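
As a rough verification: kube-proxy in its default iptables mode programs KUBE-* chains in the nat table, so after it starts these chains should exist:

sudo iptables -S -t nat | grep KUBE | head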

Repeat the procedure above on the other node.

9. Install the flannel Network

By default there is no flanneld network, so pods on different nodes cannot communicate; only pods on the same node can. To keep the deployment steps clear, flanneld is installed last. The flanneld service must start before docker. On startup, flanneld does the following: it fetches the network configuration from etcd, allocates a subnet and registers it in etcd, and records the subnet information in /run/flannel/subnet.env.

Perform the following on every node that needs to join the flannel network.

9.1 Register the Network Segment in etcd

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.0.170:2379,https://10.0.0.113:2379,https://10.0.0.56:2379"  set /k8s/network/config  '{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}

9.2 Deploy flannel

  • Unpack and install
tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/
  • Configure flanneld
vim /k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://10.0.0.170:2379,https://10.0.0.113:2379,https://10.0.0.56:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem -etcd-prefix=/k8s/network"
  • Create the flanneld systemd unit file
vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
 
[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
  • Configure Docker to start on the flannel-assigned subnet: it is enough to set EnvironmentFile=/run/flannel/subnet.env and ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
vim /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
 
[Install]
WantedBy=multi-user.target
  • Start the services

Note: stop docker (and the kubelet that depends on it) before starting flanneld, so that flannel can take over the docker0 bridge.

sudo systemctl daemon-reload
sudo systemctl stop docker
sudo systemctl start flanneld
sudo systemctl enable flanneld
sudo systemctl start docker
On the master node:

sudo systemctl restart kube-apiserver.service 
sudo systemctl restart kube-controller-manager.service 
sudo systemctl restart kube-scheduler.service

On the nodes:

sudo systemctl restart kubelet
sudo systemctl restart kube-proxy
  • Verify the setup
[centos@jiliguo k8s1.13]$ cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=10.254.55.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.254.55.1/24 --ip-masq=false --mtu=1450"

[centos@jiliguo k8s1.13]$ ip a
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:d4:14:40:81 brd ff:ff:ff:ff:ff:ff
    inet 10.254.55.1/24 brd 10.254.55.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d4ff:fe14:4081/64 scope link 
       valid_lft forever preferred_lft forever
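
The flannel VXLAN interface should also be up with an address from the registered /16, and cross-node connectivity can be spot-checked by pinging this node's docker0 address (10.254.55.1 in the output above) from the other node:

ip a show flannel.1
ping -c 3 10.254.55.1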

10. Test the Cluster

Test the cluster by deploying an nginx service.

  • Create the nginx deployment
kubectl create deployment nginx --image=nginx
  • Create the nginx service
kubectl create service nodeport nginx --tcp 80:80
  • View the nginx service
[centos@jiliguo ~]$ kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.254.0.1       <none>        443/TCP        25h
nginx        NodePort    10.254.219.203   <none>        80:31900/TCP   13m
  • Test the service
On a node, run:
curl 127.0.0.1:31900

or open http://<node-ip>:31900 in a browser.
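
It is also worth confirming that the nginx pod itself is running and seeing which node it was scheduled on:

kubectl get pods -o wide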

Problems

  • With an IP in the 192.168.9.x range the etcd service would not start, repeatedly reporting "cannot assign requested address"; switching to the machine's own 10.0.0.x address on ens0 worked. (etcd can only bind its listen URLs to addresses actually assigned to a local interface.)

Reference: https://github.com/minminmsn/k8s1.13/blob/master/kubernetes/kubernetes1.13.1%2Betcd3.3.10%2Bflanneld0.10%E9%9B%86%E7%BE%A4%E9%83%A8%E7%BD%B2.md

Reprinted from www.cnblogs.com/jiliguo/p/10908004.html