Kubernetes binary package deployment

Package information

Name              Address
etcd              https://github.com/coreos/etcd/releases/tag/v3.2.12
Kubernetes server https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#downloads-for-v1160
Kubernetes node   https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#downloads-for-v1160

Installation environment

(The original shows a figure here describing the environment. From the configuration below: 192.168.1.11 is the master, 192.168.1.10 and 192.168.1.13 are the Node servers, and etcd runs on all three.)

Installation steps
Install the cfssl certificate tool
The installation packages and common components are available at ftp://192.168.1.11/

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
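
A quick check that the tools are installed and on the PATH:

cfssl version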

Etcd cluster deployment
Create certificates
Create the following three JSON files

[cqs@localhost etcd]$ ls *json
ca-config.json  ca-csr.json  server-csr.json
ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "192.168.1.11",
    "192.168.1.10",
    "192.168.1.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

Execute the following commands to generate the certificates:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
Four files are generated: ca-key.pem, ca.pem, server-key.pem, server.pem
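
Optionally, inspect the generated server certificate to confirm the hosts were included as SANs:

cfssl-certinfo -cert server.pem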

Upload and extract the installation package, and install it into the /opt/etcd directory. The operations later in this section must be repeated on all three servers; pay attention to which file contents differ between servers.

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

Create the etcd configuration file

vi /opt/etcd/cfg/etcd

with the following content (note the cfg subdirectory; the systemd unit below expects this path):

etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.11:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.11:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.11:2380,etcd02=https://192.168.1.10:2380,etcd03=https://192.168.1.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Note that some values in this file differ on each of the three servers: ETCD_NAME, ETCD_LISTEN_PEER_URLS, ETCD_LISTEN_CLIENT_URLS, ETCD_INITIAL_ADVERTISE_PEER_URLS and ETCD_ADVERTISE_CLIENT_URLS (see the etcd02 example after the parameter list).

* ETCD_NAME node name
* ETCD_DATA_DIR data directory
* ETCD_LISTEN_PEER_URLS listen address for cluster (peer) communication
* ETCD_LISTEN_CLIENT_URLS listen address for client access
* ETCD_INITIAL_ADVERTISE_PEER_URLS peer address advertised to the cluster
* ETCD_ADVERTISE_CLIENT_URLS client address advertised to the cluster
* ETCD_INITIAL_CLUSTER addresses of the cluster nodes
* ETCD_INITIAL_CLUSTER_TOKEN cluster token
* ETCD_INITIAL_CLUSTER_STATE state when joining the cluster: "new" creates a new cluster, "existing" joins an existing one
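
As an example (using 192.168.1.10, the second server from ETCD_INITIAL_CLUSTER above), the configuration on etcd02 would look like this; only the name and the local listen/advertise addresses change:

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.10:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.10:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.10:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.11:2380,etcd02=https://192.168.1.10:2380,etcd03=https://192.168.1.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"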
Create the etcd systemd service

sudo vi /usr/lib/systemd/system/etcd.service #CentOS
sudo vi /lib/systemd/system/etcd.service     #Ubuntu
etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

Copy the certificates generated earlier to the location referenced in the configuration:

sudo cp *pem /opt/etcd/ssl

Enable the etcd service at boot and start it

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
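
If the first server appears to hang at startup, that is expected: with --initial-cluster-state=new, etcd waits for the other members to come up. Check progress with:

sudo systemctl status etcd
sudo journalctl -u etcd -f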

Execute the following command, in the directory where the certificates were generated, to check whether the deployment succeeded:

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.11:2379,https://192.168.1.10:2379,https://192.168.1.13:2379" cluster-health

Output similar to the following indicates success:

member 18218cfabd4e0dea is healthy: got healthy result from https://192.168.1.11:2379
member 541c1c40994c939b is healthy: got healthy result from https://192.168.1.10:2379
member a342ea2798d20705 is healthy: got healthy result from https://192.168.1.13:2379
cluster is healthy

Docker deployment
Docker must be deployed on the two Node servers; see other articles for the details.

Flannel network deployment
First write the preset subnet information into etcd. As with the health check above, run this in the certificate directory; it only needs to be executed on one of the machines:

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.11:2379,https://192.168.1.10:2379,https://192.168.1.13:2379" set /coreos.com/network/config '{"Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

Download the installation package and copy it to all of the servers:

mkdir /opt/kubernetes/{bin,cfg,ssl} -p
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin

The following steps must be executed on all three servers.

Create the flannel configuration file

sudo vi /opt/kubernetes/cfg/flanneld

with the following content:

flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.1.11:2379,https://192.168.1.10:2379,https://192.168.1.13:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

Create the flanneld systemd service

sudo vi /usr/lib/systemd/system/flanneld.service  #CentOS
sudo vi /lib/systemd/system/flanneld.service  #Ubuntu
flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
 
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

Configure Docker to start with the flannel-assigned subnet

sudo vi /usr/lib/systemd/system/docker.service  #CentOS
sudo vi /lib/systemd/system/docker.service  #Ubuntu

The full file follows; the key changes are these two lines:

EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS

docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
 
[Install]
WantedBy=multi-user.target

Start the flanneld service, enable it at boot, and restart the Docker service

systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
ip addr # check the network configuration, and ping between the servers to verify connectivity
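
After flanneld starts, mk-docker-opts.sh writes the subnet assigned to this host into /run/flannel/subnet.env, which the docker.service unit above reads. A quick sanity check:

cat /run/flannel/subnet.env
ip addr show flannel.1   # the vxlan interface created by flannel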

Master node deployment

Generate certificates
Note: do not generate these in the same directory as the etcd certificates.

ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
server-csr.json
{
    "CN": "kubelet-bootstrap",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.1.11",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Generate kube-proxy certificate:

kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls *pem  # list the generated certificate files; there should be six: ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem

Deploy the apiserver component

Copy the installation package to the master server, then copy kube-apiserver, kube-scheduler, kube-controller-manager and kubectl into the /opt/kubernetes/bin directory:

sudo mkdir /opt/kubernetes/{bin,cfg,ssl} -p
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
sudo cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin
sudo ln -s /opt/kubernetes/bin/kubectl /usr/local/bin/   # so that kubectl can be executed from anywhere

Create a token file

sudo vi /opt/kubernetes/cfg/token.csv
token.csv
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Column 1: a random string (one way to generate it is shown below)
Column 2: the user name, kubelet-bootstrap, which must match the user name used later
Column 3: the UID
Column 4: the user group
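
One possible way to generate the random string (borrowed from the upstream TLS bootstrapping docs):

head -c 16 /dev/urandom | od -An -t x | tr -d ' '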

Create the kube-apiserver configuration file

sudo vi /opt/kubernetes/cfg/kube-apiserver

kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.1.11:2379,https://192.168.1.10:2379,https://192.168.1.13:2379 \
--bind-address=192.168.1.11 \
--secure-port=6443 \
--insecure-bind-address=0.0.0.0 \
--insecure-port=18080 \
--advertise-address=192.168.1.11 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

This configuration uses the certificates generated earlier; make sure the etcd connection settings are correct.

Parameter Description:

--logtostderr log to stderr
--v log level
--etcd-servers etcd cluster addresses
--bind-address listen address
--insecure-bind-address HTTP bind address
--insecure-port HTTP port; the default is 8080, changed here to 18080 to avoid a conflict. The two places later in this document that reference this port must use the same value.
--secure-port HTTPS secure port
--advertise-address address advertised to the cluster
--allow-privileged allow privileged containers
--service-cluster-ip-range virtual IP range for Services
--enable-admission-plugins admission control plugins
--authorization-mode authentication and authorization; enables RBAC authorization and Node self-management
--enable-bootstrap-token-auth enables the TLS bootstrap feature, discussed later
--token-auth-file token file
--service-node-port-range port range that NodePort Services may be assigned

Create the apiserver systemd service

sudo vi /usr/lib/systemd/system/kube-apiserver.service  #CentOS
sudo vi /lib/systemd/system/kube-apiserver.service  #Ubuntu
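
The unit file content is not shown in the original; a minimal unit following the same pattern as the other components below would be:

kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target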

start up

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

Deploy the scheduler component
Create the scheduler configuration file

sudo vi /opt/kubernetes/cfg/kube-scheduler

Fill in the following information

kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:18080 \
--leader-elect"

Parameter description:

--master address of the local apiserver; note that it must use the insecure port configured above (18080, not the default 8080)
--leader-elect enables automatic leader election when multiple instances start (for HA)

Manage the scheduler component with systemd

sudo vi /usr/lib/systemd/system/kube-scheduler.service #CentOS
sudo vi /lib/systemd/system/kube-scheduler.service #Ubuntu

Fill in the following information

kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

start up

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

Deploy the controller-manager component
Create the configuration file

sudo vi /opt/kubernetes/cfg/kube-controller-manager

Fill in the following information

kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:18080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"

Manage the controller-manager component with systemd

sudo vi /usr/lib/systemd/system/kube-controller-manager.service #CentOS
sudo vi /lib/systemd/system/kube-controller-manager.service   #Ubuntu

Fill in the following information

kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

start up

sudo systemctl daemon-reload
sudo systemctl enable kube-controller-manager
sudo systemctl restart kube-controller-manager

Deploy kubectl

kubectl is a command-line tool for operating the Kubernetes cluster: executing commands against it and querying information.

Create a soft link to the kubectl command so that it can be executed anywhere without typing the full path:

sudo ln -s /opt/kubernetes/bin/kubectl /usr/local/bin/   # assumes kubectl has already been copied to /opt/kubernetes/bin/

Create the kubeconfig files; execute the following commands in the directory where the kubernetes certificates were generated:

# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
KUBE_APISERVER="https://192.168.1.11:6443"
 
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
 
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
 
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
 
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
 
# Create the kube-proxy kubeconfig file
 
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

The above commands generate two configuration files:

ls *config
bootstrap.kubeconfig  kube-proxy.kubeconfig
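
You can inspect what was generated; the cluster, user and context entries should match the commands above:

kubectl config view --kubeconfig=bootstrap.kubeconfig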

Copy bootstrap.kubeconfig to the ~/.kube/config file:

mkdir -p ~/.kube && cp bootstrap.kubeconfig ~/.kube/config

Now the kubectl command can be used; first take a look at the state of the cluster:

kubectl get cs
kubectl get pods

Deploy the Node nodes
Each Node requires the kubelet and kube-proxy components; copy these two binaries to the /opt/kubernetes/bin directory on every node.

Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files generated above to the /opt/kubernetes/cfg directory.
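
One step the original does not spell out: under RBAC, the kubelet-bootstrap user from token.csv typically needs to be bound to the system:node-bootstrapper cluster role before the kubelets can request certificates. Run once on the master:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap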

Deploy kubelet
Create the configuration file

sudo vi /opt/kubernetes/cfg/kubelet

Fill in the following information

kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.10 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Parameter description:

--hostname-override the host name shown in the cluster; note that it differs on every node
--kubeconfig location of the kubeconfig file, which will be generated automatically
--bootstrap-kubeconfig the bootstrap.kubeconfig file generated earlier
--cert-dir directory where certificates are stored
--pod-infra-container-image image for the pod infrastructure (pause) container that carries the pod network

The /opt/kubernetes/cfg/kubelet.config file referenced above follows:

kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.1.10  # note: this differs on each node server
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Manage the kubelet component with systemd

sudo vi /usr/lib/systemd/system/kubelet.service  #CentOS
sudo vi /lib/systemd/system/kubelet.service      #Ubuntu

Fill in the following information

kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
 
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target

start up

sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl restart kubelet

After starting, the node does not join the cluster automatically; it must be approved manually.

On the Master node, view the pending certificate signing requests, approve them, and confirm the node has joined:

kubectl get csr
kubectl certificate approve XXXXID
kubectl get node

Deploy kube-proxy
Create the configuration file

sudo vi /opt/kubernetes/cfg/kube-proxy

Fill in the following information (note that --hostname-override differs on each node; this example uses 192.168.1.10):

kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.10 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

Manage kube-proxy with systemd

sudo vi /usr/lib/systemd/system/kube-proxy.service #CentOS
sudo vi /lib/systemd/system/kube-proxy.service   #Ubuntu

Fill in the following information

kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
 
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

start up

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

View cluster status

kubectl get node
kubectl get cs