k8s Master installation

Configure and start kube-apiserver
Creating kube-apiserver.service
/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
ExecStart=/usr/local/bin/kube-apiserver \
        --anonymous-auth=false \
        --basic-auth-file=/etc/kubernetes/basic_auth_file \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ALLOW_PRIV \
        --etcd-servers=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 \
        --advertise-address=172.16.20.206 \
        --bind-address=0.0.0.0 \
        --insecure-bind-address=0.0.0.0 \
        --service-cluster-ip-range=10.254.0.0/16 \
        --admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota \
        --authorization-mode=RBAC,Node \
        --runtime-config=rbac.authorization.k8s.io/v1beta1 \
        --kubelet-https=true \
        --enable-bootstrap-token-auth=true \
        --token-auth-file=/etc/kubernetes/token.csv \
        --service-node-port-range=30000-32767 \
        --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
        --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
        --client-ca-file=/etc/kubernetes/ssl/ca.pem \
        --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
        --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
        --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
        --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
        --enable-swagger-ui=true \
        --apiserver-count=3 \
        --allow-privileged=true \
        --audit-log-maxage=30 \
        --audit-log-maxbackup=3 \
        --audit-log-maxsize=100 \
        --audit-log-path=/var/lib/audit.log \
        --event-ttl=1h
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
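
The unit above references two static credential files that must exist before kube-apiserver starts: /etc/kubernetes/token.csv (bootstrap-token authentication) and /etc/kubernetes/basic_auth_file (basic authentication). A minimal sketch of creating both, assuming the commonly used kubelet-bootstrap identity and a freshly generated token; the admin password is a placeholder, not a value from this post:
# generate a random bootstrap token and write token.csv (format: token,user,uid,"group")
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /etc/kubernetes/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# basic auth file (format: password,user,uid)
cat > /etc/kubernetes/basic_auth_file <<EOF
admin_password,admin,1
EOF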
Explanation

--authorization-mode=RBAC,Node enables RBAC (and Node) authorization on the secure port; requests that fail authorization are rejected;
kube-scheduler and kube-controller-manager are deployed on the same machine as kube-apiserver and communicate with it over the insecure port;
kubelet, kube-proxy and kubectl are deployed on other nodes; when they access kube-apiserver through the secure port they must first pass TLS certificate authentication and then RBAC authorization;
kube-proxy and kubectl identify a User and Group through the client certificate they present, and RBAC authorization is applied to those identities;
because the kubelet uses the TLS Bootstrap mechanism, do not specify --kubelet-certificate-authority, --kubelet-client-certificate and --kubelet-client-key; otherwise kube-apiserver's later verification of the kubelet certificate fails with "x509: certificate signed by unknown authority";
the --admission-control value must include ServiceAccount;
--bind-address must not be 127.0.0.1;
--runtime-config=rbac.authorization.k8s.io/v1beta1 enables that API version at runtime;
--service-cluster-ip-range specifies the Service Cluster IP range; this address range must not be routable;
Kubernetes objects are stored under etcd's /registry path by default; the prefix can be changed with the --etcd-prefix parameter;
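
The units also read environment variables such as $KUBE_LOGTOSTDERR, $KUBE_LOG_LEVEL, $KUBE_ALLOW_PRIV and $KUBE_MASTER from the file named in EnvironmentFile=-/etc/kubernetes/config. A minimal sketch of that file, assuming typical values (the log level and master address are assumptions; point the address at your own apiserver):
/etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBE_MASTER="--master=http://172.16.20.206:8080"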

Start kube-apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
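
Before moving on, confirm that the apiserver answers on both ports. A quick sketch using the addresses from the unit above; 8080 is the insecure port, 6443 the default secure port, and kubernetes.pem is assumed to have been issued with client-auth usage (anonymous access is disabled):
# insecure local port, should return "ok"
curl http://127.0.0.1:8080/healthz
# secure port, authenticating with a client certificate signed by ca.pem
curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/kubernetes.pem \
     --key /etc/kubernetes/ssl/kubernetes-key.pem \
     https://172.16.20.206:6443/version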
Kube-controller-manager configuration
Creating kube-controller-manager.service
/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
ExecStart=/usr/local/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        --address=127.0.0.1 \
        --service-cluster-ip-range=10.254.0.0/16 \
        --cluster-name=kubernetes \
        --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
        --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
        --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
        --root-ca-file=/etc/kubernetes/ssl/ca.pem \
        --leader-elect=true
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Explanation

--master=http://{MASTER_IP}:8080: communicate with kube-apiserver over the insecure port 8080;
--cluster-cidr specifies the cluster's Pod CIDR range; this network must be routable between Nodes (guaranteed here by flanneld);
--service-cluster-ip-range specifies the cluster's Service CIDR range; this network must not be routable, and the value must match the parameter passed to kube-apiserver;
--cluster-signing-* specify the certificate and private key used to sign the certificates and private keys created for TLS BootStrap;
--root-ca-file is used to verify the kube-apiserver certificate; when specified, this CA certificate file is placed into the ServiceAccount of Pod containers;
--address must be 127.0.0.1, because the current kube-apiserver expects the scheduler and controller-manager to run on the same machine;
when the master is made up of several machines, --leader-elect=true ensures that only the elected kube-controller-manager process is in the working state (see the sketch below for how to find the current leader)
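
Once kube-controller-manager is running on several masters with --leader-elect=true, the instance currently holding the lock can be read from the leader-election annotation on its Endpoints object. A sketch, relying on the endpoints-based leader election used by these component versions:
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep control-plane.alpha.kubernetes.io/leader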

Start kube-controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
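
The controller-manager also exposes a local health endpoint; 10252 is its historical default insecure port (an assumption, adjust if --port was changed):
curl http://127.0.0.1:10252/healthz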
Verification (note: the scheduler shows Unhealthy below because kube-scheduler has not been started yet)
kubectl get componentstatuses
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Healthy     ok                                                                                          
etcd-1               Healthy     {"health":"true"}                                                                           
etcd-2               Healthy     {"health":"true"}                                                                           
etcd-0               Healthy     {"health":"true"}
Configuring kube-scheduler
Creating kube-scheduler.service
/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
ExecStart=/usr/local/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            --leader-elect=true \
            --address=127.0.0.1
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
Verification
kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}
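
componentstatuses probes the same local health endpoint that was refusing connections earlier (127.0.0.1:10251); it can also be checked directly:
# should return "ok" now that kube-scheduler is running
curl http://127.0.0.1:10251/healthz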

kubectl get cs    (shorthand for the command above)
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}
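
With all master components reporting Healthy, a final sanity check against the API (a minimal sketch; only control-plane objects exist until worker nodes join the cluster):
kubectl cluster-info
kubectl get namespaces
kubectl get svc kubernetes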

Origin blog.51cto.com/phospherus/2445747