Installing and configuring Kubernetes before v1.6 (networking and security certificates not covered)

A guide to configuring newer versions of Kubernetes: https://blog.csdn.net/qq_37423198/article/details/79762687

Environment used in this example:

master 192.168.1.107 (Ubuntu 17.10 host)
node1 192.168.1.182 (Ubuntu 16.04 virtual machine)
(VM network mode: bridged adapter)

Configuring a Kubernetes cluster essentially means starting the following services.

(Docker comes first; it is a hard requirement.)
master node

kube-apiserver.service
kube-controller-manager.service
kube-scheduler.service

node

kubelet.service
kube-proxy.service

Since cluster state is recorded in the etcd database, the etcd service must also be started.

etcd.service

Download the binaries each service needs

Official Kubernetes binary releases: https://github.com/kubernetes/kubernetes/releases/download/
Pick a suitable version. (For v1.7 and later, kubeadm is the recommended way to install and configure, in which case you can stop reading here.)
Choose a download location and simply wget the URL (a proxy may be needed depending on your network).
(e.g. $ wget https://github.com/kubernetes/kubernetes/releases/download/v1.4.1/kubernetes.tar.gz )

etcd binary releases: https://github.com/coreos/etcd/releases/
Each etcd node needs two executables, etcd and etcdctl. etcd is a decentralized distributed key-value store. When configuring Kubernetes you can run a single-node etcd cluster or a multi-node one; every node is configured the same way. The setup below is a two-node etcd cluster.
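As a sketch, fetching and installing the two etcd binaries might look like this (the version number is only an example; pick one that suits your cluster from the releases page):

```shell
# Example version only: substitute one chosen from the releases page.
ETCD_VERSION=v3.0.17
wget "https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz"
tar xzf "etcd-${ETCD_VERSION}-linux-amd64.tar.gz"
# Only these two binaries are needed, on every etcd node.
sudo cp "etcd-${ETCD_VERSION}-linux-amd64/etcd" /usr/bin/
sudo cp "etcd-${ETCD_VERSION}-linux-amd64/etcdctl" /usr/bin/
```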

The master node needs the following in /usr/bin/:

kube-apiserver
kube-controller-manager
kube-scheduler
kubectl
etcd
etcdctl

The node needs the following in /usr/bin/:

kubelet
kube-proxy
etcd
etcdctl
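Before writing any service files, a small sanity check helps; the loop below (an illustrative sketch, not part of the original setup) reports whether each expected binary is on the PATH:

```shell
# Check that every binary from the master list above is installed.
for bin in kube-apiserver kube-controller-manager kube-scheduler kubectl etcd etcdctl; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: found"
  else
    echo "$bin: missing"
  fi
done
```

On the node, swap the list for kubelet, kube-proxy, etcd, and etcdctl.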

Configuring and starting the etcd service

1. Service file

# /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target


[Service]
User=root
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target

(A quick note: EnvironmentFile points to the configuration file, which is simply the startup parameters written as environment variables, and ExecStart gives the path of the executable.)
2. Configuration file for etcd member node1 (this one runs on the master, 192.168.1.107):

# /etc/etcd/etcd.conf
ETCD_NAME=node1 
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_INITIAL_CLUSTER_TOKEN="cluster1"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.107:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="node1=http://192.168.1.107:2380,node2=http://192.168.1.182:2380" 
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.107:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.107:2379"
ETCD_LISTEN_PEER_URLS="http://192.168.1.107:2380"

3. Run the commands

$ sudo mkdir /var/lib/etcd
$ sudo systemctl daemon-reload
$ sudo systemctl enable etcd
$ sudo systemctl start etcd

Configure etcd on the node the same way, substituting its member name (node2) and IP (192.168.1.182).
4. Verify the configuration

$ sudo etcdctl member list

or

$ sudo etcdctl cluster-health

For example:

$ sudo etcdctl cluster-health
member 98055c737238f08a is healthy : got healthy result from http://192.168.1.107:2379
member ca933ab8cfffe553 is healthy : got healthy result from http://192.168.1.182:2379
cluster is healthy

Alternatively, member list output showing name, peerURLs, and clientURLs for each member indicates a working cluster.

When the etcd service is running normally, both commands display information for every node; if neither reports an error, the configuration succeeded.
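Beyond member list and cluster-health, a quick read/write round trip confirms data actually replicates. This is a sketch using etcdctl's v2-API commands; /test-key is an arbitrary key:

```shell
# Write on one node...
etcdctl set /test-key hello
# ...then read it back on the other node; both should print "hello".
etcdctl get /test-key
# Clean up the test key.
etcdctl rm /test-key
```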

Pitfalls encountered while configuring etcd

1. Even though the members are listed in the configuration file, if member list shows only the local node, you may need to add the members manually:

sudo etcdctl member add node1 http://192.168.1.107:2380

2. Sometimes etcd fails to start from an existing data-dir, typically reporting a cluster-id mismatch.
The configuration may be correct and the error still appears; try deleting the directory where the service stores its data, or disabling the service.

$ sudo rm -rf /var/lib/etcd/

or

sudo systemctl disable etcd

Configuring and starting the Kubernetes master services

1. Common Kubernetes configuration file

# /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.1.107:8080" 

2. kube-apiserver service file

# /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
Wants=etcd.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

3. kube-apiserver configuration file

# /etc/kubernetes/apiserver

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.182:2379"                          

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.4.0/24"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
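Once kube-apiserver is running, its insecure port can be probed directly over HTTP; this is a sketch against the addresses used in this example:

```shell
# The health endpoint should answer "ok".
curl http://192.168.1.107:8080/healthz
# List the API versions the server offers.
curl http://192.168.1.107:8080/api
```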

4. kube-controller-manager service file

# /lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
After=kube-apiserver.service
Requires=etcd.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

5. kube-controller-manager configuration file

# /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""                                                                                           
KUBE_ADDRESSES="--machines=192.168.1.182" 

6. kube-scheduler service file

# /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_MASTER
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

7. kube-scheduler configuration file

# /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS=""   

8. Start the services

sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl start kube-apiserver
sudo systemctl enable kube-controller-manager
sudo systemctl start kube-controller-manager
sudo systemctl enable kube-scheduler
sudo systemctl start kube-scheduler

(Note: the services must be started and run with Internet access, since they need to reach k8s.io.)
Verify each service with sudo systemctl status [service].
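As a sketch of that verification step, the loop below checks the three master units, and kubectl can also ask the API server about component health (-s points it at the insecure port used in this example):

```shell
# Are the master services active?
for svc in kube-apiserver kube-controller-manager kube-scheduler; do
  systemctl is-active --quiet "$svc" && echo "$svc: active" || echo "$svc: not active"
done

# Query scheduler / controller-manager / etcd health through the API server.
kubectl -s http://192.168.1.107:8080 get componentstatuses
```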

Configuring the Kubernetes services on the node

0. Base configuration file on the node

# /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://kubernetes-master:8080"

1. kubelet service file

# /lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

2. kubelet configuration file

# /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.1.182"
KUBELET_API_SERVER="--api_servers=http://192.168.1.107:8080"
KUBELET_ARGS=""

3. kube-proxy service file

# /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4. kube-proxy configuration file

# /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""

5. Start the services

# systemctl daemon-reload
# systemctl start kubelet.service kube-proxy docker
# systemctl enable kubelet.service kube-proxy docker

Once configuration is complete, check the status of each service to confirm it is working, then verify with this command:

$ kubectl get nodes

If node information is returned, the cluster is working.
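To go one step further than listing nodes, a throwaway workload can be scheduled. This sketch uses kubectl run, which in these Kubernetes versions creates a deployment, with the public nginx image:

```shell
# Start a test nginx pod and watch it get scheduled onto the node.
kubectl run test-nginx --image=nginx --port=80
kubectl get pods -o wide

# Remove the test workload afterwards.
kubectl delete deployment test-nginx
```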

Some errors

1. Error "Start request repeated too quickly":
try inspecting the logs with journalctl

$ journalctl -xe
$ journalctl -e -u kubelet

(two ways to view the logs)

2. When starting kubelet, the error "running with swap on is not supported, please disable swap or set --fail-swap-on flag to false" appears.
Add --fail-swap-on=false to the kubelet startup parameters.
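Alternatively, swap can simply be turned off on the node; a sketch (the fstab edit is what makes the change survive reboots):

```shell
# Disable swap immediately...
sudo swapoff -a
# ...and comment out any swap entries so it stays off after a reboot.
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```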

Reposted from blog.csdn.net/qq_37423198/article/details/79738284