Installing and Deploying a Google Kubernetes (k8s) Cluster on CentOS 7

Introduction

Kubernetes, abbreviated K8s (the 8 stands in for the eight letters "ubernete"), is an open-source system for managing containerized applications across multiple hosts on a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.

The traditional way to deploy an application is to install it with plugins or scripts. The drawback is that the application's runtime, configuration, management, and entire lifecycle are bound to the current operating system, which makes upgrades, updates, and rollbacks awkward. Some of this can of course be achieved with virtual machines, but VMs are heavyweight and hurt portability.

The new approach is container-based deployment. Containers are isolated from one another, each with its own filesystem; processes in different containers do not affect each other, and compute resources can be partitioned. Compared with virtual machines, containers deploy quickly, and because they are decoupled from the underlying infrastructure and the host filesystem, they can migrate across clouds and operating-system versions.

Containers use few resources and deploy fast. Each application can be packaged as a container image, and the one-to-one relationship between application and container adds further advantages: images are created at build or release time, and because each application does not need to be bundled with the rest of the application stack and does not depend on the production infrastructure, the environment stays consistent from development through testing to production. Containers are also lighter and more "transparent" than VMs, which makes them easier to monitor and manage.

Installation environment

Servers: 3

OS: CentOS 7.4.1708 (Core)

etcd version:
    etcd Version: 3.3.11
    Git SHA: 2cf9e51
    Go Version: go1.10.3
    Go OS/Arch: linux/amd64

Kubernetes: kubectl version
    Output:
      Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
      Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Pre-installation preparation

Before deploying the cluster, synchronize the clocks of the three servers via NTP; otherwise errors may show up later while the cluster is running.

[root@k8s ~]# yum -y install ntpdate
[root@k8s ~]# ntpdate -u cn.pool.ntp.org

Install redhat-ca.crt on the node machines (the rhsm packages provide the Red Hat CA certificate that is needed later to pull the pod-infrastructure image):

[root@k8s-node1 ~]# yum install *rhsm* -y

etcd cluster configuration

Flag meanings:

`--name`
The member's name in the etcd cluster; any value works as long as each member's name is unique.
`--listen-peer-urls`
URLs to listen on for peer (member-to-member) traffic; several may be given, and cluster members exchange data over them.
`--initial-advertise-peer-urls`
The peer URLs advertised to the rest of the cluster; other members use these to reach this one.
`--listen-client-urls`
URLs to listen on for client traffic; several may likewise be given.
`--advertise-client-urls`
The client URLs advertised externally; etcd proxies and clients use them to reach this member.
`--initial-cluster-token etcd-cluster-1`
The cluster token. Once set, the cluster generates a unique ID for itself and for each member, so another cluster started from the same configuration file will not interfere as long as its token differs.
`--initial-cluster`
The union of all members' `--initial-advertise-peer-urls`, as comma-separated `name=url` pairs.
`--initial-cluster-state new`
The bootstrap flag: use `new` when creating the cluster, and change it to `existing` once the cluster is up (for example when joining a member to a running cluster).
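To make the relationship between these flags concrete, here is a small Python sketch (a hypothetical helper, not part of etcd) that builds the clustering flags for one member from a shared node table. The key point: `--initial-cluster` is just the union of every member's `name=initial-advertise-peer-url` pair and must be identical on all members.

```python
# Hypothetical helper illustrating how etcd's static-bootstrap flags relate.
# The node table matches the three-server layout used in this guide.
peers = {
    "etcd1": "http://10.10.10.1:2380",
    "etcd2": "http://10.10.10.2:2380",
    "etcd3": "http://10.10.10.3:2380",
}

def etcd_flags(name):
    """Build the clustering flags for one member of a static etcd cluster."""
    # --initial-cluster is the same string on every member: the union of
    # all name=initial-advertise-peer-urls pairs.
    initial_cluster = ",".join(f"{n}={u}" for n, u in peers.items())
    return [
        f"--name={name}",
        f"--initial-advertise-peer-urls={peers[name]}",
        f"--initial-cluster={initial_cluster}",
        "--initial-cluster-token=etcd-cluster-token",
        "--initial-cluster-state=new",
    ]

print(etcd_flags("etcd2")[2])
```

Running it for any member prints the same `--initial-cluster` value, which is exactly what the ETCD_INITIAL_CLUSTER line in the config files below encodes.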

Master node configuration

1. Install kubernetes-master and etcd
[root@k8s-master ~]# yum -y install kubernetes-master etcd
2. Modify the etcd configuration file
[root@k8s-master ~]# vim /etc/etcd/etcd.conf 

Modify the following lines, replacing the IPs with your own:

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://10.10.10.1:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.10.10.1:2379,http://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd1"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.10.1:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.10.10.1:2379,http://127.0.0.1:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=http://10.10.10.1:2380,etcd2=http://10.10.10.2:2380,etcd3=http://10.10.10.3:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
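A common mistake when hand-editing these files is an ETCD_NAME that does not match any name in ETCD_INITIAL_CLUSTER (for example an `ectd`/`etcd` transposition); etcd will refuse to start in that case. The consistency rule can be sketched as a small Python checker (`parse_env` and `check_member` are hypothetical helpers, for illustration only):

```python
def parse_env(text):
    """Parse KEY="value" lines, skipping comments, as in /etc/etcd/etcd.conf."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key] = value.strip('"')
    return env

def check_member(text):
    """Verify that this member's name and peer URL agree with the cluster list."""
    env = parse_env(text)
    cluster = dict(item.split("=", 1)
                   for item in env["ETCD_INITIAL_CLUSTER"].split(","))
    name = env["ETCD_NAME"]
    assert name in cluster, f"ETCD_NAME {name!r} not found in ETCD_INITIAL_CLUSTER"
    assert cluster[name] == env["ETCD_INITIAL_ADVERTISE_PEER_URLS"], \
        "advertised peer URL does not match the cluster entry for this member"

conf = '''
ETCD_NAME="etcd1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.10.1:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.10.10.1:2380,etcd2=http://10.10.10.2:2380,etcd3=http://10.10.10.3:2380"
'''
check_member(conf)  # passes; a typo such as ETCD_NAME="ectd1" would raise
```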

Node configuration

1. Install kubernetes-node, etcd, flannel, and docker on both node machines
[root@k8s-node1 ~]# yum -y install kubernetes-node etcd flannel docker
[root@k8s-node2 ~]# yum -y install kubernetes-node etcd flannel docker
2. Modify the etcd configuration file

Configuration for node 1:

[root@k8s-node1 ~]# vim /etc/etcd/etcd.conf 
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://10.10.10.2:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.10.10.2:2379,http://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd2"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.10.2:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.10.10.2:2379,http://127.0.0.1:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=http://10.10.10.1:2380,etcd2=http://10.10.10.2:2380,etcd3=http://10.10.10.3:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"

Configuration for node 2:

[root@k8s-node2 ~]# vim /etc/etcd/etcd.conf 
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://10.10.10.3:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.10.10.3:2379,http://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd3"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.10.3:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.10.10.3:2379,http://127.0.0.1:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=http://10.10.10.1:2380,etcd2=http://10.10.10.2:2380,etcd3=http://10.10.10.3:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"

Update the default etcdctl API version:

We are using etcd v3, but etcdctl defaults to the v2 API. Change it with the commands below on all three servers.

[root@k8s-node2 ~] vim /etc/profile
# append at the end of the file
export ETCDCTL_API=3
# make the change take effect
[root@k8s-node2 ~] source /etc/profile
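If you want to try the mechanism without touching /etc/profile, the same append-and-source step can be simulated with a temporary file (a sketch that runs without root):

```shell
# Simulate appending the export to a profile and sourcing it, using a
# temporary file instead of /etc/profile so no root access is needed.
profile=$(mktemp)
echo 'export ETCDCTL_API=3' >> "$profile"
. "$profile"            # same effect as `source /etc/profile`
echo "ETCDCTL_API=$ETCDCTL_API"
rm -f "$profile"
```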

Starting/stopping the etcd service

# start the service
[root@k8s-node1 ~] systemctl start etcd.service
# stop the service
[root@k8s-node1 ~] systemctl stop etcd.service
# restart the service
[root@k8s-node1 ~] systemctl restart etcd.service
# enable the service at boot
[root@k8s-node1 ~] systemctl enable etcd.service
# check the service status
[root@k8s-node1 ~] systemctl status etcd.service

Check the etcd cluster health

(Note: `cluster-health` is an etcd v2 subcommand; if ETCDCTL_API=3 is set in your shell, use `etcdctl endpoint health` instead.)

[root@k8s-master ~]# etcdctl cluster-health
member 359947fae86629a7 is healthy: got healthy result from http://10.10.10.1:2379
member 4be7ddbd3bb99ca0 is healthy: got healthy result from http://10.10.10.2:2379
member 84951a697d1bf6a0 is healthy: got healthy result from http://10.10.10.3:2379
cluster is healthy
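In a health-check script it helps to verify this output programmatically; the sketch below (the `all_healthy` helper is hypothetical) parses the `cluster-health` text shown above:

```python
def all_healthy(output):
    """Return True if every 'member ...' line reports healthy and the
    final summary line says the cluster is healthy."""
    lines = output.strip().splitlines()
    members = [l for l in lines if l.startswith("member ")]
    return (
        bool(members)
        and all(" is healthy" in l for l in members)
        and lines[-1] == "cluster is healthy"
    )

sample = """\
member 359947fae86629a7 is healthy: got healthy result from http://10.10.10.1:2379
member 4be7ddbd3bb99ca0 is healthy: got healthy result from http://10.10.10.2:2379
member 84951a697d1bf6a0 is healthy: got healthy result from http://10.10.10.3:2379
cluster is healthy"""
print(all_healthy(sample))  # True
```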

List the etcd members

[root@k8s-node1 ~]# etcdctl member list
31c8ea313cbd4fdc, started, etcd1, http://10.10.10.1:2380, http://127.0.0.1:2379,http://10.10.10.1:2379
3ceaa63f2eec898a, started, etcd2, http://10.10.10.2:2380, http://127.0.0.1:2379,http://10.10.10.2:2379
f1e20482d2ea1996, started, etcd3, http://10.10.10.3:2380, http://127.0.0.1:2379,http://10.10.10.3:2379

Kubernetes cluster configuration

1. Modify the apiserver configuration file on the master node

[root@k8s-master ~]# vim /etc/kubernetes/apiserver 
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
 
# The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--address=0.0.0.0"
 
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
 
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
 
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://10.10.10.1:2379,http://10.10.10.2:2379,http://10.10.10.3:2379"
 
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
 
# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
 
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
 
# Add your own!
KUBE_API_ARGS=""
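One constraint worth checking is that the service range in KUBE_SERVICE_ADDRESSES does not overlap the host network (assumed here to be 10.10.10.0/24 based on the example IPs in this guide). Python's standard `ipaddress` module makes this a one-line check:

```python
import ipaddress

service_cidr = ipaddress.ip_network("10.254.0.0/16")  # --service-cluster-ip-range
node_net = ipaddress.ip_network("10.10.10.0/24")      # assumed host subnet

# overlaps() is symmetric; any overlap would cause routing conflicts
# between service virtual IPs and real host addresses.
print(service_cidr.overlaps(node_net))  # False -> safe to use
```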

2. Start/stop the services

# start apiserver
[root@k8s-master ~]# systemctl start kube-apiserver
# start controller-manager
[root@k8s-master ~]# systemctl start kube-controller-manager
# start scheduler
[root@k8s-master ~]# systemctl start kube-scheduler
# enable the services at boot
[root@k8s-master ~]# systemctl enable kube-apiserver
[root@k8s-master ~]# systemctl enable kube-controller-manager
[root@k8s-master ~]# systemctl enable kube-scheduler

3. Configure the common Kubernetes config on the nodes

[root@k8s-node1 ~]# vim /etc/kubernetes/config

Both nodes need the same change; node1 is shown here. Modify the following:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
 
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
 
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
 
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.10.10.1:8080"

4. Configure the kubelet on the nodes

[root@k8s-node1 ~]# vim /etc/kubernetes/kubelet

Both nodes need this change; node1 is shown here (on node2, set --hostname-override to 10.10.10.3):

###
# kubernetes kubelet (minion) config
 
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"
 
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
 
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=10.10.10.2"
 
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://10.10.10.1:8080"
 
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
 
# Add your own!
KUBELET_ARGS=""
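Since only the --hostname-override value differs between the two nodes, the per-node kubelet file can be rendered from a single template. A sketch (the template string and `render_kubelet` helper are for illustration only):

```python
# Template mirroring the kubelet settings used in this guide; only the
# node IP varies between the two worker machines.
TEMPLATE = '''\
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override={node_ip}"
KUBELET_API_SERVER="--api-servers=http://{master_ip}:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
'''

def render_kubelet(master_ip, node_ip):
    """Render the kubelet config for one node."""
    return TEMPLATE.format(master_ip=master_ip, node_ip=node_ip)

for ip in ("10.10.10.2", "10.10.10.3"):
    print(f"--- kubelet config for {ip} ---")
    print(render_kubelet("10.10.10.1", ip))
```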

5. Modify the flannel network configuration on the nodes

[root@k8s-node1 ~]# vim /etc/sysconfig/flanneld

Both nodes need the same change; node1 is shown here:

# Flanneld configuration options
 
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://10.10.10.1:2379"
 
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
 
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
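One step easy to miss: flanneld will not start until the key `/atomic.io/network/config` (FLANNEL_ETCD_PREFIX plus `/config`) exists in etcd, typically created with the v2 API, e.g. `etcdctl set /atomic.io/network/config '{"Network": "172.17.0.0/16"}'`. The sketch below builds and sanity-checks that JSON; the 172.17.0.0/16 pod range is an assumption, so pick one that does not collide with your host or service networks:

```python
import ipaddress
import json

# Assumed pod network; must not overlap the host subnet (10.10.10.0/24
# in this guide) or the service range (10.254.0.0/16).
flannel_config = {"Network": "172.17.0.0/16"}

pod_net = ipaddress.ip_network(flannel_config["Network"])
for other in ("10.10.10.0/24", "10.254.0.0/16"):
    assert not pod_net.overlaps(ipaddress.ip_network(other)), f"overlaps {other}"

# This JSON string is what you would store under /atomic.io/network/config.
print(json.dumps(flannel_config))
```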

6. Check the cluster status

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
10.10.10.2   Ready     1m
10.10.10.3   Ready     1m
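For automation it is convenient to fail fast when a node is not Ready; the sketch below (the `not_ready` helper is hypothetical) parses the tabular output shown above:

```python
def not_ready(output):
    """Return the names of nodes whose STATUS column is not 'Ready'."""
    rows = output.strip().splitlines()[1:]  # skip the header row
    bad = []
    for row in rows:
        name, status = row.split()[:2]
        if status != "Ready":
            bad.append(name)
    return bad

sample = """\
NAME         STATUS    AGE
10.10.10.2   Ready     1m
10.10.10.3   Ready     1m"""
print(not_ready(sample))  # []
```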

Special note: this is an original article; please credit the source when reposting! (https://blog.csdn.net/qq_32201423/article/details/98638533)
