01: k8s installation and deployment

For details, see the k8s site:
https://www.kubernetes.org.cn/k8s


Environment preparation: three CentOS 7 servers
192.168.6.129 k8s-master (master)
192.168.6.130 k8s-node-1 (node)
192.168.6.131 k8s-node-2 (node)

#kubernetes (k8s) installation methods

Five ways:
Kubernetes binary installation (the most complicated to configure, about as involved as installing OpenStack)
kubeadm installation (an automated installation tool from Google; requires network access)
minikube installation (only for getting a taste of k8s)
yum installation (the simplest; the version is relatively old ==== this method is recommended for study)
Go compile-and-install (the most difficult)


We will install with yum; learning how to use k8s is what matters most.

1: Configure hostname resolution and the hostnames
# Perform these operations on all three machines (129, 130, 131)
vim /etc/hosts:
192.168.6.129 k8s-master
192.168.6.130 k8s-node-1
192.168.6.131 k8s-node-2

Modify the hostnames:
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node-1
hostnamectl set-hostname k8s-node-2
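
A quick way to confirm both changes took effect (an extra check, not in the original write-up):

hostname                 # should print the new name, e.g. k8s-master
ping -c 2 k8s-node-1     # run from the master; confirms /etc/hosts resolution works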

2: Install Docker 1.12 (the 1.13 version that ships with the system has a small bug; without changes, container network communication will be blocked later on)
[root@k8s-master ~]# yum provides docker
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
2:docker-1.13.1-102.git7f2769b.el7.centos.x86_64 : Automates deployment of
                                                 : containerized applications
Repo        : extras
2:docker-1.13.1-103.git7f2769b.el7.centos.x86_64 : Automates deployment of
                                                 : containerized applications
Repo        : extras
[root@k8s-master ~]#

# Find the Docker 1.12 packages on the CentOS vault site:
http://vault.centos.org/7.4.1708/extras/x86_64/Packages/

# The CentOS-Base.repo repository needs to be set up in advance
All three machines need to download these three Docker packages:
http://vault.centos.org/7.4.1708/extras/x86_64/Packages/docker-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
http://vault.centos.org/7.4.1708/extras/x86_64/Packages/docker-client-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
http://vault.centos.org/7.4.1708/extras/x86_64/Packages/docker-common-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
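
One way to fetch all three packages into the home directory (a sketch assuming wget is installed; curl -O works equally well):

cd ~
wget http://vault.centos.org/7.4.1708/extras/x86_64/Packages/docker-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
wget http://vault.centos.org/7.4.1708/extras/x86_64/Packages/docker-client-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
wget http://vault.centos.org/7.4.1708/extras/x86_64/Packages/docker-common-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm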

[root@k8s-master ~]# ls
docker-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
docker-client-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
docker-common-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
[root@k8s-master ~]# scp * 192.168.6.130:~
root@192.168.6.130's password:
docker-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm 100% 15MB 30.7MB/s 00:00
docker-client-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm 100% 3451KB 29.6MB/s 00:00
docker-common-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm 100% 83KB 6.9MB/s 00:00
[root@k8s-master ~]# scp * 192.168.6.131:~
root@192.168.6.131's password:
docker-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm 100% 15MB 24.2MB/s 00:00
docker-client-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm 100% 3451KB 23.3MB/s 00:00
docker-common-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm 100% 83KB 5.8MB/s 00:00
[root@k8s-master ~]#

(1): Uninstall any Docker already installed on the system
The author had previously installed the docker-ce version, so it all has to be removed cleanly (installing on a brand-new machine is recommended)
[root@k8s-node-1 ~]# rpm -qa |grep docker
docker-ce-19.03.3-3.el7.x86_64
docker-ce-cli-19.03.3-3.el7.x86_64
[root@k8s-node-1 ~]# rpm -e docker-ce-19.03.3-3.el7.x86_64
[root@k8s-node-1 ~]# rpm -e docker-ce-cli-19.03.3-3.el7.x86_64
[root@k8s-node-1 ~]# rm -rf /var/lib/docker/*   # clear out all files previously created by Docker
[root@k8s-node-1 ~]# rm -rf /etc/docker/*
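
Before moving on, it is worth confirming the removal is complete (an extra check using the same query as above):

[root@k8s-node-1 ~]# rpm -qa | grep docker   # should print nothing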

3: Install Docker 1.12 on all three machines (they must be installed in the following order, or errors may occur)
yum localinstall docker-common-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm -y
yum localinstall docker-client-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm -y
yum localinstall docker-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm -y

4: Verify that Docker installed successfully
[root@k8s-master ~]# docker -v
Docker version 1.12.6, build 3e8e77d/1.12.6
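
The article restarts docker later but never starts it explicitly; if the daemon is not yet running after the install, the following (an assumed extra step) starts and enables it on each machine:

systemctl enable docker
systemctl start docker
docker info | grep 'Server Version'   # should report 1.12.6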

5: Install etcd on the master node (k8s's key-value datastore; it natively supports clustering)
[root@k8s-master ~]# yum install etcd.x86_64 -y
[root@k8s-master ~]# vim /etc/etcd/etcd.conf
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.6.129:2379"

Start it:
[root@k8s-master ~]# systemctl start etcd.service
[root@k8s-master ~]# systemctl enable etcd.service

Test:
# set stores a key-value pair
[root@k8s-master ~]# etcdctl set testdir/testkey0 xujin
xujin
# get retrieves it
[root@k8s-master ~]# etcdctl get testdir/testkey0
xujin
[root@k8s-master ~]#

# Check cluster health
[root@k8s-master ~]# etcdctl -C http://192.168.6.129:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://192.168.6.129:2379
cluster is healthy
[root@k8s-master ~]#
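
A couple of other etcdctl (v2 API) commands that are handy while experimenting; a hedged example, not part of the original procedure:

etcdctl ls /testdir            # list the keys under a directory
etcdctl rm testdir/testkey0    # remove the test key again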

6: Install kubernetes on the master node
[root@k8s-master ~]# yum install kubernetes-master.x86_64 -y
# Edit the configuration file as follows
[root@k8s-master ~]# vim /etc/kubernetes/apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.6.129:2379"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Edit the config file
[root@k8s-master ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.6.129:8080"

Start k8s:

# Start kube-apiserver
# This service accepts and responds to user requests
[root@k8s-master ~]# systemctl enable kube-apiserver.service
[root@k8s-master ~]# systemctl start kube-apiserver.service
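
A quick sanity check once the apiserver is up: the insecure port answers plain HTTP, so curl can query it directly (assuming curl is installed):

curl http://192.168.6.129:8080/version   # returns a JSON object with the kubernetes version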

# Start kube-controller-manager
# The controller manager keeps containers alive:
# it scans container state at intervals to see whether any have died;
# when a container dies, it goes through the apiserver to start a new one in its place.
# It maintains the desired container count: if we ask for three nginx containers, extras are killed and shortfalls are started.
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
[root@k8s-master ~]# systemctl start kube-controller-manager.service

# Start kube-scheduler
# The scheduler picks the node a container starts on, i.e. which node server the container is created on
[root@k8s-master ~]# systemctl enable kube-scheduler.service
[root@k8s-master ~]# systemctl start kube-scheduler.service
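
With all three services running, you can ask the apiserver how its components look; on this master, kubectl defaults to the local insecure port (a quick check, not in the original steps):

kubectl get componentstatuses   # scheduler, controller-manager and etcd-0 should all report Healthy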
At this point, k8s on the master (129) is fully installed.

-----------------------------------------------------------------------

Install kubernetes on the node servers

(run the following commands on both 130 and 131)

yum install kubernetes-node.x86_64 -y

vim /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.6.129:8080"

vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1" # note: on 131 this must be set to k8s-node-2
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.6.129:8080"

# Start the kubelet service
# It drives docker and manages the container lifecycle
systemctl enable kubelet.service
systemctl start kubelet.service

# Start kube-proxy
# It provides network access to the containers
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
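
A quick way to confirm both node services came up (an extra check):

systemctl is-active kubelet kube-proxy   # both should print "active"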

Check whether the node cluster is healthy:
Run on the master (129):
[root@k8s-master ~]# kubectl get nodes   # if the following nodes appear, the node installation is working
NAME STATUS AGE
k8s-node-1 Ready 6m
k8s-node-2 Ready 6m
[root@k8s-master ~]#

=================================
Configure the k8s cluster network:

K8s supports several network types; see the official docs for the details.

Here we choose to install the flannel network.

1: Configure the flannel network on all k8s servers (perform the following on all three machines: 129, 130, 131)

[root@k8s-master ~]# yum install flannel -y
[root@k8s-master ~]# sed -i 's#http://127.0.0.1:2379#http://192.168.6.129:2379#g' /etc/sysconfig/flanneld
[root@k8s-master ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.6.129:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
[root@k8s-master ~]#

Then run the same steps on the 130 and 131 nodes as well.

2: On the master (129) node: create the network and restart the services
# mk recursively creates the key /atomic.io/network/config; the value stored under it is '{ "Network": "172.16.0.0/16" }'
# a /16 network can allocate plenty of IP addresses to containers
[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
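
You can read the key back to confirm it was stored, using the same etcdctl get as earlier:

[root@k8s-master ~]# etcdctl get /atomic.io/network/config
{ "Network": "172.16.0.0/16" }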

# Restart the services on the master (129)
[root@k8s-master ~]# systemctl enable flanneld.service
[root@k8s-master ~]# systemctl restart flanneld.service
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# systemctl restart kube-apiserver.service
[root@k8s-master ~]# systemctl restart kube-controller-manager.service
[root@k8s-master ~]# systemctl restart kube-scheduler.service

On the node (130, 131) side: restart the services
systemctl enable flanneld.service
systemctl restart flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
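
After the restarts, flannel assigns each host a subnet and docker0 moves into it. A quick check on any machine (flannel0 and docker0 are the default interface names in this setup):

ip a | grep -A 2 -E 'flannel0|docker0'   # both addresses should fall inside 172.16.0.0/16
cat /run/flannel/subnet.env              # the subnet flannel handed to this host, if the file is present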

3: Test the container network
# Start a container on each of the three machines, look at the address from ip a, then ping between them; they can all reach each other
[root@k8s-master ~]# docker run -it busybox
[root@k8s-node-1 ~]# docker run -it busybox
[root@k8s-node-2 ~]# docker run -it busybox
/ # ping baidu.com   # first confirm that external network access works
PING baidu.com (39.156.69.79): 56 data bytes
64 bytes from 39.156.69.79: seq=0 ttl=127 time=30.514 ms
/ # ip a   # check each container's auto-assigned IP; pinging between the containers shows they reach each other too.
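
For example, if the container on k8s-node-1 was assigned 172.16.53.2 (a made-up address for illustration; substitute whatever your ip a shows), ping it from the master's container:

/ # ping -c 2 172.16.53.2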

With the network working as well, the basic k8s setup is complete!

Origin: www.cnblogs.com/jim-xu/p/11873442.html