Hands-on Kubernetes (k8s) environment setup

Official k8s releases move very fast: many articles on the Internet are outdated, and many tools and interfaces have changed. The official documentation is not easy to follow either, so this article only records the process of setting up a k8s environment and does not go deep into k8s itself. k8s involves many concepts, so a good way to learn is to get familiar with the concepts first, then build the environment, and then revisit the concepts against the running system.

Random notes

According to some articles on the Internet, earlier k8s versions offered a packaged install via a yum source, but that is no longer necessary: just extract the release archive and the binaries are ready to use.

Official repository: https://github.com/kubernetes/kubernetes

You can download the source package and compile it yourself, but that requires Go, and it will not build from behind the GFW because the required images are blocked. It is easier to download a release directly from: https://github.com/kubernetes/kubernetes/releases

I am using Release v1.2.0-alpha.6. This package is already 496 MB, while the previous Release v1.1.4 was only 182 MB, which shows how fast things move. I used version 1.0.1 before, and some interfaces and parameters have changed since then: for example, the kubectl parameter for exposing an external IP used to be public-ip and is now externalIPs. So when following along, go by whatever your own version supports.

Environment notes

Two machines, 167 and 168, both running CentOS 6.5.

167 will run etcd, flannel, kube-apiserver, kube-controller-manager and kube-scheduler; it also acts as a minion, so it additionally runs kube-proxy and kubelet.

168 only needs to run etcd, flannel, kube-proxy and kubelet; etcd and flannel are there to connect the networks of the two machines.

k8s is built on top of Docker, so Docker is required.

Setting up the network

k8s also needs etcd and Flannel. First download these two packages; note that both machines need to download and run them:

wget  https://github.com/coreos/etcd/releases/download/v2.2.4/etcd-v2.2.4-linux-amd64.tar.gz
wget  https://github.com/coreos/flannel/releases/download/v0.5.5/flannel-0.5.5-linux-amd64.tar.gz
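
Extracting the two archives might look like this (a sketch; the target directory names follow the cd commands below):

tar zxvf etcd-v2.2.4-linux-amd64.tar.gz
tar zxvf flannel-0.5.5-linux-amd64.tar.gz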
 
Then copy the binaries to a directory on the PATH:

cd etcd-v2.2.4-linux-amd64/
cp etcd etcdctl /usr/bin/
cd ../flannel-0.5.5/
cp flanneld mk-docker-opts.sh /usr/bin

Run

# run on 167
etcd -name infra0 -initial-advertise-peer-urls http://172.16.48.167:2380 -listen-peer-urls http://172.16.48.167:2380 -listen-client-urls http://172.16.48.167:2379,http://127.0.0.1:2379 -advertise-client-urls http://172.16.48.167:2379  -discovery https://discovery.etcd.io/322a6b06081be6d4e89fd6db941c4add --data-dir /usr/local/kubernete_test/flanneldata  >> /usr/local/kubernete_test/logs/etcd.log 2>&1 &
 
# run on 168
etcd -name infra1 -initial-advertise-peer-urls http://203.130.48.168:2380 -listen-peer-urls http://203.130.48.168:2380 -listen-client-urls http://203.130.48.168:2379,http://127.0.0.1:2379 -advertise-client-urls http://203.130.48.168:2379 -discovery https://discovery.etcd.io/322a6b06081be6d4e89fd6db941c4add --data-dir /usr/local/kubernete_test/flanneldata >> /usr/local/kubernete_test/logs/etcd.log 2>&1 &
 
Note the -discovery parameter: this is a URL that we can obtain by visiting https://discovery.etcd.io/new?size=2, where size is the number of members, 2 in our case. Both machines use the same URL. If you open that URL you will see a JSON document describing the cluster. We can also run such a discovery server ourselves.
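
For example, a new discovery URL can be requested like this (the token in the returned URL will of course differ from the one used above):

curl 'https://discovery.etcd.io/new?size=2'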
Once etcd has started successfully on both machines, we can run the following on either machine:

etcdctl ls
etcdctl cluster-health
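
If everything is fine, etcdctl cluster-health should report both members as healthy, roughly like this (the member IDs here are placeholders):

member 6e3bd23ae5f1eae0 is healthy: got healthy result from http://172.16.48.167:2379
member 924e2e83e93f2560 is healthy: got healthy result from http://203.130.48.168:2379
cluster is healthy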
 
These commands confirm that etcd started successfully. If there is an error, check the log file:

tail -n 1000 -f /usr/local/kubernete_test/logs/etcd.log
 
Then, on either machine, write the flannel network configuration into etcd:

etcdctl set /coreos.com/network/config '{ "Network": "172.17.0.0/16" }'

Run:

[root@w ~]# etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.17.4.0-24
/coreos.com/network/subnets/172.17.13.0-24
[root@w ~]# etcdctl get /coreos.com/network/subnets/172.17.4.0-24
{"PublicIP":"203.130.48.168"}
[root@w ~]# etcdctl get /coreos.com/network/subnets/172.17.13.0-24
{"PublicIP":"203.130.48.167"}
 
You can see that the subnet on 167 is 172.17.13.0/24 and the one on 168 is 172.17.4.0/24; the Docker containers we create later will get their IPs from these two ranges.
Then run the following on both machines:

flanneld >> /usr/local/kubernete_test/logs/flanneld.log 2>&1 &
 
Next, on each machine:

mk-docker-opts.sh -i                 # derive Docker options from the flannel subnet file
source /run/flannel/subnet.env       # load FLANNEL_SUBNET written by flanneld
rm /var/run/docker.pid               # remove the stale Docker pid file
ifconfig docker0 ${FLANNEL_SUBNET}   # move the docker0 bridge onto the flannel subnet
 
Then restart Docker:

service docker restart
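
To verify that the bridge picked up the flannel subnet, something like the following can be checked on each host (a quick sanity check, not strictly required):

cat /run/flannel/subnet.env
ip addr show docker0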
 
This connects the container networks of the two machines; we will see the effect later.

Installing and starting k8s

wget https://github.com/kubernetes/kubernetes/releases/download/v1.2.0-alpha.6/kubernetes.tar.gz
 
Then extract everything:

tar zxvf kubernetes.tar.gz
cd kubernetes/server
tar zxvf kubernetes-server-linux-amd64.tar.gz  # this archive contains the binaries we need
cd kubernetes/server/bin/

Copy the commands into a directory on the PATH; here I only copied kubectl:

cp kubectl /usr/bin/

Run on 167:

./kube-apiserver --address=0.0.0.0  --insecure-port=8080 --service-cluster-ip-range='172.16.48.167/24' --log_dir=/usr/local/kubernete_test/logs/kube --kubelet_port=10250 --v=0 --logtostderr=false --etcd_servers=http://172.16.48.167:2379 --allow_privileged=false  >> /usr/local/kubernete_test/logs/kube-apiserver.log 2>&1 &
 
./kube-controller-manager  --v=0 --logtostderr=false --log_dir=/usr/local/kubernete_test/logs/kube --master=172.16.48.167:8080 >> /usr/local/kubernete_test/logs/kube-controller-manager 2>&1 &
 
./kube-scheduler  --master='172.16.48.167:8080' --v=0  --log_dir=/usr/local/kubernete_test/logs/kube  >> /usr/local/kubernete_test/logs/kube-scheduler.log 2>&1 &
 
This brings up the master. Check the component status:

[root@w ~]# kubectl get componentstatuses
NAME                STATUS    MESSAGE              ERROR
scheduler            Healthy  ok                  
controller-manager  Healthy  ok                  
etcd-0              Healthy  {"health": "true"}  
etcd-1              Healthy  {"health": "true"}
 
We can see that everything is running healthily.
Now we can happily run the minion components on both machines (note that 167 is also a minion):

# 167
./kube-proxy  --logtostderr=false --v=0 --master=http://172.16.48.167:8080  >> /usr/local/kubernete_test/logs/kube-proxy.log 2>&1 &
 
./kubelet  --logtostderr=false --v=0 --allow-privileged=false  --log_dir=/usr/local/kubernete_test/logs/kube  --address=0.0.0.0  --port=10250  --hostname_override=172.16.48.167  --api_servers=http://172.16.48.167:8080  >> /usr/local/kubernete_test/logs/kube-kubelet.log 2>&1 &
 
# 168
./kube-proxy  --logtostderr=false --v=0 --master=http://172.16.48.167:8080  >> /usr/local/kubernete_test/logs/kube-proxy.log 2>&1 &
 
./kubelet  --logtostderr=false --v=0 --allow-privileged=false  --log_dir=/usr/local/kubernete_test/logs/kube  --address=0.0.0.0  --port=10250  --hostname_override=172.16.48.168  --api_servers=http://172.16.48.167:8080  >> /usr/local/kubernete_test/logs/kube-kubelet.log 2>&1 &
 
Confirm that they started successfully:

[root@w ~]# kubectl get nodes
NAME            LABELS                                STATUS    AGE
172.16.48.167  kubernetes.io/hostname=172.16.48.167  Ready    1d
172.16.48.168  kubernetes.io/hostname=172.16.48.168  Ready    18h
 
Both minions are Ready.
Submitting commands

k8s supports two ways of doing this: directly via command-line parameters, or via configuration files (both JSON and YAML are supported). Below, only the command-line way is used; a configuration-file sketch follows for reference.
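
For reference only, the equivalent ReplicationController written as a configuration file might look roughly like this (a sketch against the v1 API; the file name and labels are made up):

cat > nginx-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
kubectl create -f nginx-rc.yaml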
 
Creating an RC and pods

kubectl run nginx --image=nginx --port=80  --replicas=5
 
This creates one RC and 5 pods.
You can check them with:

kubectl get rc,pods
 
If we manually delete one of the pods, k8s will automatically start a new one, always keeping the number of pods at 5.
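
A quick way to see this (the pod name below is just a placeholder; use one listed by kubectl get pods):

kubectl get pods                  # note one of the nginx pod names
kubectl delete pod nginx-xxxxx    # delete it by name
kubectl get pods                  # a replacement pod shows up shortly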
Cross-machine communication

Running docker ps on 167 and 168 shows that both machines are running some of the nginx containers. Pick any container on each machine, enter it, and check its address with ip a: on 167 the containers are in 172.17.13.0/24, on 168 in 172.17.4.0/24. Pinging the other machine's container IPs works, which shows the network is connected. If the host can reach the Internet, the containers can access the Internet as well.
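
For example, something along these lines (container IDs and target address are placeholders; the nginx image may not ship with ip or ping, in which case use a container image that does):

docker ps                                          # pick a container ID on this host
docker exec -it <container-id> ip a                # note its 172.17.x.x address
docker exec -it <container-id> ping 172.17.4.2     # ping a container IP on the other host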

If we start containers directly with docker rather than through k8s, their IPs also fall within the two ranges above, and their network is interconnected with the containers started by k8s.

Of course, the randomly assigned ranges and the fact that these are internal IPs can cause some inconvenience.

For example, a common practice is to start a container with docker and then assign it a fixed IP with pipework, which can be either an internal or an external address. In that case, can containers started by k8s talk to those containers?

The answer is: halfway. Containers started by k8s can reach both the internal and the external IPs assigned by pipework, but not the other way round: containers configured by pipework cannot reach containers started by k8s. Even so, this does not affect the usual need, because the containers started by k8s are typically web applications while the ones given fixed IPs by pipework are databases and the like, which exactly matches the requirement of accessing the database from the web application.
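
For context, assigning a fixed IP with pipework looks roughly like this (a sketch; the host interface, container name, address and gateway are made up):

pipework eth0 mysql-container 172.16.48.200/24@172.16.48.1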
 
Exposing a service

kubectl expose rc nginx --port=80 --container-port=9090 --external-ip=x.x.x.168
 
The port parameter is the container's port; since nginx uses 80, it must be 80 here.
container-port and target-port mean the same thing: the port the host forwards; you can pick any port, or leave it unspecified.
 
external-ip is the externally exposed IP address, usually a public IP. After running that command we can access the service from the public network. One caveat: this IP must belong to one of the machines where k8s is installed; an arbitrary IP will not work, which is somewhat inconvenient for applications.
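
A quick check from a host that can reach that address (keeping the placeholder IP used above):

curl -I http://x.x.x.168/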

Viewing the service

kubectl get svc

You can see the CLUSTER_IP and EXTERNAL_IP columns.
 
Remaining questions

If we use k8s for load balancing, how efficient is it? How do we keep sessions?

Since k8s is not yet very stable, it is probably not ready for production use.
