[Practical exercise] Kubernetes & Docker: installing and using the Kubernetes management platform

A previous post introduced a stand-alone Docker installation. A stand-alone setup is not highly available, and once you manually create more than a couple of Docker instances it becomes hard to remember which container does what; a management layer makes this much easier.

This is why platforms dedicated to managing, scheduling, and destroying Docker containers appeared. Among them, Kubernetes (abbreviated k8s) is the mainstream solution.


lab environment:

Operating System: CentOS7

Kubernetes cluster: in this lab the Kubernetes management node is deployed stand-alone, not as a cluster

node IP addresses
master 10.1.30.34
etcd 10.1.30.34
image registry 10.1.30.34:5000
node01 10.1.30.35
node02 10.1.30.36


1. Environment preparation:

1.1 Turn off the firewall

systemctl stop firewalld
systemctl disable firewalld

1.2 Disable SELinux

setenforce 0
vi /etc/selinux/config
SELINUX=disabled
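The two steps above can also be scripted. A minimal sketch (the config path is a parameter here only so the edit can be exercised on a test copy; on a real node run it as root against the default path):

```shell
# Persistently disable SELinux by rewriting the SELINUX= line.
# setenforce 0 only affects the running system; this file edit is
# what survives a reboot.
disable_selinux() {
  local cfg="${1:-/etc/selinux/config}"
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
}
```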

1.3 Configure the yum source and extension repositories

The Alibaba Cloud yum source is used; its configuration is omitted here, see [Practice drills] Linux operating system 04 - configuring a yum source: https://blog.51cto.com/14423403/2416049

1.4 Modify the hostname

vi /etc/hostname

# Set each machine to its own name:

on 10.1.30.34 enter master

on 10.1.30.35 enter node01

on 10.1.30.36 enter node02

You can also change the hostname directly with the hostname command, then reboot the machine for the change to take full effect.
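On CentOS 7, hostnamectl set-hostname is the usual way to apply the name change. A small helper mapping this lab's IPs to names (a sketch; hostname_for_ip is a name invented here, not a system command):

```shell
# Map a node IP to its hostname, per the node table in this lab.
hostname_for_ip() {
  case "$1" in
    10.1.30.34) echo master ;;
    10.1.30.35) echo node01 ;;
    10.1.30.36) echo node02 ;;
    *) echo unknown; return 1 ;;
  esac
}
# On each node (as root), e.g.:
#   hostnamectl set-hostname "$(hostname_for_ip 10.1.30.34)"
```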

1.5 Modify the hosts table

All three machines need to be modified.

vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.30.34 master
10.1.30.34 etcd
10.1.30.35 node01
10.1.30.36 node02
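If you script this on all three machines, it helps to make the append idempotent so a re-run does not duplicate entries. A sketch (add_host_entry is a name invented here; the file path is a parameter so it can be tried on a copy):

```shell
# Append an IP/name pair to a hosts file only if it is not already there,
# so the same script can be run repeatedly on every machine.
add_host_entry() {
  local hosts_file="$1" ip="$2" name="$3"
  grep -q "^${ip}[[:space:]]\+${name}\$" "$hosts_file" 2>/dev/null \
    || echo "${ip} ${name}" >> "$hosts_file"
}
# Usage on each machine:
#   add_host_entry /etc/hosts 10.1.30.34 master
#   add_host_entry /etc/hosts 10.1.30.35 node01
#   add_host_entry /etc/hosts 10.1.30.36 node02
```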

1.6 Configure passwordless SSH login

ssh-keygen

# press Enter all the way through to accept the defaults

ssh-copy-id {ip}    # run once for each of the other machines

The remote machine will prompt: Are you sure you want to continue connecting (yes/no)?

Type yes to continue, then enter the target host's password when prompted.

Then ssh to the other machines' IPs to test; if you can log in remotely without a password prompt, the setup is complete.


2. Master installation:

The steps in section 2.x only need to be performed on the Master (10.1.30.34); the Node machines do not need them.

2.1 Install the etcd service

yum install etcd -y

Configure etcd

vi /etc/etcd/etcd.conf
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://10.1.30.34:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.1.30.34:2379"

Start etcd and enable it at boot

systemctl start etcd
systemctl enable etcd

Write the network configuration into etcd

etcdctl set /k8s/network/config '{"Network": "173.16.0.0/16"}'

# This subnet is used later, when yaml files create Docker instances, to allocate cluster IP addresses for Services made up of multiple pods. It must match the flannel configuration set up later, so that the addresses allocated to the nodes can communicate properly.

# Note: 173.16.0.0/16 can be changed freely, but use a 16-bit prefix; a 24-bit prefix causes problems.
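The prefix length can be sanity-checked before writing the value into etcd. A sketch reflecting the note above (check_flannel_cidr is a name invented here):

```shell
# Sanity-check the flannel network CIDR. Per the note above, a /16
# leaves flannel room to carve per-node subnets out of it, while a
# /24 is too small.
check_flannel_cidr() {
  local cidr="$1"
  case "$cidr" in
    */*) ;;                                  # has a prefix length
    *) echo "not a CIDR: $cidr" >&2; return 1 ;;
  esac
  local prefix="${cidr##*/}"
  [ "$prefix" -le 16 ] \
    || { echo "/$prefix leaves no room for per-node subnets" >&2; return 1; }
}
```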


2.2 Install kubernetes-master

yum install kubernetes-master -y

Configure the API server

vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.1.30.34:2379" 
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=173.16.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

# Note: remove SecurityContextDeny and ServiceAccount from the default KUBE_ADMISSION_CONTROL; they require permission-related configuration that this lab does not need.

# --service-cluster-ip-range must be the same subnet as the one set in etcd.
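A quick way to confirm the two values stay in sync (a sketch; check_cidr_match is a hypothetical helper, and the file paths are parameters so it can be tried on copies of the real files):

```shell
# Compare the CIDR stored in the etcd network JSON with the
# --service-cluster-ip-range in the apiserver config file.
check_cidr_match() {
  local etcd_json="$1" apiserver_cfg="$2"
  local net_etcd net_api
  net_etcd=$(sed -n 's/.*"Network": *"\([^"]*\)".*/\1/p' "$etcd_json")
  net_api=$(sed -n 's|.*--service-cluster-ip-range=\([0-9./]*\).*|\1|p' "$apiserver_cfg")
  [ -n "$net_etcd" ] && [ "$net_etcd" = "$net_api" ]
}
```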

Global configuration file

vi /etc/kubernetes/config
KUBE_MASTER="--master=http://10.1.30.34:8080"

Start the services and set them to start at boot

systemctl enable kube-apiserver kube-scheduler kube-controller-manager 
systemctl restart kube-apiserver kube-scheduler kube-controller-manager

Verification test:

Browse to http://10.1.30.34:8080/; a normal response indicates success.

001.png

3. Node installation:

Unless otherwise noted, the steps in section 3.x are performed only on the Node machines; the Master does not need them.

3.1 Install Docker (both Master and Node need this)

yum install docker -y
systemctl start docker

Pull a small image as a test

docker run busybox echo "Hello world"

3.2 Install flannel (both Master and Node need this)

# flannel is used for communication between the Kubernetes master and the nodes, and between nodes; Kubernetes itself does not provide this function. Similar tools such as Calico also require separate installation.

yum install flannel -y
Configure flannel
vi /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://10.1.30.34:2379" 
FLANNEL_ETCD_PREFIX="/k8s/network"

# Note: /k8s/network must match the prefix configured in etcd earlier.

3.3 Install kubernetes-node

yum install kubernetes-node -y

Edit the global configuration file

vi /etc/kubernetes/config
KUBE_MASTER="--master=http://10.1.30.34:8080"

Edit the kubelet component

# This component matters: if it is misconfigured, yaml files often fail to create Docker instances properly.

vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"

# Open to all interfaces so the master can log in to the node and view pod information; this parameter defaults to 127.0.0.1.

KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=node01"

# Change node01/node02 to match each machine

KUBELET_API_SERVER="--api-servers=http://10.1.30.34:8080"

# pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# This changes the download address of the base (pod-infrastructure) image; the default is registry.access.redhat.com/rhel7/pod-infrastructure:latest. When a yaml later creates Docker instances, kubectl get pod may show pods stuck in ContainerCreating and never Running, with an error (see Troubleshooting below). You can also point this at a private image registry (see the next post on building a private k8s image registry), e.g. registry:5000/pod-infrastructure:latest, so that all images can be pulled from and pushed to that registry.

# Add the following line yourself:

KUBELET_ARGS="--cluster_dns=173.16.0.100 --cluster_domain=cluster.local"

# Add the DNS configuration entry. The 173.16.0.100 address can be any IP picked from the subnet configured in etcd; just make sure it is within that range.
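Rather than editing the kubelet file by hand on every node, the base image can be swapped with sed. A sketch (set_pod_infra_image is a name invented here; registry:5000 assumes the private registry from the node table, and the file path is a parameter so the edit can be tried on a copy):

```shell
# Rewrite the --pod-infra-container-image value in a kubelet config file.
set_pod_infra_image() {
  local cfg="$1" image="$2"
  sed -i "s|--pod-infra-container-image=[^\"]*|--pod-infra-container-image=${image}|" "$cfg"
}
# Usage on each node, then reload so the change takes effect:
#   set_pod_infra_image /etc/kubernetes/kubelet registry:5000/pod-infrastructure:latest
#   systemctl daemon-reload && systemctl restart kubelet
```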

Modify the DNS resolution configuration (note: this is the only step in this section that the Master also needs)

vi /etc/resolv.conf

Append the following to the file:

search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 173.16.0.100

Start the services and enable them at boot

systemctl enable kubelet kube-proxy 
systemctl restart kubelet kube-proxy

After every configuration change, reload and restart the services (daemon-reload is required, otherwise the change does not take effect)

systemctl daemon-reload
systemctl restart kubelet kube-proxy

Start the full set of Node services

systemctl restart flanneld
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet kube-proxy
iptables -P FORWARD ACCEPT

# Without this rule, communication between the nodes and the master will have problems.


4. Creating instances:

Run on the master

kubectl get node

The nodes returning Ready means they are working normally.

002.png

kubectl get pod returning "No resources found" means no instances have been created yet.

003.png

On the master, create an RC yaml file and an SVC yaml file, which will be used to create Docker instances on the nodes. Tomcat is used for testing.

vi myweb-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 2
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: registry:5000/tomcat
          ports:
          - containerPort: 8080
          env:
            - name: MYSQL_SERVICE_HOST
              value: 'mysql'
            - name: MYSQL_SERVICE_PORT
              value: '3306'

# replicas: 2 means two pods, i.e. two copies

For image: you can substitute a different image source for tomcat here; note that registry:5000/tomcat requires private registry support, see the next post on building a private k8s image registry.


Edit the svc file

vi myweb-svc.yaml
apiVersion: v1
kind: Service
metadata: 
  name: myweb
spec:
  type: NodePort
  ports:
    - port: 8080
      name: myweb-svc
      nodePort: 30001
  selector:
    app: myweb

Create the pod and svc

kubectl create -f myweb-rc.yaml
kubectl create -f myweb-svc.yaml

# To delete them, change create to delete.

Check the results

kubectl get pod
kubectl get svc

004.png

Test whether the Docker instances are usable

Access port 30001 on a node directly and check whether the Tomcat page is returned normally.

For example: 10.1.30.35:30001

005.png

5. Troubleshooting:

If a pod stays stuck in ContainerCreating and never becomes Running:

Run the following command on the node to inspect the logs

journalctl -u kubelet -f

The error found:

details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)

Solution (needed on every Node):

yum install *rhsm*

Then run

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm

Then run

rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem

Then run

docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest

Back on the master, delete myweb-rc and recreate it

kubectl delete -f myweb-rc.yaml
kubectl create -f myweb-rc.yaml


Wait a while, then run kubectl get pod again

NAME          READY     STATUS    RESTARTS   AGE
myweb-j916d      1/1       Running     0       47m
myweb-m962p      1/1       Running     0       47m

The pods are running normally and have changed to the Running state.

kubectl get svc
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes      173.16.0.1      <none>       443/TCP          6h
myweb         173.16.133.17    <nodes>      8080:30001/TCP       5h

Visit 10.1.30.35:30001 again to test.



Origin blog.51cto.com/14423403/2416702