Kubernetes deployment process on CentOS

Description:

192.168.1.5: master, etcd

192.168.1.6: node1

192.168.1.7: node2

 

 

 

Configuring 192.168.1.5 (master):

Configure the yum source:

  Prepare the k8s installation packages on each node beforehand; downloading them from the Internet is too slow.

 

[root@master yum.repos.d]# mkdir yum && mv * yum

[root@master yum.repos.d]# vim cdrom.repo 
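A minimal cdrom.repo might look like the following sketch. The baseurl path file:///opt/k8s-rpms is an assumption — point it at whatever directory (or mounted ISO) actually holds the rpm packages:

```shell
# Sketch of a local yum repo file; the baseurl path is a placeholder --
# adjust it to the directory or mounted ISO that holds the rpm packages.
# (Run from /etc/yum.repos.d, as in the prompt above.)
cat > cdrom.repo <<'EOF'
[cdrom]
name=Local Kubernetes packages
baseurl=file:///opt/k8s-rpms
enabled=1
gpgcheck=0
EOF
```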

 

scp this yum source to all the nodes:

[root@master yum.repos.d]# scp cdrom.repo root@192.168.1.6:/etc/yum.repos.d/

[root@master yum.repos.d]# scp cdrom.repo root@192.168.1.7:/etc/yum.repos.d/

Turn off the firewall:

[root@master ~]# systemctl stop firewalld && systemctl disable firewalld

Install the services:

[root@master ~]# yum install -y kubernetes etcd flannel ntp

Configure the hosts file:

[root@master ~]# vim /etc/hosts

192.168.1.5 master
192.168.1.5 etcd
192.168.1.6 node1
192.168.1.7 node2

Configure etcd:

[root@master ~]# vim  /etc/etcd/etcd.conf

Change: ETCD_NAME=default

To: ETCD_NAME="etcd"

The etcd node name. If the etcd cluster contains only a single etcd, this line can stay commented out; the default name is "default". This name is used again later.

 

Change: ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"

To: ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.1.5:2379"

The address etcd listens on for client connections, usually port 2379; with 0.0.0.0 it listens on all interfaces.

 

Change: ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"

To: ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.5:2379"
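For reference, after the three edits the changed lines of /etc/etcd/etcd.conf read as follows (a minimal excerpt; lines not shown keep the package defaults):

```shell
# Resulting values in /etc/etcd/etcd.conf after the edits above.
ETCD_NAME="etcd"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.1.5:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.5:2379"
```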

[root@master ~]# systemctl restart etcd 

[root@master ~]# netstat -antup | grep 2379    # check whether port 2379 is listening

[root@master ~]# etcdctl member list    # list the etcd cluster members
8e9e05c52164694d: name=etcd peerURLs=http://localhost:2380 clientURLs=http://192.168.1.5:2379 isLeader=true

Configure kubernetes:

[root@master ~]#  vim /etc/kubernetes/config

Change: KUBE_MASTER="--master=http://127.0.0.1:8080"

To: KUBE_MASTER="--master=http://192.168.1.5:8080"

Configure the apiserver:

[root@master ~]# vim /etc/kubernetes/apiserver

Change: KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"

To: KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

The interface to listen on: with 127.0.0.1 the apiserver only listens on localhost; set it to 0.0.0.0 to listen on all interfaces.

 

Change: KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

To: KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.5:2379"

The address of the etcd service configured and started earlier.

 

 

Change: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

To: KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"

No restrictions: all nodes are allowed to access the apiserver, and every request is admitted.
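For reference, after the edits the changed lines of /etc/kubernetes/apiserver read as follows (a minimal excerpt; lines not shown keep the package defaults):

```shell
# Resulting values in /etc/kubernetes/apiserver after the edits above.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.5:2379"
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"
```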

Configure kube-scheduler:

[root@master ~]# vim /etc/kubernetes/scheduler

Change: KUBE_SCHEDULER_ARGS=""

To: KUBE_SCHEDULER_ARGS="0.0.0.0"

By default the scheduler listens on 127.0.0.1.

 

Set up the etcd network:

Create the directory /k8s/network in etcd to store the flannel network configuration:

[root@master ~]# etcdctl mkdir /k8s/network

Give /k8s/network/config a string value of '{"Network": "10.255.0.0/16"}':
[root@master ~]# etcdctl set /k8s/network/config '{"Network": "10.255.0.0/16"}'

[root@master ~]# etcdctl get /k8s/network/config

Note: Before starting flannel, you need to add this network configuration record in etcd. flannel uses it to assign a virtual IP address segment to docker on each minion, i.e. to configure the docker IP addresses.
Because flannel overrides the address on docker0, the flanneld service must be started before the docker service. If docker is already running, stop docker first, then start flanneld, then start docker again.

 

Configure flanneld:

[root@master ~]# vim /etc/sysconfig/flanneld

Change: FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"

To: FLANNEL_ETCD_ENDPOINTS="http://192.168.1.5:2379"

 

Change: FLANNEL_ETCD_PREFIX="/atomic.io/network"

To: FLANNEL_ETCD_PREFIX="/k8s/network"    # note: /k8s/network must match the path created in etcd above

 

Change: #FLANNEL_OPTIONS=""

To: FLANNEL_OPTIONS="--iface=ens33"    # the physical NIC name
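For reference, after the edits the changed lines of /etc/sysconfig/flanneld read as follows (a minimal excerpt; lines not shown keep the package defaults):

```shell
# Resulting values in /etc/sysconfig/flanneld after the edits above.
FLANNEL_ETCD_ENDPOINTS="http://192.168.1.5:2379"
FLANNEL_ETCD_PREFIX="/k8s/network"
FLANNEL_OPTIONS="--iface=ens33"
```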

 

[root@master ~]#  systemctl restart  flanneld 

[root@master ~]# ifconfig flannel0

 

Configuring 192.168.1.6 (node1):

Turn off the firewall:

[root@node1 ~]# systemctl stop firewalld && systemctl disable firewalld

Install the services:

[root@node1 ~]# yum install -y kubernetes etcd flannel ntp

Configure the hosts file:

[root@node1 ~]# vim /etc/hosts

192.168.1.5 master
192.168.1.5 etcd
192.168.1.6 node1
192.168.1.7 node2

[root@node1 ~]# vim /etc/sysconfig/flanneld

Change: FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"

To: FLANNEL_ETCD_ENDPOINTS="http://192.168.1.5:2379"

 

Change: FLANNEL_ETCD_PREFIX="/atomic.io/network"

To: FLANNEL_ETCD_PREFIX="/k8s/network"

 

Change: #FLANNEL_OPTIONS=""

To: FLANNEL_OPTIONS="--iface=ens33"

 

[root@node1 ~]# vim /etc/kubernetes/config

Change (line 22): KUBE_MASTER="--master=http://127.0.0.1:8080"

To: KUBE_MASTER="--master=http://192.168.1.5:8080"

 

[root@node1 ~]# vim /etc/kubernetes/kubelet

Change (line 5): KUBELET_ADDRESS="--address=127.0.0.1"

To: KUBELET_ADDRESS="--address=0.0.0.0"

By default kubelet listens only on 127.0.0.1. Change it to 0.0.0.0, because kubectl will later connect to the kubelet service to view a pod's status and the containers inside it. With 127.0.0.1, kubectl could not connect to the kubelet service remotely.

 

Change (line 11): KUBELET_HOSTNAME="--hostname-override=127.0.0.1"

To: KUBELET_HOSTNAME="--hostname-override=node1"

The minion host name; set it to the machine's own hostname for easy identification.

 

Change (line 14): KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

To: KUBELET_API_SERVER="--api-servers=http://192.168.1.5:8080"

Specifies the address of the apiserver.
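For reference, after the edits the changed lines of /etc/kubernetes/kubelet on node1 read as follows (a minimal excerpt; lines not shown keep the package defaults):

```shell
# Resulting values in /etc/kubernetes/kubelet on node1 after the edits above.
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=node1"
KUBELET_API_SERVER="--api-servers=http://192.168.1.5:8080"
```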

 

[root@node1 ~]# systemctl restart flanneld
[root@node1 ~]# systemctl restart kube-proxy
[root@node1 ~]# systemctl restart docker

[root@node1 ~]# systemctl restart kubelet

[root@node1 ~]# systemctl status flanneld kube-proxy kubelet docker | grep running

 

Configuring 192.168.1.7 (node2):

Turn off the firewall:

[root@node2 ~]# systemctl stop firewalld && systemctl disable firewalld

Install the services:

[root@node2 ~]# yum install -y kubernetes etcd flannel ntp

Configure the hosts file:

[root@node2 ~]# vim /etc/hosts

192.168.1.5 master
192.168.1.5 etcd
192.168.1.6 node1
192.168.1.7 node2

On the node1 node:

[root@node1 ~]# scp /etc/sysconfig/flanneld root@192.168.1.7:/etc/sysconfig/

[root@node1 ~]# scp /etc/kubernetes/config root@192.168.1.7:/etc/kubernetes/
[root@node1 ~]# scp /etc/kubernetes/kubelet root@192.168.1.7:/etc/kubernetes/

 

[root@node2 ~]# vim /etc/kubernetes/kubelet

Change node1 to node2 (the --hostname-override value).

[root@node2 ~]# systemctl restart flanneld
[root@node2 ~]# systemctl restart kube-proxy
[root@node2 ~]# systemctl restart docker
[root@node2 ~]# systemctl restart kubelet

Restart the services on the master:

[root@master ~]# systemctl restart kube-apiserver kube-controller-manager kube-scheduler flanneld

 

Test:

[root@master ~]# kubectl get node

NAME      STATUS    AGE
node1     Ready     10s
node2     Ready     9s

 

 

Next, try uploading an image.

 


Origin www.cnblogs.com/meml/p/12606739.html