Kubernetes high-availability (HA) deployment

 
2. Hosts:
     192.168.200.12  node
     192.168.200.14  master
     192.168.200.15  master
     192.168.200.16  master
 
     172.16.59.5 >> 172.16.56.56 >> 200.14
 
3. Install the kubelet on each master. The configuration is shown below. Note the flag --register-node=false, which keeps the kubelet from automatically registering its node with the apiserver, and --config=/etc/kubernetes/manifests, the directory the kubelet watches: any pod YAML file placed there is started automatically.
###
# kubernetes kubelet (minion) config
 
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"
 
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
 
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.200.14"
 
# location of the api-server
KUBELET_API_SERVER=""
 
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=index.tenxcloud.com/google_containers/pause:0.1.0"
 
# Add your own!
KUBELET_ARGS="--config=/etc/kubernetes/manifests --register-node=false"
 
4. Configure the etcd cluster
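The post does not include the etcd configuration itself. As a hedged sketch (member names, the data directory, and the cluster token are assumptions, not from the original), a three-member static cluster on the three masters could look like this, shown for 192.168.200.14; the other members change ETCD_NAME and their own addresses accordingly:

```
# /etc/etcd/etcd.conf (sketch for 192.168.200.14; adjust per member)
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.200.14:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.200.14:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.200.14:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.200.14:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.200.14:2380,etcd2=http://192.168.200.15:2380,etcd3=http://192.168.200.16:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
```

The client URLs on port 2379 match the --etcd-servers list passed to the apiserver below.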
 
5. Create static-pod versions of the apiserver, controller-manager, and scheduler
 
(1) The image referenced in the official documentation cannot be pulled, and the image downloaded from the Tenxcloud image marketplace does not run reliably, so I decided to build the images myself with Dockerfiles.
 
The relevant Dockerfiles are:
1. apiserver:
FROM index.tenxcloud.com/google_containers/kube-apiserver:v1.2.0
 
MAINTAINER msxu [email protected]
 
CMD ["/usr/local/bin/kube-apiserver","--service-cluster-ip-range=10.254.0.0/16","--insecure-bind-address=0.0.0.0","--etcd-servers=http://192.168.200.14:2379,http://192.168.200.15:2379,http://192.168.200.16:2379","--admission-control=AlwaysAdmit"]
 
2. controller-manager: note the flag --leader-elect=true, which enables leader election for the controller-manager and scheduler:
FROM index.tenxcloud.com/google_containers/kube-controller-manager:v1.2.2
 
MAINTAINER msxu [email protected]
 
CMD ["/usr/local/bin/kube-controller-manager","--master=192.168.200.14:8081","--cluster-cidr=10.245.0.0/16","--leader-elect=true"]
 
3. scheduler:
FROM index.tenxcloud.com/google_containers/kube-scheduler:v1.2.2
 
MAINTAINER msxu [email protected]
 
CMD ["/usr/local/bin/kube-scheduler","--master=192.168.200.14:8081","--leader-elect=true"]
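With --leader-elect=true, each controller-manager and scheduler replica repeatedly tries to acquire and renew a time-limited lease on a shared record; only the current holder does real work, and a standby takes over once the lease expires. A minimal in-memory sketch of that compare-and-swap lease idea (all names here are illustrative, not the actual Kubernetes implementation):

```python
import time

class LeaseStore:
    """Toy stand-in for the shared record that real leader
    election compares-and-swaps on (an etcd/apiserver object)."""
    def __init__(self):
        self.holder = None
        self.expires = 0.0

    def try_acquire(self, candidate, ttl, now=None):
        """Acquire the lease if it is free or expired, or renew it
        if `candidate` already holds it. Returns True on success."""
        now = time.monotonic() if now is None else now
        if self.holder is None or now >= self.expires or self.holder == candidate:
            self.holder = candidate
            self.expires = now + ttl
            return True
        return False

store = LeaseStore()
assert store.try_acquire("controller-manager-A", ttl=10, now=0)       # A becomes leader
assert not store.try_acquire("controller-manager-B", ttl=10, now=5)   # B is blocked while the lease is live
assert store.try_acquire("controller-manager-A", ttl=10, now=8)       # A renews before expiry
assert store.try_acquire("controller-manager-B", ttl=10, now=25)      # A's lease lapsed; B takes over
```

This is why a crashed master is tolerated: its replicas simply stop renewing, and a replica on a surviving master wins the next acquisition.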
 
(2) The related pod files are placed in the /etc/kubernetes/manifests directory; the kubelet starts these pods automatically.
1. kube-apiserver.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: index.tenxcloud.com/google_containers/kube-apiserver:msxu0.3.5
    ports:
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /var/log/kube-apiserver.log
      name: logfile
  volumes:
  - hostPath:
      path: /var/log/kube-apiserver.log
    name: logfile
 
2. kube-scheduler.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: index.tenxcloud.com/google_containers/kube-scheduler:msxu0.3.2
    livenessProbe:
      httpGet:
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 1
    volumeMounts:
    - mountPath: /var/log/kube-scheduler.log
      name: logfile
  volumes:
  - hostPath:
      path: /var/log/kube-scheduler.log
    name: logfile
 
3. kube-controller-manager.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: index.tenxcloud.com/google_containers/kube-controller-manager:msxu0.3.2
    livenessProbe:
      httpGet:
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 1
    volumeMounts:
    - mountPath: /var/log/kube-controller-manager.log
      name: logfile
  volumes:
  - hostPath:
      path: /var/log/kube-controller-manager.log
    name: logfile
 
6. Configure the nginx server
Installation is omitted. The configuration file is /usr/local/nginx-1.5.1/conf/nginx.conf.
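The original post omits the contents of nginx.conf. As a hedged sketch of what it likely contains (the upstream name is an assumption): nginx load-balances the three apiservers' insecure port 8080 and listens on port 8081, the address the controller-manager and scheduler CMDs point at with --master=192.168.200.14:8081:

```
# /usr/local/nginx-1.5.1/conf/nginx.conf (sketch)
events {
    worker_connections 1024;
}

http {
    upstream kube_apiservers {
        server 192.168.200.14:8080;
        server 192.168.200.15:8080;
        server 192.168.200.16:8080;
    }

    server {
        listen 8081;
        location / {
            proxy_pass http://kube_apiservers;
        }
    }
}
```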
 
7. Testing: the main goal is to verify that when one master goes down, the remaining masters take over, keeping pods running and rescheduling them as needed. The test pod's YAML file is as follows:
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 1
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: slave
        image: docker.io/kubeguide/guestbook-redis-slave
        ports:
        - containerPort: 6379
 
http://blog.csdn.net/u012214983/article/details/52267476
