Deploying a Kubernetes 1.11.1 cluster with kubeadm

Introduction to K8s
1. Background
  Cloud computing has developed rapidly
    - IaaS
    - PaaS
    - SaaS
  Docker technology has advanced by leaps and bounds
    - Build once, run anywhere
    - Fast, lightweight containers
    - A complete ecosystem
2. What is Kubernetes
  First of all, it is a leading, brand-new distributed architecture solution based on container technology. Kubernetes (k8s) is Google's open-source container cluster management system (known inside Google as Borg). Building on Docker, it provides containerized applications with deployment, resource scheduling, service discovery, dynamic scaling, and a complete set of related capabilities, greatly simplifying the management of large container clusters.
  Kubernetes is a complete distributed-system platform. It offers full cluster management capabilities: multi-layered security and admission control, multi-tenant application support, transparent service registration and discovery, a built-in intelligent load balancer, powerful failure detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource-scheduling mechanism, and fine-grained resource quota management. Kubernetes also ships with a complete set of management tools covering development, deployment, testing, and operations monitoring.
In Kubernetes, the Service is the core of the distributed cluster architecture. A Service object has the following key characteristics:
• It has a unique, assigned name
• It has a virtual IP (Cluster IP, Service IP, or VIP) and a port number
• It provides some kind of remote service capability
• It is mapped to a group of container applications that provide this capability
  A Service's backing processes currently serve requests over sockets, for example Redis, Memcache, MySQL, a web server, or a TCP server implementing some specific business logic. Although a Service is usually backed by multiple related server processes, each with its own Endpoint (IP + port), Kubernetes lets us reach them through the Service itself. Thanks to Kubernetes' built-in transparent load balancing and failure recovery, no matter how many backend processes there are, or whether one of them is redeployed to another machine after a failure, our calls to the service are unaffected. More importantly, the Service itself does not change once created, which means that in a Kubernetes cluster we no longer have to worry about service IP addresses changing.
  Containers provide strong isolation, so it makes sense to isolate the group of processes backing a Service inside containers. For this, Kubernetes designed the Pod object: each server process is wrapped into a corresponding Pod and becomes a container running inside that Pod. To associate Services with Pods, Kubernetes attaches a Label to each Pod, for example name=mysql for a Pod running MySQL and name=php for a Pod running PHP, and then defines a Label Selector on the corresponding Service. This neatly solves the Service-to-Pod association problem.
  For cluster management, Kubernetes divides the machines in a cluster into one Master node and a group of worker nodes (Nodes). The Master runs a set of cluster-management processes: kube-apiserver, kube-controller-manager, and kube-scheduler. These processes provide resource management, Pod scheduling, elastic scaling, security control, monitoring, and error correction for the whole cluster, all fully automatically. Nodes are the worker machines that run the actual applications; the smallest unit of work that Kubernetes manages on a Node is the Pod. Nodes run the kubelet and kube-proxy service processes, which take care of creating, starting, monitoring, restarting, and destroying Pods, as well as implementing software-mode load balancing.
  Kubernetes solves the two classic problems of traditional IT systems: service scaling and service upgrades. You only need to create a Replication Controller (RC) for the Pods associated with the Service you want to scale, and scaling and subsequent upgrades are handled for you. An RC definition file contains three key pieces of information:
• The definition of the target Pod
• The number of replicas the target Pod should run (Replicas)
• The Label of the target Pods to monitor
  Once an RC is created, Kubernetes uses the Label defined in the RC to select the matching Pod instances and continuously monitors their state and count. If the number of instances falls below the defined replica count, a new Pod is created from the Pod template in the RC and scheduled onto a suitable Node, until the number of Pod instances reaches the target. The whole process is fully automated.
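As a minimal sketch of what such an RC definition looks like (the names, labels, and image below are illustrative and not taken from the environment deployed later in this document):

apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-rc              # hypothetical name
spec:
  replicas: 2                 # desired number of Pod replicas
  selector:
    name: mysql               # Label of the target Pods to monitor
  template:                   # definition of the target Pod
    metadata:
      labels:
        name: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7      # illustrative image
        ports:
        - containerPort: 3306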
  
 Advantages of Kubernetes:
    - Container orchestration
    - Lightweight
    - Open source
    - Elastic scaling
    - Load balancing
•Core concepts of Kubernetes
1.Master
  The Master is the management node of a k8s cluster. It manages the cluster and is the entry point for accessing the cluster's resource data. It hosts the etcd storage service (optional) and runs the API Server, Controller Manager, and Scheduler processes, and it is associated with the worker Nodes. The Kubernetes API Server is the key process exposing the HTTP REST interface; it is the single entry point for all create/delete/update/query operations on Kubernetes resources, and also the entry point for cluster control. The Kubernetes Controller Manager is the automation control center for all Kubernetes resource objects. The Kubernetes Scheduler is the process responsible for resource scheduling (Pod scheduling).

2.Node
  A Node is a service node in the Kubernetes cluster architecture that runs Pods (also called an agent or minion). The Node is the operational unit of a Kubernetes cluster: it hosts the Pods assigned to it and is the machine they run on. It is associated with the Master node and has a name, an IP, and system resource information. It runs the Docker Engine service, the kubelet daemon, and the kube-proxy load balancer.
• Every Node runs the following key processes:
• kubelet: handles tasks such as creating, starting, and stopping the containers that belong to Pods
• kube-proxy: the key component implementing Kubernetes Service communication and load balancing
• Docker Engine (Docker): the Docker engine, responsible for creating and managing containers on the local machine
  Nodes can be added to a Kubernetes cluster dynamically at runtime. By default, the kubelet registers itself with the Master, which is the recommended way of managing Nodes. The kubelet periodically reports its own information to the Master, such as operating system, Docker version, CPU and memory, and which Pods are running, so the Master knows the resource usage of every Node and can implement an efficient, balanced scheduling strategy.

3.Pod
  A Pod is a group of related containers running on a Node. The containers in a Pod run on the same host and share the same network namespace, IP address, and port space, so they can talk to each other via localhost. The Pod is the smallest unit that Kubernetes creates, schedules, and manages; it provides a higher level of abstraction than a container, making deployment and management more flexible. A Pod can contain a single container or several related containers.
  There are actually two kinds of Pods: ordinary Pods and static Pods. Static Pods are special: they are not stored in Kubernetes' etcd store but in a specific file on a specific Node, and they only run on that Node. Once an ordinary Pod is created, it is stored in etcd, then scheduled by the Kubernetes Master onto a specific Node and bound to it; the kubelet on that Node then instantiates the Pod as a group of related Docker containers and starts them. By default, when a container inside a Pod stops, Kubernetes detects the problem and restarts the Pod (restarting all of the Pod's containers); if the Node the Pod runs on goes down, all Pods on that Node are rescheduled onto other nodes.

4.Replication Controller
  The Replication Controller manages Pod replicas and guarantees that a specified number of Pod replicas exist in the cluster. If the number of replicas in the cluster exceeds the specified number, the extra containers are stopped; if it falls below, additional containers are started so the count stays constant. The Replication Controller is the core mechanism behind elastic scaling, dynamic expansion, and rolling upgrades.

5.Service
  A Service defines a logical set of Pods and a policy for accessing that set; it is an abstraction over the real service. A Service provides a single access entry point together with proxying and discovery, and it is associated with multiple Pods carrying the same Label; users do not need to know how the backend Pods run.
The problem of accessing a Service from outside
  First, you need to understand the three kinds of IPs in Kubernetes:
    Node IP: the IP address of a Node
    Pod IP: the IP address of a Pod
    Cluster IP: the IP address of a Service
  The Node IP is the IP address of the physical network interface of a node in the Kubernetes cluster; all servers on this network can communicate with each other directly over it. This also means that when a machine outside the Kubernetes cluster wants to reach a node inside the cluster, or a TCP/IP service on it, it must go through a Node IP.
  The Pod IP is the IP address of each Pod. It is allocated by the Docker Engine from the address range of the docker0 bridge and is usually a virtual layer-2 network.
  Finally, the Cluster IP is a virtual IP, more like a "fake" IP network, for the following reasons:
• The Cluster IP applies only to the Kubernetes Service object, and Kubernetes manages and allocates the IP address
• The Cluster IP cannot be pinged; there is no "physical network object" to answer
• A Cluster IP only forms a usable communication endpoint together with a Service Port; the Cluster IP by itself cannot communicate, and these addresses belong to the closed space of the Kubernetes cluster
Inside the Kubernetes cluster, communication between the Node IP network, the Pod IP network, and the Cluster IP network uses a special, programmatic routing scheme designed by Kubernetes itself.

6.Label
 Any API object in Kubernetes is identified by Labels. A Label is essentially a set of key/value pairs where both key and value are specified by the user. Labels can be attached to all kinds of resource objects, such as Nodes, Pods, Services, and RCs; a resource object can define any number of Labels, and the same Label can be attached to any number of resource objects. Labels are the foundation on which Replication Controllers and Services work: both use Labels to associate themselves with the Pods running on Nodes.
We can attach one or more Labels to a resource object to achieve multi-dimensional resource grouping, which makes it easy to allocate, schedule, and configure resources flexibly.
Some common Labels are:
• Version labels: "release":"stable", "release":"canary" ......
• Environment labels: "environment":"dev", "environment":"qa", "environment":"production"
• Tier labels: "tier":"frontend", "tier":"backend", "tier":"middleware"
• Partition labels: "partition":"customerA", "partition":"customerB"
• Quality-control labels: "track":"daily", "track":"weekly"
  A Label is just like a familiar tag: defining a Label on a resource object is like sticking a tag on it, and you can then use a Label Selector to query and filter resource objects carrying certain Labels. In this way Kubernetes implements a simple yet general, SQL-like object query mechanism.

  The main use cases for Label Selectors in Kubernetes are:

o   The kube-controller-manager process uses the Label Selector defined on an RC to select the Pod replicas it must monitor, keeping the replica count at the desired value fully automatically
o   The kube-proxy process uses a Service's Label Selector to select the corresponding Pods and automatically builds the request-forwarding table from each Service to its Pods, implementing the Service's intelligent load balancing
o   By defining specific Labels on certain Nodes and using the nodeSelector scheduling policy in the Pod definition, the kube-scheduler process can implement "directed scheduling" of Pods
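As a small sketch of the last two use cases (all names and labels below are illustrative, not part of the cluster built later): a Service selecting Pods by label, and a Pod restricted to labeled Nodes via nodeSelector.

apiVersion: v1
kind: Service
metadata:
  name: frontend-svc          # hypothetical name
spec:
  selector:
    tier: frontend            # forwards to Pods labeled tier=frontend
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod          # hypothetical name
  labels:
    tier: frontend
spec:
  nodeSelector:
    disktype: ssd             # only schedule onto Nodes labeled disktype=ssd
  containers:
  - name: web
    image: nginx:1.14-alpine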

•Kubernetes architecture and components

•Kubernetes components:
  The Kubernetes Master control components schedule and manage the whole system (cluster) and include the following:
  1.Kubernetes API Server
    The entry point of the Kubernetes system. It wraps the create/delete/update/query operations on the core objects and exposes them as a RESTful API to external clients and internal components. The REST objects it maintains are persisted in etcd.
  2.Kubernetes Scheduler
    Selects a node for newly created Pods (i.e., assigns machines) and is responsible for the cluster's resource scheduling. It is a separate component and can easily be swapped for another scheduler.
  3.Kubernetes Controller
    Runs the various controllers; many controllers are already provided to keep Kubernetes working properly.
  4. Replication Controller
    Manages and maintains Replication Controllers, associating them with Pods and ensuring that the number of running Pods matches the replica count defined by the Replication Controller.
  5. Node Controller
    Manages and maintains Nodes, periodically checks their health, and marks Nodes as failed or healthy.
  6. Namespace Controller
    Manages and maintains Namespaces and periodically cleans up invalid Namespaces, including the API objects inside them, such as Pods and Services.
  7. Service Controller
    Manages and maintains Services and provides load balancing and service proxying.
  8.EndPoints Controller
    Manages and maintains Endpoints, associating Services with Pods; it creates Endpoints as the backend of a Service and updates them in real time when the Pods change.
  9. Service Account Controller
    Manages and maintains Service Accounts, creating a default Service Account for every Namespace along with the corresponding Service Account Secret.
  10. Persistent Volume Controller
    Manages and maintains Persistent Volumes and Persistent Volume Claims, binds a Persistent Volume to each new Persistent Volume Claim, and performs cleanup and reclamation for released Persistent Volumes.
  11. Daemon Set Controller
    Manages and maintains Daemon Sets, creating Daemon Pods and ensuring the specified Nodes keep their Daemon Pods running.
  12. Deployment Controller
    Manages and maintains Deployments, associating Deployments with Replication Controllers and ensuring the specified number of Pods is running. When a Deployment is updated, it drives the update of the Replication Controller and the Pods.
  13.Job Controller
    Manages and maintains Jobs, creating one-off task Pods for a Job and ensuring the number of completions the Job specifies is reached.
  14. Pod Autoscaler Controller
    Implements automatic Pod scaling: it periodically fetches monitoring data, evaluates it against the scaling policy, and scales the Pods when the conditions are met.

•Kubernetes Nodes run and manage the business containers and include the following components:
  1.Kubelet
    Manages the containers. The kubelet receives Pod creation requests from the Kubernetes API Server, starts and stops containers, monitors container status, and reports it back to the Kubernetes API Server.
  2.Kubernetes Proxy
    Creates proxy services for Pods. Kubernetes Proxy obtains all Service information from the Kubernetes API Server and creates proxies accordingly, routing and forwarding requests from Services to Pods, thereby implementing the Kubernetes-level virtual forwarding network.
  3.Docker
    Each Node must run the container runtime service.
Deploying k8s

Environment:

Operating system    IP address        Hostname   Packages
CentOS7.3-x86_64    192.168.200.200   Master     Docker, kubeadm
CentOS7.3-x86_64    192.168.200.201   Minion-1   Docker
CentOS7.3-x86_64    192.168.200.202   Minion-2   Docker

Setting up the base environment

1.1 Install Docker CE
1. Check the system information on the master:
[root@master ~]# hostname
master
[root@master ~]# cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core)
[root@master ~]# uname -r
3.10.0-514.el7.x86_64
2. Check the system information on the minions:
[root@master ~]# hostname
master
[root@master ~]# cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)
[root@master ~]# uname -r
3.10.0-862.el7.x86_64
3. Install dependencies:
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
4. Configure the Aliyun mirror repository:
[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
5. Install Docker CE:
[root@master ~]# yum install docker-ce -y
6. Enable and start Docker CE:
[root@master ~]# systemctl enable docker
[root@master ~]# systemctl start docker
1.2 Install kubeadm

  1. To install kubeadm, first configure the Aliyun (domestic) repository by running:
    [root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    EOF
  2. Run the following commands to rebuild the Yum cache:
    [root@master ~]# yum -y install epel-release
    [root@master ~]# yum clean all
    [root@master ~]# yum makecache
  3. Install kubeadm:
    [root@master ~]# yum -y install kubelet kubeadm kubectl kubernetes-cni
  4. Enable and start the kubelet service:
    [root@master ~]# systemctl enable kubelet && systemctl start kubelet
    1.3 Prepare the images that kubeadm needs
    [root@master ~]# vim k8s.sh
    #!/bin/bash
    images=(kube-proxy-amd64:v1.11.0 kube-scheduler-amd64:v1.11.0 kube-controller-manager-amd64:v1.11.0 kube-apiserver-amd64:v1.11.0
    etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9
    k8s-dns-dnsmasq-nanny-amd64:1.14.9 )
    for imageName in ${images[@]} ; do
    docker pull keveon/$imageName
    docker tag keveon/$imageName k8s.gcr.io/$imageName
    docker rmi keveon/$imageName
    done

    An extra line added by the author; required for v1.11.0:

    docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1
    [root@master ~]# sh k8s.sh
    1.4 Disable swap
    [root@master ~]# swapoff -a
    [root@master ~]# vi /etc/fstab
    #
    # /etc/fstab
    # Created by anaconda on Sun May 27 06:47:13 2018
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    /dev/mapper/cl-root / xfs defaults 0 0
    UUID=07d1e156-eba8-452f-9340-49540b1c2bbb /boot xfs defaults 0 0
    #/dev/mapper/cl-swap swap swap defaults 0 0
    It is also possible to leave swap enabled; in that case you must skip the swap error during initialization and adjust the configuration as follows:
    [root@master manifors]# vim /etc/sysconfig/kubelet
    KUBELET_EXTRA_ARGS="--fail-swap-on=false" # do not require swap to be off
    KUBE_PROXY_MODE=ipvs # enable IPVS; if not set, kube-proxy falls back to iptables
    To enable IPVS, the required kernel modules must be installed and loaded beforehand.
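    As a sketch of what loading those modules could look like (this step is not part of the original procedure; the module names are the standard ip_vs set for a CentOS 7 kernel, verify against your own kernel):
    [root@master ~]# yum install -y ipset ipvsadm
    [root@master ~]# for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $mod; done
    [root@master ~]# lsmod | grep ip_vs    # confirm the modules are loaded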

1.5 Disable SELinux
[root@master ~]# setenforce 0
1.6 Configure forwarding parameters
[root@master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
[root@master ~]# sysctl --system
# The steps above must also be performed on the minion nodes
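On a fresh CentOS 7 install the bridge sysctls above may be rejected until the br_netfilter module is loaded; a quick check (a standard kernel module, mentioned here only as a hint, not part of the original steps):
[root@master ~]# modprobe br_netfilter
[root@master ~]# lsmod | grep br_netfilter
[root@master ~]# sysctl --system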
Installing Kubernetes on the hosts
2.1 Initialize the cluster
To initialize the cluster, run the following command:
[root@master ~]# kubeadm init --kubernetes-version=v1.11.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors='SystemVerification'
# The command above prints the join command below, which joins a minion to the master; run it on the minions:
kubeadm join 192.168.200.200:6443 --token uyicwj.akb6hgdryfo1dtij --discovery-token-ca-cert-hash sha256:f26b1a713f1b10adb1e22aa129b23ea266bde550a2570e2b460070a080b42e08
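Note that the token in this join command expires after 24 hours by default. If it has expired by the time you add a node, a fresh join command can be generated on the master (a standard kubeadm subcommand, shown here only as a hint, not part of the original procedure):
[root@master ~]# kubeadm token create --print-join-command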
2.2 Configure kubectl credentials
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
2.3 Install the Flannel network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
After that completes, we can run the following command to view the current node information:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 9m v1.11.3

2.4 Node configuration
1. On the minions, run the join command generated by the master initialization above:
[root@minion ~]# kubeadm join 192.168.200.200:6443 --token uyicwj.akb6hgdryfo1dtij --discovery-token-ca-cert-hash sha256:f26b1a713f1b10adb1e22aa129b23ea266bde550a2570e2b460070a080b42e08
# If the command finishes without errors, the node joined successfully
2. After joining, check again on the master:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 38m v1.11.3
minion Ready <none> 27m v1.11.3
At this point the master and minion are configured, but there is not yet a user with permission to create and manage Pods; a user still needs to be created and authorized.

2.5 Create an nginx Pod as a test
1. Create the nginx Pod (dry run first):
[root@master ~]# kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=1 --dry-run=true
deployment.apps/nginx-deploy created (dry run)

nginx-deploy: the name of the deployment/pod

#--image=nginx:1.14-alpine: which image to use
#--port=80: the port to expose (it would be exposed by default anyway)
#--replicas=1: how many Pods to create
#--dry-run=true: dry-run mode, similar to a test; nothing is actually created
2. The following command actually creates the Pod; simply drop --dry-run=true:
[root@master ~]# kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=1
deployment.apps/nginx-deploy created
3. Check the deployment:
[root@master ~]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deploy 1 1 1 1 4m

DESIRED: the desired number of replicas

CURRENT: the number already created

UP-TO-DATE: the number that are up to date

AVAILABLE: the number currently running

[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deploy-5b595999-5p496 1/1 Running 0 6m
4. View detailed information about the running Pod:
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deploy-5b595999-5p496 1/1 Running 0 7m 10.244.2.2 minion-2
5. Verify:
[root@minion-1 ~]# curl 10.244.2.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>;
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>;

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Note: the address above can only be used inside the cluster; it is unreachable from outside. Even though Pods inside the cluster share a network, they should not talk to each other directly by Pod IP, because when a Pod dies, k8s starts a new one whose name and IP may both change.
Using and operating Kubernetes
3.1 Expose the nginx port to the outside
1. Expose nginx:
[root@master ~]# kubectl expose deployment nginx-deploy --name nginx --port=80 --target-port=80 --protocol=TCP
service/nginx exposed

kubectl expose: create a service

deployment nginx-deploy --name nginx: create a service named nginx from the nginx-deploy controller

#--port=80: the service's port

#--target-port=80: the container's port

#--protocol=TCP: the protocol to use; TCP is the default
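For reference, the same Service could also be declared in YAML instead of using kubectl expose (a minimal sketch; the selector assumes the run=nginx-deploy label that kubectl run puts on the Deployment's Pods):

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  type: ClusterIP
  selector:
    run: nginx-deploy         # assumed label created by `kubectl run nginx-deploy`
  ports:
  - port: 80                  # service port
    targetPort: 80            # container port
    protocol: TCP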
2. Check the service information:
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
nginx ClusterIP 10.101.101.195 <none> 80/TCP 3m
3. Test:
[root@master ~]# curl 10.101.101.195
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>;
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>;

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Note: the address above is a dynamically allocated cluster IP, meant for access inside the cluster; Pods can communicate with each other through it, which solves the problem of Pod IP addresses changing.
3.2 Create an interactive Pod
1. Create it:
[root@master ~]# kubectl run client --image=busybox --replicas=1 -it --restart=Never

client: the Pod name

--image: the image to use

#--replicas: how many Pods to create
#--restart: whether to restart
2. Test that the service can be reached by its service name; the cluster's built-in DNS resolves the service name to the cluster IP, so even if the Pods are recreated, access through the service is unaffected:
/ # wget -O - -q http://nginx:80/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>;
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>;

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
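You can also confirm the DNS resolution directly from the client Pod (not part of the original steps; busybox's nslookup output format varies between versions, so treat this as a sketch):
/ # cat /etc/resolv.conf   # shows the cluster DNS server and default search domains
/ # nslookup nginx         # should resolve to the nginx Service's cluster IP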
3. Delete the Pod; a new one is generated automatically. Test access again:
[root@master ~]# kubectl delete pod nginx-deploy-5b595999-5p496
pod "nginx-deploy-5b595999-5p496" deleted
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
client 1/1 Running 0 19m
nginx-deploy-5b595999-5wxpj 1/1 Running 0 20s

wget -O - -q http://nginx:80/

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>;
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>;

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ #
As shown above, access is not affected.
4. Edit an existing Pod:
[root@master ~]# kubectl edit pod myapp-74c94dcb8c-2s2ks

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2018-09-26T10:29:35Z
  generateName: myapp-74c94dcb8c-
  labels:
    pod-template-hash: "3075087647"
    run: myapp
  name: myapp-74c94dcb8c-2s2ks
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: myapp-74c94dcb8c
    uid: 01f5a169-c177-11e8-b2c9-000c2929855b
  resourceVersion: "29582"
  selfLink: /api/v1/namespaces/default/pods/myapp-74c94dcb8c-2s2ks
  uid: 1042d000-c177-11e8-b2c9-000c2929855b
spec:
  containers:
  - image: ikubernetes/myapp:v2
    imagePullPolicy: IfNotPresent
    name: myapp
    resources: {}
.......................... the rest of the output is omitted here
    # edit: just follow "edit pod" with the Pod name; services work the same way, and any resource can be modified like this
    3.3 Scaling Pods out and in, upgrading, rolling back, and exposing them externally by simply editing the file
    1. Dynamic scale-out and scale-in:
    [root@master ~]# kubectl scale --replicas=5 deployment myapp
    deployment.extensions/myapp scaled
    This scales myapp out to 5 replicas:
    [root@master ~]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    client 1/1 Running 0 49m
    myapp-848b5b879b-cdql2 1/1 Running 0 14m
    myapp-848b5b879b-d2xtr 1/1 Running 0 3m
    myapp-848b5b879b-lg45w 1/1 Running 0 3m
    myapp-848b5b879b-pfxvp 1/1 Running 0 3m
    myapp-848b5b879b-wfp6k 1/1 Running 0 14m
    nginx-deploy-5b595999-5wxpj 1/1 Running 0 29m
    [root@master ~]# kubectl scale --replicas=3 deployment myapp
    [root@master ~]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    client 1/1 Running 0 49m
    myapp-848b5b879b-cdql2 1/1 Running 0 15m
    myapp-848b5b879b-d2xtr 1/1 Running 0 4m
    myapp-848b5b879b-wfp6k 1/1 Running 0 15m
    nginx-deploy-5b595999-5wxpj 1/1 Running 0 30m
    2. Upgrading and rolling back Pods:
    / # while true;do sleep 1 && wget -O - -q myapp;done
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    All replicas are currently version 1; now upgrade them to version 2:
    [root@master ~]# kubectl set image deployment myapp myapp=ikubernetes/myapp:v2

    kubectl set image: the keyword

    deployment: the controller type, followed by the controller name

    myapp: the controller name

    myapp=ikubernetes/myapp:v2: the new image version for the Pods

    3. Check the result:
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    The version changes from 1 to 2. Normally this happens gradually, one Pod at a time; because this deployment had already been upgraded before, here it all switched at once.
    4. Roll back:
    [root@master ~]# kubectl rollout undo deployment myapp

    kubectl rollout: the keyword

    undo: rolls back to the previous revision by default; a specific revision can be given to roll back to it

    deployment: the controller type

    myapp: the controller name

    To watch the update or rollback progress:
    [root@master ~]# kubectl rollout status deployment myapp
    5. Check:
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

6. Check the service status:
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h
myapp ClusterIP 10.105.254.230 <none> 80/TCP 17h
nginx ClusterIP 10.101.101.195 <none> 80/TCP 18h
7. Modify the myapp service configuration:
[root@master ~]# kubectl edit svc myapp

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-09-26T10:19:31Z
  labels:
    run: myapp
  name: myapp
  namespace: default
  resourceVersion: "19523"
  selfLink: /api/v1/namespaces/default/services/myapp
  uid: a8a1b74a-c175-11e8-b2c9-000c2929855b
spec:
  clusterIP: 10.105.254.230
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: myapp
  sessionAffinity: None
  type: NodePort    # change ClusterIP to NodePort
status:
  loadBalancer: {}
    # After saving and quitting, check:
    [root@master ~]# kubectl get svc
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h
    myapp NodePort 10.105.254.230 <none> 80:30108/TCP 17h
    nginx ClusterIP 10.101.101.195 <none> 80/TCP 18h
    An extra port appears; it is assigned at random. External clients can now reach the service through any cluster node's IP and that port:
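    For example, from a machine outside the cluster (using one of the node IPs from the environment table and the randomly assigned port shown above):
    curl http://192.168.200.201:30108/    # any node IP works for a NodePort service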

3.4 Write YAML files and operate through them
1. Output a Pod's information in YAML format:
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
client 1/1 Running 0 2h
myapp-848b5b879b-gd4ll 1/1 Running 0 2h
myapp-848b5b879b-jn5xt 1/1 Running 0 2h
myapp-848b5b879b-lhp74 1/1 Running 0 2h
myapp-ser-759b978dcf-d7fvg 1/1 Running 0 2h
myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 2h
nginx-deploy-5b595999-5wxpj 1/1 Running 1 19h
[root@master ~]# kubectl get pod myapp-848b5b879b-gd4ll -o yaml
apiVersion: v1
kind: Pod                       # type
metadata:                       # metadata
  creationTimestamp: 2018-09-27T03:24:31Z
  generateName: myapp-848b5b879b-
  labels:
    pod-template-hash: "4046164356"
    run: myapp
  name: myapp-848b5b879b-gd4ll
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: myapp-848b5b879b
    uid: 72d6f86d-c174-11e8-b2c9-000c2929855b
  resourceVersion: "39758"
  selfLink: /api/v1/namespaces/default/pods/myapp-848b5b879b-gd4ll
  uid: d96ce307-c204-11e8-b6f4-000c2929855b
spec:                           # spec
  containers:
  - image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    name: myapp
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-fnqdb
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: minion-2
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-fnqdb
    secret:
      defaultMode: 420
      secretName: default-token-fnqdb
status:                         # current status
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-09-27T03:24:31Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-09-27T03:24:33Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2018-09-27T03:24:31Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://dcb9cbf45d178e4f3515a68d8a0c90393c517655e46adf5e5c27c6ef9a057952
    image: ikubernetes/myapp:v1
    imageID: docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
    lastState: {}
    name: myapp
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-09-27T03:24:32Z
  hostIP: 192.168.200.202
  phase: Running
  podIP: 10.244.2.16
  qosClass: BestEffort
  startTime: 2018-09-27T03:24:31Z
    2. Most resource configurations need the following five fields:
    (1) apiVersion: written as group/version
    [root@master ~]# kubectl api-versions   # lists the available groups and versions
    (2) kind: the resource type
    (3) metadata: metadata
        name: must be unique
        namespace: the namespace; within a namespace the name must be unique
        labels: labels

(4) spec: the desired state, defined by the user
(5) status: the current state; this field is maintained by Kubernetes itself and must not be edited
3. The following command shows how each field is defined and what it means:
[root@master ~]# kubectl explain pods.apiVersion   # get help; kubectl explain is the keyword
KIND: Pod
VERSION: v1

FIELD: apiVersion <string>

DESCRIPTION:
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
4. Write a YAML file:
[root@master manifors]# vim myapp.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-daemo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
[root@master manifors]# kubectl create -f myapp.yaml
[root@master manifors]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
client 0/1 Error 0 3h
myapp-848b5b879b-gd4ll 1/1 Running 0 3h
myapp-848b5b879b-jn5xt 1/1 Running 0 3h
myapp-848b5b879b-lhp74 1/1 Running 0 3h
myapp-ser-759b978dcf-d7fvg 1/1 Running 0 3h
myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 3h
nginx-deploy-5b595999-5wxpj 1/1 Running 1 20h
pod-daemo 2/2 Running 0 1m
      3.6 Using labels, nodeName, and annotations
      1. Filter Pods that have a given label key:
      [root@master manifors]# kubectl get pods -l app
      NAME READY STATUS RESTARTS AGE
      pod-daemo 2/2 Running 1 1h
      #-l filters by the given label key
      [root@master manifors]# kubectl get pods -l app --show-labels
      NAME READY STATUS RESTARTS AGE LABELS
      pod-daemo 2/2 Running 1 1h app=myapp,tier=frontend
      #--show-labels: shows the full label information
      [root@master manifors]# kubectl get pods -L app
      NAME READY STATUS RESTARTS AGE APP
      client 0/1 Error 0 5h
      myapp-848b5b879b-gd4ll 1/1 Running 0 4h
      myapp-848b5b879b-jn5xt 1/1 Running 0 4h
      myapp-848b5b879b-lhp74 1/1 Running 0 4h
      myapp-ser-759b978dcf-d7fvg 1/1 Running 0 5h
      myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 5h
      nginx-deploy-5b595999-5wxpj 1/1 Running 1 22h
      pod-daemo 2/2 Running 1 1h myapp
      #-L shows the value of the given label key as an extra column
      2. Command to add a label to a Pod:
      [root@master manifors]# kubectl label pod pod-daemo release=canary

      kubectl label: the keyword

      pod pod-daemo: the resource type and Pod name

      release=canary: the label as key=value

      3. Check:
      [root@master manifors]# kubectl get pods -l app --show-labels
      NAME READY STATUS RESTARTS AGE LABELS
      pod-daemo 2/2 Running 1 1h app=myapp,release=canary,tier=frontend
      4. Modify a label:
      [root@master manifors]# kubectl label pod pod-daemo release=stable --overwrite
      [root@master manifors]# kubectl get pods -l app --show-labels
      NAME READY STATUS RESTARTS AGE LABELS
      pod-daemo 2/2 Running 1 1h app=myapp,release=stable,tier=frontend
      5. Label a node so that Pods run only on nodes with the given label:
      [root@master manifors]# kubectl label node minion-1 dsiktype=ssd
      node/minion-1 labeled
      [root@master manifors]# kubectl get node --show-labels
      NAME STATUS ROLES AGE VERSION LABELS
      master Ready master 1d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
      minion-1 Ready <none> 1d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,dsiktype=ssd,kubernetes.io/hostname=minion-1
      minion-2 Ready <none> 1d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=minion-2
      6. Modify the Pod file:
      [root@master manifors]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-daemo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  nodeSelector:
    dsiktype: ssd
[root@master manifors]# kubectl describe pods pod-daemo
Name: pod-daemo
Namespace: default
Node: minion-1/192.168.200.201
Start Time: Thu, 27 Sep 2018 14:25:30 +0800
Labels: app=myapp
        release=stable
      7. Check:
      [root@master manifors]# kubectl describe pod pod-daemo
      Name: pod-daemo
      Namespace: default
      Node: minion-1/192.168.200.201
      Start Time: Thu, 27 Sep 2018 16:46:28 +0800
      Labels: app=myapp
      tier=frontend
      It will always run on minion-1.
      Using nodeName binds a Pod to one specific node, whereas a label selector may match a whole set of nodes.
      8. Bind the Pod to run on minion-2:
      [root@master manifors]# vim myapp.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-daemo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  nodeName: minion-2
[root@master manifors]# kubectl create -f myapp.yaml
pod/pod-daemo created
[root@master manifors]# kubectl describe pod pod-daemo
Name: pod-daemo
Namespace: default
Node: minion-2/192.168.200.202
Start Time: Thu, 27 Sep 2018 16:52:44 +0800

9. Adding annotations to a resource:
[root@master manifors]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-daemo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    minion/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  nodeName: minion-2
[root@master manifors]# kubectl create -f myapp.yaml
pod/pod-daemo created
[root@master manifors]# kubectl describe pod pod-daemo
Name: pod-daemo
Namespace: default
Node: minion-2/192.168.200.202
Start Time: Thu, 27 Sep 2018 17:01:04 +0800
Labels: app=myapp
        tier=frontend
Annotations: minion/created-by=cluster admin
Status: Running
IP: 10.244.2.18

3.7 Important behaviors in the Pod lifecycle:
1. A brief introduction to probes
Init containers
Container probes:
Liveness: probes whether the container is alive
Readiness: probes whether the main container is ready to serve traffic
Probe types:
(1) exec
(2) httpGet
(3) tcpSocket
2. An exec probe example:
[root@master manifors]# vim liveness-exec.yaml

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
  namespace: default
spec:
  containers:
  - name: liveness-exec-pod
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","touch /tmp/healthy;sleep 30; rm -f /tmp/healthy;sleep 3600"]
    livenessProbe:
      exec:
        command: ["test","-e","/tmp/healthy"] # run a command to check whether the file exists
      initialDelaySeconds: 1 # how long after the container starts before probing begins
      periodSeconds: 3 # how often to probe
    3. An httpGet probe example:
      [root@master manifors]# vim liveness-exec.yaml

apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget
  namespace: default
spec:
  containers:
  - name: liveness-httpget-pod
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http # name of the exposed port
      containerPort: 80 # the exposed port
    livenessProbe:
      httpGet:
        port: http # the port name to probe
        path: /index.html # the page to probe
      initialDelaySeconds: 1 # how long after the container starts before probing begins
      periodSeconds: 3 # how often to probe
      4. A readinessProbe example:
      [root@master manifors]# cat readliness-httpget.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        name: readliness-httpget
        namespace: default
      spec:
        containers:
        - name: readliness-httpget-pod
          image: ikubernetes/myapp:v1
          imagePullPolicy: IfNotPresent
          ports:
          - name: http
            containerPort: 80
          readinessProbe:
            httpGet:
              port: http
              path: /index.html
            initialDelaySeconds: 1
            periodSeconds: 3
      When the probed page fails, k8s stops sending traffic to this container until the page recovers; a livenessProbe, in contrast, checks whether the service is healthy and restarts the container if it is not.
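      The third probe type listed above, tcpSocket, is not shown in the examples; as a minimal sketch (reusing the named http port of the Pod above), the probe section would look like:
      livenessProbe:
        tcpSocket:
          port: http          # succeeds if a TCP connection to this port can be opened
        initialDelaySeconds: 1
        periodSeconds: 3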
      Creating Pod controllers
      4.1 Create a controller
      Pod controllers:
      ReplicationController:
      ReplicaSet:
      Deployment:
      1. Write the YAML file for a ReplicaSet controller:
      [root@master manifors]# vim rs-demo.yaml
      apiVersion: apps/v1
      kind: ReplicaSet
      metadata:
        name: myapp
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: myapp
        template:
          metadata:
            name: myapp-pod
            namespace: default
            labels:
              app: myapp
          spec:
            containers:
            - name: myapp-container
              image: ikubernetes/myapp:v1
              ports:
              - name: http
                containerPort: 80
        2. Write the YAML file for a Deployment:
        [root@master manifors]# cat deploy-daemo.yaml
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: myapp-deploy
          namespace: default
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: myapp
              release: canary
          template:
            metadata:
              labels:
                app: myapp
                release: canary
            spec:
              containers:
              - name: myapp
                image: ikubernetes/myapp:v1
                ports:
                - name: http
                  containerPort: 80
        [root@master manifors]# kubectl apply -f deploy-daemo.yaml
        [root@master manifors]# kubectl get pods
        NAME READY STATUS RESTARTS AGE
        client 0/1 Error 0 2d
        myapp-848b5b879b-gd4ll 1/1 Running 2 2d
        myapp-848b5b879b-jn5xt 1/1 Running 2 2d
        myapp-848b5b879b-lhp74 1/1 Running 2 2d
        myapp-deploy-69b47bc96d-bc6bw 1/1 Running 0 11m
        myapp-deploy-69b47bc96d-tj55r 1/1 Running 0 11m
        myapp-j6n4g 1/1 Running 1 23h
        myapp-ser-759b978dcf-d7fvg 1/1 Running 2 2d
        myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 2d
        nginx-deploy-5b595999-5wxpj 1/1 Running 3 2d
        pod-daemo 2/2 Running 18 1d
        readliness-httpget 1/1 Running 1 1d
        3. To scale out, edit the configuration file:
        [root@master manifors]# vim deploy-daemo.yaml
        Change the following:
        spec:
          replicas: 3 # change 2 to 3
        [root@master manifors]# kubectl apply -f deploy-daemo.yaml
        # apply can be run repeatedly, while create can only be run once; you can think of apply as reloading the configuration file
        [root@master manifors]# kubectl get pods
        NAME READY STATUS RESTARTS AGE
        client 0/1 Error 0 2d
        myapp-848b5b879b-gd4ll 1/1 Running 2 2d
        myapp-848b5b879b-jn5xt 1/1 Running 2 2d
        myapp-848b5b879b-lhp74 1/1 Running 2 2d
        myapp-deploy-69b47bc96d-bc6bw 1/1 Running 0 14m
        myapp-deploy-69b47bc96d-ppgvf 1/1 Running 0 1m
        myapp-deploy-69b47bc96d-tj55r 1/1 Running 0 14m
        myapp-j6n4g 1/1 Running 1 23h
        myapp-ser-759b978dcf-d7fvg 1/1 Running 2 2d
        myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 2d
        nginx-deploy-5b595999-5wxpj 1/1 Running 3 2d
        pod-daemo 2/2 Running 18 1d
        readliness-httpget 1/1 Running 1 1d
        4. View the details:
        [root@master manifors]# kubectl describe deploy myapp-deploy
        Name: myapp-deploy
        Namespace: default
        CreationTimestamp: Sat, 29 Sep 2018 15:53:52 +0800
        Labels: <none>
        Annotations: deployment.kubernetes.io/revision=1
        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"myapp-deploy","namespace":"default"},"spec":{"replicas":3,"selector":{...
        Selector: app=myapp,release=canary
        Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
        StrategyType: RollingUpdate # the default update strategy is a rolling update
        MinReadySeconds: 0
        RollingUpdateStrategy: 25% max unavailable, 25% max surge # at most 25% may be unavailable and at most 25% extra may be created during an update
        Pod Template:
        Labels: app=myapp
        release=canary
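        The same strategy can also be set explicitly in the Deployment spec; a sketch of just the relevant fields (the values shown are the defaults mentioned above):
        spec:
          strategy:
            type: RollingUpdate
            rollingUpdate:
              maxUnavailable: 25%   # how many Pods may be unavailable during the update
              maxSurge: 25%         # how many extra Pods may be created during the update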
        4.2 Rolling updates of Pods

Test a rolling update:

  1. Watch the Pods to see whether they change:
    [root@master manifors]# kubectl get pods -l app=myapp -w
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-69b47bc96d-bc6bw 1/1 Running 0 21m
    myapp-deploy-69b47bc96d-ppgvf 1/1 Running 0 8m
    myapp-deploy-69b47bc96d-tj55r 1/1 Running 0 21m
  2. Update the version by editing the configuration file (there are of course better ways):
    [root@master manifors]# vim deploy-daemo.yaml
    Change the following:
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2 # change v1 to v2
  3. Run apply:
    [root@master manifors]# kubectl apply -f deploy-daemo.yaml
  4. Check whether the Pods change:
    [root@master manifors]# kubectl get pods -l app=myapp -w
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-69b47bc96d-bc6bw 1/1 Running 0 21m
    myapp-deploy-69b47bc96d-ppgvf 1/1 Running 0 8m
    myapp-deploy-69b47bc96d-tj55r 1/1 Running 0 21m
    myapp-deploy-67f6f6b4dc-tvxvw 0/1 Pending 0 0s
    myapp-deploy-67f6f6b4dc-tvxvw 0/1 Pending 0 0s
    myapp-deploy-67f6f6b4dc-tvxvw 0/1 ContainerCreating 0 0s
    myapp-deploy-67f6f6b4dc-tvxvw 1/1 Running 0 1s
    myapp-deploy-69b47bc96d-ppgvf 1/1 Terminating 0 13m
    myapp-deploy-67f6f6b4dc-gsndw 0/1 Pending 0 0s
    myapp-deploy-67f6f6b4dc-gsndw 0/1 Pending 0 0s
    myapp-deploy-67f6f6b4dc-gsndw 0/1 ContainerCreating 0 1s
    myapp-deploy-69b47bc96d-ppgvf 0/1 Terminating 0 13m
    myapp-deploy-67f6f6b4dc-gsndw 1/1 Running 0 2s
    myapp-deploy-69b47bc96d-bc6bw 1/1 Terminating 0 27m
    myapp-deploy-67f6f6b4dc-z7hlj 0/1 Pending 0 0s
    myapp-deploy-67f6f6b4dc-z7hlj 0/1 Pending 0 0s
    myapp-deploy-67f6f6b4dc-z7hlj 0/1 ContainerCreating 0 1s
    myapp-deploy-69b47bc96d-bc6bw 0/1 Terminating 0 27m
    ............
    The watch shows a stream of status lines. The order of events is:
  5. Pending: waiting for scheduling to complete
  6. ContainerCreating: once scheduled, the container is created
  7. Running: the container is created and running
  8. Terminating: an old-version Pod is being stopped
    That is the whole update process.
  9. Check the ReplicaSets:
    [root@master manifors]# kubectl get rs -o wide
    NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
    myapp-74c94dcb8c 0 0 0 2d myapp ikubernetes/myapp:v2 pod-template-hash=3075087647,run=myapp
    myapp-848b5b879b 3 3 3 2d myapp ikubernetes/myapp:v1 pod-template-hash=4046164356,run=myapp
    myapp-deploy-67f6f6b4dc 3 3 3 7m myapp ikubernetes/myapp:v2 app=myapp,pod-template-hash=2392926087,release=canary
    myapp-deploy-69b47bc96d 0 0 0 34m myapp ikubernetes/myapp:v1 app=myapp,pod-template-hash=2560367528,release=canary
    From the output you can see that the v1 template has not been deleted, which makes rollback easy.
  10. View the retained revision history:
    [root@master manifors]# kubectl rollout history deployment myapp-deploy
    deployments "myapp-deploy"
    REVISION CHANGE-CAUSE
    1 <none>
    2 <none>
  11. Scale out with a patch:
    [root@master manifors]# kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}'
    [root@master manifors]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    client 0/1 Error 0 2d
    myapp-848b5b879b-gd4ll 1/1 Running 2 2d
    myapp-848b5b879b-jn5xt 1/1 Running 2 2d
    myapp-848b5b879b-lhp74 1/1 Running 2 2d
    myapp-deploy-67f6f6b4dc-5wbvm 1/1 Running 0 20s
    myapp-deploy-67f6f6b4dc-d2frs 1/1 Running 0 20s
    myapp-deploy-67f6f6b4dc-gsndw 1/1 Running 0 30m
    myapp-deploy-67f6f6b4dc-tvxvw 1/1 Running 0 30m
    myapp-deploy-67f6f6b4dc-z7hlj 1/1 Running 0 30m
    myapp-ser-759b978dcf-d7fvg 1/1 Running 2 2d
    myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 2d
    nginx-deploy-5b595999-5wxpj 1/1 Running 3 2d
    readliness-httpget 1/1 Running 1 1d

  12. Modify the update strategy with a patch:
    [root@master manifors]# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
    [root@master manifors]# kubectl describe deployment myapp-deploy
    Name: myapp-deploy
    Namespace: default
    CreationTimestamp: Sat, 29 Sep 2018 15:53:52 +0800
    Labels: app=myapp
    release=canary
    Annotations: deployment.kubernetes.io/revision=2
    kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"myapp-deploy","namespace":"default"},"spec":{"replicas":3,"selector":{...
    Selector: app=myapp,release=canary
    Replicas: 5 desired | 5 updated | 5 total | 5 available | 0 unavailable
    StrategyType: RollingUpdate
    MinReadySeconds: 0
    RollingUpdateStrategy: 0 max unavailable, 1 max surge

  13. Pausing an update (canary release)
    Update the application to version 3 and immediately pause the rollout:
    [root@master ~]# kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy
  14. Watch the status change:
    [root@master ~]# kubectl get pod -w
    NAME READY STATUS RESTARTS AGE
    client 0/1 Error 0 5d
    myapp-848b5b879b-4b2ft 1/1 Running 0 14m
    myapp-848b5b879b-nvqwt 1/1 Running 0 14m
    myapp-848b5b879b-vr9d6 1/1 Running 0 14m
    myapp-deploy-67f6f6b4dc-cfpnt 1/1 Running 0 7m
    myapp-deploy-67f6f6b4dc-gcvd8 1/1 Running 0 7m
    myapp-deploy-67f6f6b4dc-hs6cn 1/1 Running 0 13m
    myapp-deploy-67f6f6b4dc-lt6d2 1/1 Running 0 7m
    myapp-deploy-67f6f6b4dc-ptg2k 1/1 Running 0 7m
    myapp-ser-759b978dcf-d7fvg 1/1 Running 3 5d
    myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 5d
    nginx-deploy-5b595999-5wxpj 1/1 Running 4 6d
    readliness-httpget 1/1 Running 2 4d
    myapp-deploy-6bdcd6755d-jxs5s 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-jxs5s 0/1 Pending 0 1s
    myapp-deploy-6bdcd6755d-jxs5s 0/1 ContainerCreating 0 1s
    myapp-deploy-6bdcd6755d-jxs5s 1/1 Running 0 3s
    You will see one new Pod created; check its image version and the version of the other Pods:
    [root@master ~]# kubectl describe pod myapp-deploy-6bdcd6755d-jxs5s
    Image: ikubernetes/myapp:v3
    The other Pods:
    [root@master ~]# kubectl describe pod myapp-deploy-67f6f6b4dc-cfpnt
    Image: ikubernetes/myapp:v2
    As you can see, a new Pod was created and the rollout was paused; this new Pod acts as the canary. If it looks good, resume the update so that all Pods are updated, as follows.
    Update all the Pods:
    [root@master ~]# kubectl rollout resume deployment myapp-deploy
    Watch the process:
    [root@master ~]# kubectl get pod -w
    myapp-deploy-6bdcd6755d-jxs5s 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-jxs5s 0/1 Pending 0 1s
    myapp-deploy-6bdcd6755d-jxs5s 0/1 ContainerCreating 0 1s
    myapp-deploy-6bdcd6755d-jxs5s 1/1 Running 0 3s
    myapp-server-6ff967596f-nxjlb 0/1 ErrImagePull 0 5d
    myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 5d
    myapp-deploy-67f6f6b4dc-lt6d2 1/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-jtjz9 0/1 Pending 0 1s
    myapp-deploy-6bdcd6755d-jtjz9 0/1 Pending 0 1s
    myapp-deploy-6bdcd6755d-jtjz9 0/1 ContainerCreating 0 1s
    myapp-deploy-67f6f6b4dc-lt6d2 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-lt6d2 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-lt6d2 0/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-jtjz9 1/1 Running 0 3s
    myapp-deploy-67f6f6b4dc-cfpnt 1/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-nnprv 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-nnprv 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-nnprv 0/1 ContainerCreating 0 0s
    myapp-deploy-67f6f6b4dc-cfpnt 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-cfpnt 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-cfpnt 0/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-nnprv 1/1 Running 0 3s
    myapp-deploy-67f6f6b4dc-ptg2k 1/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-4f82b 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-4f82b 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-4f82b 0/1 ContainerCreating 0 0s
    myapp-deploy-67f6f6b4dc-ptg2k 0/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-4f82b 1/1 Running 0 2s
    myapp-deploy-67f6f6b4dc-gcvd8 1/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-qpq87 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-qpq87 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-qpq87 0/1 ContainerCreating 0 1s
    myapp-deploy-67f6f6b4dc-gcvd8 0/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-qpq87 1/1 Running 0 3s
    myapp-deploy-67f6f6b4dc-hs6cn 1/1 Terminating 0 22m
    myapp-deploy-67f6f6b4dc-hs6cn 0/1 Terminating 0 22m
    myapp-deploy-67f6f6b4dc-hs6cn 0/1 Terminating 0 22m
    myapp-deploy-67f6f6b4dc-hs6cn 0/1 Terminating 0 22m
    myapp-deploy-67f6f6b4dc-ptg2k 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-ptg2k 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-gcvd8 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-gcvd8 0/1 Terminating 0 16m
    [root@master ~]# kubectl rollout status deployment myapp-deploy
    deployment "myapp-deploy" successfully rolled out
    The command above also shows the progress.
    [root@master ~]# kubectl get rs -o wide # you can see three revisions; v3 is in use
    myapp-deploy-67f6f6b4dc 0 0 0 3d myapp ikubernetes/myapp:v2 app=myapp,pod-template-hash=2392926087,release=canary
    myapp-deploy-69b47bc96d 0 0 0 3d myapp ikubernetes/myapp:v1 app=myapp,pod-template-hash=2560367528,release=canary
    myapp-deploy-6bdcd6755d 5 5 5 3d myapp ikubernetes/myapp:v3 app=myapp,pod-template-hash=2687823118,release=canary
  15. Roll back to the first revision:
    [root@master ~]# kubectl rollout history deployment myapp-deploy
    deployments "myapp-deploy"
    REVISION CHANGE-CAUSE
    1 <none>
    5 <none>
    6 <none>
    7 <none>
    This lists all the revision information.
    [root@master ~]# kubectl rollout undo deployment myapp-deploy --to-revision=1
    Check:
    [root@master ~]# kubectl get rs -o wide
    myapp-deploy-67f6f6b4dc 0 0 0 3d myapp ikubernetes/myapp:v2 app=myapp,pod-template-hash=2392926087,release=canary
    myapp-deploy-69b47bc96d 5 5 5 3d myapp ikubernetes/myapp:v1 app=myapp,pod-template-hash=2560367528,release=canary
    myapp-deploy-6bdcd6755d 0 0 0 3d myapp ikubernetes/myapp:v3 app=myapp,pod-template-hash=2687823118,release=canary
    The ReplicaSet now in use is v1.
    The whole rollback process works the same way as the update process.
    [root@master ~]# kubectl rollout history deployment myapp-deploy
    deployments "myapp-deploy"
    REVISION CHANGE-CAUSE
    5 <none>
    6 <none>
    7 <none>
    8 <none>
    You will notice that revision 1 has become revision 8.
    4.3 Create a DaemonSet controller
    1. Create a DaemonSet controller:
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: filebeat-ds
      namespace: default
    spec:
      selector:
        matchLabels:
          app: filebeat
          release: stable
      template:
        metadata:
          labels:
            app: filebeat
            release: stable
        spec:
          containers:
          - name: filebeat
            image: ikubernetes/filebeat:5.6.5-alpine
            env:
            - name: REDIS_HOST
              value: redis.default.svc.cluster.local
            - name: REDIS_LOG_LEVEL
              value: info
  16. Create an nginx Pod that uses the node's network:
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      namespace: default
    spec:
      hostNetwork: true # use the node's network namespace
      containers:
      - name: nginx
        image: nginx:1.15
    Kubernetes Service resources:
  17. Define the Redis service:
    [root@master manifors]# vim redis-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  clusterIP: 10.97.97.97
  type: ClusterIP
  selector:
    app: redis
    role: ds
  ports:
  - port: 6379
    targetPort: 6379
    4.4 Deploy ingress as a proxy:
    1. Download the following files locally (choose the files you need).
      They live at: https://github.com/kubernetes/ingress-nginx/tree/master/deploy
      [root@master ingress-nginx]# for n in namespace.yaml configmap.yaml tcp-services-configmap.yaml rbac.yaml udp-services-configmap.yaml with-rbac.yaml ;do wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/$n;done
      [root@master ingress-nginx]# ls
      configmap.yaml rbac.yaml udp-services-configmap.yaml
      namespace.yaml tcp-services-configmap.yaml with-rbac.yaml
    2. Create the namespace:
      [root@master ingress-nginx]# kubectl apply -f namespace.yaml
      Note: it can also be created by hand:
      [root@master ingress-nginx]# kubectl create namespace <namespace-name>
    3. Create everything from the YAML files:
      [root@master ingress-nginx]# kubectl apply -f ./
      Note: creating ingress-nginx with the command above may run into problems; this command can be used instead: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
      Or pull the image locally with the following command:
      [root@minion-2 ~]# docker pull siriuszg/nginx-ingress-controller:0.19.0
      Then retag the image to the name used in the configuration file:
      [root@minion-2 ~]# docker tag siriuszg/nginx-ingress-controller:0.19.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
      The same goes for the defaultbackend image:
      [root@minion-2 ~]# docker pull chenliujin/defaultbackend:1.4
      [root@minion-2 ~]# docker tag chenliujin/defaultbackend:1.4 k8s.gcr.io/defaultbackend-amd64:1.4
  1. View the Pods in the ingress-nginx namespace:
    [root@master ingress-nginx]# kubectl get pods -n ingress-nginx
    NAME READY STATUS RESTARTS AGE
    nginx-ingress-controller-6bd7c597cb-pv7wn 0/1 ContainerCreating 0 3m
  2. Create a headless service (i.e., a Service without an IP):
    [root@master ~]# mkdir ingress
    [root@master ~]# cd ingress
    [root@master ingress]# vim deploy-demon.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@master ingress]# kubectl apply -f deploy-demon.yaml
service/default created
deployment.apps/myapp-deploy created
        1. Check the Pods and the Service:
          [root@master ingress]# kubectl get svc
          NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
          kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d
          myapp ClusterIP 10.111.83.183 <none> 80/TCP 5s
          redis ClusterIP 10.97.97.97 <none> 6379/TCP 22h
          [root@master ingress]# kubectl get pods
          NAME READY STATUS RESTARTS AGE
          myapp-deploy-69b47bc96d-4nr8w 1/1 Running 0 20s
          myapp-deploy-69b47bc96d-l68hk 1/1 Running 0 20s
          myapp-deploy-69b47bc96d-p44gx 1/1 Running 0 20s
          redis-5d5494cb7-6hrs5 1/1 Running 1 22h
        2. Create the ingress-nginx Service:
          [root@master ingress-nginx]# vim service-nodeport.yaml
          apiVersion: v1
          kind: Service
          metadata:
            name: ingress-nginx
            namespace: ingress-nginx
          spec:
            type: NodePort
            selector:
              app.kubernetes.io/name: ingress-nginx
              app.kubernetes.io/part-of: ingress-nginx
            ports:
            - name: http
              port: 80
              targetPort: 80
              protocol: TCP
              nodePort: 30080
            - name: https
              port: 443
              targetPort: 443
              protocol: TCP
              nodePort: 30443
        [root@master ingress-nginx]# kubectl apply -f service-nodeport.yaml
        [root@master ingress-nginx]# kubectl get svc -n ingress-nginx
        NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
        default-http-backend ClusterIP 10.110.120.191 <none> 80/TCP 3h
        ingress-nginx NodePort 10.109.113.104 <none> 80:30080/TCP,443:30443/TCP 45s
        Test access:
  1. Create the YAML file that publishes the Service through ingress:
    [root@master manifors]# vim ingress-myapp.yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-myapp
      namespace: default
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: myapp.zhouhao.com
        http:
          paths:
          - path:
            backend:
              serviceName: myapp
              servicePort: 80

[root@master manifors]# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
ingress-myapp myapp.zhouhao.com 80 1m
[root@master manifors]# kubectl describe ingress
Name: ingress-myapp
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends


myapp.zhouhao.com
myapp:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-myapp","namespace":"default"},"spec":{"rules":[{"host":"myapp.zhouhao.com","http":{"paths":[{"backend":{"serviceName":"myapp","servicePort":80},"path":null}]}}]}}

kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message


Normal CREATE 2m nginx-ingress-controller Ingress default/ingress-myapp

  1. Check whether the ingress-nginx configuration file was populated automatically, and verify by resolving the domain name on your host:

    Kubernetes storage volumes:
    5.1 Local storage persistence

  2. Mount a local directory into a Pod:
    [root@master volumes]# vim pod-vol-deploy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-deploy
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    zhouhao.com/created-byz: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts: # mount a volume
    - name: html # name of the volume to mount
      mountPath: /usr/share/nginx/html/ # mount path
  - name: busybox
    image: busybox:latest
    volumeMounts:
    - name: html
      mountPath: /data/
    command: ['/bin/sh','-c']
    args:
    - "while true;do echo $$(date) >> /data/index.html;done"
  volumes:
  - name: html # volume name
    emptyDir: {} # size settings; {} means use the defaults (no limit)
    [root@master volumes]# kubectl apply -f pod-vol-deploy.yaml
    [root@master volumes]# curl 10.244.2.11
    Tue Oct 9 09:06:06 UTC 2018
    Tue Oct 9 09:06:06 UTC 2018
    Tue Oct 9 09:06:06 UTC 2018
    Tue Oct 9 09:06:06 UTC 2018
    Tue Oct 9 09:06:06 UTC 2018
    Tue Oct 9 09:06:06 UTC 2018
    1. Shared storage based on a host path:
      [root@master volumes]# vim pod-hostpath.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html # volume name
    hostPath:
      path: /data/pod/volume1 # path shared from the host
      type: DirectoryOrCreate # hostPath type
    For the differences between the hostPath types, see the Kubernetes documentation; common values include DirectoryOrCreate, Directory, FileOrCreate, File, Socket, CharDevice, and BlockDevice.

5.2 Use NFS for persistent storage

  1. Install nfs on all nodes:
    [root@master volumes]# yum install -y nfs-utils
    Note: the master acts as the NFS server here.
  2. Configure the shared storage:
    [root@master volumes]# vim /etc/exports
    /data/volumes 192.168.200.0/24(rw,no_root_squash)
  3. On a node, test that the share can be mounted:
    [root@minion-2 ~]# mount -t nfs 192.168.200.200:/data/volumes /mnt
    [root@minion-2 ~]# df -h
    192.168.200.200:/data/volumes 17G 3.5G 14G 21% /mnt
  4. Write the YAML file:
    [root@master volumes]# vim pod-nfs.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: master # make sure the hostname can be resolved
    1. Create it:
      [root@master volumes]# kubectl apply -f pod-nfs.yaml
    2. Write an index.html test page:
      [root@master volumes]# vim /data/volumes/index.html
      <h1> NFS.zhouhao.com </h1>
    3. Access the Pod:
      [root@master volumes]# kubectl get pods -o wide
      [root@master volumes]# curl 10.244.2.14
      <h1> NFS.zhouhao.com </h1>
      5.3 Create a PVC and PVs
      1. Create the storage directories:
      [root@master volumes]# mkdir v{1,2,3,4,5}
      [root@master volumes]# ls
      pod-deploy.yaml pod-nfs.yaml v1 v3 v5
      pod-hostpath.yaml pod-vol-deploy.yaml v2 v4
      2. Export them via NFS:
      [root@master volumes]# vim /etc/exports
      /data/volumes/v1 192.168.200.0/24(rw,no_root_squash)
      /data/volumes/v2 192.168.200.0/24(rw,no_root_squash)
      /data/volumes/v3 192.168.200.0/24(rw,no_root_squash)
      /data/volumes/v4 192.168.200.0/24(rw,no_root_squash)
      /data/volumes/v5 192.168.200.0/24(rw,no_root_squash)
      [root@master volumes]# exportfs -arv
      exporting 192.168.200.0/24:/data/volumes/v5
      exporting 192.168.200.0/24:/data/volumes/v4
      exporting 192.168.200.0/24:/data/volumes/v3
      exporting 192.168.200.0/24:/data/volumes/v2
      exporting 192.168.200.0/24:/data/volumes/v1
      [root@master volumes]# showmount -e
      Export list for master:
      /data/volumes/v5 192.168.200.0/24
      /data/volumes/v4 192.168.200.0/24
      /data/volumes/v3 192.168.200.0/24
      /data/volumes/v2 192.168.200.0/24
      /data/volumes/v1 192.168.200.0/24
      3. Define the PVs:
      PV access modes:
      # single-node read-write
      • ReadWriteOnce – the volume can be mounted as read-write by a single node
      # multi-node read-only
      • ReadOnlyMany – the volume can be mounted read-only by many nodes
      # multi-node read-write
      • ReadWriteMany – the volume can be mounted as read-write by many nodes
      Abbreviations:
      • RWO - ReadWriteOnce
      • ROX - ReadOnlyMany
      • RWX - ReadWriteMany
      Note: different volume types support different access modes.

4. Write the YAML file:
[root@master volumes]# vim pv-daemon.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: master
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 10Gi
[root@master volumes]# kubectl apply -f pv-daemon.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 2Gi RWO,ROX,RWX Retain Available 19s
pv002 5Gi RWO,ROX,RWX Retain Available 19s
pv003 20Gi RWO,ROX,RWX Retain Available 19s
pv004 10Gi RWO,ROX,RWX Retain Available 19s
pv005 10Gi RWO,RWX Retain Available 19s
5.定义pvc:
[root@master volumes]# vim pvc-daemon.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  resources:
    requests:
      storage: 6Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc

[root@master volumes]# kubectl apply -f pvc-daemon.yaml
persistentvolumeclaim/mypvc unchanged
pod/pod-vol-pvc created
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 2Gi RWO,ROX,RWX Retain Available 21m
pv002 5Gi RWO,ROX,RWX Retain Available 21m
pv003 20Gi RWO,ROX,RWX Retain Available 21m
pv004 10Gi RWO,ROX,RWX Retain Bound default/mypvc 21m
pv005 10Gi RWO,RWX Retain Available 21m
[root@master volumes]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc Bound pv004 10Gi RWO,ROX,RWX 6m
The claim asked for at least 6Gi with those access modes, so it was bound to pv004, the smallest Available PV that satisfies the request.
5.4 Creating a ConfigMap
Ways to configure a containerized application:
1. Custom command-line arguments:
args:
2. Bake the configuration file directly into the image
3. Environment variables
(1) cloud-native applications usually load their configuration directly from environment variables;
(2) an entrypoint script can render environment variables into a configuration file
4. Volumes (a hedged envFrom sketch follows this list)
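Besides referencing individual keys (as in the examples below), a whole ConfigMap can be injected as environment variables with envFrom. A minimal hedged sketch (not from the original notes), reusing the nginx ConfigMap created in Method 1 below:
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    envFrom:            #every key in the ConfigMap becomes an environment variable
    - configMapRef:
        name: nginx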
方式一:
1.命令行创建configmap:
[root@master volumes]# kubectl create configmap nginx --from-literal=nginx_port=8080 --from-literal=server_name=myapp.zhouhao.com
configmap/nginx created
[root@master volumes]# kubectl get cm
NAME DATA AGE
nginx 2 8s
[root@master volumes]# kubectl describe cm nginx
Name: nginx
Namespace: default
Labels: <none>
Annotations: <none>

Data

nginx_port:

8080
server_name:

myapp.zhouhao.com
Events: <none>
方式二:
1.创建出一个配置文件:
[root@master configmap]# vim www.conf

server {
server_name myapp.zhouhao.com;
listen 80;
root /data/web/html;
}
2.创建configmap:
[root@master configmap]# kubectl create configmap nginx-www --from-file=./www.conf
configmap/nginx-www created
[root@master configmap]# kubectl get cm
NAME DATA AGE
nginx 2 5m
nginx-www 1 8s
[root@master configmap]# kubectl describe cm nginx-www
Name: nginx-www
Namespace: default
Labels: <none>
Annotations: <none>

Data

www.conf:

server {
server_name myapp.zhouhao.com;
listen 80;
root /data/web/html;
}
Method 2 is file-based.
1. Inject the variables defined in the nginx ConfigMap above into a pod as environment variables:
[root@master configmap]# vim pod-deploy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-1
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    zhouhao.com/created-byz: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    env:
    - name: NGINX_SERVER_PORT
      valueFrom:
        configMapKeyRef:
          name: nginx
          key: nginx_port
    - name: NGINX_SERVER_NAME
      valueFrom:
        configMapKeyRef:
          name: nginx
          key: server_name
      [root@master configmap]# kubectl apply -f pod-deploy.yaml
      pod/pod-cm-1 created
      [root@master configmap]# kubectl get pods
      NAME READY STATUS RESTARTS AGE
      myapp-deploy-67f6f6b4dc-4ngzc 1/1 Running 2 2d
      myapp-deploy-67f6f6b4dc-p4m5b 1/1 Running 2 2d
      myapp-deploy-67f6f6b4dc-p5scb 1/1 Running 2 2d
      pod-cm-1 1/1 Running 0 12s
      pod-vol-hostpath 1/1 Running 1 22h
      pod-vol-nfs 1/1 Running 1 22h
      pod-vol-pvc 1/1 Running 0 5h
      tomcat-deploy-7bc5d6bc58-9vw5t 1/1 Running 2 1d
      tomcat-deploy-7bc5d6bc58-tflzt 1/1 Running 2 1d
      tomcat-deploy-7bc5d6bc58-zfnm2 1/1 Running 2 1d
      [root@master configmap]# kubectl exec -it pod-cm-1 -- /bin/sh
      / # printenv
      MYAPP_SVC_PORT_80_TCP_ADDR=10.98.57.156
      KUBERNETES_PORT=tcp://10.96.0.1:443
      KUBERNETES_SERVICE_PORT=443
      MYAPP_SERVICE_PORT_HTTP=80
      TOMCAT_PORT_8080_TCP=tcp://10.103.236.4:8080
      MYAPP_SVC_PORT_80_TCP_PORT=80
      HOSTNAME=pod-cm-1
      SHLVL=1
      MYAPP_SVC_PORT_80_TCP_PROTO=tcp
      HOME=/root
      MYAPP_SERVICE_HOST=10.110.111.0
      NGINX_SERVER_PORT=8080
      NGINX_SERVER_NAME=myapp.zhouhao.com
      。。。。。。。。
      2.将上述nginx的cm在pod中生成文件:
      [root@master configmap]# vim pod-cm-2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-2
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    zhouhao.com/created-byz: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/config.d/
      readOnly: true
  volumes:
  - name: nginxconf
    configMap:
      name: nginx
    [root@master configmap]# kubectl apply -f pod-cm-2.yaml
    pod/pod-cm-2 created
    [root@master configmap]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-67f6f6b4dc-4ngzc 1/1 Running 2 2d
    myapp-deploy-67f6f6b4dc-p4m5b 1/1 Running 2 2d
    myapp-deploy-67f6f6b4dc-p5scb 1/1 Running 2 2d
    pod-cm-2 1/1 Running 0 6s
    pod-vol-hostpath 1/1 Running 1 23h
    pod-vol-nfs 1/1 Running 1 22h
    pod-vol-pvc 1/1 Running 0 6h
    tomcat-deploy-7bc5d6bc58-9vw5t 1/1 Running 2 1d
    tomcat-deploy-7bc5d6bc58-tflzt 1/1 Running 2 1d
    tomcat-deploy-7bc5d6bc58-zfnm2 1/1 Running 2 1d
    [root@master configmap]# kubectl exec -it pod-cm-2 -- /bin/sh
    / # cd /etc/nginx/config.d/
    /etc/nginx/config.d # ls
    nginx_port server_name
    /etc/nginx/config.d # cat nginx_port
    8080/etc/nginx/config.d #
    /etc/nginx/config.d # cat server_name
    myapp.zhouhao.com/etc/nginx/config.d #
    3.修改下nginx的cm看pod内容是否改变
    [root@master ~]# kubectl edit cm nginx

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  nginx_port: "8080"      #change 8080 to 80
  server_name: myapp.zhouhao.com
kind: ConfigMap
metadata:
  creationTimestamp: 2018-10-10T07:29:15Z
  name: nginx
  namespace: default
  resourceVersion: "125157"
  selfLink: /api/v1/namespaces/default/configmaps/nginx
  uid: 30c4a9d7-cc5e-11e8-b4a9-000c2929855b
4. Check inside the pod:
/etc/nginx/config.d # cat nginx_port
8080/etc/nginx/config.d #
The value looks unchanged; leave the directory, re-enter it, then check again:
/etc/nginx/config.d # cd ../
/etc/nginx # cd config.d/
/etc/nginx/config.d # cat nginx_port
80/etc/nginx/config.d #
Now it has changed. The update takes a little while to show up because the kubelet syncs ConfigMap volumes periodically. (Note that environment variables injected from a ConfigMap, as in pod-cm-1, are not refreshed in a running pod; only volume-mounted keys are.)
5.下面以上面nginx-www的cm为例,创建一个pod用里面内容做配置:
[root@master configmap]# vim pod-cm-3.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-3
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    zhouhao.com/created-byz: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/conf.d/
      readOnly: true
  volumes:
  - name: nginxconf
    configMap:
      name: nginx-www
    [root@master configmap]# kubectl apply -f pod-cm-3.yaml
    pod/pod-cm-3 created
    [root@master configmap]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-67f6f6b4dc-4ngzc 1/1 Running 2 2d
    myapp-deploy-67f6f6b4dc-p4m5b 1/1 Running 2 2d
    myapp-deploy-67f6f6b4dc-p5scb 1/1 Running 2 2d
    pod-cm-3 1/1 Running 0 9s
    pod-vol-hostpath 1/1 Running 1 1d
    pod-vol-nfs 1/1 Running 1 23h
    pod-vol-pvc 1/1 Running 0 6h
    tomcat-deploy-7bc5d6bc58-9vw5t 1/1 Running 2 1d
    tomcat-deploy-7bc5d6bc58-tflzt 1/1 Running 2 1d
    tomcat-deploy-7bc5d6bc58-zfnm2 1/1 Running 2 1d
    [root@master configmap]# kubectl exec -it pod-cm-3 -- /bin/sh
    / # cd /etc/nginx/conf.d/
    /etc/nginx/conf.d # ls
    www.conf
    /etc/nginx/conf.d # nginx -T
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful

    configuration file /etc/nginx/nginx.conf:

user nginx;
worker_processes 1;
。。。。。。。。。。。。。

configuration file /etc/nginx/conf.d/www.conf:

server {
server_name myapp.zhouhao.com;
listen 80;
root /data/web/html;
}
6.根据配置里面的信息,创建站点目录和测试然后访问:
/etc/nginx/conf.d # mkdir /data/web/html -p
/etc/nginx/conf.d # vi /data/web/html/index.html
<h1> myapp.zhouhao.com<h1\>
在任意一个节点上做域名解析然后访问测试
[root@minion-1 ~]# vim /etc/hosts
10.244.1.18 myapp.zhouhao.com
[root@minion-1 ~]# curl myapp.zhouhao.com
<h1> myapp.zhouhao.com<h1\>
7.修改下nginx-www测试
[root@master ~]# kubectl edit cm nginx-www

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  www.conf: |
    server {
        server_name myapp.zhouhao.com;
        listen 80;        #change 80 to 8080
        root /data/web/html;
    }
kind: ConfigMap
metadata:
  creationTimestamp: 2018-10-10T07:34:56Z
  name: nginx-www
  namespace: default
  resourceVersion: "125678"
  selfLink: /api/v1/namespaces/default/configmaps/nginx-www
  uid: fbfebc90-cc5e-11e8-b4a9-000c2929855b
/etc/nginx/conf.d # cd ../
/etc/nginx # cd conf.d/
/etc/nginx/conf.d # ls
www.conf
/etc/nginx/conf.d # cat www.conf
server {
server_name myapp.zhouhao.com;
listen 8080;
root /data/web/html;
}
The file has changed, but nginx is still listening on the old port:
/etc/nginx/conf.d # netstat -lnpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1/nginx: master pro
Reload the service (note the typo in the first attempt):
/etc/nginx/conf.d # nginx -s relaod
nginx: invalid option: "-s relaod"
/etc/nginx/conf.d # nginx -s reload
2018/10/10 10:09:50 [notice] 19#19: signal process started
/etc/nginx/conf.d # netstat -lnpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 1/nginx: master pro
8. Access test:
[root@minion-1 ~]# curl myapp.zhouhao.com
curl: (7) Failed connect to myapp.zhouhao.com:80; Connection refused
[root@minion-1 ~]# curl myapp.zhouhao.com:8080
<h1> myapp.zhouhao.com<h1\>
5.5 Creating a StatefulSet controller
1. Create the PVs first so that the PVCs can find matching PVs to bind to:
[root@master volumes]# vim pv-daemon.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: master
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 10Gi
[root@master volumes]# kubectl apply -f pv-daemon.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,ROX,RWX Retain Available 10s
pv002 5Gi RWO,ROX,RWX Retain Available 10s
pv003 5Gi RWO,ROX,RWX Retain Available 10s
pv004 10Gi RWO,ROX,RWX Retain Available 10s
pv005 10Gi RWO,RWX Retain Available 10s
2.创建statefulset控制器:
[root@master mandor]# vim statefulSet-daemon-yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - name: web
    port: 80
  clusterIP: None
  selector:
    app: myapp-pod
--- #the headless Service above; the StatefulSet controller below
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp        #governing Service name
  replicas: 3               #number of pods
  selector:                 #match the pod labels
    matchLabels:
      app: myapp-pod
  template:                 #pod template
    metadata:               #pod metadata
      labels:               #pod labels
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: web
          containerPort: 80
        volumeMounts:       #volume mounts
        - name: myappdata   #volume name
          mountPath: /usr/share/nginx/html   #mount path inside the container
  volumeClaimTemplates:     #PVC template
  - metadata:
      name: myappdata       #volume name referenced by the mount above
    spec:
      accessModes: [ "ReadWriteOnce" ]   #requested access mode
      resources:
        requests:
          storage: 5Gi      #requested PV size
    [root@master mandor]# kubectl apply -f statefulSet-daemon-yaml
    service/myapp unchanged
    statefulset.apps/myapp created
    [root@master mandor]# kubectl get pod
    NAME READY STATUS RESTARTS AGE
    myapp-0 1/1 Running 0 5m
    myapp-1 1/1 Running 0 5m
    myapp-2 1/1 Running 0 5m
    #注:上面可以看出pod名是有顺序的
    [root@master mandor]# kubectl get svc #上面创建的service要是无头服务
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d
    myapp ClusterIP None <none> 80/TCP 20m
    [root@master mandor]# kubectl get sts
    NAME DESIRED CURRENT AGE
    myapp 3 3 18m
    [root@master mandor]# kubectl get pvc
    NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    myappdata-myapp-0 Bound pv005 10Gi RWO,RWX 19m
    myappdata-myapp-1 Bound pv001 5Gi RWO,ROX,RWX 19m
    myappdata-myapp-2 Bound pv003 5Gi RWO,ROX,RWX 19m
    [root@master mandor]# kubectl get pv
    NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    pv001 5Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-1 5h
    pv002 5Gi RWO,ROX,RWX Retain Available 5h
    pv003 5Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-2 5h
    pv004 10Gi RWO,ROX,RWX Retain Available 5h
    pv005 10Gi RWO,RWX Retain Bound default/myappdata-myapp-0 5h
    Delete the StatefulSet and watch the order in which the pods are killed:
    1. Watch the pods first:
    [root@master mandor]# kubectl get pods -w
    NAME READY STATUS RESTARTS AGE
    myapp-0 1/1 Running 0 22m
    myapp-1 1/1 Running 0 22m
    myapp-2 1/1 Running 0 22m
    2. Delete the StatefulSet:
    [root@master mandor]# kubectl delete -f statefulSet-daemon-yaml
    service "myapp" deleted
    statefulset.apps "myapp" deleted
    [root@master mandor]# kubectl get pods -w
    NAME READY STATUS RESTARTS AGE
    myapp-0 1/1 Running 0 22m
    myapp-1 1/1 Running 0 22m
    myapp-2 1/1 Running 0 22m
    myapp-0 1/1 Terminating 0 23m
    myapp-1 1/1 Terminating 0 23m
    myapp-2 1/1 Terminating 0 23m
    myapp-0 0/1 Terminating 0 23m
    myapp-1 0/1 Terminating 0 23m
    myapp-2 0/1 Terminating 0 23m
    myapp-0 0/1 Terminating 0 23m
    myapp-0 0/1 Terminating 0 23m
    myapp-2 0/1 Terminating 0 23m
    myapp-2 0/1 Terminating 0 23m
    myapp-1 0/1 Terminating 0 23m
    myapp-1 0/1 Terminating 0 23m
    Pods are terminated starting from the highest ordinal (2).
    Now watch the creation order:
    [root@master mandor]# kubectl apply -f statefulSet-daemon-yaml
    service/myapp created
    statefulset.apps/myapp created
    myapp-0 0/1 Pending 0 0s
    myapp-0 0/1 Pending 0 0s
    myapp-0 0/1 ContainerCreating 0 0s
    myapp-0 1/1 Running 0 2s
    myapp-1 0/1 Pending 0 0s
    myapp-1 0/1 Pending 0 0s
    myapp-1 0/1 ContainerCreating 0 0s
    myapp-1 1/1 Running 0 3s
    myapp-2 0/1 Pending 0 0s
    myapp-2 0/1 Pending 0 0s
    myapp-2 0/1 ContainerCreating 0 0s
    myapp-2 1/1 Running 0 1s
    Creation starts from ordinal 0.
    Now delete the pods again and check whether the PVCs survive:
    [root@master mandor]# kubectl delete -f statefulSet-daemon-yaml
    service "myapp" deleted
    statefulset.apps "myapp" deleted
    [root@master mandor]# kubectl get pvc
    NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    myappdata-myapp-0 Bound pv005 10Gi RWO,RWX 29m
    myappdata-myapp-1 Bound pv001 5Gi RWO,ROX,RWX 29m
    myappdata-myapp-2 Bound pv003 5Gi RWO,ROX,RWX 29m
    The PVCs still exist. Each PVC name embeds the pod name, so when a pod with the same name comes back it reattaches to the same data.
    5.6 Pod name resolution
    1. In Kubernetes every StatefulSet pod name can be resolved through DNS:
    [root@master mandor]# kubectl exec -it myapp-1 -- /bin/sh
    / # nslookup myapp-0.myapp.default.svc.cluster.local
    nslookup: can't resolve '(null)': Name does not resolve

Name: myapp-0.myapp.default.svc.cluster.local
Address 1: 10.244.2.33 myapp-0.myapp.default.svc.cluster.local
/ # nslookup myapp-1.myapp.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name: myapp-1.myapp.default.svc.cluster.local
Address 1: 10.244.1.28 myapp-1.myapp.default.svc.cluster.local
/ # nslookup myapp-2.myapp.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name: myapp-2.myapp.default.svc.cluster.local
Address 1: 10.244.2.34 myapp-2.myapp.default.svc.cluster.local
Every pod IP can be resolved this way.
Pod DNS name format:
myapp-1.myapp.default.svc.cluster.local
<pod name>.<service name>.<namespace>.svc.<cluster domain suffix>
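Because the governing Service is headless, resolving the Service name itself should return the addresses of all of its pods. A hedged check (command assumed; its output was not captured in the original notes):
/ # nslookup myapp.default.svc.cluster.local    #expected to list the IPs of myapp-0, myapp-1 and myapp-2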
5.7 Scaling the StatefulSet: PVCs are created automatically and matched to PVs
1. Scale out:
[root@master mandor]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 13m
myapp-1 1/1 Running 0 13m
myapp-2 1/1 Running 0 13m
[root@master mandor]# kubectl scale sts myapp --replicas=5
statefulset.apps/myapp scaled

kubectl patch sts myapp -p '{"spec":{"replicas":5}}'    #patching achieves the same result

[root@master mandor]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 13m
myapp-1 1/1 Running 0 13m
myapp-2 1/1 Running 0 13m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 2s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
2.会发现会扩出3和4
[root@master mandor]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myappdata-myapp-0 Bound pv005 10Gi RWO,RWX 46m
myappdata-myapp-1 Bound pv001 5Gi RWO,ROX,RWX 46m
myappdata-myapp-2 Bound pv003 5Gi RWO,ROX,RWX 46m
myappdata-myapp-3 Bound pv002 5Gi RWO,ROX,RWX 59s
myappdata-myapp-4 Bound pv004 10Gi RWO,ROX,RWX 57s
[root@master mandor]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-1 6h
pv002 5Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-3 6h
pv003 5Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-2 6h
pv004 10Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-4 6h
pv005 10Gi RWO,RWX Retain Bound default/myappdata-myapp-0 6h
The PVCs are created automatically and bound to matching PVs.
5.8 Partitioned pod updates
A StatefulSet supports partitioned rolling updates. The partition refers to the ordinal at the end of the pod name (for myapp-1 the ordinal is 1). Pods whose ordinal is greater than or equal to the partition value are updated: with partition=4 only ordinals >= 4 are updated, and partition=0 updates all pods. The steps below demonstrate this.
1. Check the default update strategy:
[root@master mandor]# kubectl describe sts myapp
Name: myapp
Namespace: default
CreationTimestamp: Thu, 11 Oct 2018 16:58:31 +0800
Selector: app=myapp-pod
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"replicas":3,"selector":{"match...
Replicas: 5 desired | 5 total
Update Strategy: RollingUpdate #默认滚动更新,没设置分区
Pods Status: 5 Running / 0 Waiting / 0 Succeeded / 0 Failed
。。。。。。
2.定义分区:
[root@master mandor]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'
statefulset.apps/myapp patched
That patches the live object; mind the quoting. The same setting can also be written declaratively in the manifest, as sketched below.
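For reference, a minimal hedged sketch (not part of the original transcript) of the corresponding field in the StatefulSet spec:
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 4      #only pods with ordinal >= 4 are updated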
[root@master mandor]# kubectl describe sts myapp
Name: myapp
Namespace: default
CreationTimestamp: Thu, 11 Oct 2018 16:58:31 +0800
Selector: app=myapp-pod
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"replicas":3,"selector":{"match...
Replicas: 5 desired | 5 total
Update Strategy: RollingUpdate
Partition: 4 #这有了分区值,大于等于4的会更新
Pods Status: 5 Running / 0 Waiting / 0 Succeeded / 0 Failed
3.开始更新测试
[root@master mandor]# kubectl set image sts myapp myapp=ikubernetes/myapp:v2
statefulset.apps/myapp image updated
[root@master mandor]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 13m
myapp-1 1/1 Running 0 13m
myapp-2 1/1 Running 0 13m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 2s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
myapp-4 1/1 Terminating 0 24m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
Only myapp-4 is stopped and recreated.
4. Verify the image versions:
[root@master mandor]# kubectl describe pod myapp-4
。。。。。。。。。。。
Containers:
myapp:
Container ID: docker://bb8b5d4e73459dd39ad6abce52c72402a80dfbbc938fa7758766f3e377f845af
Image: ikubernetes/myapp:v2
Image ID: docker-pullable://ikubernetes/myapp@sha256:85a2b81a62f09a414ea33b74fb8aa686ed9b168294b26b4c819df0be0712d358
。。。。。。。
[root@master mandor]# kubectl describe pod myapp-2
Name: myapp-2
Namespace: default
Node: minion-1/192.168.200.201
Start Time: Thu, 11 Oct 2018 16:58:35 +0800
Labels: app=myapp-pod
controller-revision-hash=myapp-854b598c58
statefulset.kubernetes.io/pod-name=myapp-2
Annotations: <none>
Status: Running
IP: 10.244.2.34
Controlled By: StatefulSet/myapp
Containers:
myapp:
Container ID: docker://4fd66e973b1bb74be30b9d3ff9ceb9515a57197669389784e6e80449e788203d
Image: ikubernetes/myapp:v1
[root@master mandor]# kubectl describe pod myapp-0
Name: myapp-0
Namespace: default
Node: minion-1/192.168.200.201
Start Time: Thu, 11 Oct 2018 16:58:31 +0800
Labels: app=myapp-pod
controller-revision-hash=myapp-854b598c58
statefulset.kubernetes.io/pod-name=myapp-0
Annotations: <none>
Status: Running
IP: 10.244.2.33
Controlled By: StatefulSet/myapp
Containers:
myapp:
Container ID: docker://1886176fc8e698327497e15eb3e452e04092805fc3c11b71ea844d26e439ad86
Image: ikubernetes/myapp:v1
[root@master mandor]# kubectl describe pod myapp-3
Name: myapp-3
Namespace: default
Node: minion-2/192.168.200.202
Start Time: Thu, 11 Oct 2018 17:12:38 +0800
Labels: app=myapp-pod
controller-revision-hash=myapp-854b598c58
statefulset.kubernetes.io/pod-name=myapp-3
Annotations: <none>
Status: Running
IP: 10.244.1.29
Controlled By: StatefulSet/myapp
Containers:
myapp:
Container ID: docker://426436b9f8ea96c6e55e8af00790a1cec7b9620bb0a3843c0fc8df869106d86f
Image: ikubernetes/myapp:v1
5. Only myapp-4 was updated to the new image. To update all pods, patch the partition back to 0 and update again, as follows:
[root@master mandor]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
statefulset.apps/myapp patched
[root@master mandor]# kubectl set image sts myapp myapp=ikubernetes/myapp:v2
[root@master mandor]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 13m
myapp-1 1/1 Running 0 13m
myapp-2 1/1 Running 0 13m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 2s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
myapp-4 1/1 Terminating 0 24m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
myapp-3 1/1 Terminating 0 41m
myapp-3 0/1 Terminating 0 41m
myapp-3 0/1 Terminating 0 41m
myapp-3 0/1 Terminating 0 41m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 2s
myapp-2 1/1 Terminating 0 55m
myapp-2 0/1 Terminating 0 55m
myapp-2 0/1 Terminating 0 55m
myapp-2 0/1 Terminating 0 55m
myapp-2 0/1 Pending 0 0s
myapp-2 0/1 Pending 0 0s
myapp-2 0/1 ContainerCreating 0 0s
myapp-2 1/1 Running 0 3s
myapp-1 1/1 Terminating 0 55m
myapp-1 0/1 Terminating 0 55m
myapp-1 0/1 Terminating 0 55m
myapp-1 0/1 Terminating 0 55m
myapp-1 0/1 Pending 0 0s
myapp-1 0/1 Pending 0 0s
myapp-1 0/1 ContainerCreating 0 0s
myapp-1 1/1 Running 0 1s
myapp-0 1/1 Terminating 0 55m
myapp-0 0/1 Terminating 0 55m
myapp-0 0/1 Terminating 0 55m
myapp-0 0/1 Terminating 0 55m
myapp-0 0/1 Pending 0 0s
myapp-0 0/1 Pending 0 0s
myapp-0 0/1 ContainerCreating 0 0s
myapp-0 1/1 Running 0 2s
The rollout proceeds downward starting from ordinal 3 (4 was already on v2).
[root@master mandor]# kubectl describe pod myapp-0
Name: myapp-0
Namespace: default
Node: minion-1/192.168.200.201
Start Time: Thu, 11 Oct 2018 17:54:24 +0800
Labels: app=myapp-pod
controller-revision-hash=myapp-58656f57bf
statefulset.kubernetes.io/pod-name=myapp-0
Annotations: <none>
Status: Running
IP: 10.244.2.38
Controlled By: StatefulSet/myapp
Containers:
myapp:
Container ID: docker://d59df10c758f1164a21b070cd4aa3783cb3a2c6aa32e90688e0575cacd069c86
Image: ikubernetes/myapp:v2
K8s RBAC access control
6.1 Create a user and test it
1. Create a Kubernetes ServiceAccount:
[root@master mandor]# kubectl create serviceaccount admin
serviceaccount/admin created
[root@master mandor]# kubectl get sa
NAME SECRETS AGE
admin 1 9s
default 1 4d
[root@master mandor]# kubectl describe sa admin
Name: admin
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: admin-token-v8p8k
Tokens: admin-token-v8p8k
Events: <none>

2.创建私钥:
[root@master mandor]# (umask 077;openssl genrsa -out zhouhao.key 2048)
Generating RSA private key, 2048 bit long modulus
.................................................................+++
...................................................................................+++
e is 65537 (0x10001)
[root@master mandor]# openssl req -new -key zhouhao.key -out zhouhao.csr -subj "/CN=zhouhao"
[root@master pki]# openssl x509 -req -in zhouhao.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out zhouhao.crt -days 365
Signature ok
subject=/CN=zhouhao
Getting CA Private Key
[root@master pki]# openssl x509 -in zhouhao.crt -text -noout
Certificate:
Data:
Version: 1 (0x0)
Serial Number: 15289891927309345937 (0xd4309bb2d562e491)
Signature Algorithm: sha1WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Oct 12 10:14:41 2018 GMT
Not After : Oct 12 10:14:41 2019 GMT
。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。
3.创建用户
[root@master pki]# kubectl config set-credentials zhouhao --client-certificate=./zhouhao.crt --client-key=./zhouhao.key --embed-certs=true
User "zhouhao" set.
[root@master pki]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.200.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: zhouhao
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    4.授权用户:
    [root@master pki]# kubectl config set-context zhouhao@kubernetes --cluster=kubernetes --user=zhouhao
    Context "zhouhao@kubernetes" created.
    [root@master pki]# kubectl config view
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: REDACTED
        server: https://192.168.200.200:6443
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: kubernetes-admin
      name: kubernetes-admin@kubernetes
    - context:
        cluster: kubernetes
        user: zhouhao
      name: zhouhao@kubernetes
    current-context: kubernetes-admin@kubernetes
    kind: Config
    preferences: {}
    users:
    - name: kubernetes-admin
      user:
        client-certificate-data: REDACTED
        client-key-data: REDACTED
    - name: zhouhao
      user:
        client-certificate-data: REDACTED
        client-key-data: REDACTED
    5.切换用户:
    [root@master pki]# kubectl config use-context zhouhao@kubernetes
    Switched to context "zhouhao@kubernetes".
    6.查看pod会发现权限不够会报错
    [root@master pki]# kubectl get pods
    No resources found.
    Error from server (Forbidden): pods is forbidden: User "zhouhao" cannot list pods in the namespace "default"
    6.2 Creating a kubeconfig file
    1. Switch back to the admin account:
    [root@master pki]# kubectl config use-context kubernetes-admin@kubernetes
    Switched to context "kubernetes-admin@kubernetes".
    2.创建配置文件并查看:
    [root@master pki]# kubectl config set-cluster mycluster --kubeconfig=/tmp/test.conf --server="https://192.168.200.200:6443" --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
    Cluster "mycluster" set.
    [root@master pki]# kubectl config view --kubeconfig=/tmp/test.conf
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: REDACTED
        server: https://192.168.200.200:6443
      name: mycluster
    contexts: []
    current-context: ""
    kind: Config
    preferences: {}
    users: []
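    The transcript stops after defining the cluster entry. As a hedged sketch (these exact commands are not in the original notes), the same file could be completed with a user and a context so it is actually usable:
    [root@master pki]# kubectl config set-credentials zhouhao --client-certificate=./zhouhao.crt --client-key=./zhouhao.key --embed-certs=true --kubeconfig=/tmp/test.conf
    [root@master pki]# kubectl config set-context zhouhao@mycluster --cluster=mycluster --user=zhouhao --kubeconfig=/tmp/test.conf
    [root@master pki]# kubectl config use-context zhouhao@mycluster --kubeconfig=/tmp/test.conf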

6.3 Create a Role and bind a user to it
1. Generate a YAML skeleton on the command line, then edit it:
[root@master pki]# kubectl create role pods-reader --verb=get,list,watch --resource=pods --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: pods-reader
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
      [root@master pki]# kubectl create role pods-reader --verb=get,list,watch --resource=pods --dry-run -o yaml >~/mandor/role-demo.yaml
      2.修改并创建:
      [root@master mandor]# vim role-demo.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pods-reader
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
      [root@master mandor]# kubectl create -f role-demo.yaml
      role.rbac.authorization.k8s.io/pods-reader created
      [root@master mandor]# kubectl get role
      NAME AGE
      pods-reader 10s
      [root@master mandor]# kubectl describe pods-reade
      error: the server doesn't have a resource type "pods-reade"
      [root@master mandor]# kubectl describe role pods-reade
      Name: pods-reader
      Labels: <none>
      Annotations: <none>
      PolicyRule:
      Resources Non-Resource URLs Resource Names Verbs

      pods [] [] [get list watch]
      3.创建rolebinding让用户绑定角色:
      [root@master mandor]# kubectl create rolebinding zhouhao-read-pods --role=pods-reader --user=zhouhao
      rolebinding.rbac.authorization.k8s.io/zhouhao-read-pods created
      [root@master mandor]# kubectl create rolebinding zhouhao-read-pods --role=pods-reader --user=zhouhao --dry-run -o yaml
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        creationTimestamp: null
        name: zhouhao-read-pods
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: pods-reader
      subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: User
        name: zhouhao
    [root@master mandor]# kubectl create rolebinding zhouhao-read-pods --role=pods-reader --user=zhouhao --dry-run -o yaml > rolebinding-demo.yaml
    [root@master mandor]# kubectl describe rolebinding zhouhao-read-pods
    Name: zhouhao-read-pods
    Labels: <none>
    Annotations: <none>
    Role:
    Kind: Role
    Name: pods-reader
    Subjects:
    Kind Name Namespace

    User zhouhao
    4.切换用户验证权限:
    [root@master ~]# kubectl config use-context zhouhao@kubernetes
    Switched to context "zhouhao@kubernetes".
    [root@master ~]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-67f6f6b4dc-pz4bd 1/1 Running 0 4h
    myapp-deploy-67f6f6b4dc-smw9t 1/1 Running 0 4h
    myapp-deploy-67f6f6b4dc-twgh6 1/1 Running 0 4h
    5.只授权了default命名空间的权限所以查看其它空间的会报错;
    [root@master ~]# kubectl get pods -n kube-system
    No resources found.
    Error from server (Forbidden): pods is forbidden: User "zhouhao" cannot list pods in the namespace "kube-system"
    6.4 Granting access with a ClusterRole
    1. Create the ClusterRole:
    [root@master ~]# kubectl create clusterrole cluster-readers --verb=get,list,watch --resource=pods -o yaml --dry-run >clusterrole-yaml
    [root@master ~]# vim clusterrole-yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-readers
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
      [root@master ~]# kubectl config use-context kubernetes-admin@kubernetes
      Switched to context "kubernetes-admin@kubernetes".
      [root@master ~]# kubectl apply -f clusterrole-yaml
      clusterrole.rbac.authorization.k8s.io/cluster-readers created
      2.删除授权绑定
      [root@master ~]# kubectl get rolebinding
      NAME AGE
      zhouhao-read-pods 23m
      [root@master ~]# kubectl delete rolebinding zhouhao-read-pods
      rolebinding.rbac.authorization.k8s.io "zhouhao-read-pods" deleted
      [root@master ~]# kubectl config use-context zhouhao@kubernetes
      Switched to context "zhouhao@kubernetes".
      [root@master ~]# kubectl get pods
      No resources found.
      Error from server (Forbidden): pods is forbidden: User "zhouhao" cannot list pods in the namespace "default"
      3.会发现权限有没有了
      [root@master ~]# useradd ik8s
      [root@master ~]# cp -r .kube/ /home/ik8s/
      [root@master ~]# chown -R ik8s.ik8s /home/ik8s/
      [root@master ~]# su - ik8s
      [ik8s@master ~]$ kubectl config use-context zhouhao@kubernetes
      Switched to context "zhouhao@kubernetes".
      [ik8s@master ~]$ kubectl config view
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: REDACTED
          server: https://192.168.200.200:6443
        name: kubernetes
      contexts:
      - context:
          cluster: kubernetes
          user: kubernetes-admin
        name: kubernetes-admin@kubernetes
      - context:
          cluster: kubernetes
          user: zhouhao
        name: zhouhao@kubernetes
      current-context: zhouhao@kubernetes
      kind: Config
      preferences: {}
      users:
      - name: kubernetes-admin
        user:
          client-certificate-data: REDACTED
          client-key-data: REDACTED
      - name: zhouhao
        user:
          client-certificate-data: REDACTED
          client-key-data: REDACTED

4. Bind the ClusterRole:
[root@master ~]# kubectl create clusterrolebinding zhouhao-read-all-pods --clusterrole=cluster-readers --user=zhouhao --dry-run -o yaml>clusterrolebinding-demo.yaml
[root@master mandor]# vim ~/clusterrolebinding-demo.yaml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: zhouhao-read-all-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-readers
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: zhouhao
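The transcript then jumps straight to testing as ik8s; the binding presumably has to be applied as admin first. A hedged sketch of that missing step:
[root@master mandor]# kubectl apply -f ~/clusterrolebinding-demo.yaml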
    [ik8s@master ~]$ kubectl get pods
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-67f6f6b4dc-rgsj8 1/1 Running 0 22m
    myapp-deploy-67f6f6b4dc-smw9t 1/1 Running 0 5h
    myapp-deploy-67f6f6b4dc-twgh6 1/1 Running 0 5h
    [ik8s@master ~]$ kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    coredns-78fcdf6894-mwfdj 1/1 Running 23 7d
    coredns-78fcdf6894-nm2q8 1/1 Running 23 7d
    etcd-master 1/1 Running 5 7d
    kube-apiserver-master 1/1 Running 5 7d
    kube-controller-manager-master 1/1 Running 5 7d
    kube-flannel-ds-amd64-2wcrq 1/1 Running 7 7d
    kube-flannel-ds-amd64-hpqch 1/1 Running 6 7d
    kube-flannel-ds-amd64-th26t 1/1 Running 6 7d
    kube-proxy-47jz2 1/1 Running 5 7d
    kube-proxy-pqswg 1/1 Running 5 7d
    kube-proxy-tdpmw 1/1 Running 5 7d
    kube-scheduler-master 1/1 Running 5 7d
    5.资源都可以查看,没有给删除权限
    [ik8s@master ~]$ kubectl delete pods myapp-deploy-67f6f6b4dc-rgsj8
    Error from server (Forbidden): pods "myapp-deploy-67f6f6b4dc-rgsj8" is forbidden: User "zhouhao" cannot delete pods in the namespace "default"
    Next, bind the ClusterRole with a namespaced RoleBinding instead:
    [root@master mandor]# kubectl delete -f ~/clusterrolebinding-demo.yaml
    clusterrolebinding.rbac.authorization.k8s.io "zhouhao-read-all-pods" deleted

[root@master mandor]# kubectl create rolebinding zhouhao-read-pods --clusterrole=cluster-readers --user=zhouhao --dry-run -o yaml >rolebinding-cluster.yaml
[root@master mandor]# vim rolebinding-cluster.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: zhouhao-read-pods
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-readers
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: zhouhao
    [root@master mandor]# kubectl apply -f rolebinding-cluster.yaml
    rolebinding.rbac.authorization.k8s.io/zhouhao-read-pods created
    6.访问测试:
    [ik8s@master ~]$ kubectl get pods
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-67f6f6b4dc-rgsj8 1/1 Running 0 38m
    myapp-deploy-67f6f6b4dc-smw9t 1/1 Running 0 5h
    myapp-deploy-67f6f6b4dc-twgh6 1/1 Running 0 5h
    [ik8s@master ~]$ kubectl get pods -n kube-system
    No resources found.
    Error from server (Forbidden): pods is forbidden: User "zhouhao" cannot list pods in the namespace "kube-system"
    Deploying the dashboard
    7.1 Deploy the dashboard and make it reachable
    1. On the nodes, pull the image from a mirror and re-tag it, because the image referenced in the manifest cannot be pulled directly:
    [root@minion-1 ~]# docker pull siriuszg/kubernetes-dashboard-amd64:v1.10.0
    v1.10.0: Pulling from siriuszg/kubernetes-dashboard-amd64
    833563f653b3: Pull complete
    Digest: sha256:5170d3ad1d3b7e9d6424c7a1309692ccffbb2d3c410a3f894bcd2e5066ce169c
    Status: Downloaded newer image for siriuszg/kubernetes-dashboard-amd64:v1.10.0
    [root@minion-1 ~]# docker tag siriuszg/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    2.创建dashboard
    [root@master mandor]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
    secret/kubernetes-dashboard-certs created
    serviceaccount/kubernetes-dashboard created
    role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    deployment.apps/kubernetes-dashboard created
    service/kubernetes-dashboard created
    3.查看:
    [root@master dashboard]# kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    coredns-78fcdf6894-mwfdj 1/1 Running 23 7d
    coredns-78fcdf6894-nm2q8 1/1 Running 23 7d
    etcd-master 1/1 Running 5 7d
    kube-apiserver-master 1/1 Running 5 7d
    kube-controller-manager-master 1/1 Running 5 7d
    kube-flannel-ds-amd64-2wcrq 1/1 Running 7 7d
    kube-flannel-ds-amd64-hpqch 1/1 Running 6 7d
    kube-flannel-ds-amd64-th26t 1/1 Running 6 7d
    kube-proxy-47jz2 1/1 Running 5 7d
    kube-proxy-pqswg 1/1 Running 5 7d
    kube-proxy-tdpmw 1/1 Running 5 7d
    kube-scheduler-master 1/1 Running 5 7d
    kubernetes-dashboard-767dc7d4d-2mw4r 1/1 Running 0 1m
    4.通过打补丁的访问使服务可以被访问
    [root@master dashboard]# kubectl get svc -n kube-system
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 7d
    kubernetes-dashboard ClusterIP 10.100.132.159 <none> 443/TCP 7m
    [root@master dashboard]# kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system
    service/kubernetes-dashboard patched
    [root@master dashboard]# kubectl get svc -n kube-system
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 7d
    kubernetes-dashboard NodePort 10.100.132.159 <none> 443:32626/TCP 15m
    5.浏览器访问:

6.需要认证登录,将系统中config文件传到主机上
[root@master dashboard]# ls ~/.kube/
cache config http-cache
[root@master dashboard]# sz ~/.kube/config
然后在选中:

7.2 token方式登录dashboard
1.为dashboard创建证书和私钥:
[root@master dashboard]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077;openssl genrsa -out dashboard.key 2048)
Generating RSA private key, 2048 bit long modulus
...+++
..............+++
e is 65537 (0x10001)
[root@master pki]# openssl req -new -key dashboard.key -out dashboard.csr -subj "/O=zhouhao/CN=dashboard"
[root@master pki]# openssl x509 -req -in dashboard.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out dashboard.csr -days 365
Signature ok
subject=/O=zhouhao/CN=dashboard
Getting CA Private Key
[root@master pki]# kubectl create secret generic dashboard-cert -n kube-system --from-file=dashboard.crt=./dashboard.csr --from-file=dashboard.key=./dashboard.key
secret/dashboard-cert created
Note: the openssl x509 command above wrote the signed certificate over dashboard.csr (-out dashboard.csr), which is why the secret loads dashboard.crt from that file; writing it to dashboard.crt would be cleaner.

2.使用token方式登录:
[root@master pki]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@master pki]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created
3.获取token值:
[root@master pki]# kubectl describe secret dashboard-admin-token-d8mc4 -n kube-system
Name: dashboard-admin-token-d8mc4
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=dashboard-admin
kubernetes.io/service-account.uid=ab682221-d058-11e8-8f2d-000c2929855b

Type: kubernetes.io/service-account-token

Data

ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZDhtYzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYWI2ODIyMjEtZDA1OC0xMWU4LThmMmQtMDAwYzI5Mjk4NTViIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.jClu0HHKv81G7SSaxxAb_-i0cXhR1_BkAUqjxKgLjH98w_Z4OE_amhvZu93S4uYM4F3nDGfMgXp5Vt2i4vkS3pnLgO2wdcfzMr0--VzAPhywLR2BBGL9N0u9wokSH4znp1KFmmvPy8KdAjlXi_IMp7hcNrSYgGSnF9XBKWLo2JiMsE4YTA_mgLIml8rAIjw-5REyG9o4RPNL0VtBDO1Ny4NA7fpYWj-r_iKlsXHPvnX0Pe7AtzY62MPRXR0Q_VvEwbH32DiYl6ciXMJxQnPi6mxgHQRXk6luY-_EERGvo9pn3dBmJs_moPSsNjSIE7EP0F-W7tsUtcOEMX15L4e8Ow
The token: field above is the value to use.
4. Log in with the token:

5. On the login page choose "Token", paste the value, and sign in.

K8s networking and advanced scheduling
8.1 Managing flannel and calico
1. Configure the flannel network plugin:
[root@master ~]# vim kube-flannel.yml
。。。。。
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan",
      "DirectRouting": true    #add this line to enable direct-routing mode; the default is false
。。。。。。。。。。。。。。
Or:
[root@master ~]# vim kube-flannel.yml
。。。。。
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "host-gw"    #changing vxlan to host-gw works too; the difference is that host-gw cannot cross subnets, while vxlan with direct routing can
。。。。。。。。。。。。。。
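A change to this ConfigMap only takes effect after it is re-applied and the flannel pods are recreated. A hedged sketch (the app=flannel label is assumed from the stock manifest):
[root@master ~]# kubectl apply -f kube-flannel.yml
[root@master ~]# kubectl delete pod -n kube-system -l app=flannel    #the DaemonSet recreates the pods with the new config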
2. Network policy:
#apply this only if RBAC is enabled; otherwise skip straight to the next step
[root@master ~]# kubectl apply -f \

https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/rbac.yaml
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
3. Install and deploy calico (canal manifests):
Docs: https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/flannel
[root@master ~]# kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
[root@master ~]# kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/canal/canal.yaml
Now define policies so that pods in different namespaces cannot be reached freely.
4.创建两个命名空间:
[root@master networkpolicy]# kubectl create namespace dev
namespace/dev created
[root@master networkpolicy]# kubectl create namespace port
namespace/port created
5.创建pod
[root@master networkpolicy]# vim pod_a.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    6.在两个命名空间分别创建pod:
    [root@master namespace]# kubectl apply -f pod.yaml -n dev
    [root@master namespace]# kubectl apply -f pod.yaml -n port
    [root@master ~]# kubectl get pod -n dev -o wide
    NAME READY STATUS RESTARTS AGE IP NODE
    pod-1 1/1 Running 1 29m 10.244.1.6 minion-1
    [root@master ~]# kubectl get pod -n port -o wide
    NAME READY STATUS RESTARTS AGE IP NODE
    pod-1 1/1 Running 1 26m 10.244.2.5 minion-2
    [root@master ~]# curl 10.244.1.6
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    [root@master ~]# curl 10.244.2.5
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
7. Write the policy:
[root@master networkpolicy]# vim ingree-def.yaml
#the policy below makes pods in the dev namespace reject all inbound traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: dev
spec:
  podSelector: {}
  policyTypes:
  - Ingress
    [root@master networkpolicy]# kubectl apply -f ingree-def.yaml -n dev
    networkpolicy.networking.k8s.io/deny-all-ingress created
8. Access test:
[root@master namespace]# curl 10.244.1.6
(no response - the dev pod is now unreachable)
[root@master namespace]# curl 10.244.2.5
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
9. Modify the policy:
[root@master namespace]# vim ingree-def.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: dev
spec:
  podSelector: {}
  ingress:          #inbound rules
  - {}              #allow everything
  policyTypes:
  - Ingress         #the policy covers inbound traffic
    [root@master namespace]# kubectl apply -f ingree-def.yaml
    networkpolicy.networking.k8s.io/deny-all-ingress configured
10. Access test:
[root@master namespace]# curl 10.244.1.6    #pods in the dev namespace are reachable again
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master namespace]# curl 10.244.2.5
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
11. Label the pod in dev:
[root@master namespace]# kubectl label pod pod-1 app=myapp -n dev
pod/pod-1 labeled
12. Write a policy that matches that label and restricts access:
[root@master namespace]# vim allow-myapp-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: all-myapp-ingress
spec:
  podSelector:
    matchLabels:
      app: myapp              #select pods labelled app=myapp
  ingress:                    #inbound rules (for outbound rules use egress instead)
  - from:                     #allowed sources
    - ipBlock:
        cidr: 192.168.200.0/24    #allow this subnet
        except:                   #except
        - 192.168.200.202/32      #this address
    ports:                    #allowed ports and protocols
    - protocol: TCP
      port: 80                #only port 80 is allowed; everything else is denied
  policyTypes:
  - Ingress                   #this policy applies to inbound traffic (use Egress for outbound)

[root@master namespace]# kubectl apply -f allow-myapp-ingress.yaml
networkpolicy.networking.k8s.io/all-myapp-ingress created
13. Access test:
[root@master namespace]# curl 10.244.1.6
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master namespace]# curl 10.244.1.6:443
#port 80 is reachable, port 443 is not
[root@minion-2 ~]# curl 10.244.1.6
#minion-2 (192.168.200.202) cannot reach port 80 either, because that address is excluded
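As the comments in the policy above note, the same structure works for outbound traffic by switching to egress/Egress. A hedged sketch (not from the original notes) that would only let the dev pods reach the cluster DNS service (10.96.0.10, as seen in the kube-dns Service output earlier):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: dev
spec:
  podSelector: {}
  egress:
  - to:
    - ipBlock:
        cidr: 10.96.0.10/32
    ports:
    - protocol: UDP
      port: 53
  policyTypes:
  - Egress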

8.2 Advanced scheduling
1. Node selectors: nodeSelector, nodeName
2. Node affinity: nodeAffinity comes in two forms - required (hard) affinity, where the condition must be satisfied before the pod can be scheduled, and preferred (soft) affinity, where a matching node is preferred but the pod is scheduled even without one.
Examples:
8.2.1 通过node标签调度pod
1.使用nodeSelector
[root@master schedule]# vim pod-demon

apiVersion: v1
kind: Pod
metadata:
  name: pod-demon
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    zhouhao.com/vreated-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:
    disktype: ssd
    [root@master schedule]# kubectl apply -f pod-demon
    pod/pod-demon created
    [root@master schedule]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    pod-2 1/1 Running 2 1d
    pod-demon 0/1 Pending 0 1m
    Pending: scheduling failed, because no node currently carries the disktype=ssd label.
    2. Check the node labels:
    [root@master schedule]# kubectl get nodes --show-labels
    NAME STATUS ROLES AGE VERSION LABELS
    master Ready master 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
    minion-1 Ready <none> 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=minion-1
    minion-2 Ready <none> 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=minion-2
    3.我们给minion-1打个标签,然后在看下:
    [root@master schedule]# kubectl label nodes minion-1 disktype=ssd
    node/minion-1 labeled
    [root@master schedule]# kubectl get nodes --show-labels
    NAME STATUS ROLES AGE VERSION LABELS
    master Ready master 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
    minion-1 Ready <none> 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=minion-1
    minion-2 Ready <none> 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=minion-2
    [root@master schedule]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    pod-2 1/1 Running 2 1d
    pod-demon 1/1 Running 0 6m
    [root@master schedule]# kubectl get pods -o wide
    NAME READY STATUS RESTARTS AGE IP NODE
    pod-2 1/1 Running 2 1d 10.244.2.9 minion-2
    pod-demon 1/1 Running 0 7m 10.244.1.12 minion-1
    The pod is now scheduled onto minion-1.
    8.2.2 Scheduling with affinity
    1. Node affinity example (hard/required affinity):
    [root@master schedule]# vim pod-affinity-demon

apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:                   #affinity settings
    nodeAffinity:             #node affinity
      requiredDuringSchedulingIgnoredDuringExecution:   #hard (required) affinity
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone         #label key
            operator: In      #In means the value must be in the list
            values:
            - foo             #value must be foo or bar
            - bar
[root@master schedule]# kubectl get pods    #no node has the zone label yet, so the pod cannot be scheduled
NAME READY STATUS RESTARTS AGE
pod-2 1/1 Running 2 2d
pod-affinity-demo 0/1 Pending 0 42s
pod-demon 1/1 Running 0 2h
[root@master schedule]# kubectl label nodes minion-1 zone=foo    #label minion-1
node/minion-1 labeled
[root@master schedule]# kubectl get pods -o wide    #the pod is now scheduled onto minion-1
NAME READY STATUS RESTARTS AGE IP NODE
pod-2 1/1 Running 2 2d 10.244.2.9 minion-2
pod-affinity-demo 1/1 Running 0 2m 10.244.1.13 minion-1
pod-demon 1/1 Running 0 2h 10.244.1.12 minion-1
2. Node affinity example (soft/preferred affinity):
[root@master schedule]# vim pod-affinity-demon-2
apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity-demo-2
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:   #soft (preferred) affinity
      - preference:
          matchExpressions:
          - key: zone-1       #label key
            operator: In
            values:
            - foo             #label values
            - bar
        weight: 60            #weight, between 1 and 100
[root@master schedule]# kubectl apply -f pod-affinity-demon-2
pod/pod-affinity-demo-2 created
[root@master schedule]# kubectl get pods -o wide    #the pod runs on minion-2 anyway
NAME READY STATUS RESTARTS AGE IP NODE
pod-2 1/1 Running 2 2d 10.244.2.9 minion-2
pod-affinity-demo 1/1 Running 0 36m 10.244.1.13 minion-1
pod-affinity-demo-2 1/1 Running 0 31s 10.244.2.11 minion-2
pod-demon 1/1 Running 0 2h 10.244.1.12 minion-1
[root@master schedule]# kubectl get nodes minion-2 --show-labels    #even though minion-2 has no zone-1 label
NAME STATUS ROLES AGE VERSION LABELS
minion-2 Ready <none> 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=minion-2
3. Pod affinity example:
[root@master schedule]# vim pod-addinity-pod-re.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
4. The pod above is created normally; the second pod below declares affinity to it:
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: db
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["sh","-c","sleep 3600"]
  affinity:                  #affinity settings
    podAffinity:             #pod affinity
      requiredDuringSchedulingIgnoredDuringExecution:    #hard (required) affinity
      - labelSelector:       #match the target pods by label
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp
        topologyKey: kubernetes.io/hostname    #run on a node whose value for this key matches a node running a matched pod
[root@master schedule]# kubectl apply -f pod-addinity-pod-re.yaml
pod/pod-first created
pod/pod-second created
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
pod-first 1/1 Running 0 1m 10.244.1.18 minion-1
pod-second 1/1 Running 0 1m 10.244.1.19 minion-1
          5.反亲和调度:
          [root@master schedule]# vim pod-anti-addinity-pod-re.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: db
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:         #pod anti-affinity
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp
        topologyKey: kubernetes.io/hostname
          [root@master schedule]# kubectl apply -f pod-anti-addinity-pod-re.yaml
          pod/pod-first created
          pod/pod-second created
          [root@master schedule]# kubectl get pods -o wide
          NAME READY STATUS RESTARTS AGE IP NODE
          pod-first 1/1 Running 0 22s 10.244.1.20 minion-1
          pod-second 1/1 Running 0 22s 10.244.2.18 minion-2
          With anti-affinity, the second pod matches the first pod's label and is then kept away from any node whose topologyKey value matches a node running that pod, so the two pods end up on different nodes.

8.2.4 Taint-based scheduling
A taint's effect defines how pods that do not tolerate it are treated:
NoSchedule: only affects scheduling; pods already running on the node are left alone
NoExecute: affects both scheduling and running pods; pods that do not tolerate the taint are evicted
PreferNoSchedule: a soft version of NoSchedule; a pod that does not tolerate the taint may still land here if no other node fits
The general command syntax is recapped below.
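A hedged recap of the taint syntax used in the following steps (the angle-bracket placeholders are illustrative):
kubectl taint node <node-name> <key>=<value>:<effect>     #add a taint
kubectl taint node <node-name> <key>-                      #remove all taints with that key (used in step 7 below)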
1.运行deployment:
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-69b47bc96d-479xv 1/1 Running 0 9s 10.244.2.19 minion-2
myapp-deploy-69b47bc96d-dqg8h 1/1 Running 0 9s 10.244.2.20 minion-2
myapp-deploy-69b47bc96d-w8ksl 1/1 Running 0 9s 10.244.1.24 minion-1
Pods run on both nodes. Now taint the nodes and watch what happens.
2. Taint minion-1:
[root@master schedule]# kubectl taint node minion-1 node-type=prod:NoSchedule
node/minion-1 tainted
3.运行下deployment看效果
[root@master schedule]# kubectl apply -f pod-deployment.yaml
deployment.apps/myapp-deploy created
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-69b47bc96d-72d6p 1/1 Running 0 12s 10.244.2.21 minion-2
myapp-deploy-69b47bc96d-fmbj7 1/1 Running 0 12s 10.244.2.22 minion-2
myapp-deploy-69b47bc96d-v8h99 1/1 Running 0 12s 10.244.2.23 minion-2
All replicas land on minion-2 now.
4. Taint minion-2 with the NoExecute effect; pods that cannot tolerate the taint are evicted:
[root@master schedule]# kubectl taint node minion-2 node-type=dev:NoExecute
node/minion-2 tainted
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-69b47bc96d-2jz87 0/1 Pending 0 10s <none> <none>
myapp-deploy-69b47bc96d-8w9l4 0/1 Pending 0 10s <none> <none>
myapp-deploy-69b47bc96d-x4ccd 0/1 Pending 0 10s <none> <none>
All the pods have been evicted; since both nodes are now tainted, they stay Pending.
5. Add a toleration to the pod template:
[root@master schedule]# vim pod-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
      tolerations:            #taint tolerations
      - key: node-type        #taint key
        operator: Equal       #Equal requires key, value and effect to match
        value: prod           #taint value
        effect: NoSchedule    #must match the effect the taint was created with
    [root@master schedule]# kubectl apply -f pod-deployment.yaml
    deployment.apps/myapp-deploy configured
6. The pods now tolerate the taint on minion-1:
    [root@master schedule]# kubectl get pods -o wide
    NAME READY STATUS RESTARTS AGE IP NODE
    myapp-deploy-6657b7d689-j2bxx 1/1 Running 0 7s 10.244.1.26 minion-1
    myapp-deploy-6657b7d689-v6kl5 1/1 Running 0 5s 10.244.1.27 minion-1
    myapp-deploy-6657b7d689-xpdvr 1/1 Running 0 9s 10.244.1.25 minion-1
    [root@master schedule]# vim pod-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: node-type
        operator: Exists   #Exists only checks that the key is present; value and effect may be left empty, so any value and any effect of this taint are tolerated
        value: ""
        effect: ""
    [root@master schedule]# kubectl apply -f pod-deployment.yaml
    deployment.apps/myapp-deploy configured
    [root@master schedule]# kubectl get pods -o wide
    NAME READY STATUS RESTARTS AGE IP NODE
    myapp-deploy-f9f87c46d-6plfg 1/1 Running 0 2m 10.244.2.24 minion-2
    myapp-deploy-f9f87c46d-97zvs 1/1 Running 0 1m 10.244.1.28 minion-1
    myapp-deploy-f9f87c46d-slzms 1/1 Running 0 1m 10.244.2.25 minion-2
Pods are scheduled onto both nodes again.
7. Remove the taints:
    [root@master ~]# kubectl taint node minion-1 node-type-
    node/minion-1 untainted
    [root@master ~]# kubectl taint node minion-2 node-type-
    node/minion-2 untainted
8. Container resource requests and limits:
    [root@master resources]# vim pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/stress-ng
    command: ["/usr/bin/stress-ng","-c 1","--metrics-brief"]   #run a CPU stress test inside the pod
    resources:
      requests:             #the CPU and memory the pod asks for
        cpu: 200m
        memory: 128Mi
      limits:               #the maximum the pod may use
        cpu: 500m
        memory: 512Mi
    [root@master resources]# kubectl apply -f pod.yaml
    pod/pod-demo created
    [root@master resources]# kubectl exec pod-demo -- top
    Mem: 1166264K used, 699020K free, 12356K shrd, 2104K buff, 684008K cached
    CPU: 62% usr 0% sys 0% nic 37% idle 0% io 0% irq 0% sirq
    Load average: 1.45 0.57 0.38 3/351 11
    PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
    6 1 root R 6892 0% 0 63% {stress-ng-cpu} /usr/bin/stress-ng
    1 0 root S 6244 0% 0 0% /usr/bin/stress-ng -c 1 --metrics-
    7 0 root R 1500 0% 0 0% top
    [root@master resources]# kubectl describe pod pod-demo
    Name: pod-demo
    Namespace: default
    Node: minion-1/192.168.200.201
    Start Time: Mon, 22 Oct 2018 10:57:29 +0800
    。。。。。。。。。。。。。。。。。。。。。
    Volumes:
    default-token-pcqd6:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-pcqd6
    Optional: false
QoS Class: Burstable   (this is the pod's QoS class)
    Node-Selectors: <none>
    Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
    node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
    Type Reason Age From Message

    Normal Scheduled 28s default-scheduler Successfully assigned default/pod-demo to minion-1
    Normal Pulling 27s kubelet, minion-1 pulling image "ikubernetes/stress-ng"
    Normal Pulled 23s kubelet, minion-1 Successfully pulled image "ikubernetes/stress-ng"
    Normal Created 23s kubelet, minion-1 Created container
    Normal Started 23s kubelet, minion-1 Started container
There are three QoS classes:
Guaranteed: every container sets requests equal to limits for CPU and memory; this class has the highest priority and is the last to be killed when resources run short
Burstable: at least one container sets a CPU or memory request (without matching limits)
BestEffort: no container sets any requests or limits; lowest priority
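As an illustration of the Guaranteed class, a pod whose containers set requests equal to limits would look roughly like this (a sketch; the name and values are examples only):
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    resources:
      requests:
        cpu: 200m
        memory: 128Mi
      limits:
        cpu: 200m        #identical to the requests, so the pod is classed as Guaranteed
        memory: 128Mi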
K8s resource monitoring and autoscaling
9.1 Deploying Heapster
1. Download the influxdb yaml file and modify it:
    [root@master resources]# wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
    [root@master resources]# vim influxdb.yaml
#modify the highlighted part into the following
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: influxdb
2. On the nodes, pull the image first and re-tag it:
    [root@minion-2 ~]# docker pull influxdb:1.5.2
    1.5.2: Pulling from library/influxdb
    cc1a78bfd46b: Pull complete
    6861473222a6: Pull complete
    7e0b9c3b5ae0: Pull complete
    ef1cd6af9147: Pull complete
    fe4486e82c7c: Pull complete
    d5f280025ad5: Pull complete
    7b3aaccfccbb: Pull complete
    73454d972cf2: Pull complete
    Digest: sha256:4c782a464f03c9714b9d5456cc6057f4cd4a81bafc75b9b604bc27090c565036
    Status: Downloaded newer image for influxdb:1.5.2
    [root@minion-2 ~]# docker tag influxdb:1.5.2 k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
3. With the image ready, create the resources:
    [root@master resources]# kubectl apply -f influxdb.yaml
    deployment.apps/monitoring-influxdb created
    [root@master resources]# kubectl get svc -n kube-system
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 5d
    monitoring-influxdb ClusterIP 10.109.26.130 <none> 8086/TCP 31m
4. Download the RBAC yaml file:
    [root@master resources]# wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
5. Create the RBAC objects:
    [root@master resources]# kubectl apply -f heapster-rbac.yaml
6. Download the heapster yaml file and create it:
    [root@master resources]# wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
    [root@master resources]# vim heapster.yaml
#modify the highlighted part below (optional)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: heapster
。。。。。。。。。。。。。。。。。。。。
spec:
  ports:
  - port: 80
    targetPort: 8082
  type: NodePort
7. On the nodes, pull the image and tag it:
    [root@minion-2 ~]# docker pull fishchen/heapster-amd64:v1.5.4
    v1.5.4: Pulling from fishchen/heapster-amd64
    c0b4198b9e96: Pull complete
    b0c38d9b6f16: Pull complete
    Digest: sha256:dccaabb0c20cf05c29baefa1e9bf0358b083ccc0fab492b9b3b47fb7e4db5472
    Status: Downloaded newer image for fishchen/heapster-amd64:v1.5.4
    [root@minion-2 ~]# docker tag fishchen/heapster-amd64:v1.5.4 k8s.gcr.io/heapster-amd64:v1.5.4
8. Create it:
    [root@master resources]# kubectl apply -f heapster.yaml
    serviceaccount/heapster created
    deployment.apps/heapster created
    service/heapster created
    [root@master resources]# kubectl get svc -n kube-system
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    heapster NodePort 10.106.127.123 <none> 80:30600/TCP 21s
    kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 5d
    monitoring-influxdb ClusterIP 10.109.26.130 <none> 8086/TCP 47m
    [root@master resources]# kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    canal-8pxjq 3/3 Running 24 4d
    canal-bfl74 3/3 Running 20 4d
    canal-rtw55 3/3 Running 20 4d
    coredns-78fcdf6894-kqjqt 1/1 Running 12 5d
    coredns-78fcdf6894-w2c7j 1/1 Running 6 4d
    etcd-master 1/1 Running 6 5d
    heapster-84c9bc48c4-tc46l 1/1 Running 0 18s
    kube-apiserver-master 1/1 Running 8 5d
    kube-controller-manager-master 1/1 Running 8 5d
    kube-flannel-ds-amd64-5wwdm 1/1 Running 9 5d
    kube-flannel-ds-amd64-rhhx4 1/1 Running 11 5d
    kube-flannel-ds-amd64-s9jlj 1/1 Running 0 4h
    kube-proxy-j8lkl 1/1 Running 6 5d
    kube-proxy-wf2ss 1/1 Running 6 5d
    kube-proxy-xxdr4 1/1 Running 5 5d
    kube-scheduler-master 1/1 Running 8 5d
    monitoring-influxdb-848b9b66f6-v67n6 1/1 Running 0 53m
9. Access test:

9.2 Deploying Grafana for dashboards
1. Download the grafana yaml file:
[root@master resources]# wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
2. Edit the config file:
[root@master resources]# vim grafana.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
。。。。。。。。。。。。。。。。。
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
3. As before, pull the image on a node and re-tag it to match the name used in the config file:
    [root@minion-1 ~]# docker pull grafana/grafana:5.0.4
    5.0.4: Pulling from grafana/grafana
    f65523718fc5: Pull complete
    a3ed95caeb02: Pull complete
    4838ae75cd3d: Pull complete
    eec7aa0e332c: Pull complete
    Digest: sha256:9c66c7c01a6bf56023126a0b6f933f4966e8ee795c5f76fa2ad81b3c6dadc1c9
    Status: Downloaded newer image for grafana/grafana:5.0.4
    [root@minion-1 ~]# docker tag grafana/grafana:5.0.4 k8s.gcr.io/heapster-grafana-amd64:v5.0.4
4. Create grafana:
    [root@master resources]# kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    canal-8pxjq 3/3 Running 24 4d
    canal-bfl74 3/3 Running 20 4d
    canal-rtw55 3/3 Running 20 4d
    coredns-78fcdf6894-kqjqt 1/1 Running 12 5d
    coredns-78fcdf6894-w2c7j 1/1 Running 6 5d
    etcd-master 1/1 Running 6 5d
    heapster-84c9bc48c4-tc46l 1/1 Running 0 36m
    kube-apiserver-master 1/1 Running 8 5d
    kube-controller-manager-master 1/1 Running 8 5d
    kube-flannel-ds-amd64-5wwdm 1/1 Running 9 5d
    kube-flannel-ds-amd64-rhhx4 1/1 Running 11 5d
    kube-flannel-ds-amd64-s9jlj 1/1 Running 0 5h
    kube-proxy-j8lkl 1/1 Running 6 5d
    kube-proxy-wf2ss 1/1 Running 6 5d
    kube-proxy-xxdr4 1/1 Running 5 5d
    kube-scheduler-master 1/1 Running 8 5d
    monitoring-grafana-555545f477-vq2wd 1/1 Running 0 24m
    monitoring-influxdb-848b9b66f6-v67n6 1/1 Running 0 1h
    [root@master resources]# kubectl get svc -n kube-system
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    heapster NodePort 10.106.127.123 <none> 80:30600/TCP 42m
    kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 5d
    monitoring-grafana NodePort 10.110.48.235 <none> 80:31536/TCP 24m
    monitoring-influxdb ClusterIP 10.109.26.130 <none> 8086/TCP 1h
5. Access test:

9.3 Deploying metrics-server
1. Download all the files locally; source: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server

[root@master resources]# git clone https://github.com/kubernetes-incubator/metrics-server.git
[root@master 1.8+]# cd /root/resources/metrics-server/deploy/1.8+
2. On the nodes, manually pull the image referenced by the yaml files in this directory and tag it with the name used there:
[root@minion-2 ~]# docker pull rancher/metrics-server-amd64:v0.3.1
v0.3.1: Pulling from rancher/metrics-server-amd64
8c5a7da1afbc: Pull complete
e2b7e44cc2bf: Pull complete
Digest: sha256:78938f933822856f443e6827fe5b37d6cc2f74ae888ac8b33d06fdbe5f8c658b
Status: Downloaded newer image for rancher/metrics-server-amd64:v0.3.1
[root@minion-2 ~]# docker tag rancher/metrics-server-amd64:v0.3.1 k8s.gcr.io/metrics-server-amd64:v0.3.1
[root@master 1.8+]# kubectl apply -f .
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader configured
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator configured
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io configured
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@master 1.8+]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
canal-8pxjq 3/3 Running 27 5d
canal-bfl74 3/3 Running 23 5d
canal-rtw55 3/3 Running 24 5d
coredns-78fcdf6894-kqjqt 1/1 Running 13 5d
coredns-78fcdf6894-w2c7j 1/1 Running 7 5d
etcd-master 1/1 Running 7 5d
kube-apiserver-master 1/1 Running 13 5d
kube-controller-manager-master 1/1 Running 12 5d
kube-flannel-ds-amd64-5wwdm 1/1 Running 10 5d
kube-flannel-ds-amd64-rhhx4 1/1 Running 13 5d
kube-flannel-ds-amd64-s9jlj 1/1 Running 1 23h
kube-proxy-j8lkl 1/1 Running 7 5d
kube-proxy-wf2ss 1/1 Running 7 5d
kube-proxy-xxdr4 1/1 Running 6 5d
kube-scheduler-master 1/1 Running 11 5d
metrics-server-5d78f796fd-wn79b 1/1 Running 0 23s
[root@master 1.8+]# kubectl top nodes
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
kubectl top still cannot find the metrics at this point.
Fix:
[root@master 1.8+]# vim metrics-server-deployment.yaml
#add the highlighted lines below
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        imagePullPolicy: IfNotPresent
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
Then re-apply it:
      [root@master 1.8+]# kubectl apply -f metrics-server-deployment.yaml
      serviceaccount/metrics-server unchanged
      deployment.extensions/metrics-server configured
      [root@master 1.8+]# kubectl top node
      NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
      master 194m 19% 1116Mi 64%
      minion-1 78m 7% 432Mi 25%
      minion-2 66m 6% 443Mi 25%
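To double-check that the metrics API itself is being served, you can also query it through the apiserver; for example (optional sanity checks):
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl top pods --all-namespaces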
9.4 Deploying Prometheus
1. Clone the prometheus manifests and apply them:
      [root@master ~]# git clone https://github.com/iKubernetes/k8s-prom.git
      [root@master ~]# cd k8s-prom/
      [root@master k8s-prom]# kubectl apply -f namespace.yaml
      namespace/prom created
      [root@master k8s-prom]# cd node_exporter/
      [root@master node_exporter]# ls
      node-exporter-ds.yaml node-exporter-svc.yaml
      [root@master node_exporter]# vim node-exporter-ds.yaml
      [root@master node_exporter]# kubectl apply -f .
      daemonset.apps/prometheus-node-exporter created
      service/prometheus-node-exporter created
      [root@master node_exporter]# kubectl get pods -n prom
      NAME READY STATUS RESTARTS AGE
      prometheus-node-exporter-5llld 1/1 Running 0 1m
      prometheus-node-exporter-lw7xv 1/1 Running 0 1m
      prometheus-node-exporter-qsbrs 1/1 Running 0 1m
      [root@master node_exporter]# cd ../prometheus/
      [root@master prometheus]# ls
      prometheus-cfg.yaml prometheus-deploy.yaml prometheus-rbac.yaml prometheus-svc.yaml
      [root@master prometheus]# kubectl apply -f .
      configmap/prometheus-config created
      deployment.apps/prometheus-server created
      clusterrole.rbac.authorization.k8s.io/prometheus created
      serviceaccount/prometheus created
      clusterrolebinding.rbac.authorization.k8s.io/prometheus created
      service/prometheus created
      [root@master prometheus]# vim prometheus-deploy.yaml
#the resource limits shown below were removed, otherwise the pod cannot start for lack of memory
        ports:
        - containerPort: 9090
          protocol: TCP
        resources:
          limits:
            memory: 2Gi
      [root@master prometheus]# kubectl apply -f prometheus-deploy.yaml
      deployment.apps/prometheus-server created
      [root@master prometheus]# kubectl get pods -n prom
      NAME READY STATUS RESTARTS AGE
      prometheus-node-exporter-5llld 1/1 Running 0 11m
      prometheus-node-exporter-lw7xv 1/1 Running 0 11m
      prometheus-node-exporter-qsbrs 1/1 Running 0 11m
      prometheus-server-7c8554cf-gkrs9 1/1 Running 0 2m

[root@master prometheus]# kubectl get all -n prom
NAME READY STATUS RESTARTS AGE
pod/prometheus-node-exporter-5llld 1/1 Running 0 13m
pod/prometheus-node-exporter-lw7xv 1/1 Running 0 13m
pod/prometheus-node-exporter-qsbrs 1/1 Running 0 13m
pod/prometheus-server-7c8554cf-gkrs9 1/1 Running 0 3m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus NodePort 10.98.60.233 <none> 9090:30090/TCP 10m
service/prometheus-node-exporter ClusterIP None <none> 9100/TCP 13m

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prometheus-node-exporter 3 3 3 3 3 <none> 13m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/prometheus-server 1 1 1 1 3m

NAME DESIRED CURRENT READY AGE
replicaset.apps/prometheus-server-7c8554cf 1 1 1 3m
2. Access port 30090 to test:

[root@master prometheus]# cd ../
[root@master k8s-prom]# cd kube-state-metrics/
[root@master kube-state-metrics]# ls
kube-state-metrics-deploy.yaml kube-state-metrics-svc.yaml
kube-state-metrics-rbac.yaml
4. Pull the image on the nodes:
[root@minion-1 ~]# ./pull-google.sh gcr.io/google_containers/kube-state-metrics-amd64:v1.3.1
[root@master kube-state-metrics]# kubectl apply -f .
[root@master kube-state-metrics]# kubectl get all -n prom
NAME READY STATUS RESTARTS AGE
pod/kube-state-metrics-58dffdf67d-lvhtl 1/1 Running 0 1m
pod/prometheus-node-exporter-5llld 1/1 Running 0 45m
pod/prometheus-node-exporter-lw7xv 1/1 Running 0 45m
pod/prometheus-node-exporter-qsbrs 1/1 Running 0 45m
pod/prometheus-server-7c8554cf-gkrs9 1/1 Running 0 35m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-state-metrics ClusterIP 10.105.251.81 <none> 8080/TCP 8m
service/prometheus NodePort 10.98.60.233 <none> 9090:30090/TCP 42m
service/prometheus-node-exporter ClusterIP None <none> 9100/TCP 45m

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prometheus-node-exporter 3 3 3 3 3 <none> 45m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-state-metrics 1 1 1 1 1m
deployment.apps/prometheus-server 1 1 1 1 35m

NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-state-metrics-58dffdf67d 1 1 1 1m
replicaset.apps/prometheus-server-7c8554cf 1 1 1 35m
[root@master kube-state-metrics]# cd ../k8s-prometheus-adapter/
[root@master k8s-prometheus-adapter]# ls
custom-metrics-apiserver-auth-delegator-cluster-role-binding.yaml
custom-metrics-apiserver-auth-reader-role-binding.yaml
custom-metrics-apiserver-deployment.yaml
custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml
custom-metrics-apiserver-service-account.yaml
custom-metrics-apiserver-service.yaml
custom-metrics-apiservice.yaml
custom-metrics-cluster-role.yaml
custom-metrics-resource-reader-cluster-role.yaml
hpa-custom-metrics-cluster-role-binding.yaml
5. A serving certificate is needed for the custom-metrics API server:
[root@master ~]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077;openssl genrsa -out serving.key 2048)
Generating RSA private key, 2048 bit long modulus
......................................................................+++
...................+++
e is 65537 (0x10001)
[root@master pki]# openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"
[root@master pki]# openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
Signature ok
subject=/CN=serving
Getting CA Private Key
[root@master pki]# ls
apiserver.crt ca.crt front-proxy-client.key
apiserver-etcd-client.crt ca.key sa.key
apiserver-etcd-client.key etcd sa.pub
apiserver.key front-proxy-ca.crt serving.crt
apiserver-kubelet-client.crt front-proxy-ca.key serving.csr
apiserver-kubelet-client.key front-proxy-client.crt serving.key
[root@master pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key -n prom
secret/cm-adapter-serving-certs created
[root@master pki]# kubectl get secrets -n prom
NAME TYPE DATA AGE
cm-adapter-serving-certs Opaque 2 26s
default-token-svkpd kubernetes.io/service-account-token 3 1h
kube-state-metrics-token-47zdn kubernetes.io/service-account-token 3 25m
prometheus-token-brldq kubernetes.io/service-account-token 3 58m
[root@master k8s-prometheus-adapter]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created
deployment.apps/custom-metrics-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics-resource-reader created
serviceaccount/custom-metrics-apiserver created
service/custom-metrics-apiserver created
apiservice.apiregistration.k8s.io/v1beta1.custom.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created
clusterrole.rbac.authorization.k8s.io/custom-metrics-resource-reader created
clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created
[root@master k8s-prometheus-adapter]# mv custom-metrics-apiserver-deployment.yaml{,.bak}
6. Download the newer deployment manifest:
[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-apiserver-deployment.yaml
7. Edit the manifest:
[root@master k8s-prometheus-adapter]# vim custom-metrics-apiserver-deployment.yaml
#change the highlighted namespace to the one you defined
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: custom-metrics-apiserver
  name: custom-metrics-apiserver
  namespace: prom
spec:
8. Download the configmap:
[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-config-map.yaml
9. Change the namespace inside it as well:
[root@master k8s-prometheus-adapter]# vim custom-metrics-config-map.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: adapter-config
  namespace: prom
[root@master k8s-prometheus-adapter]# kubectl apply -f custom-metrics-config-map.yaml
configmap/adapter-config created
[root@master k8s-prometheus-adapter]# kubectl apply -f custom-metrics-apiserver-deployment.yaml
deployment.apps/custom-metrics-apiserver created
[root@master k8s-prometheus-adapter]# kubectl get pod -n prom
NAME READY STATUS RESTARTS AGE
custom-metrics-apiserver-65f545496-srtdr 1/1 Running 0 16s
kube-state-metrics-58dffdf67d-lvhtl 1/1 Running 0 1h
prometheus-node-exporter-5llld 1/1 Running 0 1h
prometheus-node-exporter-lw7xv 1/1 Running 0 1h
prometheus-node-exporter-qsbrs 1/1 Running 0 1h
prometheus-server-7c8554cf-gkrs9 1/1 Running 0 1h
[root@master k8s-prometheus-adapter]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
crd.projectcalico.org/v1
custom.metrics.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
metrics.k8s.io/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
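Since custom.metrics.k8s.io/v1beta1 now appears in the list, you can optionally confirm that the adapter really serves data by querying the aggregated API directly, for example:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | head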
10. Configure grafana; in its config file change the namespace to prom:
[root@master resources]# vim grafana.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: prom
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        env:
        #- name: INFLUXDB_HOST          #comment out these two lines; they are only needed when grafana reads from influxdb
        #  value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the grafana
        # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: prom
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
      [root@master resources]# kubectl apply -f grafana.yaml
      deployment.apps/monitoring-grafana created
      service/monitoring-grafana created
      [root@master resources]# kubectl get pods -n prom -w
      NAME READY STATUS RESTARTS AGE
      custom-metrics-apiserver-65f545496-srtdr 1/1 Running 0 21m
      kube-state-metrics-58dffdf67d-lvhtl 1/1 Running 0 1h
      monitoring-grafana-ffb4d59bd-sl72s 1/1 Running 0 2m
      prometheus-node-exporter-5llld 1/1 Running 0 2h
      prometheus-node-exporter-lw7xv 1/1 Running 0 2h
      prometheus-node-exporter-qsbrs 1/1 Running 0 2h
      prometheus-server-7c8554cf-gkrs9 1/1 Running 0 1h
      [root@master resources]# kubectl get svc -n prom
      NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
      custom-metrics-apiserver ClusterIP 10.96.75.182 <none> 443/TCP 1h
      kube-state-metrics ClusterIP 10.105.251.81 <none> 8080/TCP 1h
      monitoring-grafana NodePort 10.107.187.39 <none> 80:31504/TCP 3m
      prometheus NodePort 10.98.60.233 <none> 9090:30090/TCP 2h
      prometheus-node-exporter ClusterIP None <none> 9100/TCP 2h
11. Access grafana:

9.5 K8s autoscaling
1. Create a deployment and service to scale:
[root@master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80
service/myapp created
deployment.apps/myapp created
2. Configure an HPA from the command line:
[root@master ~]# kubectl autoscale deployment myapp --min=1 --max=8 --cpu-percent=50
kubectl autoscale: the command itself
deployment: the resource type, here a Deployment
myapp: the resource name
--min: minimum number of replicas
--max: maximum number of replicas
--cpu-percent: target CPU utilization percentage; 50 means 50%
[root@master ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp Deployment/myapp <unknown>/50% 1 8 1 3m
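The same autoscaler can also be written declaratively; a minimal autoscaling/v1 manifest equivalent to the command above would look roughly like this (a sketch):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 8
  targetCPUUtilizationPercentage: 50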
3. Load test:
[root@master ~]# kubectl patch svc myapp -p '{"spec":{"type":"NodePort"}}'
service/myapp patched
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1h
myapp NodePort 10.97.180.218 <none> 80:30417/TCP 12m
[root@minion-1 ~]# yum install -y httpd-tools

4. Run an ab load test from minion-1:
[root@minion-1 ~]# ab -c 1000 -n 5000000 http://192.168.200.201:30417/index.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.200.201 (be patient)
apr_socket_recv: Connection reset by peer (104)
Total of 6202 requests completed
5. Watch how the highlighted HPA metrics change:
[root@master ~]# kubectl describe hpa
Name: myapp
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 24 Oct 2018 16:33:48 +0800
Reference: Deployment/myapp
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 102% (51m) / 50%
Min replicas: 1
Max replicas: 8
Deployment pods: 1 current / 3 desired
Conditions:

6. Two extra pods have been scaled out (the target replica count is computed from the observed CPU load):
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6985749785-pz8vg 1/1 Running 0 1h
myapp-6985749785-rf8vb 1/1 Running 0 24s
myapp-6985749785-zx2fv 1/1 Running 0 24s
7. After the peak has passed, it scales back down automatically (the scale-down delay can be tuned; there is a delay by default):
[root@master ~]# kubectl describe hpa
Name: myapp
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 24 Oct 2018 16:33:48 +0800
Reference: Deployment/myapp
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (0) / 50%
Min replicas: 1
Max replicas: 8
Deployment pods: 3 current / 3 desired
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6985749785-pz8vg 1/1 Running 0 1h
myapp-6985749785-rf8vb 1/1 Running 0 4m
myapp-6985749785-zx2fv 1/1 Running 0 4m
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6985749785-pz8vg 1/1 Running 0 1h
By default kubectl autoscale creates an autoscaling/v1 HPA.
8. Create a v2 HPA:
[root@master ~]# vim hpa-v2-demo.yaml

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 50Mi   #v2 also supports memory metrics
[root@master ~]# kubectl apply -f hpa-v2-demo.yaml
    [root@master ~]# kubectl get hpa
    NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
    myapp-hpa-v2 Deployment/myapp 3182592/50Mi, 0%/50% 1 10 1 1m
9. Load test again:
    [root@minion-1 ~]# ab -c 1000 -n 500000 http://192.168.200.201:30417/index.html
    This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.200.201 (be patient)
apr_socket_recv: Connection reset by peer (104)
Total of 781 requests completed
[root@minion-1 ~]# ab -c 1000 -n 500000 http://192.168.200.201:30417/index.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.200.201 (be patient)
apr_socket_recv: Connection reset by peer (104)
Total of 10512 requests completed
[root@master ~]# kubectl describe hpa
Name: myapp-hpa-v2
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"myapp-hpa-v2","namespace":"default"},"spec":{...
CreationTimestamp: Wed, 24 Oct 2018 18:20:53 +0800
Reference: Deployment/myapp
Metrics: ( current / target )
resource memory on pods: 3395584 / 50Mi
resource cpu on pods (as a percentage of request): 37% (18m) / 50%
Min replicas: 1
Max replicas: 10
Deployment pods: 2 current / 2 desired
Conditions:
Type Status Reason Message


AbleToScale False BackoffBoth the time since the previous scale is still within both the downscale and upscale forbidden windows
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message


Normal SuccessfulRescale 2m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-6985749785-pz8vg 1/1 Running 0 2h
myapp-6985749785-qdfcv 1/1 Running 0 2m
Getting started with Helm
10.1 Deploying Tiller
Download the helm package: https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
Release page: https://github.com/helm/helm/releases/tag/v2.11.0
1. Upload the archive to the server, extract it and install the binary:
[root@master ~]# tar xf helm-v2.11.0-linux-amd64.tar.gz
[root@master ~]# cd linux-amd64/
[root@master linux-amd64]# ls
helm LICENSE README.md tiller
[root@master linux-amd64]# mv helm /usr/bin/
2. Deploy Tiller:
[root@master linux-amd64]# cd ../
[root@master ~]# mkdir helm
[root@master ~]# cd helm
[root@master helm]# vim tiller-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
    [root@master helm]# kubectl apply -f tiller-rbac.yaml
    serviceaccount/tiller created
    clusterrolebinding.rbac.authorization.k8s.io/tiller created
3. Initialize Tiller. The -i option sets the Tiller image; if the version changes, just substitute a matching image:
    [root@master helm]# helm init --service-account tiller --upgrade -i sapcc/tiller:v2.11.0 --skip-refresh
    [root@master helm]# kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    coredns-78fcdf6894-qvcg7 1/1 Running 1 1d
    coredns-78fcdf6894-z6hvx 1/1 Running 1 1d
    etcd-master 1/1 Running 1 1d
    kube-apiserver-master 1/1 Running 1 1d
    kube-controller-manager-master 1/1 Running 1 1d
    kube-flannel-ds-amd64-cfbfp 1/1 Running 2 1d
    kube-flannel-ds-amd64-j2qlk 1/1 Running 2 1d
    kube-flannel-ds-amd64-rwgz5 1/1 Running 2 1d
    kube-proxy-b5jnt 1/1 Running 1 1d
    kube-proxy-shjnd 1/1 Running 1 1d
    kube-proxy-sp64v 1/1 Running 1 1d
    kube-scheduler-master 1/1 Running 1 1d
    metrics-server-64d46554f7-grcv6 1/1 Running 1 1d
    tiller-deploy-d89b4dd7f-jng7d 1/1 Running 0 4m
    [root@master helm]# helm version
    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.5.1", GitCommit:"7cf31e8d9a026287041bae077b09165be247ae66", GitTreeState:"clean"}
The list of publicly available charts: https://hub.kubeapps.com/
4. List the configured repositories:
    [root@master helm]# helm repo list
    NAME URL
    stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
    local http://127.0.0.1:8879/charts
5. Search for a chart:
    [root@master helm]# helm search jenkins
    NAME CHART VERSION APP VERSION DESCRIPTION
    stable/jenkins 0.13.5 2.73 Open source continuous integration server. It supports mu...
6. Inspect a chart's details:
    [root@master helm]# helm inspect stable/redis
    appVersion: 4.0.8
    description: Open source, advanced key-value store. It is often referred to as a data
    structure server since keys can contain strings, hashes, lists, sets and sorted
    sets.
    engine: gotpl
    home: http://redis.io/
    icon: https://bitnami.com/assets/stacks/redis/img/redis-stack-220x234.png
    keywords:
    • redis
10.2 Basic Helm management and operations
1. Common Helm commands:
Release management:
install
delete
upgrade/rollback
list
history
status
Chart management:
create
fetch
inspect
package
verify
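As a quick reference, typical Helm 2 usage of these subcommands looks like this (the release name myweb is only an example):
helm install --name myweb local/myapp    #install a release from a chart
helm list                                #list deployed releases
helm status myweb                        #show the current state of a release
helm upgrade myweb local/myapp           #upgrade the release with a new chart or values
helm history myweb                       #show the revision history
helm rollback myweb 1                    #roll back to revision 1
helm delete --purge myweb                #delete the release and its history
helm create mychart                      #generate a chart skeleton
helm fetch stable/redis                  #download a chart archive
helm inspect stable/redis                #show chart metadata and default values
helm package mychart/                    #package a chart directory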

2. Create a myapp chart:
[root@master helm]# helm create myapp
Creating myapp
3. A chart skeleton with template files is generated automatically:
[root@master helm]# tree myapp/
myapp/
├── charts
├── Chart.yaml
├── templates
│ ├── deployment.yaml
│ ├── _helpers.tpl
│ ├── ingress.yaml
│ ├── NOTES.txt
│ └── service.yaml
└── values.yaml

4. Package the myapp chart:
[root@master helm]# helm package myapp/
Successfully packaged chart and saved it to: /root/helm/myapp-0.0.1.tgz
[root@master helm]# ls
myapp myapp-0.0.1.tgz tiller-rbac.yaml
5. Start the local helm repository server:
[root@master helm]# helm serve
Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
[root@master ~]# helm search myapp   #if the chart shows up, the local repo is serving; you can also check port 8879
NAME CHART VERSION APP VERSION DESCRIPTION
local/myapp 0.0.1 1.0 A Helm chart for Kubernetes
6. Install myapp:
[root@master helm]# helm install --name myapp-1 local/myapp
NAME: myapp-1
LAST DEPLOYED: Mon Oct 29 15:42:30 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta2/Deployment
NAME AGE
myapp-1 0s

==> v1/Pod(related)

NAME READY STATUS RESTARTS AGE
myapp-1-847d9b9676-6lzzl 0/1 Pending 0 0s

==> v1/Service

NAME AGE
myapp-1 0s

NOTES:

  1. Get the application URL by running these commands:
    export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=myapp,app.kubernetes.io/instance=myapp-1" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:8080 to use your application"
    kubectl port-forward $POD_NAME 8080:80

[root@master helm]# kubectl get pods   #the chart's default configuration is probably wrong
NAME READY STATUS RESTARTS AGE
myapp-1-847d9b9676-6lzzl 0/1 InvalidImageName 0 39s
myapp-6985749785-pz8vg 1/1 Running 3 4d
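The InvalidImageName status suggests the image settings in the generated values.yaml do not point at a valid image. A sketch of one possible fix, assuming the default layout produced by helm create (image.repository / image.tag fields), is to set a real image and upgrade the release:
# myapp/values.yaml (relevant part only)
image:
  repository: ikubernetes/myapp
  tag: v1
  pullPolicy: IfNotPresent
# then: helm upgrade myapp-1 ./myapp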
7. Delete the release:
[root@master helm]# helm delete --purge myapp-1
release "myapp-1" deleted

8. Add a repository; charts in stable are stable releases:
[root@master helm]# helm repo add stable https://kubernetes-charts.storage.googleapis.com
[root@master helm]# helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com/
9. Add another repository; charts in incubator are not stable releases, but they are fine for testing:
[root@master helm]# helm repo add incubator http://kubernetes-charts-incubator.storage.googleapis.com
"incubator" has been added to your repositories
[root@master helm]# helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com/
local http://127.0.0.1:8879/charts
repo_name1 https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts-incubator/
incubator http://kubernetes-charts-incubator.storage.googleapis.com
Deploying EFK log collection:
[root@master ~]# helm fetch incubator/elasticsearch
[root@master ~]# ls
a k8s-prom
anaconda-ks.cfg k8s.sh
a.tar.gz kube-apiserver-amd64-1.11.0.tar.gz
coredns-1.1.3.tar.gz kube-controller-manager-amd64-1.11.0.tar.gz
elasticsearch-1.10.2.tgz kube-flannel.yml
[root@master helm]# tar xf elasticsearch-1.10.2.tgz
[root@master helm]# cd elasticsearch
Edit the values file:
[root@master elasticsearch]# vim values.yaml
Set the replica counts to 1 (there are not enough resources) and disable persistent storage:
er to form a cluster.
MINIMUM_MASTER_NODES: "1"

client:
  name: client
  replicas: 1

master:
  name: master
  exposeHttp: false
  replicas: 1
  heapSize: "512m"
  persistence:
    enabled: false
    accessMode: ReadWriteOnce
    name: data
    size: "4Gi"

data:
  name: data
  exposeHttp: false
  replicas: 1
  heapSize: "1536m"
  persistence:
    enabled: false
Install elasticsearch:
[root@master elasticsearch]# kubectl create namespace efk
[root@master elasticsearch]# helm install --name els1 --namespace=efk -f values.yaml incubator/elasticsearch
NAME: els1
LAST DEPLOYED: Tue Oct 30 10:48:51 2018
NAMESPACE: efk
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME AGE
els1-elasticsearch-client 1s

==> v1beta1/StatefulSet
els1-elasticsearch-data 1s
els1-elasticsearch-master 1s

==> v1/Pod(related)

NAME READY STATUS RESTARTS AGE
els1-elasticsearch-client-7667b8455f-cmbpd 0/1 Init:0/1 0 1s
els1-elasticsearch-data-0 0/1 Init:0/2 0 1s
els1-elasticsearch-master-0 0/1 Init:0/2 0 0s

==> v1/ConfigMap

NAME AGE
els1-elasticsearch 1s

==> v1/Service
els1-elasticsearch-client 1s
els1-elasticsearch-discovery 1s

NOTES:
The elasticsearch cluster has been installed.


Please note that this chart has been deprecated and moved to stable.
Going forward please use the stable version of this chart.


Elasticsearch can be accessed:

  • Within your cluster, at the following DNS name at port 9200:

    els1-elasticsearch-client.efk.svc

  • From outside the cluster, run these commands in the same shell:

    export POD_NAME=$(kubectl get pods --namespace efk -l "app=elasticsearch,component=client,release=els1" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
    kubectl port-forward --namespace efk $POD_NAME 9200:9200
The same status information can be shown again later with helm status:
[root@master elasticsearch]# helm status els1
The log-collection stack was not fully deployed here because of limited machine resources.
Deploying Traefik
Traefik
Traefik is a lightweight HTTP reverse proxy and load balancer written in Go. Because it can discover and refresh backend nodes automatically, it is supported by most container platforms, such as Kubernetes, Swarm and Rancher. Traefik talks to the Kubernetes API in real time, so it reacts to Service endpoint changes very quickly; overall it runs very well on Kubernetes.
Traefik also offers many other features:
• Fast
• No extra dependencies: a single executable built with Go
• A minimal official Docker image
• Many backends: Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS and more
• REST API
• Hot reload of configuration without restarting the process
• Circuit breakers
• Round-robin load balancing
• A clean UI
• Websocket, HTTP/2 and GRPC support
• Automatic HTTPS certificate renewal
• High-availability cluster mode
Next we use Traefik instead of Nginx + Ingress Controller to implement reverse proxying and service exposure.
What is the difference between the two? In short: when nginx is used as the front-end load balancer, the Ingress Controller keeps talking to the Kubernetes API, watches for changes to the backend Services and Pods, regenerates the nginx configuration and reloads it to achieve service discovery. Traefik was designed to talk to the Kubernetes API itself, so it senses Service and Pod changes directly and hot-reloads its own configuration. The end result is broadly the same, but Traefik is faster, simpler and supports more features, which makes reverse proxying and load balancing more direct and efficient.
11.1 Deploying the Traefik load balancer
1. Download the yaml files:
    [root@master ~]# wget https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml
    [root@master ~]# wget https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml
    [root@master ~]# wget https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml
2. Create the RBAC objects:
    [root@master ~]# kubectl apply -f ./traefik-rbac.yaml
    clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
    clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
3. Create the traefik DaemonSet:
[root@master ~]# vim ./traefik-ds.yaml
#the file is missing one line: add type: NodePort to the Service section
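For reference, after the edit the Service section at the end of traefik-ds.yaml should look roughly like this (based on the upstream example; only the last line is added):
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
  type: NodePort     #the added line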
    [root@master ~]# kubectl apply -f ./traefik-ds.yaml
    serviceaccount/traefik-ingress-controller unchanged
    daemonset.extensions/traefik-ingress-controller created
    service/traefik-ingress-service unchanged
4. Check that the traefik pods are running normally and which nodes they are on:
    [root@master ~]# kubectl --namespace=kube-system get pods -o wide
    NAME READY STATUS RESTARTS AGE IP NODE
    coredns-78fcdf6894-9fs99 1/1 Running 0 24m 10.244.0.2 master
    coredns-78fcdf6894-vckpp 1/1 Running 0 24m 10.244.0.3 master
    etcd-master 1/1 Running 0 24m 192.168.200.200 master
    kube-apiserver-master 1/1 Running 0 24m 192.168.200.200 master
    kube-controller-manager-master 1/1 Running 0 24m 192.168.200.200 master
    kube-flannel-ds-amd64-2xtqz 1/1 Running 0 21m 192.168.200.200 master
    kube-flannel-ds-amd64-fbmvf 1/1 Running 0 20m 192.168.200.201 minion-1
    kube-flannel-ds-amd64-w76wq 1/1 Running 0 20m 192.168.200.202 minion-2
    kube-proxy-b8r7m 1/1 Running 0 20m 192.168.200.202 minion-2
    kube-proxy-t2528 1/1 Running 0 24m 192.168.200.200 master
    kube-proxy-zkgdl 1/1 Running 0 20m 192.168.200.201 minion-1
    kube-scheduler-master 1/1 Running 0 24m 192.168.200.200 master
    traefik-ingress-controller-5hxnj 1/1 Running 0 3m 10.244.2.3 minion-2
    traefik-ingress-controller-6f6d87769d-vn6n4 1/1 Running 0 4m 10.244.2.2 minion-2
    traefik-ingress-controller-kv6x7 1/1 Running 0 3m 10.244.1.2 minion-1
5. Create the traefik web UI:
    [root@master ~]# wget https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
    [root@master ~]# kubectl apply -f ./ui.yaml
    service/traefik-web-ui created
    ingress.extensions/traefik-web-ui created
6. Test it by creating nginx pods:
    [root@master ~]# vim nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  selector:
    run: ngx-pod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ngx-pod
spec:
  replicas: 4
  template:
    metadata:
      labels:
        run: ngx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ngx-ing
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: minion-1        #replace with a domain name that can be resolved
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort: 80
7. Access test:

11.2 Configuring HTTPS access
1. Generate a certificate:
[root@master ~]# mkdir /opt/k8s/ssl/ -p
[root@master ~]# mkdir /opt/k8s/conf/ -p
#the steps above must also be performed on the node(s)
[root@master ~]# cd /opt/k8s/conf/
[root@master ssl]# openssl genrsa -des3 -out server.key 2048
[root@master ssl]# openssl req -new -key server.key -out server.csr
[root@master ssl]# cp server.key server.key.org
[root@master ssl]# openssl rsa -in server.key.org -out server.key
[root@master ssl]# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
[root@master ssl]# ls
server.crt server.csr server.key server.key.org
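Optionally, sanity-check the self-signed certificate before copying it around:
[root@master ssl]# openssl x509 -in server.crt -noout -subject -dates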
2. Copy the generated certificate files to the node:
[root@master ssl]# scp [email protected]:/opt/k8s/ssl/
3. Create the traefik.toml file:
[root@master ssl]# cd ../conf/
[root@master conf]# vim traefik.toml
defaultEntryPoints = ["http","https"]
[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/opt/k8s/ssl/server.crt"   #path to the certificate
      keyFile = "/opt/k8s/ssl/server.key"    #path to the key
4. Copy the config file to the node:
[root@master conf]# scp /opt/k8s/conf/traefik.toml [email protected]:/opt/k8s/conf/
5. Create a secret from the certificate, used for TLS:
[root@master conf]# kubectl create secret generic traefik-cert --from-file=/opt/k8s/ssl/server.crt --from-file=/opt/k8s/ssl/server.key -n kube-system
6. Create a configmap from traefik.toml:
[root@master conf]# kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system
7. Edit the DaemonSet yaml file:
[root@master ~]# vim traefik-ds.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      volumes:
      - name: ssl
        secret:
          secretName: traefik-cert
      - name: config
        configMap:
          name: traefik-conf
      containers:
      - image: traefik
        name: traefik-ingress-lb
        volumeMounts:
        - mountPath: "/opt/k8s/ssl/"
          name: "ssl"
        - mountPath: "/opt/k8s/conf/"
          name: "config"
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: admin
          containerPort: 8080
        args:
        - --configFile=/opt/k8s/conf/traefik.toml
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 443
    name: https
  - protocol: TCP
    port: 8080
    name: admin
  type: NodePort
The main changes compared with the upstream file:
kind: DaemonSet, where the upstream default uses a Deployment
hostNetwork: true, so the ports are exposed directly on the node network
volumeMounts: the new ssl and config volumes are mounted
ports: the https port 443 is added
args: the configFile is passed to traefik
and the Service gains the 443 port
8. Delete the previous DaemonSet first, then create it again:
[root@master ~]# kubectl apply -f ./traefik-ds.yaml
9. Verify:

Traefik deployment is now complete.


Reposted from blog.51cto.com/qingfeng00/2347509