Kubernetes Controllers and Services

1. Introduction to controllers

Pod classification:
Autonomous Pods: once such a Pod exits, it is not recreated
Controller-managed Pods: the controller maintains the desired number of Pod replicas throughout its lifecycle

Controller types:
Replication Controller and ReplicaSet
Deployment
DaemonSet
StatefulSet
Job
CronJob
HPA (Horizontal Pod Autoscaler)

Replication Controller and ReplicaSet
ReplicaSet is the next-generation Replication Controller, and it is the officially recommended choice.
The only difference between a ReplicaSet and a Replication Controller is selector support: ReplicaSet additionally supports the newer set-based selector requirements. A ReplicaSet ensures that a specified number of Pod replicas are running at any given time. Although ReplicaSets can be used on their own, today they are mainly used by Deployments as the mechanism that orchestrates Pod creation, deletion, and updates.
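A set-based selector can be sketched as follows; the label keys and values here are hypothetical, chosen only to illustrate the operators:

```yaml
# Hypothetical ReplicaSet selector fragment using set-based requirements,
# which Replication Controllers do not support.
selector:
  matchExpressions:
  - key: app
    operator: In          # operators: In, NotIn, Exists, DoesNotExist
    values: [myapp, myapp-canary]
  - key: tier
    operator: Exists      # matches any Pod that carries a "tier" label at all
```

An equality-based `matchLabels` (as used in the rs.yml below) is equivalent to a `matchExpressions` entry with the `In` operator and a single value.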

2. Deployment
A Deployment provides a declarative way to define Pods and ReplicaSets. Typical use cases: creating Pods and ReplicaSets, rolling updates and rollbacks, scaling out and in, and pausing and resuming a rollout.
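The rolling-update behavior can be tuned in the Deployment spec; a minimal sketch, with illustrative values:

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 extra Pod above the desired count during an update
      maxUnavailable: 1  # at most 1 Pod may be unavailable during an update
```

`kubectl rollout undo deployment/<name>` rolls back to the previous revision, and `kubectl rollout pause`/`resume` implements the pause-and-resume use case mentioned above.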

3. DaemonSet
A DaemonSet ensures that all (or some) nodes run a copy of a Pod. When nodes join the cluster, a Pod is added on them; when nodes are removed from the cluster, those Pods are garbage-collected. Deleting a DaemonSet deletes all the Pods it created.
Typical uses of a DaemonSet:
• Running a cluster storage daemon on every node, e.g. glusterd or ceph.
• Running a log collection daemon on every node, e.g. fluentd or logstash.
• Running a monitoring daemon on every node, e.g. Prometheus Node Exporter or the zabbix agent.
• The simplest usage is one DaemonSet covering all nodes for each type of daemon.
• A slightly more complex usage is multiple DaemonSets for a single type of daemon, each with different flags and with different memory and CPU requirements for different hardware types.

4. StatefulSet
StatefulSet is the workload API object used to manage stateful applications. Applications whose instances are not interchangeable with one another, or whose instances depend on external data, are called "stateful applications".
A StatefulSet manages the deployment and scaling of a set of Pods, and provides ordinal indexes and uniqueness guarantees for those Pods.
StatefulSets are valuable for applications that need one or more of the following:
• Stable, unique network identifiers.
• Stable, persistent storage.
• Ordered, graceful deployment and scaling.
• Ordered, automated rolling updates.
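A minimal StatefulSet sketch showing how these guarantees are requested; the headless Service name and storage size here are hypothetical:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web-headless      # a headless Service must exist to give Pods stable DNS names
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: myapp:v1
  volumeClaimTemplates:          # each Pod gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

The Pods are named web-0, web-1, ... in order, and each keeps its own claim across rescheduling.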

5. Job
A Job runs batch workloads: it executes a task to completion, ensuring that one or more Pods of the task terminate successfully.
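Beyond a single run, a Job spec can request multiple successful completions; a sketch with illustrative counts:

```yaml
spec:
  completions: 5     # the Job is done once 5 Pods have succeeded
  parallelism: 2     # run at most 2 Pods at a time
  backoffLimit: 4    # retry failed Pods up to 4 times before marking the Job failed
```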

6. CronJob
A CronJob creates Jobs on a time-based schedule. A CronJob object is like one line of a crontab (cron table) file: it is written in cron format and periodically runs a Job at the scheduled times.

7. HPA
The HPA automatically adjusts the number of Pods in a workload based on resource utilization, implementing horizontal Pod autoscaling.
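A minimal HPA sketch; the target name matches the Deployment created later in this post, the CPU threshold is illustrative, and the metrics-server add-on must be installed for it to work:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: deployment-myapp
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # scale out when average CPU exceeds 80% of requests
```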
 

2. The ReplicaSet controller

[kubeadm@server1 mainfest]$ vim rs.yml
[kubeadm@server1 mainfest]$ cat rs.yml   ------------> the ReplicaSet controller's configuration file
apiVersion: apps/v1
kind: ReplicaSet   --------------> the controller kind; previously we only created Pods
metadata:
  name: replicaset-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v1
[kubeadm@server1 mainfest]$ kubectl apply -f rs.yml 
replicaset.apps/replicaset-example created
[kubeadm@server1 mainfest]$ kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
replicaset-example-69tbm   1/1     Running   0          17s   10.244.1.31   server2   <none>           <none>
replicaset-example-d46ff   1/1     Running   0          17s   10.244.2.44   server3   <none>           <none>
replicaset-example-tmm25   1/1     Running   0          17s   10.244.2.45   server3   <none>           <none>
[kubeadm@server1 mainfest]$ vim rs.yml
[kubeadm@server1 mainfest]$ cat rs.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-example
spec:
  replicas: 10   -----------> the replica count; it can be scaled up or down at will
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v1
[kubeadm@server1 mainfest]$ kubectl apply -f rs.yml 
replicaset.apps/replicaset-example configured
[kubeadm@server1 mainfest]$ kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
replicaset-example-69tbm   1/1     Running   0          39s   10.244.1.31   server2   <none>           <none>
replicaset-example-7zxcw   1/1     Running   0          4s    10.244.1.35   server2   <none>           <none>
replicaset-example-94p7z   1/1     Running   0          4s    10.244.2.47   server3   <none>           <none>
replicaset-example-d46ff   1/1     Running   0          39s   10.244.2.44   server3   <none>           <none>
replicaset-example-f9vf9   1/1     Running   0          4s    10.244.1.33   server2   <none>           <none>
replicaset-example-lbpls   1/1     Running   0          4s    10.244.2.48   server3   <none>           <none>
replicaset-example-spt6f   1/1     Running   0          4s    10.244.1.32   server2   <none>           <none>
replicaset-example-tfwjh   1/1     Running   0          4s    10.244.1.34   server2   <none>           <none>
replicaset-example-tmm25   1/1     Running   0          39s   10.244.2.45   server3   <none>           <none>
replicaset-example-vsl5k   1/1     Running   0          4s    10.244.2.46   server3   <none>           <none>
[kubeadm@server1 mainfest]$ vim rs.yml
[kubeadm@server1 mainfest]$ cat rs.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-example
spec:
  replicas: 2   -------------> now scale back down
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v1
[kubeadm@server1 mainfest]$ kubectl apply -f rs.yml 
replicaset.apps/replicaset-example configured
[kubeadm@server1 mainfest]$ kubectl get pod -o wide   -----------> during scale-down, the most recently created Pods are reclaimed first
NAME                       READY   STATUS        RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
replicaset-example-69tbm   1/1     Running       0          57s   10.244.1.31   server2   <none>           <none>
replicaset-example-7zxcw   0/1     Terminating   0          22s   10.244.1.35   server2   <none>           <none>
replicaset-example-94p7z   0/1     Terminating   0          22s   10.244.2.47   server3   <none>           <none>
replicaset-example-d46ff   0/1     Terminating   0          57s   10.244.2.44   server3   <none>           <none>
replicaset-example-f9vf9   0/1     Terminating   0          22s   10.244.1.33   server2   <none>           <none>
replicaset-example-lbpls   0/1     Terminating   0          22s   10.244.2.48   server3   <none>           <none>
replicaset-example-spt6f   0/1     Terminating   0          22s   10.244.1.32   server2   <none>           <none>
replicaset-example-tfwjh   0/1     Terminating   0          22s   10.244.1.34   server2   <none>           <none>
replicaset-example-tmm25   1/1     Running       0          57s   10.244.2.45   server3   <none>           <none>
replicaset-example-vsl5k   0/1     Terminating   0          22s   10.244.2.46   server3   <none>           <none>
[kubeadm@server1 mainfest]$ kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
replicaset-example-69tbm   1/1     Running   0          66s   10.244.1.31   server2   <none>           <none>
replicaset-example-tmm25   1/1     Running   0          66s   10.244.2.45   server3   <none>           <none>
[kubeadm@server1 mainfest]$ kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/replicaset-example-69tbm   1/1     Running   0          74s
pod/replicaset-example-tmm25   1/1     Running   0          74s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d

NAME                                 DESIRED   CURRENT   READY   AGE  ---------> a new resource type appears
replicaset.apps/replicaset-example   2         2         2       74s

Note, however, that changing the image in the file will not be rolled out: a ReplicaSet only guarantees the replica count, so existing Pods keep the old image until they are recreated.

3. The Deployment controller

3.1 A single Deployment

[kubeadm@server1 mainfest]$ vim deployment.yml
[kubeadm@server1 mainfest]$ cat deployment.yml
apiVersion: apps/v1
kind: Deployment   ---------> the controller kind
metadata:
  name: deployment-myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v1
        ports:
        - containerPort: 80
[kubeadm@server1 mainfest]$ kubectl apply -f deployment.yml 
deployment.apps/deployment-myapp created
[kubeadm@server1 mainfest]$ kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/deployment-myapp-569fb7cdcb-4z2bp   1/1     Running   0          10s
Deployment name ------ ReplicaSet hash ----- Pod identifier
pod/deployment-myapp-569fb7cdcb-s7qv6   1/1     Running   0          10s
pod/deployment-myapp-569fb7cdcb-wrddw   1/1     Running   0          10s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d1h

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deployment-myapp   3/3     3            3           10s
the Deployment controller
NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/deployment-myapp-569fb7cdcb   3         3         3       10s
the ReplicaSet it manages

Note: with a Deployment, a deleted Pod is immediately replaced by a new one, so if a Pod is in a bad state you can simply delete it and let the controller recreate it.

3.2 Multiple Deployments

[kubeadm@server1 mainfest]$ vim deployment.yml 
[kubeadm@server1 mainfest]$ cat deployment.yml 
apiVersion: apps/v1         ------------------> Deployment no. 1
kind: Deployment
metadata:
  name: deployment-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v1
        ports:
        - containerPort: 80
---
apiVersion: apps/v1          ---------------------> Deployment no. 2
kind: Deployment
metadata:
  name: deployment-v2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2
        ports:
        - containerPort: 80

[kubeadm@server1 mainfest]$ kubectl apply -f deployment.yml 
deployment.apps/deployment-v1 created
deployment.apps/deployment-v2 created
[kubeadm@server1 mainfest]$ kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/deployment-myapp-569fb7cdcb-2tnnl   1/1     Running   0          10m
pod/deployment-myapp-569fb7cdcb-9n92v   1/1     Running   0          10m
pod/deployment-myapp-569fb7cdcb-pzjwt   1/1     Running   0          10m
pod/deployment-v1-569fb7cdcb-bqtjj      1/1     Running   0          8s
pod/deployment-v1-569fb7cdcb-ctzp6      1/1     Running   0          8s
pod/deployment-v1-569fb7cdcb-xjnf4      1/1     Running   0          8s
pod/deployment-v2-6b54f6ffb9-lbbbx      1/1     Running   0          7s
pod/deployment-v2-6b54f6ffb9-q7fph      1/1     Running   0          8s
pod/deployment-v2-6b54f6ffb9-rvgzq      1/1     Running   0          7s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d1h

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deployment-myapp   3/3     3            3           18m
deployment.apps/deployment-v1      3/3     3            3           8s
deployment.apps/deployment-v2      3/3     3            3           8s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/deployment-myapp-569fb7cdcb   3         3         3       18m
replicaset.apps/deployment-myapp-6b54f6ffb9   0         0         0       13m
replicaset.apps/deployment-v1-569fb7cdcb      3         3         3       8s
replicaset.apps/deployment-v2-6b54f6ffb9      3         3         3       8s

[kubeadm@server1 mainfest]$ kubectl delete deployments.apps deployment-v1 ## delete one of the Deployments

4. The DaemonSet controller

A DaemonSet deploys one copy of a Pod on each node.

[kubeadm@server1 mainfest]$ vim daemonset.yml
[kubeadm@server1 mainfest]$ cat daemonset.yml
apiVersion: apps/v1
kind: DaemonSet   --------------> the kind is DaemonSet
metadata:
  name: daemonset-example
  labels:
    k8s-app: zabbix-agent
spec:
  selector:
    matchLabels:
      name: zabbix-agent
  template:
    metadata:
      labels:
        name: zabbix-agent
    spec:
      containers:
      - name: zabbix-agent
        image: zabbix-agent
[kubeadm@server1 mainfest]$ kubectl apply -f daemonset.yml 
daemonset.apps/daemonset-example created
[kubeadm@server1 mainfest]$ kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
daemonset-example-8n9mv   1/1     Running   0          8s    10.244.2.59   server3   <none>           <none>
daemonset-example-zfprr   1/1     Running   0          8s    10.244.1.44   server2   <none>           <none>

5. The Job controller

[kubeadm@server1 mainfest]$ vim job.yml
[kubeadm@server1 mainfest]$ cat job.yml
apiVersion: batch/v1
kind: Job    ------------> the kind is Job
metadata:
  name: pi
spec:
  template:
   metadata:
     name: pi
   spec:
     containers:
     - name: pi
       image: perl
       command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
     restartPolicy: Never
[kubeadm@server1 mainfest]$ kubectl apply -f job.yml 
job.batch/pi created
[kubeadm@server1 mainfest]$ kubectl get pod 
NAME       READY   STATUS      RESTARTS   AGE
pi-fzj8p   0/1     Completed   0          6s
[kubeadm@server1 mainfest]$ kubectl get pod -o wide
NAME       READY   STATUS      RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
pi-fzj8p   0/1     Completed   0          12s   10.244.2.60   server3   <none>           <none>
[kubeadm@server1 mainfest]$ kubectl logs pi-fzj8p    ---------> fetch the log output
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883786360950680064225125205117392984896084128488626945604241965285022210661186306744278622039194945047123713786960956364371917287467764657573962413890865832645995813390478027590
1

6. The CronJob controller

[kubeadm@server1 mainfest]$ vim cronjob.yml
[kubeadm@server1 mainfest]$ cat cronjob.yml
apiVersion: batch/v1beta1 
kind: CronJob     -------------> a Job controller driven by a time schedule
metadata:  
  name: cronjob-example 
spec:  
  schedule: "* * * * *"    -------------> once per minute
  jobTemplate:    
    spec:      
      template:        
        spec:          
          containers:          
          - name: cronjob            
            image: busybox            
            args:            
            - /bin/sh            
            - -c            
            - date; echo Hello from k8s cluster          
          restartPolicy: OnFailure
[kubeadm@server1 mainfest]$ kubectl apply -f cronjob.yml 
cronjob.batch/cronjob-example created
[kubeadm@server1 mainfest]$ kubectl get pod
No resources found in default namespace.
[kubeadm@server1 mainfest]$ kubectl get pod 
NAME                               READY   STATUS      RESTARTS   AGE
cronjob-example-1593121920-sv47j   0/1     Completed   0          54s
[kubeadm@server1 mainfest]$ kubectl get pod 
NAME                               READY   STATUS              RESTARTS   AGE
cronjob-example-1593121920-sv47j   0/1     Completed           0          65s
cronjob-example-1593121980-pxjvs   0/1     ContainerCreating   0          5s
[kubeadm@server1 mainfest]$ kubectl logs cronjob-example-1593121920-sv47j
Thu Jun 25 21:52:39 UTC 2020
Hello from k8s cluster
[kubeadm@server1 mainfest]$ kubectl logs cronjob-example-1593121980-pxjvs
Thu Jun 25 21:53:13 UTC 2020
Hello from k8s cluster

Service

Introduction to Services

A Service can be seen as the external access interface for a group of Pods providing the same service. With a Service, applications can easily implement service discovery and load balancing. By default a Service only provides layer-4 load balancing, with no layer-7 capabilities (those can be added via Ingress).

Service types:
• ClusterIP: the default; a virtual IP automatically assigned to the Service by Kubernetes, reachable only from inside the cluster.
• NodePort: exposes the Service on a port on each node; accessing any <NodeIP>:<nodePort> routes to the ClusterIP.
• LoadBalancer: on top of NodePort, uses a cloud provider to create an external load balancer that forwards requests to <NodeIP>:<nodePort>; this mode only works on cloud platforms.
• ExternalName: maps the Service to a given domain name via a DNS CNAME record (set through spec.externalName).

Services are implemented jointly by the kube-proxy component and iptables.
• When kube-proxy handles Services through iptables, it must install a large number of iptables rules on the host; with many Pods, constantly refreshing these rules consumes a lot of CPU.
• With IPVS-mode Services, a Kubernetes cluster can support a far larger number of Pods.

Enabling kube-proxy's IPVS mode

To enable IPVS mode for kube-proxy:
yum install -y ipvsadm                     # install on all nodes
kubectl edit cm kube-proxy -n kube-system  # switch to IPVS mode: set mode: "ipvs"
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'  # recreate the kube-proxy Pods so they pick up the change
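The relevant fragment of the kube-proxy ConfigMap after the edit looks roughly like this (the scheduler line is shown only for context):

```yaml
# kubectl edit cm kube-proxy -n kube-system — inside config.conf:
mode: "ipvs"        # an empty string "" means the default iptables mode
ipvs:
  scheduler: ""     # empty means the default scheduler, rr (round-robin)
```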


In IPVS mode, after a Service is created kube-proxy adds a virtual interface on the host, kube-ipvs0, and assigns the Service IPs to it.

Through the Linux IPVS module, kube-proxy schedules traffic across the Service's Pods in rr (round-robin) fashion.

Creating Services

ClusterIP

Kubernetes ships a DNS add-on Service for name resolution.

[kubeadm@server1 mainfest]$ vim service.yml 
[kubeadm@server1 mainfest]$ cat service.yml 
kind: Service         ------------------> create a Service; the default type is ClusterIP
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: myapp
[kubeadm@server1 mainfest]$ kubectl apply -f service.yml 
service/myservice created

[kubeadm@server1 mainfest]$ vim pod2.yml 
[kubeadm@server1 mainfest]$ cat pod2.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2
[kubeadm@server1 mainfest]$ kubectl apply -f pod2.yml 
deployment.apps/deployment-example created
[kubeadm@server1 mainfest]$ kubectl describe svc myservice 
Name:              myservice
Namespace:         default
Labels:            <none>
Annotations:       Selector:  app=myapp
Type:              ClusterIP
IP:                10.107.6.65
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.47:80,10.244.2.70:80
Session Affinity:  None
Events:            <none>
[kubeadm@server1 mainfest]$ kubectl run demo --image=busyboxplus -it --restart=Never
If you don't see a command prompt, try pressing enter.
[ root@demo:/ ]$ curl 10.107.6.65
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[ root@demo:/ ]$ curl 10.107.6.65
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[ root@demo:/ ]$ curl 10.107.6.65/hostname.html
deployment-example-67764dd8bd-pvkc6
[ root@demo:/ ]$ curl 10.107.6.65/hostname.html
deployment-example-67764dd8bd-p5qnr
[ root@demo:/ ]$ curl 10.107.6.65/hostname.html
deployment-example-67764dd8bd-pvkc6
[ root@demo:/ ]$ curl 10.107.6.65/hostname.html
deployment-example-67764dd8bd-p5qnr
[ root@demo:/ ]$ curl myservice/hostname.html   --------------> the Service can also be reached by DNS name
deployment-example-67764dd8bd-pvkc6
[ root@demo:/ ]$ curl myservice/hostname.html
deployment-example-67764dd8bd-p5qnr
[ root@demo:/ ]$ curl myservice/hostname.html
deployment-example-67764dd8bd-pvkc6
[ root@demo:/ ]$ nslookup myservice
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      myservice
Address 1: 10.107.6.65 myservice.default.svc.cluster.local
[ root@demo:/ ]$ [kubeadm@server1 mainfest]$            
[kubeadm@server1 mainfest]$  kubectl get services kube-dns --namespace=kube-system
                               k8s provides a DNS add-on Service
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   7d3h

NodePort (externally accessible)

[kubeadm@server1 mainfest]$ cat pod2.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2
[kubeadm@server1 mainfest]$ kubectl apply -f pod2.yml 
deployment.apps/deployment-example unchanged
[kubeadm@server1 mainfest]$ kubectl get pod
NAME                                  READY   STATUS      RESTARTS   AGE
demo                                  0/1     Completed   0          7h16m
deployment-example-67764dd8bd-p5qnr   1/1     Running     0          7h17m
deployment-example-67764dd8bd-pvkc6   1/1     Running     0          7h17m

[kubeadm@server1 mainfest]$ vim service.yml 
[kubeadm@server1 mainfest]$ cat service.yml 
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: myapp
[kubeadm@server1 mainfest]$ kubectl apply -f service.yml 
service/myservice created
[kubeadm@server1 mainfest]$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   7d10h
myservice    ClusterIP   10.100.83.68   <none>        80/TCP    5s
[kubeadm@server1 mainfest]$ kubectl edit svc myservice   ------> change the type to NodePort
service/myservice edited
[kubeadm@server1 mainfest]$ kubectl describe svc myservice 
Name:                     myservice
Namespace:                default
Labels:                   <none>
Annotations:              Selector:  app=myapp
Type:                     NodePort
IP:                       10.100.83.68
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31059/TCP
Endpoints:                10.244.1.47:80,10.244.2.70:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
[kubeadm@server1 mainfest]$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        7d10h
myservice    NodePort    10.100.83.68   <none>        80:31059/TCP   5m30s   --------> every node opens the same randomly assigned port
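The same change can be made declaratively instead of through kubectl edit; a sketch (the explicit nodePort value is illustrative and must fall in the 30000-32767 range):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 31059   # omit this field to let Kubernetes pick a free port
  selector:
    app: myapp
```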

Headless Services

A Headless Service needs no VIP; instead it resolves, via DNS records, directly to the IPs of the Pods it fronts.
Domain name format: $(servicename).$(namespace).svc.cluster.local
yum install -y bind-utils.x86_64   # install the DNS lookup tools (dig, nslookup)
 

[kubeadm@server1 mainfest]$ vim service.yml 
[kubeadm@server1 mainfest]$ cat service.yml 
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: myapp
  clusterIP: None --------------> headless service: no cluster IP is allocated
[kubeadm@server1 mainfest]$ kubectl apply -f service.yml 
service/myservice created
[kubeadm@server1 mainfest]$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d11h
myservice    ClusterIP   None         <none>        80/TCP    10s
[kubeadm@server1 mainfest]$ kubectl describe svc myservice 
Name:              myservice
Namespace:         default
Labels:            <none>
Annotations:       Selector:  app=myapp
Type:              ClusterIP
IP:                None    ---------------------> headless service: no cluster IP is allocated
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.47:80,10.244.1.48:80,10.244.2.70:80
Session Affinity:  None
Events:            <none>

[kubeadm@server1 mainfest]$ kubectl -n kube-system describe svc kube-dns 
Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=KubeDNS
Annotations:       prometheus.io/port: 9153
                   prometheus.io/scrape: true
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP:                10.96.0.10
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.244.0.10:53,10.244.0.11:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.244.0.10:53,10.244.0.11:53   -----------> the CoreDNS Pod IPs backing the Service
Port:              metrics  9153/TCP
TargetPort:        9153/TCP
Endpoints:         10.244.0.10:9153,10.244.0.11:9153
Session Affinity:  None
Events:            <none>

[kubeadm@server1 mainfest]$ dig myservice.default.svc.cluster.local @10.96.0.10   --------------> the name resolves via either DNS address
; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> myservice.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19011
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myservice.default.svc.cluster.local. IN A
;; ANSWER SECTION:
myservice.default.svc.cluster.local. 30 IN A 10.244.2.70
myservice.default.svc.cluster.local. 30 IN A 10.244.1.48
myservice.default.svc.cluster.local. 30 IN A 10.244.1.47
;; Query time: 0 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Fri Jun 26 15:01:23 CST 2020
;; MSG SIZE  rcvd: 217

[kubeadm@server1 mainfest]$ dig myservice.default.svc.cluster.local @10.244.0.11
; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> myservice.default.svc.cluster.local @10.244.0.11
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16525
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myservice.default.svc.cluster.local. IN A
;; ANSWER SECTION:
myservice.default.svc.cluster.local. 30 IN A 10.244.1.48
myservice.default.svc.cluster.local. 30 IN A 10.244.2.70
myservice.default.svc.cluster.local. 30 IN A 10.244.1.47   ---------------> the record order also rotates (round-robin)
;; Query time: 0 msec
;; SERVER: 10.244.0.11#53(10.244.0.11)
;; WHEN: Fri Jun 26 15:02:57 CST 2020
;; MSG SIZE  rcvd: 217

After a rolling update of the Pods, the name still resolves:

[kubeadm@server1 mainfest]$ kubectl delete pod --all
pod "deployment-example-67764dd8bd-p5qnr" deleted
pod "deployment-example-67764dd8bd-pvkc6" deleted
pod "deployment-example-67764dd8bd-smr7c" deleted
[kubeadm@server1 mainfest]$ kubectl get pod -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
deployment-example-67764dd8bd-6pwdw   1/1     Running   0          7s    10.244.2.74   server3   <none>           <none>
deployment-example-67764dd8bd-jl7nl   1/1     Running   0          7s    10.244.1.49   server2   <none>           <none>
deployment-example-67764dd8bd-zvd28   1/1     Running   0          7s    10.244.2.73   server3   <none>           <none>
[kubeadm@server1 mainfest]$ dig myservice.default.svc.cluster.local @10.244.0.11   ------------> after the Pods were replaced, resolution still works
; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> myservice.default.svc.cluster.local @10.244.0.11
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48989
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myservice.default.svc.cluster.local. IN A
;; ANSWER SECTION:
myservice.default.svc.cluster.local. 30 IN A 10.244.1.49
myservice.default.svc.cluster.local. 30 IN A 10.244.2.73
myservice.default.svc.cluster.local. 30 IN A 10.244.2.74
;; Query time: 0 msec
;; SERVER: 10.244.0.11#53(10.244.0.11)
;; WHEN: Fri Jun 26 15:14:37 CST 2020
;; MSG SIZE  rcvd: 217

LoadBalancer (externally accessible)

The second way to reach a Service from outside, suited to Kubernetes running on a public cloud: specify a Service of type LoadBalancer.
When the Service is submitted, Kubernetes calls the CloudProvider to create a load balancer on the public cloud and configures the IPs of the proxied Pods as its backends.
 

[kubeadm@server1 mainfest]$ vim service.yml 
[kubeadm@server1 mainfest]$ cat service.yml 
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: myapp
  type: LoadBalancer   ------------------> the type here is LoadBalancer
[kubeadm@server1 mainfest]$ kubectl apply -f service.yml 
service/myservice created
[kubeadm@server1 mainfest]$ kubectl get svc 
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        7d11h
myservice    LoadBalancer   10.104.173.144   <pending>     80:32646/TCP   5s
                                           EXTERNAL-IP stays <pending> without a cloud provider

ExternalName (externally accessible)

The third approach is called ExternalName; it is suited to containers inside the cluster accessing external resources.

[kubeadm@server1 mainfest]$ vim service.yml 
[kubeadm@server1 mainfest]$ cat service.yml 
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: myapp
  type: ExternalName
  externalName: www.baidu.com   ----------> the external domain to resolve to
[kubeadm@server1 mainfest]$ kubectl apply -f service.yml 
service/myservice created
[kubeadm@server1 mainfest]$ kubectl get svc 
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)   AGE
kubernetes   ClusterIP      10.96.0.1    <none>          443/TCP   7d11h
myservice    ExternalName   <none>       www.baidu.com   80/TCP    4s
[kubeadm@server1 mainfest]$ dig myservice.default.svc.cluster.local @10.244.0.11
; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> myservice.default.svc.cluster.local @10.244.0.11
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44555
;; flags: qr aa rd; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myservice.default.svc.cluster.local. IN A
;; ANSWER SECTION:
myservice.default.svc.cluster.local. 30 IN CNAME www.baidu.com.   -----------> the CNAME resolves successfully
www.baidu.com.  30 IN CNAME www.a.shifen.com.
www.a.shifen.com. 30 IN A 61.135.169.121
www.a.shifen.com. 30 IN A 61.135.169.125
;; Query time: 75 msec
;; SERVER: 10.244.0.11#53(10.244.0.11)
;; WHEN: Fri Jun 26 15:28:43 CST 2020
;; MSG SIZE  rcvd: 233

Assigning a public (external) IP to a Service directly

[kubeadm@server1 mainfest]$ vim service.yml 
[kubeadm@server1 mainfest]$ cat service.yml 
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: myapp
  externalIPs:    --------------> assign an external IP directly
  - 172.25.1.100
[kubeadm@server1 mainfest]$ kubectl apply -f service.yml 
service/myservice created
[kubeadm@server1 mainfest]$ kubectl get svc 
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP    PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>         443/TCP   7d11h
myservice    ClusterIP   10.105.122.218   172.25.1.100   80/TCP    4s
[kubeadm@server1 mainfest]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:bb:3e:1d brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.11/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 172.25.1.1/24 brd 172.25.1.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.3.201/24 brd 192.168.3.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 2408:84fb:1:1209:20c:29ff:febb:3e1d/64 scope global mngtmpaddr dynamic 
       valid_lft 3317sec preferred_lft 3317sec
    inet6 fe80::20c:29ff:febb:3e1d/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:da:68:ae:81 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether d2:fd:d6:3b:73:d3 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::d0fd:d6ff:fe3b:73d3/64 scope link 
       valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 4a:a1:c4:bb:75:78 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::48a1:c4ff:febb:7578/64 scope link 
       valid_lft forever preferred_lft forever
6: veth3fc18a62@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether a2:7d:16:36:60:6a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::a07d:16ff:fe36:606a/64 scope link 
       valid_lft forever preferred_lft forever
7: veth99a251bf@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether 86:20:80:1b:26:22 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::8420:80ff:fe1b:2622/64 scope link 
       valid_lft forever preferred_lft forever
8: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d2:d9:2f:45:a0:05 brd ff:ff:ff:ff:ff:ff
9: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 9a:26:f3:9b:57:38 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.105.122.218/32 brd 10.105.122.218 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 172.25.1.100/32 brd 172.25.1.100 scope global kube-ipvs0   --------------------> the external IP appears here
       valid_lft forever preferred_lft forever
[kubeadm@server1 mainfest]$ logout
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.1.100:80 rr
  -> 10.244.1.49:80               Masq    1      0          0         
  -> 10.244.2.73:80               Masq    1      0          0         
  -> 10.244.2.74:80               Masq    1      0          0         
TCP  10.96.0.1:443 rr
  -> 192.168.43.11:6443           Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.10:53               Masq    1      0          0         
  -> 10.244.0.11:53               Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.10:9153             Masq    1      0          0         
  -> 10.244.0.11:9153             Masq    1      0          0         
TCP  10.105.122.218:80 rr
  -> 10.244.1.49:80               Masq    1      0          0         
  -> 10.244.2.73:80               Masq    1      0          0         
  -> 10.244.2.74:80               Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.0.10:53               Masq    1      0          0         
  -> 10.244.0.11:53               Masq    1      0          0   

Origin blog.csdn.net/Thorne_lu/article/details/121390432