Kubernetes Service Resources

How the Service component relates to the other components

1. On the Kubernetes platform, a Pod has a lifecycle: it comes and goes, so it cannot offer its clients a fixed access endpoint. We therefore add a fixed intermediate layer between the client and the backend Pods, and this layer is what we call a Service. For a Service to really work we also depend heavily on an add-on deployed on top of k8s, the Kubernetes DNS service. Different k8s versions implement it differently: newer versions default to CoreDNS, while versions before 1.11 used kube-dns. Service name resolution depends strongly on this DNS add-on, so after deploying k8s we must also deploy CoreDNS or kube-dns.
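
As a quick sanity check (a minimal sketch, assuming the cluster DNS add-on is already running and the busybox:1.28 image can be pulled), a Service name can be resolved from inside a throwaway pod:

kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default.svc.cluster.local

The answer should come back from the cluster DNS (kube-dns or CoreDNS) and contain the ClusterIP of the kubernetes Service.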

2. To provide these network functions to clients, k8s relies on third-party programs. Such programs are plugged in (at least in newer versions) through CNI, the Container Network Interface, a standard plug-in interface for the network layer: any program that follows the CNI standard can be used. There are quite a few of them, such as flannel, which we deployed earlier, Canal, and so on.
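
To see which CNI plug-in a node is actually using (a minimal sketch, assuming the default CNI configuration directory; the exact file names depend on the plug-in you installed):

ls /etc/cni/net.d/
cat /etc/cni/net.d/*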

3. There are three kinds of networks in k8s: the node network, the pod network, and the cluster network (also called the Service network). The first two are real networks, implemented either on physical hardware or by software emulation; they actually exist. The cluster network, by contrast, is made of virtual IPs: these IPs are never configured on any interface and only appear in Service rules.
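
To see concrete values for these ranges (a minimal sketch, assuming a kubeadm-deployed cluster where the apiserver runs as a static pod in kube-system and node pod CIDRs are allocated by the controller manager):

kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep service-cluster-ip-range

The Service range never shows up on any interface; it only appears in the kube-proxy rules, which is exactly what makes it a virtual network.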

4. So what exactly is a Service? On every node we run a component called kube-proxy. It keeps track of changes to Service resources through the API server on the master, which it does using the watch (monitoring) mechanism native to k8s. As soon as a Service resource changes, including being created, kube-proxy converts it into rules on the current node that make the Service schedulable, i.e. rules that dispatch client requests to one of the backend Pods. These rules may be iptables rules or ipvs rules, depending on how the Service is implemented.
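
The same watch stream kube-proxy consumes can be observed from the command line (purely to illustrate the idea, not kube-proxy's actual code path):

kubectl get svc --watch

Any create, update or delete of a Service shows up immediately in this stream, and kube-proxy reacts to exactly these events by rewriting its rules.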

5. There are three models in which a Service can be implemented in k8s:

(1) userspace: a request from a client pod for a Service first reaches the iptables Service rules in the kernel space of the current node. Those rules redirect the request to a port on which kube-proxy listens in user space; kube-proxy, which is responsible for handling the request, then picks one of the Pods associated with the Service and proxies the request to it. In other words, a request sent by a client pod to a Service is bounced back up to the port kube-proxy listens on and is distributed by kube-proxy, so the kube-proxy process does its work in user space, hence the name userspace. This model is inefficient: the request first enters kernel space, then climbs back up to the user-space kube-proxy on the current host, and after kube-proxy finishes proxying, the packet drops back into kernel space and is finally dispatched by iptables rules.

(2) iptables: in the second model, the client requests the Service IP directly. The request is intercepted by the Service rules in the kernel space of the local node and dispatched straight to a backend pod. This model works entirely in kernel space, with the iptables rules themselves doing the scheduling.

(3) ipvs: the request from the client pod is intercepted directly in kernel space by the ipvs rules and dispatched straight to a backend pod on the pod network.

6. Which mode a Service works in is decided when k8s is installed and configured, and kube-proxy then generates rules of the corresponding mode. Versions 1.10 and earlier use iptables, versions before 1.2 used userspace, and from 1.11 ipvs is used by default; if ipvs is not enabled it falls back to iptables. If the Pod resources behind a Service change, for example a new pod matching the Service's label selector is added, that information is reflected immediately in the apiserver, because pod information is stored in etcd and exposed through the apiserver; kube-proxy watches this change and immediately converts it into Service rules. The conversion is therefore dynamic and happens in real time. Likewise, if a pod is deleted and not rebuilt, the resulting state change is written back to etcd through the apiserver, kube-proxy sees it through its watch, and immediately converts it into the corresponding iptables rules.
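
On a kubeadm-deployed cluster (an assumption; other installers keep the configuration elsewhere) the proxy mode can be inspected and changed through the kube-proxy ConfigMap; a minimal sketch:

kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"
kubectl -n kube-system edit configmap kube-proxy          # set mode: "ipvs" (requires the ip_vs kernel modules on each node)
kubectl -n kube-system delete pod -l k8s-app=kube-proxy   # recreate the kube-proxy pods so they pick up the new mode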

7. The Service is an intermediate layer in front of the Pods, but a Service does not point at the Pods directly: the Service first points to an Endpoints object (Endpoints is a standard k8s object, essentially a list of pod address + port), and the Endpoints object in turn is associated with the backend Pods. For everyday understanding it is fine to think of the Service as pointing directly at the Pods, but the distinction matters because we can also create Endpoints resources for a Service by hand, as sketched below.
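
As an illustration of that last point, here is a minimal sketch of a Service without a selector plus a hand-written Endpoints object; the name external-mysql and the address 192.168.228.200 are made up for the example:

apiVersion: v1
kind: Service
metadata:
  name: external-mysql
  namespace: default
spec:
  ports:
  - port: 3306
    targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql        # must match the Service name
  namespace: default
subsets:
- addresses:
  - ip: 192.168.228.200       # the backend we point the Service at by hand
  ports:
  - port: 3306

Because the Service has no selector, k8s will not manage its Endpoints; the addresses we write by hand are what the Service proxies to.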

 

 

Service creation

kubectl explain svc.spec

KIND:     Service
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     Spec defines the behavior of a service.
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

     ServiceSpec describes the attributes that a user creates on a service.

FIELDS:
   clusterIP    <string> # auto-assigned by default, but can also be specified manually
     clusterIP is the IP address of the service and is usually assigned randomly
     by the master. If an address is specified manually and is not in use by
     others, it will be allocated to the service; otherwise, creation of the
     service will fail. This field can not be changed through updates. Valid
     values are "None", empty string (""), or a valid IP address. "None" can be
     specified for headless services when proxying is not required. Only applies
     to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is
     ExternalName. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

   externalIPs    <[]string>
     externalIPs is a list of IP addresses for which nodes in the cluster will
     also accept traffic for this service. These IPs are not managed by
     Kubernetes. The user is responsible for ensuring that traffic arrives at a
     node with this IP. A common example is external load-balancers that are not
     part of the Kubernetes system.

   externalName    <string>
     externalName is the external reference that kubedns or equivalent will
     return as a CNAME record for this service. No proxying will be involved.
     Must be a valid RFC-1123 hostname (https://tools.ietf.org/html/rfc1123) and
     requires Type to be ExternalName.

   externalTrafficPolicy    <string>
     externalTrafficPolicy denotes if this Service desires to route external
     traffic to node-local or cluster-wide endpoints. "Local" preserves the
     client source IP and avoids a second hop for LoadBalancer and Nodeport type
     services, but risks potentially imbalanced traffic spreading. "Cluster"
     obscures the client source IP and may cause a second hop to another node,
     but should have good overall load-spreading.

   healthCheckNodePort    <integer>
     healthCheckNodePort specifies the healthcheck nodePort for the service. If
     not specified, HealthCheckNodePort is created by the service api backend
     with the allocated nodePort. Will use user-specified nodePort value if
     specified by the client. Only effects when Type is set to LoadBalancer and
     ExternalTrafficPolicy is set to Local.

   loadBalancerIP    <string>
     Only applies to Service Type: LoadBalancer. LoadBalancer will get created
     with the IP specified in this field. This feature depends on whether the
     underlying cloud-provider supports specifying the loadBalancerIP when a
     load balancer is created. This field will be ignored if the cloud-provider
     does not support the feature.

   loadBalancerSourceRanges    <[]string>
     If specified and supported by the platform, this will restrict traffic
     through the cloud-provider load-balancer to the specified client IPs.
     This field will be ignored if the cloud-provider does not support the
     feature. More info:
     https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/

   ports    <[]Object> # defines which Service port is associated with which backend container port
     The list of ports that are exposed by this service. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

   publishNotReadyAddresses    <boolean>
     publishNotReadyAddresses, when set to true, indicates that DNS
     implementations must publish the notReadyAddresses of subsets for the
     Endpoints associated with the Service. The default value is false. The
     primary use case for setting this field is to use a StatefulSet's Headless
     Service to propagate SRV records for its Pods without respect to their
     readiness for purpose of peer discovery.

   selector    <map[string]string> # which pod resources the Service is associated with
     Route service traffic to pods with label keys and values matching this
     selector. If empty or not present, the service is assumed to have an
     external process managing its endpoints, which Kubernetes will not modify.
     Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if
     type is ExternalName. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/

   sessionAffinity    <string>
     Supports "ClientIP" and "None". Used to maintain session affinity. Enable
     client IP based session affinity. Must be ClientIP or None. Defaults to
     None. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

   sessionAffinityConfig    <Object>
     sessionAffinityConfig contains the configurations of session affinity.

   type    <string>
     type determines how the Service is exposed. Defaults to ClusterIP. Valid
     options are ExternalName, ClusterIP, NodePort, and LoadBalancer.
     "ExternalName" maps to the specified externalName. "ClusterIP" allocates a
     cluster-internal IP address for load-balancing to endpoints. Endpoints are
     determined by the selector or if that is not specified, by manual
     construction of an Endpoints object. If clusterIP is "None", no virtual IP
     is allocated and the endpoints are published as a set of endpoints rather
     than a stable IP. "NodePort" builds on ClusterIP and allocates a port on
     every node which routes to the clusterIP. "LoadBalancer" builds on NodePort
     and creates an external load-balancer (if supported in the current cloud)
     which routes to the clusterIP. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types

 

kubectl explain svc.spec.ports

KIND:     Service
VERSION:  v1

RESOURCE: ports <[]Object>

DESCRIPTION:
     The list of ports that are exposed by this service. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

     ServicePort contains information on service's port.

FIELDS:
   name    <string> # name of this port entry
     The name of this port within the service. This must be a DNS_LABEL. All
     ports within a ServiceSpec must have unique names. This maps to the 'Name'
     field in EndpointPort objects. Optional if only one ServicePort is defined
     on this service.

   nodePort    <integer> # the port on the node; only takes effect when type is NodePort
     The port on each node on which this service is exposed when type=NodePort
     or LoadBalancer. Usually assigned by the system. If specified, it will be
     allocated to the service if unused or else creation of the service will
     fail. Default is to auto-allocate a port if the ServiceType of this
     Service requires one. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport

   port    <integer> -required- # the port on which this Service serves its clients
     The port that will be exposed by this service.

   protocol    <string> # the protocol, TCP by default
     The IP protocol for this port. Supports "TCP" and "UDP". Default is TCP.

   targetPort    <string>
     Number or name of the port to access on the pods targeted by the service.
     Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If
     this is a string, it will be looked up as a named port in the target Pod's
     container ports. If this is not specified, the value of the 'port' field is
     used (an identity map). This field is ignored for services with
     clusterIP=None, and should be omitted or set equal to the 'port' field.
     More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service

ClusterIP

ClusterIP is the default type. It assigns the Service a cluster IP address that can only be reached from inside the cluster, and only two ports matter for it:

  • port, the port on the Service address
  • targetPort, the port on the pod IP

cat redis-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  selector:
    app: redis
    role: logstor
  clusterIP: 10.97.97.97
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379

 

Deploy the service

kubectl apply -f redis-svc.yaml
kubectl describe svc redis

Since we deployed a cluster DNS service on k8s, once a Service has been created its name can be resolved directly: every Service you create automatically gets resource records added dynamically to the cluster DNS, and not just one, since they include the Service-level SRV records as well as A records. The resource record that gets resolved has the format SVC_NAME.NS_NAME.DOMAIN.LTD., that is, the service name, then the namespace name, then the domain suffix, and the cluster's default domain suffix is svc.cluster.local. So as long as we do not change the domain suffix, every Service we create gets a name in this format; for example, the Service above gets the resource record redis.default.svc.cluster.local.
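
To verify the record (a minimal sketch, assuming the cluster DNS Service IP is 10.96.0.10, a common kubeadm default; check the real address with kubectl -n kube-system get svc kube-dns):

dig -t A redis.default.svc.cluster.local. @10.96.0.10

The answer section should contain the clusterIP 10.97.97.97 assigned above.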

NodePort

To let traffic from outside the cluster reach the Service you need a nodePort; use this type only when external access is required, otherwise it is unnecessary.

myapp.svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  clusterIP: 10.99.99.99 # specify a fixed ClusterIP
  type: NodePort
  ports:
  - port: 80 # port on the Service
    targetPort: 80 # port on the pod IP
    nodePort: 30080 # node port; may be omitted to let the system allocate one dynamically

 

Deploy the service

kubectl apply -f myapp.svc.yaml
kubectl get svc -o wide --show-labels

View the service details

kubectl describe svc myapp

The backend pods for this Service are provided by the following ReplicaSet, whose pod labels match the Service selector:

rs-demo.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        release: canary
        environment: qa
    spec:
      containers:
      - name: myapp-container
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80

 

Open another shell and access any node of the cluster. You can see that load balancing also takes effect, and that the traffic goes through several levels of translation: first from the nodePort to the Service port, and then from the Service to the pod port.

curl 192.168.228.138:30080/hostname.html

LoadBalancer

LoadBalancer (one-click invocation of Load Balancing as a Service, LBaaS): this type is used when k8s is deployed on virtual machines that run in a cloud environment and that cloud environment supports an LB load-balancer service. It automatically triggers the creation of a load balancer outside the cluster. For example, on Alibaba Cloud you might buy four virtual hosts plus Alibaba Cloud's LBaaS service and deploy a k8s cluster on those four VPSes. That k8s cluster can then interact with the API of the underlying public IaaS cloud: it has the built-in ability to call the IaaS layer's API, and when it does, it can request the creation of an external load balancer.

Say we have four nodes, one master and three working nodes, and all three workers expose the same nodePort to the outside of the cluster. k8s will automatically ask the underlying IaaS to build a purely software load balancer and, when creating it through the IaaS API, tell it which backend nodes there are; the configuration it hands over is the service exposed on the node port of each of those three nodes (note: the node IPs). Later, when a client outside the cloud environment accesses the load balancer generated by Alibaba Cloud's internal LBaaS, that load balancer schedules the request to the nodePort of one of the backend nodes; the nodePort forwards it to the Service, and the Service load-balances it inside the cluster onto a pod. So there are two levels of load balancing: the first level distributes the user request to one of several nodes, and the second level is the node forwarding it through the Service, which reverse-proxies it to one of the many pods inside the cluster.
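
A minimal sketch of such a Service follows; it only does something useful when the underlying cloud provider actually offers a load-balancer implementation, and the name myapp-lb is made up for the example:

apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80

On a supported cloud, kubectl get svc myapp-lb eventually shows an EXTERNAL-IP allocated by the provider; on bare metal it stays <pending>.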

ExternalName  

ExternalName brings a service that lives outside the cluster into the cluster so that it can be used directly from inside. Suppose we have a k8s cluster with three worker nodes, and some of the pods on those nodes act as clients. When such a client pod accesses a service, that service is normally provided by other pods; but it may happen that the service the pod wants does not exist in the cluster at all and only exists outside it, for example somewhere on our local LAN but outside the k8s cluster, or on the internet, such as a DNS service. We want that service to be reachable from inside the cluster. Pods generally use private addresses, so even if we managed to route the request out of the local network, the external response could not find its way back, and communication would not work.

ExternalName is the way to create a Service inside the cluster whose endpoint is not a local port but a reference to the external service. When a client pod accesses this Service, the request goes through the usual layers of translation, including the node, out to the external service; the external service replies to the node IP, the node hands the response to the Service, and the Service hands it to the client pod. In this way a pod can consume a service outside the cluster just as if it were a service inside the cluster. For this kind of Service the cluster IP only matters for name resolution by the client pod; what really matters is the externalName field, because externalName must genuinely be a name rather than an IP, and that name must be resolvable by our DNS service before it can be accessed. That is the basic restriction that comes with ExternalName (just something to be aware of); the corresponding field appears in the svc.spec explain output above.
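
A minimal sketch of an ExternalName Service; the external hostname www.example.com is only an illustration. As the explain output above notes, no proxying is involved: the cluster DNS simply answers with a CNAME pointing at the external name:

apiVersion: v1
kind: Service
metadata:
  name: external-web
  namespace: default
spec:
  type: ExternalName
  externalName: www.example.com

Pods can then use external-web.default.svc.cluster.local and have the cluster DNS resolve it to www.example.com.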

In addition, when an svc does load balancing it also supports sessionAffinity (session affinity; it appears in the explain output above). The default value is None, so scheduling is random, based on iptables. If we set the value to ClientIP, requests coming from the same client IP are always scheduled to the same backend pod.

 

while true; do curl 192.168.228.138:30080/hostname.html; sleep 3; done

Apply a patch

kubectl patch svc myapp -p '{"spec":{"sessionAffinity":"ClientIP"}}'  
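
The affinity timeout can be patched the same way if needed; 3600 below is just an illustrative value (the field's default is 10800 seconds):

kubectl patch svc myapp -p '{"spec":{"sessionAffinityConfig":{"clientIP":{"timeoutSeconds":3600}}}}'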

 

Access it again

while true; do curl 192.168.228.138:30080/hostname.html; sleep 3; done

You can see that requests from the same IP now all hit the same pod resource, which shows that the patch has taken effect.

Headless

Headless Service: up to now, when a client pod accesses a Service it resolves the Service's name; every Service has a name, and the result of resolving it is its ClusterIP, normally a single address. But we can also do it another way and remove the middle layer: every pod has its own name too, and we can have the Service name resolve directly to the backend pod IPs. This kind of Service is called a headless Service. To create one we only need to explicitly define clusterIP and set its value to None.

myapp-svc-headless.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  clusterIP: "None"
  ports:
  - port: 80 # port on the Service
    targetPort: 80 # port on the pod IP

 

 

 

Resolve it with dig

dig -t A myapp-svc.default.svc.cluster.local.

 

Looking at the Services above, one problem stands out: once a Service is defined, reaching the pods behind it requires several levels of scheduling or proxying, so if we wanted to run an https service we would find that every myapp instance has to be configured as an https host. In fact k8s has another way of bringing external traffic into the cluster, called ingress. A Service is a layer-4 scheduler, while an ingress is a layer-7 scheduler: it uses a layer-7 pod to bring external traffic inside, although in practice it still cannot work without Services. Since ingress schedules at layer 7, the scheduling must be done by an application with layer-7 capabilities running in a Pod, such as nginx, haproxy, and so on.

 

Origin www.cnblogs.com/crazymagic/p/11241590.html