Kubernetes Service: how clients discover and communicate with pods

5.1. Service Introduction

5.1.1. Service Overview

5.1.1.1. What is a Service

  A Service is an important concept in Kubernetes; its main purpose is to provide load balancing and automatic service discovery.

  A Service is implemented jointly by the kube-proxy component and iptables rules.

5.1.1.2. Creating a Service

   There are two ways to create a Service:

  1. Create it with kubectl expose

# kubectl expose deployment nginx --port=88 --type=NodePort --target-port=80 --name=nginx-service 

This step exposes the deployment as a service; in effect it puts a load balancer in front of the pods, because the pods may be distributed across different nodes. 
--port: the port the service exposes 
--type=NodePort: access the service via node IP + port 
--target-port: the container's port 
--name: the name of the service to create

  2. Create it from a YAML file

  Create a service called hostnames-yaohong that routes requests received on port 80 to port 9376 of the pods carrying the label app=hostnames.

  Use kubectl create to create the service:

apiVersion: v1 
kind: Service 
metadata: 
  name: hostnames-yaohong 
spec: 
  selector: 
    app: hostnames 
  ports: 
  - name: default 
    protocol: TCP 
    port: 80            # the port the service exposes 
    targetPort: 9376    # pods with the app=hostnames label belong to this service

5.1.1.3. Testing Services

  Use the following command to check the service:

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.187.0.1   <none>        443/TCP   18d

5.1.1.4. Executing commands remotely in a container

  Use the kubectl exec command to run a command inside a remote container:

$ kubectl -n kube-system exec coredns-7b8dbb87dd-pb9hk -- ls / 
bin 
coredns 
dev 
etc 
home 
lib 
media 
mnt 
proc 
root 
run 
sbin 
srv 
sys 
tmp 
usr 
var 

The double dash (--) marks the end of kubectl's own options; everything after it is the command to be executed inside the pod.

  

5.2. Connecting to services outside the cluster

5.2.1. Introducing service endpoints

Services are not connected to pods directly; an Endpoints resource sits between them.

An Endpoints resource is the list of IP addresses and ports exposed by a service.

View a service's endpoints as follows:

$ kubectl -n kube-system get svc kube-dns
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.187.0.2   <none>        53/UDP,53/TCP   19d

$ kubectl -n kube-system describe svc kube-dns
Name:              kube-dns
Namespace:         kube-system
Labels:            addonmanager.kubernetes.io/mode=Reconcile
                   k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=CoreDNS
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"prometheus.io/scrape":"true"},"labels":{"addonmanager.kubernetes.io/mode":...
                     prometheus.io/scrape: true
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP:                10.187.0.2
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.186.0.2:53,10.186.0.3:53   # the IP:port list of the pods backing this service
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.186.0.2:53,10.186.0.3:53
Session Affinity:  None
Events:            <none>

  

View the Endpoints resource directly as follows:

#kubectl -n kube-system get endpoints kube-dns
NAME       ENDPOINTS                                               AGE
kube-dns   10.186.0.2:53,10.186.0.3:53,10.186.0.2:53 + 1 more...   19d

#kubectl -n kube-system describe  endpoints kube-dns
Name:         kube-dns
Namespace:    kube-system
Labels:       addonmanager.kubernetes.io/mode=Reconcile
              k8s-app=kube-dns
              kubernetes.io/cluster-service=true
              kubernetes.io/name=CoreDNS
Annotations:  <none>
Subsets:
  Addresses:          10.186.0.2,10.186.0.3
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    dns      53    UDP
    dns-tcp  53    TCP

Events:  <none>

  

5.2.2. Manually configuring service endpoints

 If a service is created without a pod selector, Kubernetes will not create an Endpoints resource for it. In that case you need to create the Endpoints resource yourself to supply the list of endpoints for the service.

Creating an Endpoints resource for the service is what lets the service know which endpoints it contains.
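
As a sketch (the service name and IP addresses are illustrative), a selector-less Service paired with a manually created Endpoints resource might look like this; note that the Endpoints object must have the same name as the Service:

apiVersion: v1
kind: Service
metadata:
  name: external-service      # no selector, so no Endpoints are auto-created
spec:
  ports:
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service      # must match the Service name
subsets:
- addresses:
  - ip: 11.11.11.11           # illustrative external IPs the service should route to
  - ip: 22.22.22.22
  ports:
  - port: 80                  # the port the endpoints listen on

Requests to the service's port 80 are then load-balanced across the listed addresses, just as if they were pods matched by a label selector.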

 

 

 

5.2.3. Creating an alias for an external service

 Besides manually configuring endpoints, you can also point a service at an external service's fully qualified domain name (FQDN).

apiVersion: v1 
kind: Service 
metadata: 
  name: service-yaohong 
spec: 
  type: ExternalName                      # set the service type to ExternalName 
  externalName: someapi.somecompany.com   # the actual FQDN of the external service 
  ports: 
  - port: 80

  After the service is created, pods can connect to the external service through the service-yaohong.default.svc.cluster.local domain name (or even just service-yaohong).

5.3. Exposing services to external clients

There are three ways to expose a service to external clients:

  1. Set the service type to NodePort;

  2. Set the service type to LoadBalancer;

  3. Create an Ingress resource.

5.3.1. Using a NodePort service

The first way to expose a set of pods to external clients is to create a NodePort service:

apiVersion: v1 
kind: Service 
metadata: 
  name: service-yaohong 
spec: 
  type: NodePort        # set the service type to NodePort 
  ports: 
  - port: 80 
    targetPort: 8080 
    nodePort: 30123     # the service is reachable through port 30123 on every cluster node 
  selector: 
    app: yh

  

5.3.2. Exposing a service through a load balancer

A LoadBalancer service is defined as follows:

apiVersion: v1 
kind: Service 
metadata: 
  name: loadbalancer-yaohong 
spec: 
  type: LoadBalancer    # provision a load balancer from the Kubernetes cluster's infrastructure 
  ports: 
  - port: 80 
    targetPort: 8080 
  selector: 
    app: yh

5.4. Exposing services through an Ingress

Why use an Ingress? One important reason is that every LoadBalancer service requires its own load balancer and its own public IP address, whereas a single Ingress needs only one public IP to provide access to many services.

 

 

5.4.1. Creating an Ingress resource

 Write the following ingress.yml file:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingressyaohong
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80

Create the Ingress with the following command:

# kubectl create -f ingress.yml

5.4.2. Accessing the service through the Ingress

View the Ingress with the kubectl get ing command:

# kubectl get ing
NAME             HOSTS               ADDRESS   PORTS   AGE
ingressyaohong   kubia.example.com             80      2m

How the Ingress works: the client resolves kubia.example.com through DNS to the Ingress controller's IP address and sends an HTTP request there; based on the Host header and the path, the controller determines which service the request belongs to and forwards it to one of that service's pods.

5.4.3. Exposing multiple services through the same Ingress

1. Mapping different services to different paths of the same host

apiVersion: extensions/v1beta1 
kind: Ingress 
metadata: 
  name: ingress-yaohong 
spec: 
  rules: 
  - host: kubia.example.com 
    http: 
      paths: 
      - path: /yh          # forward requests for kubia.example.com/yh to the kubia service 
        backend: 
          serviceName: kubia 
          servicePort: 80 
      - path: /foo         # forward requests for kubia.example.com/foo to the bar service 
        backend: 
          serviceName: bar 
          servicePort: 80

  

2. Mapping different services to different hosts

apiVersion: extensions/v1beta1 
kind: Ingress 
metadata: 
  name: ingress-yaohong 
spec: 
  rules: 
  - host: yh.example.com 
    http: 
      paths: 
      - path: /            # forward requests for yh.example.com to the kubia service 
        backend: 
          serviceName: kubia 
          servicePort: 80 
  - host: bar.example.com 
    http: 
      paths: 
      - path: /            # forward requests for bar.example.com to the bar service 
        backend: 
          serviceName: bar 
          servicePort: 80

 

5.4.4. Configuring the Ingress to handle TLS traffic

Communication between the client and the controller is encrypted, while communication between the controller and the backend pods is not.

apiVersion: extensions/v1beta1 
kind: Ingress 
metadata: 
  name: ingress-yaohong 
spec: 
  tls:                      # the whole TLS configuration lives under this attribute 
  - hosts: 
    - yh.example.com        # accept TLS connections for yh.example.com 
    secretName: tls-secret  # the private key and certificate come from the previously created tls-secret 
  rules: 
  - host: yh.example.com 
    http: 
      paths: 
      - path: /             # forward requests for yh.example.com to the kubia service 
        backend: 
          serviceName: kubia 
          servicePort: 80

5.5. Signaling when a pod is ready

5.5.1. Introducing the readiness probe

There are three types of readiness probes:

1. Exec probe: executes a process inside the container. The container's readiness is determined by the process's exit status code.

2. HTTP GET probe: sends an HTTP GET request to the container; the HTTP response status code determines whether the container is ready.

3. TCP socket probe: opens a TCP connection to a specified port of the container. If the connection is established, the container is considered ready.
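
As a sketch, the exec and TCP socket variants from the list above could be configured like this (two alternative container-spec fragments; the command, file path, and port are illustrative assumptions, and the httpGet variant appears in section 5.5.2):

# exec probe: the container is ready when the command exits with status code 0
readinessProbe:
  exec:
    command:
    - ls
    - /var/ready            # illustrative: a file the app creates once it is ready

# TCP socket probe: the container is ready when a connection to the port succeeds
readinessProbe:
  tcpSocket:
    port: 8080              # illustrative container port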

 

When a container starts, Kubernetes waits for a configurable delay before performing the first readiness check. After that it invokes the probe periodically and acts on the result.

If a pod reports that it is not ready, it is removed from the service; if the pod becomes ready again, it is added back.

 

Differences from the liveness probe:

If a container fails its readiness probe, it is not terminated or restarted.

The liveness probe keeps a pod working normally by killing unhealthy containers and replacing them with new ones.

The readiness probe makes sure that only pods that are ready to handle requests receive them.

 

Importance:

It ensures that clients only interact with healthy pods and never notice that something is wrong with the system.

 

5.5.2. Adding a readiness probe to a pod

Add the following to the yml file:

apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    spec:
      containers:
      - name: kubia-yh
        image: luksa/kubia
        readinessProbe:
          failureThreshold: 2
          httpGet:
            path: /ping
            port: 80
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3

The related parameters are explained as follows:

  • initialDelaySeconds: number of seconds after the container starts before the probe is first executed.
  • periodSeconds: how often to perform the check, in seconds. Default is 10 seconds; minimum is 1.
  • timeoutSeconds: check timeout in seconds. Default is 1 second; minimum is 1.
  • successThreshold: minimum number of consecutive successes after a failure for the check to be considered successful again. Default is 1; must be 1 for liveness probes; minimum is 1.
  • failureThreshold: when a check fails, Kubernetes retries failureThreshold times before giving up. Giving up on a liveness check means restarting the pod; giving up on a readiness check marks the pod as not ready. Default is 3; minimum is 1.

Configuration options for the httpGet HTTP probe:

  • host: host name; defaults to the pod's IP.
  • scheme: scheme used to connect to the host (HTTP or HTTPS). Default is HTTP.
  • path: the path to probe.
  • httpHeaders: custom headers to set in the HTTP request. HTTP allows repeated headers.
  • port: name or number of the port to access. A number must be in the range 1 to 65535.

 

Simulating the readiness probe:

# kubectl   exec <pod_name> -- curl http://10.187.0.139:80/ping
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0

5.6. Using a headless service to discover individual pods

5.6.1. Creating a headless service

A headless Service is a kind of Service that sets spec.clusterIP: None, i.e. a Service that does not need a cluster IP.

As the name suggests, a headless Service is a Service without a head. What are its usage scenarios?

  • First: the right to choose. Sometimes the client wants to decide for itself which real server to use, and it can query DNS to obtain the information about the real servers.

  • Second: headless Services have another use (which is exactly the feature we need here): each of the headless Service's Endpoints, i.e. each pod, gets a corresponding DNS record, so the pods can access each other directly.
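
A minimal sketch of a headless service, reusing the app=hostnames pods from section 5.1 (the service name is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: hostnames-headless    # illustrative name
spec:
  clusterIP: None             # this is what makes the service headless
  selector:
    app: hostnames
  ports:
  - port: 80
    targetPort: 9376

A DNS lookup of hostnames-headless.<namespace>.svc.cluster.local then returns the IPs of the individual ready pods instead of a single cluster IP.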

 

  


Origin www.cnblogs.com/yaohong/p/11478749.html