Kubernetes Basics (3)-Service External Network Access Method

1 Overview

NodePort, LoadBalancer, and Ingress are all ways to bring traffic from outside the cluster into the cluster, but they are implemented differently. The following sections explain how each of them works.

Note: Everything described here is based on Google Kubernetes Engine (GKE). If you run Kubernetes with minikube, on another cloud, or on-premises, the corresponding operations may differ slightly.

2 ClusterIP

The ClusterIP service is the default Kubernetes service type. It exposes a service inside the cluster so that other applications in the cluster can reach it, but it is not accessible from outside the cluster.

The YAML file of the ClusterIP service is similar to the following:

apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP

Since a ClusterIP service cannot be accessed from outside the cluster, how can it be reached externally? One option is the Kubernetes API server's proxy. Start the proxy with:

kubectl proxy --port=8080

2.1 Access method

  • After starting the Kubernetes proxy, you can access a service through the Kubernetes API at http://localhost:8080/api/v1/namespaces/<NAMESPACE>/services/<SERVICE-NAME>:<PORT-NAME>/proxy/.
  • For example, the service defined above is reachable at http://localhost:8080/api/v1/namespaces/default/services/my-internal-service:http/proxy/.
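Putting the two steps above together, a minimal sketch of the workflow (assuming the `my-internal-service` service from the YAML above is deployed in the `default` namespace):

```shell
# Start a local proxy to the Kubernetes API server (runs in the foreground).
kubectl proxy --port=8080

# In another terminal, reach the ClusterIP service through the proxy.
# "default" is the namespace and "http" is the port name from the service YAML.
curl http://localhost:8080/api/v1/namespaces/default/services/my-internal-service:http/proxy/
```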

2.2 Usage scenarios

  • Debugging a service, or connecting to it directly from your laptop;
  • Allowing internal traffic, e.g. displaying internal dashboards.

Because this method requires running kubectl as an authenticated user, you should not use it to expose the service to the public Internet or in a production environment.
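A closely related debugging technique (not covered in the original text, but commonly used for the same "connect from your laptop" scenario) is kubectl port-forward, which also tunnels through the API server:

```shell
# Forward local port 8080 to port 80 of the ClusterIP service (runs in the foreground).
kubectl port-forward service/my-internal-service 8080:80

# In another terminal, the service now answers on localhost.
curl http://localhost:8080/
```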

3 NodePort

A NodePort service is the most basic way to let external traffic reach a service inside the cluster. NodePort opens a specific port on every node (VM), and any traffic sent to that port is forwarded to the service.


The YAML of the NodePort type service is as follows:

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30036
    protocol: TCP

3.1 Differences from ClusterIP

There are two differences between a NodePort service and an ordinary ClusterIP service:

  • The type is "NodePort";
  • It has an additional nodePort field, which specifies the port to open on the nodes. If you do not specify one, Kubernetes picks a port at random; most of the time you should let Kubernetes choose.
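For example, a minimal sketch of the spec with nodePort omitted, so that Kubernetes assigns a free port from the allowed range automatically:

```
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    # nodePort omitted: Kubernetes assigns one from 30000-32767
    protocol: TCP
```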

3.2 Usage scenarios

  • The service does not need to be highly available at all times;
  • Demo applications or other temporary workloads.

3.3 Disadvantages

  • Each port can only be bound to one service;
  • Usable port numbers are limited to the range 30000-32767;
  • If a node/VM's IP address changes, you have to deal with it yourself.
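With the NodePort manifest above, the service is reachable on every node's IP at the chosen port. A sketch (where <NODE-IP> is a placeholder for any node's address):

```shell
# List the nodes and their IP addresses.
kubectl get nodes -o wide

# Hit the service through any node on the nodePort from the YAML above.
curl http://<NODE-IP>:30036/
```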

4 LoadBalancer

A LoadBalancer service is the standard way to expose a service on the public Internet. On GKE it provisions a network load balancer, which gets its own IP address and forwards all traffic to the Kubernetes service.

Kubernetes clusters running on cloud providers typically support automatic provisioning of a load balancer from the cloud infrastructure. All you need to do is set the service type to LoadBalancer instead of NodePort. The load balancer has its own unique, publicly accessible IP address and redirects all connections to the service, so the service is reachable through the load balancer's IP address. If Kubernetes is running in an environment that does not support LoadBalancer services, no load balancer is provisioned, but the service still behaves like a NodePort service, because a LoadBalancer service is an extension of a NodePort service.

Yaml file: 

apiVersion: v1
kind: Service
metadata:
  name: kubia-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia

Under the hood, traffic is routed through the NodePort and ClusterIP layers; provisioning the balancer itself depends on the specific cloud vendor.
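After applying the manifest above, the provisioned balancer's address appears in the service's EXTERNAL-IP column; a typical check (with <EXTERNAL-IP> as a placeholder for the assigned address):

```shell
# EXTERNAL-IP shows <pending> until the cloud load balancer is provisioned.
kubectl get svc kubia-loadbalancer

# Once assigned, the service is reachable directly on port 80.
curl http://<EXTERNAL-IP>/
```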

4.1 Usage scenarios

  • Need to directly expose services to external applications.

4.2 Disadvantages

  • No filtering, no routing, etc. This means almost any kind of traffic can be sent to the service: HTTP, TCP, UDP, WebSocket, gRPC, and so on;
  • Each exposed LoadBalancer service gets its own IP address, and you pay for each load balancer, which can be costly.

5 Ingress

Ingress is not actually a type of service. Architecturally, it sits in front of multiple services and acts as an "intelligent router" or entry point into the cluster. You can do many different things with an Ingress, and there are many types of Ingress controllers with different capabilities.

The default GKE Ingress controller starts an HTTP(S) load balancer for the k8s cluster, which lets you do path- and subdomain-based routing to backend services. For example, you can send everything on http://a.baidu.com to one service, and everything under the path http://b.baidu.com/test to another.

The YAML for an Ingress object on GKE looks like this (backed by an L7 HTTP(S) load balancer):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  defaultBackend:
    service:
      name: other
      port:
        number: 8080
  rules:
  - host: foo.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo
            port:
              number: 8080
  - host: mydomain.com
    http:
      paths:
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: bar
            port:
              number: 8080

5.1 Usage scenarios

Ingress is the most powerful way to expose services, but it is also the most complex. There are many kinds of Ingress controllers, such as the Google Cloud load balancer, Nginx, Contour, and Istio, as well as plugins for Ingress controllers, such as cert-manager, which can automatically provision SSL certificates for your services.

Ingress is most suitable when you need to expose multiple services under the same IP address and they all use the same L7 protocol (usually HTTP). With the GCP integration, you pay for only one load balancer, and because Ingress is "smart", you get many features out of the box: SSL, auth, routing, and so on.
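As a sketch of the cert-manager integration mentioned above, a TLS-enabled Ingress might look like the following (the issuer name letsencrypt-prod and the secret name are hypothetical examples; they depend on your cert-manager setup):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-tls-ingress
  annotations:
    # cert-manager watches this annotation and provisions a certificate.
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - foo.mydomain.com
    secretName: foo-mydomain-tls   # cert-manager stores the certificate here
  rules:
  - host: foo.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo
            port:
              number: 8080
```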


Origin blog.csdn.net/ygq13572549874/article/details/130982980