Service and Ingress

1. Introduction to Kubernetes Service Exposure

Starting with version 1.2, Kubernetes provides the Ingress object for exposing services externally; so far Kubernetes offers three ways to expose a service:

  • LoadBalancer Service
  • NodePort Service
  • Ingress

1.1. LoadBalancer Service

A LoadBalancer Service is the way Kubernetes integrates deeply with a cloud platform. When you expose a service through a LoadBalancer Service, Kubernetes actually asks the underlying cloud platform to create a load balancer for it. Cloud platform support is fairly mature by now: GCE and DigitalOcean abroad, Alibaba Cloud domestically, and private clouds such as OpenStack. Because the LoadBalancer Service relies on this deep cloud integration, it can only be used on those cloud platforms.
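A minimal sketch of what such a manifest looks like; the service name, labels, and ports below are illustrative assumptions rather than values from the original post:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb            # hypothetical name for illustration
spec:
  type: LoadBalancer         # asks the cloud platform to provision an external load balancer
  selector:
    app: my-app              # assumed pod label
  ports:
    - port: 80               # port exposed by the cloud load balancer
      targetPort: 8080       # assumed container port
```

Once the cloud provider has provisioned the load balancer, its external address appears in the service's `status.loadBalancer` field.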

1.2. NodePort Service

The NodePort Service, as the name implies, exposes a port on every node of the cluster and maps that port to a specific service. Although each node has plenty of ports (0 ~ 65535), the approach suffers in terms of security and ease of use: with many services the port assignments become confusing and port conflicts arise, so it may not see much use in practice.
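A minimal NodePort sketch, with an assumed selector and node port (by default Kubernetes allocates node ports from the 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport      # hypothetical name for illustration
spec:
  type: NodePort
  selector:
    app: my-app              # assumed pod label
  ports:
    - port: 80               # cluster-internal service port
      targetPort: 8080       # assumed container port
      nodePort: 30080        # assumed port opened on every node
```

Clients can then reach the service at `<any-node-IP>:30080`.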

1.3. Ingress

Ingress only appeared in version 1.2. Through Ingress, users can expose services externally by using nginx or other open-source reverse proxy load balancers.

Using Ingress generally involves three components:

  • A reverse proxy load balancer (e.g. nginx)
  • The Ingress Controller (the watcher; nginx and the Ingress Controller have since been integrated into a single component, so nginx no longer needs to be deployed separately)
  • Ingress (the rule definitions)

1.3.1. Reverse Proxy Load Balancer

The reverse proxy load balancer part is simple: nginx, Apache, or similar. It can be deployed in the cluster however you like, for example with a ReplicationController, Deployment, or DaemonSet; I personally prefer a DaemonSet, which I find more convenient.

1.3.2. Ingress Controller

The Ingress Controller is essentially a watcher. It constantly talks to the Kubernetes API to learn about changes in back-end services and pods in real time, such as pods being added or removed and services being created or deleted. When it picks up such a change, it combines it with the Ingress rules described below to generate a configuration, then updates the reverse proxy load balancer and reloads its configuration, thereby providing service discovery.

1.3.3. Ingress

An Ingress is simply a rule definition: a set of rules that authorize inbound connections to reach cluster services. For example, a rule may map a domain name to a service, so that requests for that domain are forwarded to that service. The Ingress Controller picks up these rules and dynamically writes them into the load balancer's configuration, achieving service discovery and load balancing for the whole setup.
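A minimal Ingress sketch in the current networking.k8s.io/v1 form (older clusters used extensions/v1beta1); the host name, backend service name, and port are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress         # hypothetical name for illustration
spec:
  rules:
    - host: app.example.com    # assumed domain name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app   # assumed Service that requests are forwarded to
                port:
                  number: 80
```

The Ingress Controller watches objects like this and renders them into the load balancer's configuration.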

[Figure: request flow through the load balancer, the Ingress Controller, and the Kubernetes API]

As the figure above shows, requests are still intercepted by the load balancer, such as nginx. The Ingress Controller learns which service a given domain maps to by reading the Ingress rules, and learns the service's addresses and other details by talking to the Kubernetes API. It combines the two to generate a configuration file, writes it to the load balancer in real time, and has the load balancer reload the rules. This achieves service discovery, that is, dynamic mapping.

With the above understood, it is easier to see why I like to deploy the load balancer as a DaemonSet. Since every request is intercepted by the load balancer first anyway, deploying it on every node and listening on port 80 via hostPort solves the problem of not knowing, with other deployment methods, which node the load balancer landed on; a request arriving at port 80 on any node is resolved correctly. If you then put another nginx in front, you get an extra layer of load balancing.
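A rough sketch of that DaemonSet deployment, assuming a plain nginx image standing in for the load balancer; the names, labels, and image tag are illustrative, not taken from the original post:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-lb             # hypothetical name for illustration
spec:
  selector:
    matchLabels:
      app: ingress-lb
  template:
    metadata:
      labels:
        app: ingress-lb
    spec:
      containers:
        - name: nginx
          image: nginx:1.25    # assumed image; in practice this would be the ingress controller image
          ports:
            - containerPort: 80
              hostPort: 80     # bind port 80 on every node so any node's IP can serve requests
```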

 

Refer to the following blog posts:

https://mritd.me/2016/12/06/try-traefik-on-kubernetes/

https://mritd.me/2017/03/04/how-to-use-nginx-ingress/

https://blog.csdn.net/zll_0405/article/details/88723082

https://www.kubernetes.org.cn/1885.html

Original post: www.cnblogs.com/zjz20/p/12691770.html