K8S | Service and Service Discovery

1. Background

In a microservice architecture, take the development environment ("Dev") as the example described here: in the K8S cluster, the routing gateway, registry, configuration center, and other related services are usually exposed so that they can be accessed from outside the cluster;


For the test ("Test") or production ("Pro") environment, for reasons of security and environment isolation, normally only the gateway service is exposed, while the registry and configuration center are not; other business services are generally not opened to the outside world either, and they communicate with each other normally inside the K8S cluster. For the "Dev" environment, developers do use the registry and configuration center, and the gateway is the access entry of the system; inside the K8S cluster, the Service component makes service discovery and load balancing quick and easy to implement;

2. Service components

1. Introduction

In the K8S cluster, application services are deployed through the Pod component, the Deployment component implements Pod orchestration management, and the Service component implements application access;


[Pod] Pods are by nature ephemeral, disposable entities. Because Pods are constantly created and destroyed, their IP addresses change, which means a fixed IP cannot be used for application access;

[Deployment] This controller manages Pods indirectly by managing a ReplicaSet, covering things such as the rollout method, the update and rollback strategy, and maintaining the desired number of Pod replicas, so that applications can be orchestrated quickly; however, it does not address application access;

[Service] A way of exposing a network application running on one or a group of Pods as a network service; through its service discovery mechanism, the application can be reached without modifying the existing application;

Based on the collaboration of the three components Pod, Deployment, and Service, the deployment scripts of the same application can be reused across the development, test, and production environments;

2. Basic syntax

Here is a simple [Service] syntax for reference;

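
A minimal sketch of such a manifest (the name, selector label, and port values here are illustrative assumptions, not taken from a specific application):

apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: demo-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080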

It should be noted that the service type (spec.type) is not specified in the script above; the default is then ClusterIP, which exposes the Service on a cluster-internal IP. When this value is used, the Service can only be accessed from within the cluster;

3. Internal Service Discovery

1. Pod creation

Based on the [Deployment] component, create an "auto-serve" application;

apiVersion: apps/v1
kind: Deployment
metadata:
  name: serve-deployment
  labels:
    app: auto-serve
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auto-serve
  template:
    metadata:
      labels:
        app: auto-serve
    spec:
      containers:
        - name: auto-serve
          image: auto-serve:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8082
              name: auto-serve-port

Execute the create command

kubectl apply -f serve-deployment.yaml
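
To confirm the rollout, the Deployment status and the Pods selected by the app=auto-serve label can be checked (a quick sketch using standard kubectl commands):

kubectl rollout status deployment/serve-deployment
kubectl get pods -l app=auto-serve -o wide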

2. Service creation

Simple script file: app-service.yaml;

apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: auto-serve
  ports:
  - name: app-service-port
    protocol: TCP
    port: 8082
    targetPort: auto-serve-port

Create [Service]

kubectl apply -f app-service.yaml

View the [Service]; either the command line or the dashboard UI can be used;

kubectl describe svc app-service
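
The Pods actually backing the Service can also be listed through its Endpoints (a sketch; the output depends on the Pods currently running):

kubectl get endpoints app-service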


Delete [Service]

kubectl delete -f app-service.yaml

3. Internal access

As explained above, when the Type is not specified it defaults to ClusterIP, so the Service can only be accessed from within the cluster and not from the network outside the cluster. Below is a piece of code in the [auto-client] service that accesses the [auto-serve] interface; it is built into the image [auto-client:3.3.3], and the log output is checked after deployment is complete;

import java.util.HashMap;
import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class HttpServiceJob {

    private static final Logger LOG = LoggerFactory.getLogger(HttpServiceJob.class);

    // Access the auto-serve interface via the Service name, and via the Service IP
    private static final String SERVER_NAME = "http://app-service:8082/serve";
    private static final String SERVER_IP = "http://10.103.252.94:8082/serve";

    /**
     * Executes every 30 seconds
     */
    @Scheduled(fixedDelay = 30000)
    public void systemDate() {
        SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
        factory.setReadTimeout(3000);
        factory.setConnectTimeout(6000);
        RestTemplate restTemplate = new RestTemplate(factory);

        // Call the interface through the Service name
        try {
            Map<String, String> paramMap = new HashMap<>();
            String result = restTemplate.getForObject(SERVER_NAME, String.class, paramMap);
            LOG.info("service-name-resp::::" + result);
        } catch (Exception e) {
            e.printStackTrace();
        }

        // Call the interface through the Service IP
        try {
            Map<String, String> paramMap = new HashMap<>();
            String result = restTemplate.getForObject(SERVER_IP, String.class, paramMap);
            LOG.info("service-ip-resp::::" + result);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

In the code, the application can be accessed normally both via service-name:port and via IP:port; checking the logs of the two applications in their Pods shows that requests and responses are normal;
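
For reference, the same call can be reproduced and the logs inspected from the command line (a sketch; <auto-client-pod> is a placeholder Pod name, and it assumes the image contains curl and that the Service lives in the default namespace):

kubectl exec -it <auto-client-pod> -- curl http://app-service.default.svc.cluster.local:8082/serve
kubectl logs -f <auto-client-pod>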


4. External service discovery

1. NodePort type

A script specifying the NodePort type: app-np-service.yaml;

apiVersion: v1
kind: Service
metadata:
  name: app-np-service
spec:
  type: NodePort
  selector:
    app: auto-serve
  ports:
    - protocol: TCP
      port: 8082
      targetPort: 8082
      nodePort: 30010

Create [Service]

kubectl apply -f app-np-service.yaml

With the NodePort type, the K8S control plane allocates a port from a configured range (30000-32767 by default). If a specific port number is required, it can be set in the nodePort field; note that with this type, a load-balancing solution has to be provided separately;
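
The Service is then reachable on the chosen node port of any cluster node (a sketch; <node-ip> is a placeholder for a node's address, and the /serve path matches the interface used earlier):

kubectl get svc app-np-service
curl http://<node-ip>:30010/serve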

2. Load Balancer type

A script specifying the LoadBalancer type: app-lb-service.yaml;

apiVersion: v1
kind: Service
metadata:
  name: app-lb-service
spec:
  type: LoadBalancer
  selector:
    app: auto-serve
  ports:
    - protocol: TCP
      port: 8082
      targetPort: 8082

Create [Service]

kubectl apply -f app-lb-service.yaml

View the [Service]; when inspecting "app-lb-service", note the Endpoints field in particular, which lists the Pods selected by the Pod selector;

kubectl get svc app-lb-service -o wide


NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE     SELECTOR
app-lb-service   LoadBalancer   10.111.65.220   localhost     8082:30636/TCP   6m49s   app=auto-serve


kubectl describe svc app-lb-service


Name:                     app-lb-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=auto-serve
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.65.220
IPs:                      10.111.65.220
LoadBalancer Ingress:     localhost
Port:                     <unset>  8082/TCP
TargetPort:               8082/TCP
NodePort:                 <unset>  30636/TCP
Endpoints:                10.1.0.160:8082,10.1.0.161:8082,10.1.0.162:8082
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>


kubectl get pods -o wide


NAME                               READY   STATUS    RESTARTS   AGE   IP           NODE          
serve-deployment-f6f6c5bbd-9qvgr   1/1     Running   0          39m   10.1.0.162   docker-desktop
serve-deployment-f6f6c5bbd-w7nj2   1/1     Running   0          39m   10.1.0.161   docker-desktop
serve-deployment-f6f6c5bbd-x7v4d   1/1     Running   0          39m   10.1.0.160   docker-desktop
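
Because this example runs on docker-desktop, the EXTERNAL-IP is reported as localhost, so the Service can be reached directly from the host machine (a sketch; the /serve path matches the interface used earlier):

curl http://localhost:8082/serve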


Origin: blog.csdn.net/qq_28165595/article/details/132115338