Chapter 26 Nine Analysis takes you easily through exposing Istio/k8s traffic via the cluster's NodePort

Series of articles:


General Catalog Index: Nine Analysis Takes You Easily Through the Istio Service Mesh Series Tutorial

Table of Contents

1 Introduction

2 Invitation

3 NodePort sample

4 Access flow

5 Demo

6 Disadvantages


1 Introduction

        If you have any questions about the blog, please let me know.


2 Invitation

        You can search for "Nine Analysis" on Bilibili (station B) to get free, more vivid video materials.


3 NodePort sample

        Unlike hostNetwork and hostPort, NodePort is one of the Service types: hostPort and hostNetwork act on Pods, while NodePort acts on Services.

        The pod file is as follows (screenshot clipboard3.png):
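        Since the original pod file only survives as a screenshot, here is a minimal sketch of a Pod of that shape; the name jiuxi-pod, the label app: jiuxi, and the nginx image are assumptions for illustration, not taken from the screenshot.

# jiuxi-pod.yaml -- hypothetical Pod standing in for the screenshot
apiVersion: v1
kind: Pod
metadata:
  name: jiuxi-pod        # assumed name
  labels:
    app: jiuxi           # label the Service below can select on
spec:
  containers:
  - name: jiuxi
    image: nginx         # assumed container image
    ports:
    - containerPort: 80  # port the container serves on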

        The service file is as follows (screenshot clipboard4.png):
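        The service file is likewise only a screenshot; below is a sketch of what a NodePort Service of that shape typically looks like. The name jiuxi-svc and nodePort 30088 come from the demo section later; the selector matching app: jiuxi is an assumption tied to the Pod sketch above.

# jiuxi-svc.yaml -- hypothetical NodePort Service matching the demo
apiVersion: v1
kind: Service
metadata:
  name: jiuxi-svc
spec:
  type: NodePort
  selector:
    app: jiuxi           # routes to Pods carrying this label (assumed)
  ports:
  - port: 80             # ClusterIP port of the Service
    targetPort: 80       # port on the backend Pod
    nodePort: 30088      # node port, must fall in 30000-32767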

        When creating a NodePort service, the user can either specify a port in the range 30000-32767 explicitly, or change the service type by patching, in which case a NodePort is allocated automatically from that range:

kubectl patch svc svc_name -n ns_name -p '{"spec": {"type": "NodePort"}}'


4 Access flow

        The figure clipboard5.png shows what changes occur inside the k8s cluster after a k8s svc is created.

        When the client sends a kubectl apply request to the APIServer, the corresponding service object is created. Every node in the k8s cluster (master and worker nodes alike) runs the kube-proxy process, as shown in screenshot clipboard6.png.
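        A quick way to verify this, assuming kube-proxy runs as a DaemonSet in the kube-system namespace (the common default), is:

# one kube-proxy pod per node, masters included
kubectl get pods -n kube-system -o wide | grep kube-proxy

# or look for the process directly on a node
ps -ef | grep kube-proxy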

        This process is responsible for watching the APIServer for Service and Endpoints changes (pod lifecycle on the node is handled by the kubelet, not kube-proxy). When a k8s service is created, kube-proxy creates the corresponding iptables rules, which forward traffic sent to the service's nodePort to the matching port of the backend Pods.
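        In the default iptables mode these rules can be inspected on any node (run as root); KUBE-NODEPORTS is the chain kube-proxy uses for NodePort traffic:

# list the NodePort rules kube-proxy programmed into the nat table
iptables -t nat -L KUBE-NODEPORTS -n

# or dump them in rule-spec form
iptables -t nat -S KUBE-NODEPORTS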


5 Demo

        After applying the jiuxi-svc.yaml resource file, the result looks like screenshot clipboard7.png.
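        A minimal reproduction of that step, assuming the file names sketched above for this series:

kubectl apply -f jiuxi-pod.yaml   # hypothetical Pod file from the sketch above
kubectl apply -f jiuxi-svc.yaml

# the Service should report type NodePort with PORT(S) 80:30088/TCP
kubectl get svc jiuxi-svc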

        Looking at the network monitoring output, we can see that the process listening on port 30088 is kube-proxy, as shown in screenshot clipboard8.png.
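        The same check can be run from a shell on the node with ss or netstat, whichever is installed:

# show which process owns the NodePort; expect kube-proxy
ss -ltnp | grep 30088
# netstat -tlnp | grep 30088   # older equivalent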

        When we access hostIP:30088 with curl, the packet arrives at port 30088 and the iptables rules programmed by kube-proxy route it to the actual Pod IP and port, according to the endpoints of the k8s service. The service's endpoints are shown in screenshot clipboard9.png.
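        A sketch of that check, with <node-ip> standing in for a real node address:

# hit the NodePort from outside the cluster
curl http://<node-ip>:30088

# list the Pod IP:port pairs the Service forwards to
kubectl get endpoints jiuxi-svc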


6 Disadvantages

        With the NodePort method, the kube-proxy process on every node in the k8s cluster (masters included) opens the relevant port. Exposing many services this way makes port conflicts possible, so it should be used with caution.

Origin blog.51cto.com/14625168/2489488