Kubernetes Quick Start 13-Network

There are four network models for containers (a docker run sketch follows the list):

  1. Bridge: a bridged network; the container gets its own network namespace attached to a virtual bridge
  2. Joined: the container shares the network namespace of another container
  3. Open: the container directly shares the host's network namespace
  4. Closed (none): the container gets an isolated network namespace with only the loopback interface and no external connectivity
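
The sketch below maps each model onto Docker's --network flag (assumptions: a local Docker daemon and the busybox image; container names are illustrative):

# bridge (the default): the container gets its own network namespace attached to the docker0 bridge
docker run -d --name web --network bridge busybox sleep 3600
# joined: reuse the network namespace of the "web" container created above
docker run --rm --network container:web busybox ip addr
# open: share the host's network namespace directly
docker run --rm --network host busybox ip addr
# closed: no external networking, only the loopback interface
docker run --rm --network none busybox ip addr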

The problem with the Docker network model: when containers on different nodes access each other, traffic has to pass through the network of each container's host and undergo SNAT and DNAT translation. The initiating container actually talks to the network of the host on which the target container runs, and the target container is reachable only through a DNAT'd published port. As a result the target container never sees the initiator's real IP address, the initiator never sees the target's real IP address, and the extra SNAT/DNAT translation makes communication inefficient.

Kubernetes network communication comes in four flavors:

  1. Container-to-container communication: containers within the same Pod communicate directly over the lo loopback interface
  2. Pod-to-Pod communication: Pods communicate directly using their Pod IP addresses
  3. Pod-to-Service communication: the Pod IP talks to the Service's ClusterIP
  4. Communication between Services and clients outside the cluster: an external load balancer, Ingress, or a NodePort Service (a NodePort sketch follows this list)
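
As a sketch of the last case, a NodePort Service publishes a Pod-backed service on a fixed port of every node so that clients outside the cluster can reach it (the name and port numbers below are illustrative, not taken from this article's cluster):

apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport        # illustrative name
spec:
  type: NodePort
  selector:
    app: myapp                # selects Pods labeled app=myapp
  ports:
  - port: 80                  # ClusterIP port used inside the cluster
    targetPort: 80            # container port
    nodePort: 30080           # port opened on every node (must fall in the cluster's NodePort range)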

Kubernetes itself does not provide a network implementation. Instead it supports CNI (the Container Network Interface, which is only a specification) as a plug-in mechanism for bringing in a network solution. Common implementations include flannel, calico, and canal; canal is a combination of flannel and calico that adds network-policy support on top of the network provided by flannel.
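
As a concrete example of what a CNI plug-in drops onto a node, flannel's standard deployment writes a configuration list into /etc/cni/net.d/ (typically named something like 10-flannel.conflist); the contents below are a representative sketch and may differ between flannel versions:

{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}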

flannel network

Flannel supports several backends for carrying Pod traffic:

  1. VxLAN: implemented as a tunnel (overlay) network; it carries noticeable overhead and is flannel's default working mode.

    VxLAN itself has two working modes:
    1. Native vxlan
    2. directrouting: direct routing; when two physical nodes are on the same layer-3 network, traffic is routed directly between them, and when the nodes are separated by a router, flannel falls back to the native vxlan tunnel overlay for communication
  2. host-gw (host gateway): the host's physical network card is used as the default gateway of the Pods, and Pod-to-Pod access across physical nodes is handled through routing-table forwarding. All physical nodes must be on the same layer-3 network.

  3. UDP: plain UDP encapsulation; very inefficient and not recommended.

Configuration parameters of flannel

Network: the network address, in CIDR format, that flannel uses to provide the Pod network
SubnetLen: the prefix length used when Network is sliced into per-node subnets; defaults to 24 bits
SubnetMin: the first subnet that may be allocated to a node
SubnetMax: the last subnet that may be allocated to a node
Backend: flannel's working mode: vxlan, host-gw, or udp
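
Put together, these parameters live in the net-conf.json entry of the kube-flannel ConfigMap; a sketch with illustrative values (the default manifest sets only Network and Backend, the subnet values here just show the format):

{
  "Network": "10.244.0.0/16",
  "SubnetLen": 24,
  "SubnetMin": "10.244.1.0",
  "SubnetMax": "10.244.254.0",
  "Backend": {
    "Type": "vxlan"
  }
}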

How flannel works

After the flannel plug-in is installed, it works in the default vxlan mode. The flannel Pods are managed by a DaemonSet controller, so every physical node runs exactly one flannel Pod, and that Pod shares the host's network namespace.
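
This is easy to confirm: listing the flannel resources in the kube-system namespace should show a DaemonSet and one Pod per node (the exact object names depend on the flannel manifest used):

k8s@node01:~$ kubectl get daemonset -n kube-system | grep flannel
k8s@node01:~$ kubectl get pods -n kube-system -o wide | grep flannel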

View the current network interface and routing information on the node02 node

root@node02:~# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.1  netmask 255.255.255.0  broadcast 10.244.1.255
        ...
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
       ...
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.101.41  netmask 255.255.255.0  broadcast 192.168.101.255
        ...

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.0  netmask 255.255.255.255  broadcast 0.0.0.0
        ...
...

The cni0 and flannel.1 interfaces are generated automatically when flannel is deployed. cni0 is a bridge device responsible for communication between the containers on this node. When a Pod on this node needs to reach a Pod on another node, the packets are tunnel-encapsulated through the flannel.1 interface. The MTU of both interfaces is 1450, which reserves some headroom for the tunnel encapsulation headers.
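
The role of the two interfaces can be double-checked with the iproute2 tools (commands only; the exact output depends on the node):

root@node02:~# ip -d link show flannel.1   # the details should include something like "vxlan id 1 ... dstport 8472", i.e. a VXLAN device
root@node02:~# ip link show master cni0    # lists the veth interfaces of the local Pods attached to the cni0 bridge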

Let's take a look at the routing information of the host

root@node02:~# ip route
default via 192.168.101.1 dev ens33 proto static
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 # Pods on this node are reached directly through the cni0 bridge
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink  # Pod networks on other nodes go through the flannel.1 interface for tunnel forwarding
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.101.0/24 dev ens33 proto kernel scope link src 192.168.101.41

The routing table shows that traffic destined for Pods on other nodes is routed to the flannel.1 interface for tunnel encapsulation. Since tunneling is involved, the overhead is relatively high.

Let's trace a ping to see how data travels between Pods.

k8s@node01:~$ kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE    IP             NODE     NOMINATED NODE   READINESS GATES
myapp-0   1/1     Running   1          20h    10.244.2.117   node03   <none>           <none>
myapp-1   1/1     Running   2          2d5h   10.244.1.88    node02   <none>           <none>
myapp-2   1/1     Running   2          2d5h   10.244.2.116   node03   <none>           <none>

# From a Pod running on node03, ping a Pod running on node02
k8s@node01:~$ kubectl exec -it myapp-0 -- /bin/sh
/ # ping 10.244.1.88

# Capture packets on node02
root@node02:~# tcpdump -i flannel.1 -nn icmp  # the ICMP packets are visible on the flannel.1 interface
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
15:03:27.659305 IP 10.244.2.117 > 10.244.1.88: ICMP echo request, id 5888, seq 69, length 64
15:03:27.659351 IP 10.244.1.88 > 10.244.2.117: ICMP echo reply, id 5888, seq 69, length 64
...
root@node02:~# tcpdump -i cni0 -nn icmp  # the ICMP packets are also visible on cni0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cni0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:03:45.672189 IP 10.244.2.117 > 10.244.1.88: ICMP echo request, id 5888, seq 87, length 64
15:03:45.672244 IP 10.244.1.88 > 10.244.2.117: ICMP echo reply, id 5888, seq 87, length 64
...

# No ICMP packets show up on the physical interface ens33 when filtering for icmp: the ping packets from node03 are tunnel-encapsulated
# by flannel (overlay networking) before they reach node02, so they cannot be captured as plain ICMP. Dumping all packets, however, reveals the encapsulated messages, as shown below
root@node02:~# tcpdump -i ens33 -nn
...
13:54:50.800954 IP 192.168.101.42.46347 > 192.168.101.41.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.117 > 10.244.1.88: ICMP echo request, id 3328, seq 4, length 64
13:54:50.801064 IP 192.168.101.41.51235 > 192.168.101.42.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.1.88 > 10.244.2.117: ICMP echo reply, id 3328, seq 4, length 64
...

flannel network optimization

The default vxlan mode uses tunnel forwarding for cross-node access, which is not very efficient. Flannel also supports a Directrouting parameter so that cross-node communication uses plain routing instead of tunnel forwarding, which improves performance.

First download the yaml file that was used to deploy the flannel network plug-in and add the Directrouting parameter.

k8s@node01:~/install_k8s$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Edit the file and add "Directrouting" to the value of the "net-conf.json" key in the ConfigMap resource
k8s@node01:~/install_k8s$ vim kube-flannel.yml
...
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "Directrouting": true   # 增加此key/value,注意上一行尾的逗号
      }
    }
...

# First delete the previously deployed flannel
k8s@node01:~/install_k8s$ kubectl delete -f kube-flannel.yml
# Then deploy flannel again
k8s@node01:~/install_k8s$ kubectl apply -f kube-flannel.yml

Let's take a look at the routing information on the node02 node

root@node02:~# ip route show
default via 192.168.101.1 dev ens33 proto static
10.244.0.0/24 via 192.168.101.40 dev ens33
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
10.244.2.0/24 via 192.168.101.42 dev ens33   # Pod networks on other nodes are now reached directly through the physical interface
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.101.0/24 dev ens33 proto kernel scope link src 192.168.101.41

Now the route to 10.244.2.0/24 no longer uses the flannel.1 interface; it is routed directly out of the physical interface ens33. Running tcpdump on ens33 now captures the ICMP packets exchanged between the Pods directly, as shown below.
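
The check is the same command used earlier on the physical interface, only now the Pod-to-Pod ICMP packets should appear directly instead of being hidden inside OTV/VXLAN frames (output omitted here):

root@node02:~# tcpdump -i ens33 -nn icmp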

The host-gw backend works much like vxlan with directrouting enabled: the host's physical interface is used as the gateway for the Pods. The difference is that host-gw requires all nodes in the cluster to be on the same layer-3 network, with no fallback to tunneling.
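
If all nodes really are on the same layer-3 network, switching to host-gw is just a change of the backend type in net-conf.json (a sketch of the relevant ConfigMap fragment):

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }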

Note: in a production environment, once the k8s cluster is already running workloads, you cannot simply delete flannel, add the parameter, and re-apply it; doing so interrupts communication between the Pods in the cluster. The flannel network should be tuned when the cluster is first deployed.

Network policy based on Calico

For more information about calico, please refer to: https://www.projectcalico.org/

Calico is an open source network and network security solution for containers, virtual machines, and host-based local workloads. Calico supports a wide range of platforms, including Kubernetes, OpenShift, Docker EE, OpenStack and bare metal services.

Flannel solves cross-node Pod-to-Pod communication but has no way to define network policies. Calico can provide both the Pod network and network policy, but it is more complex than flannel and has a steeper learning curve, so here we install Calico on top of the network provided by flannel and use only its network-policy functionality.

For the installation where flannel provides the underlying network and Calico provides network policy, refer to: https://docs.projectcalico.org/getting-started/kubernetes/flannel/flannel

Calico also depends on an etcd datastore. You could build a separate etcd cluster just for Calico, but the k8s cluster already has an etcd service, so Calico can share it. In that case Calico does not read and write etcd directly; instead it goes through the Kubernetes API server.

# What is downloaded is actually the canal network plug-in; project page: https://github.com/projectcalico/canal (its documentation also redirects to the calico docs)
k8s@node01:~/install_k8s$ wget https://docs.projectcalico.org/manifests/canal.yaml
# Applying it creates a large number of resource objects, most of which are CRDs (Custom Resource Definitions)
k8s@node01:~/install_k8s$ kubectl apply -f canal.yaml
configmap/canal-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created
daemonset.apps/canal created
serviceaccount/canal created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

The canal network plug-in is also managed by a DaemonSet controller: each physical node runs exactly one Pod, and that Pod shares the physical node's network namespace. The related resources live in the kube-system namespace.

k8s@node01:~/install_k8s$ kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
calico-kube-controllers-578894d4cd-t66mz   1/1     Running   0          2m3s    10.244.2.126     node03   <none>           <none>
canal-fcpmq                                2/2     Running   0          2m3s    192.168.101.41   node02   <none>           <none>
canal-jknl6                                2/2     Running   0          2m3s    192.168.101.42   node03   <none>           <none>
canal-xsg99                                2/2     Running   0          2m3s    192.168.101.40   node01   <none>           <none>
...

After the installation completes, k8s has an additional resource type named networkpolicy (abbreviated netpol). You can use kubectl explain networkpolicy to view the help information for this resource; the relevant fields are summarized below.

KIND:     NetworkPolicy
VERSION:  networking.k8s.io/v1

FIELDS:
spec    <Object>
    egress  <[]Object>  defines the egress (outbound) rules
        ports   <[]Object>  list of destination ports the egress rule allows
            port    <string>  if not specified, all ports are allowed
            protocol    <string> TCP, UDP, or SCTP; defaults to TCP
        to  <[]Object>  destinations of the outbound traffic; can be any of the following kinds of objects
            ipBlock <Object>  an IP block describing a network segment or a single IP address
                cidr    <string> -required-  address in CIDR format
                except  <[]string>   excluded addresses, also in CIDR format
            namespaceSelector   <Object>  namespace label selector; egress to the selected namespaces
                matchLabels <map[string]string>
                matchExpressions    <[]Object>
            podSelector <Object>  pod label selector; egress to the selected class of Pods
                matchLabels <map[string]string>
                matchExpressions    <[]Object>
    ingress <[]Object>  defines the ingress (inbound) rules
        from    <[]Object>  where the inbound traffic may come from; analogous to "egress.to"
            ipBlock <Object>
            namespaceSelector   <Object>
            podSelector <Object>
        ports   <[]Object>  list of ports that may be accessed; analogous to "egress.ports"
            port    <string>
            protocol    <string>
    podSelector <Object> -required-  which Pods the policy applies to, again selected by labels; if set to empty, i.e. {}, the policy applies to all Pods in the namespace
        matchLabels <map[string]string>
        matchExpressions    <[]Object>
    policyTypes <[]string>  policy types: "Ingress", "Egress", or "Ingress,Egress". Explicitly declaring the types affects the default inbound or outbound behavior. Taking "Ingress" as an example: 1. if the value is "Ingress", only ingress rules take effect and egress rules are ignored; 2. if no ingress rule is defined, the default ingress behavior is to deny everything; 3. if ingress rules are defined, traffic is handled according to those rules; 4. if an ingress rule is empty, i.e. "{}", all inbound traffic is allowed. "Egress" behaves analogously; experiment to see the details.
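
To make the policyTypes behavior concrete: a policy that selects every Pod, declares both types, and defines no rules denies all inbound and all outbound traffic in the namespace it is applied to (a sketch; the name is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all    # illustrative name
spec:
  podSelector: {}           # all Pods in the namespace the policy is applied to
  policyTypes:              # both types declared, but no ingress/egress rules defined,
  - Ingress                 # so both directions fall back to the default deny
  - Egress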

Create two namespaces for testing

k8s@node01:~/install_k8s$ kubectl create namespace dev
k8s@node01:~/install_k8s$ kubectl create namespace prod

Then run Pods managed by a Deployment controller in each of the two namespaces.

k8s@node01:~/networkpolicy$ cat deployment-pods.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: ikubernetes/myapp:v1
k8s@node01:~/networkpolicy$ kubectl apply -f deployment-pods.yaml -n dev
deployment.apps/myapp-deploy created
k8s@node01:~/networkpolicy$ kubectl apply -f deployment-pods.yaml -n prod
deployment.apps/myapp-deploy created

k8s@node01:~/networkpolicy$ kubectl get pods -n dev -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
myapp-deploy-6f96ddbbf9-x4jps   1/1     Running   0          96s   10.244.2.3   node03   <none>           <none>
myapp-deploy-6f96ddbbf9-xs227   1/1     Running   0          96s   10.244.1.3   node02   <none>           <none>
k8s@node01:~/networkpolicy$ kubectl get pods -n prod -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
myapp-deploy-6f96ddbbf9-92s5g   1/1     Running   0          88s   10.244.1.4   node02   <none>           <none>
myapp-deploy-6f96ddbbf9-djwc6   1/1     Running   0          88s   10.244.2.4   node03   <none>           <none>

Without any network policy in place, these 4 Pods can all communicate with each other. Now let's use networkpolicy resources to define network policies.

k8s@node01:~/networkpolicy$ cat netpol-test.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpo-test
spec:
  podSelector: {} # all Pods in the namespace the policy is applied to
  policyTypes:
  - Ingress
k8s@node01:~/networkpolicy$ kubectl apply -f netpol-test.yaml -n dev

This policy defines neither ingress nor egress rules, but the policy type is declared as Ingress, which means all inbound traffic to Pods in the dev namespace falls back to the default deny. From a Pod in the prod namespace, ping the Pods in dev and see whether they respond.

k8s@node01:~/networkpolicy$ kubectl exec myapp-deploy-6f96ddbbf9-92s5g -n prod -- ping 10.244.2.3
PING 10.244.2.3 (10.244.2.3): 56 data bytes
^C
k8s@node01:~/networkpolicy$ kubectl exec myapp-deploy-6f96ddbbf9-92s5g -n prod -- ping 10.244.1.3
PING 10.244.1.3 (10.244.1.3): 56 data bytes
^C
# neither Pod can be pinged

Now check whether the Pods within the dev namespace can still ping each other.

k8s@node01:~/networkpolicy$ kubectl exec myapp-deploy-6f96ddbbf9-xs227 -n dev -- ping 10.244.2.3
PING 10.244.2.3 (10.244.2.3): 56 data bytes

# still unreachable

This policy rejects all inbound traffic to the Pods. Now modify it to allow inbound traffic to all Pods in the dev namespace.

k8s@node01:~/networkpolicy$ cat netpol-test.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpo-test
spec:
  podSelector: {} # all Pods in the namespace the policy is applied to
  ingress:  # add an ingress rule, but leave it empty
  - {}
  policyTypes:
  - Ingress

k8s@node01:~/networkpolicy$ kubectl apply -f netpol-test.yaml -n dev

The ingress rule is set to empty and the policy type is Ingress, which means all inbound traffic is now allowed.

# The two Pods in the dev namespace can also be reached directly from the host, and their services respond normally
k8s@node01:~/networkpolicy$ curl 10.244.2.3
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
k8s@node01:~/networkpolicy$ curl 10.244.1.3
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

Now modify the policy to allow the 10.244.0.0/16 network to access all ports of all Pods in the dev namespace, but exclude the Pod in the prod namespace whose address is 10.244.1.4/32.

k8s@node01:~/networkpolicy$ cat netpol-test.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpo-test
spec:
  podSelector: {} # all Pods in the namespace the policy is applied to
  ingress:
  - from:
    - ipBlock:
        cidr: 10.244.0.0/16
        except:
        - 10.244.1.4/32
  policyTypes:
  - Ingress
k8s@node01:~/networkpolicy$ kubectl apply -f netpol-test.yaml -n dev

Test from the 2 Pods in the prod namespace.

# the Pod with address 10.244.1.4/32 cannot reach the service
k8s@node01:~/networkpolicy$ kubectl exec myapp-deploy-6f96ddbbf9-92s5g -n prod -- /usr/bin/wget -O - -q 10.244.2.3
^C
# the other Pod can access it normally
k8s@node01:~/networkpolicy$ kubectl exec myapp-deploy-6f96ddbbf9-djwc6 -n prod -- /usr/bin/wget -O - -q 10.244.2.3
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

Now suppose we want the Pods in the dev namespace to be able to access each other, while Pods in the prod namespace cannot access Pods in dev. This requires label selectors, so first label the Pods in the dev namespace.

k8s@node01:~/networkpolicy$ kubectl get pods -n dev
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-6f96ddbbf9-x4jps   1/1     Running   0          45m
myapp-deploy-6f96ddbbf9-xs227   1/1     Running   0          45m
k8s@node01:~/networkpolicy$ kubectl label pod myapp-deploy-6f96ddbbf9-x4jps ns=dev -n dev
pod/myapp-deploy-6f96ddbbf9-x4jps labeled
k8s@node01:~/networkpolicy$ kubectl label pod myapp-deploy-6f96ddbbf9-xs227 ns=dev -n dev
pod/myapp-deploy-6f96ddbbf9-xs227 labeled
k8s@node01:~/networkpolicy$ kubectl get pods -n dev --show-labels
NAME                            READY   STATUS    RESTARTS   AGE   LABELS
myapp-deploy-6f96ddbbf9-x4jps   1/1     Running   0          46m   app=myapp,ns=dev,pod-template-hash=6f96ddbbf9
myapp-deploy-6f96ddbbf9-xs227   1/1     Running   0          46m   app=myapp,ns=dev,pod-template-hash=6f96ddbbf9

Label the namespaces as well.

k8s@node01:~/networkpolicy$ kubectl label ns prod ns=prod
namespace/prod labeled
k8s@node01:~/networkpolicy$ kubectl label ns dev ns=dev
namespace/dev labeled
k8s@node01:~/networkpolicy$ kubectl get ns --show-labels
NAME                   STATUS   AGE    LABELS
default                Active   10d    <none>
dev                    Active   89m    ns=dev
ingress-nginx          Active   5d2h   app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-node-lease        Active   10d    <none>
kube-public            Active   10d    <none>
kube-system            Active   10d    <none>
kubernetes-dashboard   Active   31h    <none>
prod                   Active   88m    ns=prod

Modify the policy file again

k8s@node01:~/networkpolicy$ cat netpol-test.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpo-test
spec:
  podSelector:
    matchLabels:
      ns: dev
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns: dev
  policyTypes:
  - Ingress
k8s@node01:~/networkpolicy$ kubectl apply -f netpol-test.yaml -n dev

Now the two Pods in the prod namespace cannot access the Pods in the dev namespace, while the Pods within dev can still access each other.

# cannot be accessed
k8s@node01:~/networkpolicy$ kubectl exec myapp-deploy-6f96ddbbf9-djwc6 -n prod -- /usr/bin/wget -O - -q 10.244.2.3
^C
k8s@node01:~/networkpolicy$ kubectl exec myapp-deploy-6f96ddbbf9-92s5g -n prod -- /usr/bin/wget -O - -q 10.244.2.3
^C

k8s@node01:~/networkpolicy$ kubectl get pods -n dev -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
myapp-deploy-6f96ddbbf9-x4jps   1/1     Running   0          60m   10.244.2.3   node03   <none>           <none>
myapp-deploy-6f96ddbbf9-xs227   1/1     Running   0          60m   10.244.1.3   node02   <none>           <none>
# Pods within the dev namespace can access each other
k8s@node01:~/networkpolicy$ kubectl exec myapp-deploy-6f96ddbbf9-xs227 -n dev -- /usr/bin/wget -O - -q 10.244.2.3
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

Now suppose we also want to open the service on port 80 to the Pods in prod; modify the policy file as follows.

k8s@node01:~/networkpolicy$ cat netpol-test.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpo-test
spec:
  podSelector:
    matchLabels:
      ns: dev
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns: dev
  - from:  # add a rule granting access to the prod namespace
    - namespaceSelector:
        matchLabels:
          ns: prod
    ports:
    - protocol: TCP
      port: 80
  policyTypes:
  - Ingress
k8s@node01:~/networkpolicy$ kubectl apply -f netpol-test.yaml -n dev

Now the Pods in prod can access the http service provided by the Pods in the dev namespace, but only the service on port 80.

k8s@node01:~/networkpolicy$ kubectl exec myapp-deploy-6f96ddbbf9-djwc6 -n prod -- /usr/bin/wget -O - -q 10.244.2.3
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
k8s@node01:~/networkpolicy$ kubectl exec myapp-deploy-6f96ddbbf9-92s5g -n prod -- /usr/bin/wget -O - -q 10.244.2.3
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

# ICMP is not opened by the policy, so ping still fails
k8s@node01:~/networkpolicy$ kubectl exec myapp-deploy-6f96ddbbf9-92s5g -n prod -- ping 10.244.2.3
PING 10.244.2.3 (10.244.2.3): 56 data bytes
^C

The idea behind networkpolicy is similar to iptables: it filters both inbound and outbound traffic for the selected Pods.
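
For completeness, here is a sketch of an outbound policy in the same style: it would allow Pods in the namespace it is applied to reach only the 10.244.0.0/16 Pod network and DNS, and deny all other egress (the name is illustrative, and this variant is not part of the walkthrough above):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-test          # illustrative name
spec:
  podSelector: {}            # all Pods in the namespace the policy is applied to
  egress:
  - to:
    - ipBlock:
        cidr: 10.244.0.0/16  # allow traffic to the Pod network
  - ports:
    - protocol: UDP
      port: 53               # allow DNS queries
    - protocol: TCP
      port: 53
  policyTypes:
  - Egress

As with the ingress examples, it would be applied with kubectl apply -f <file> -n <namespace> and then tested from inside the Pods.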
