Micro-segmentation of Cloud-Native Networks

Blog address: https://security.blog.csdn.net/article/details/130044619

1. Introduction to micro-segmentation

1.1. The concept of micro-segmentation

In a zero-trust model, whenever a subject performs an action, the subject's privileges and behavior are evaluated. The most common embodiment of this is network access control, that is, Zero Trust Network Access (ZTNA). ZTNA is a very important technical branch of zero-trust implementation, and micro-segmentation, as one of the key technologies for realizing ZTNA, plays an important role in the construction of cloud-native network security.

Micro-segmentation is a finer-grained network isolation technology whose core capability is the control of east-west traffic. It isolates and controls east-west traffic in traditional environments, virtualized environments, hybrid-cloud environments, and container environments, with the primary goal of blocking lateral movement by attackers who have already gained access to a data center network or cloud virtual network.

Micro-segmentation differs from traditional perimeter-based firewall isolation. It usually takes a software-defined approach in which the policy control center is separated from the policy enforcement units, and it is typically distributed and adaptive.

The policy control center is the core control unit of a micro-segmentation system. It visualizes the network access relationships between internal systems and business applications, quickly classifies and groups the workloads that need to be isolated according to roles and labels, and efficiently and flexibly configures isolation policies between workloads and business applications.

The policy enforcement unit monitors network traffic and enforces the isolation policies; it is usually implemented as a virtual appliance or as an agent on the host.

1.2. Two micro-segmentation mechanisms in cloud-native environments

1. Implementation based on Network Policy

A Network Policy is a Kubernetes resource that describes how a set of Pods are allowed to communicate with each other and with other network endpoints. A Network Policy selects Pods by label and defines the communication rules allowed for the selected Pods. Each Pod's network traffic has two directions: ingress and egress.

By default, all Pods are non-isolated and can communicate with each other freely; in effect, Kubernetes starts from a blacklist model. Once a Network Policy is defined for a Pod, only the traffic that the policy allows can reach it. In practice, to achieve a more effective and precise isolation effect, this default blacklist mechanism is turned into a whitelist mechanism: when a Pod is initialized, its Network Policy is set to deny all, and fine-grained rules are then formulated according to inter-service communication needs, precisely selecting the traffic that is allowed to flow. A minimal sketch of this approach follows.
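As a minimal sketch of the whitelist approach (the namespace and labels below are hypothetical, not from the original post), the first manifest denies all ingress traffic to every Pod in a namespace, and the second re-allows ingress to Pods labeled app: web from Pods labeled app: frontend on TCP port 80. A full deny-all would also list Egress under policyTypes:

# Default deny: selects every Pod in the namespace and allows no ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo            # hypothetical namespace
spec:
  podSelector: {}            # empty selector = all Pods in the namespace
  policyTypes:
    - Ingress
---
# Whitelist rule: allow frontend Pods to reach web Pods on TCP/80
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-web
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: web               # hypothetical label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 80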

Kubernetes only defines the interface specification for Network Policy; the actual functionality is implemented by third-party plug-ins. Therefore, network policy configuration usually takes effect only with network or security plug-ins that support Network Policy, such as Calico and Cilium.

2. Implementation based on Sidecar

Another way to implement micro-segmentation is the Sidecar approach used in the Service Mesh architecture. In the traffic management model of a Service Mesh such as Istio, each service is deployed together with a Sidecar proxy (such as Envoy). All traffic sent and received by services in the mesh passes through these Sidecar proxies, which makes controlling traffic in the mesh straightforward: micro-segmentation can be achieved without any changes to the services themselves, driven by the control plane outside the mesh. A hedged example is sketched below.
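For illustration only (the namespace, labels, and service account are hypothetical), Istio expresses this kind of Sidecar-enforced segmentation with an AuthorizationPolicy; the sketch below allows only the frontend service account to issue HTTP GET requests to the web workload:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-get
  namespace: demo                # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: web                   # hypothetical workload label
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/demo/sa/frontend"]
      to:
        - operation:
            methods: ["GET"]

Because an ALLOW policy now matches the web workload, requests that match none of its rules are rejected by the Sidecar, which yields the whitelist behavior described above.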

2. Introduction to Cilium

Open source address and documentation: https://github.com/cilium/cilium

2.1. Concept

Cilium is an open-source cloud-native networking solution. Unlike other networking solutions, Cilium emphasizes its strengths in network security: it can transparently secure and monitor the network connections between application services on container management platforms such as Kubernetes.

Cilium is designed and implemented on top of eBPF, a newer Linux kernel technology that can dynamically insert powerful security, visibility, and network-control logic into the kernel, so security policies can be applied and updated without modifying application code or container configuration. Its characteristics mainly include the following three aspects:

⬤ Provides basic network interconnection in Kubernetes, realizing fundamental connectivity for Pods and Services within container clusters.
⬤ Relies on eBPF to provide network observability in Kubernetes, together with basic security capabilities such as network isolation and troubleshooting.
⬤ Relies on eBPF to break through the limitation of traditional host firewalls, which only support L3/L4 micro-segmentation, and supports API-aware network security filtering. Cilium provides a simple and effective way to define and enforce network-layer and application-layer (HTTP/gRPC/Kafka, etc.) security policies based on container/Pod identity (see the sketch after this list).
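As a hedged sketch of the L7 capability above (labels, namespace, and path are hypothetical), a CiliumNetworkPolicy can constrain not just ports but also the HTTP methods and paths one identity may call on another:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: web-l7-allow
  namespace: demo                # hypothetical namespace
spec:
  endpointSelector:
    matchLabels:
      app: web                   # policy applies to these endpoints
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend        # identity-based source selection
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:                # L7 filtering beyond L3/L4
              - method: "GET"
                path: "/api/v1/.*"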

2.2. Architecture

Cilium sits between the container orchestration system and the Linux kernel. Facing upward, it configures networking and the corresponding security settings for containers through the orchestration platform; facing downward, it controls container network forwarding and enforces security policies by attaching eBPF programs to the Linux kernel.

In the Cilium architecture, the main components include Cilium Agent and Cilium Operator.

⬤ Cilium Agent, the core component of the architecture, runs as a privileged container on every host in the cluster via a DaemonSet. As a user-space daemon, the agent interacts with the container runtime and the container orchestration system through plug-ins to apply network- and security-related configuration to the containers on its node, and it exposes an open API for other components to call.

⬤ Cilium Operator is responsible for cluster-wide tasks, handling work once per cluster rather than once per node wherever possible. This mainly includes synchronizing resource information between nodes through etcd, ensuring that Pod DNS can be managed by Cilium, and managing and updating cluster-wide Network Policies.

The Cilium architecture is shown in the figure:
[Figure: Cilium architecture]

2.3. Networking modes

Cilium provides a variety of networking modes; VXLAN-based overlay networking is used by default. In addition, it also supports:

⬤ Networking and interconnection of Pods across clusters via BGP routing;
⬤ Deployment in AWS ENI (Elastic Network Interface) mode;
⬤ Integrated deployment with Flannel;
⬤ ipvlan-based networking instead of the default veth-based networking;
⬤ Cluster Mesh networking, providing network connectivity and security across multiple Kubernetes clusters.

2.4. Observability

Cilium provides a network visualization component called Hubble. Built on top of Cilium and eBPF, Hubble delivers deep, completely transparent visibility into network infrastructure communication and application behavior; it is a fully distributed network and security observability platform for cloud-native workloads.

Hubble leverages the eBPF data path provided by Cilium to gain deep visibility into the network traffic of Kubernetes applications and services. This traffic information feeds the Hubble CLI and UI tools, with which network and security problems can be discovered and diagnosed quickly and interactively. Beyond its own tooling, Hubble can also integrate with mainstream cloud-native monitoring systems such as Prometheus and Grafana to implement scalable monitoring strategies.

2.5. Installation

# Download and extract
wget https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin

# Install Cilium into the cluster
cilium install --kube-proxy-replacement=strict

# Enable the Hubble visualization component and its UI
cilium hubble enable --ui

# Check status
cilium status

# List Services
cilium service list
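As an optional follow-up (assuming a cilium CLI version that ships these subcommands), the installation can be sanity-checked and the Hubble UI opened from the same CLI:

# Run an end-to-end connectivity test of the installation
cilium connectivity test

# Port-forward and open the Hubble UI in a browser
cilium hubble ui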
