Are there significant performance issues with the current NFV architecture?

Generally, vCPE is deployed as VMs or containers on a multi-core host: OVS provides the virtual network, and DPDK runs inside each container/VM for best performance. Several problems arise here:
1. The load across cores is unbalanced: some VNFs may sit idle while others are busy and become the bottleneck. Without core pinning, context switches add further latency.
2. Each VNF must fetch packets from a virtual NIC, whether OVS or SR-IOV. That means kernel/user mode switches, packet parsing and reassembly, and possibly even a round trip through hardware; even an OVS software switch forces a detour there and back.
3. Packet data is processed on different CPUs, which means writing to DDR, loading the next CPU's cache, flushing, and repeating for each hop. DDR bandwidth is itself a performance bottleneck.

Overall, a packet passes through multiple VNFs and therefore multiple CPUs; latency is high and CPU load is unbalanced. If this horizontal division is folded into a vertical one, performance can improve by more than 10x:
1. Each VNF process becomes a callback, and a packet runs through the whole series of callbacks from ingress to egress on a single CPU. The packet data stays in the L1 cache throughout, which is extremely efficient.
2. A VNF only forwards a packet onward or drops it, which is at most as complicated as a flowchart. The VNF receives the packet through its call parameters and needs no knowledge of the hardware.
3. The complex functions of OVS, such as packet-filtering rules and overlays, become one or more standard VNFs inserted into the processing pipeline.
4. If a packet stays on the same CPU from the moment it arrives until it leaves, multi-core utilization falls to RSS or other load balancing, which is very mature.
5. The VNF control plane can still be isolated and managed in a container. The container and the VNF should communicate through shared memory as much as possible to avoid switching overhead.

Questions:
1. Security: will VNFs interfere with and corrupt each other? The operator's programs should be fully tested; and even if a fault slips through, once a VNF is isolated the problem is easy to locate. I don't know whether a program can switch its namespace dynamically; if it can, a misbehaving VNF could be isolated on the fly. In the long run, especially once the VNFs are stable, this non-isolated mode can be chosen for performance, with VMs or containers used for isolation in the early stage.
2. Will OpenStack still support it? This is a new platform besides VM/container, so new plugins would be needed.
3. What if it is still not secure? Turn the vCPE, or each core, into a bare-metal mode where the system exposes only a limited API. Performance? Gaining an order of magnitude is still fine, right? In essence, NFV is a solution that converts hardware network elements into software. The "V" does not mean everything must be virtualized into VMs or containers; that is a mistaken idea. The purpose of going to software is flexible control, easy upgrades, and use of cheap commodity compute. As long as those goals are met, VM/container is simply not suited to a VNF's serial data-processing flow!

OPNFV uses OpenStack as its reference implementation, but that is an early, transitional approach; performance can still be improved by orders of magnitude.
