Open vSwitch Quality of Service (QoS) Rate Limiting

This document explains how to use Open vSwitch to rate-limit VM traffic to 1 Mbps or 10 Mbps.

Setup

This guide assumes the environment is configured as follows.

One physical network

  • Data network

    Ethernet network for VM data traffic. This network is used to send and receive traffic between the VMs and an external host, which measures the VMs' transmission rates. For this experiment the physical network is optional: you can instead connect all the VMs to a bridge that is not attached to any physical interface, and use one VM as the measurement host.

There may be other networks (for example, for management traffic), but this guide is only concerned with the data network.

Two physical hosts

The first host, named host1, runs Open vSwitch and a hypervisor and has a single NIC, eth0, which connects to the data network. Because eth0 belongs to an OVS bridge, no IP address can be assigned to it.

The second host, named Measurement Host, can be any host capable of measuring VM throughput. For this guide we use netperf, a free tool that measures the transmission rate between two hosts. The Measurement Host has a single eth0 NIC connected to the data network, with an IP address that can reach every VM on host1.

Two VMs

Two VMs (vm1 and vm2) run on host1.

Each VM has a single interface that appears on the physical host as a Linux device (e.g., tap0).

Note:
On Xen/XenServer, VM interfaces appear as Linux devices with names such as vif1.0. Other Linux systems may show these ports as vnet0, vnet1, etc.
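If you are not sure which Linux device belongs to which VM, ovs-vsctl can enumerate the ports and interfaces that OVS knows about. The bridge name br0 below is an assumption; substitute the bridge used in your deployment:

```shell
# List the ports attached to an OVS bridge (bridge name "br0" is an assumption):
ovs-vsctl list-ports br0

# Alternatively, list the name of every interface record in the OVS database:
ovs-vsctl --columns=name list interface
```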

Configuration Steps

For both VMs, we configure the rate-limiting policy by setting two values on each VM's entry in the Interface table:

ingress_policing_rate
the maximum rate (in Kbps) that the VM is allowed to send

ingress_policing_burst
a parameter to the policing algorithm indicating the maximum amount of data (in Kb) that the interface may send beyond the policing rate

To rate-limit vm1 to 1 Mbps, use these commands:

$ ovs-vsctl set interface tap0 ingress_policing_rate=1000
$ ovs-vsctl set interface tap0 ingress_policing_burst=100

Similarly, to rate-limit vm2 to 10 Mbps, enter these commands on host1:

$ ovs-vsctl set interface tap1 ingress_policing_rate=10000
$ ovs-vsctl set interface tap1 ingress_policing_burst=1000
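The two set calls for each interface can also be combined into a single ovs-vsctl invocation, either by passing both columns to one set command or by chaining commands with "--"; both forms are standard ovs-vsctl usage:

```shell
# Both policing columns set in one "set" command:
ovs-vsctl set interface tap1 ingress_policing_rate=10000 ingress_policing_burst=1000

# Equivalent form using ovs-vsctl command chaining ("--"):
ovs-vsctl -- set interface tap1 ingress_policing_rate=10000 \
          -- set interface tap1 ingress_policing_burst=1000
```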

To view the current limits applied to vm1, run this command:

$ ovs-vsctl list interface tap0
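To inspect just the two policing columns rather than the whole Interface record, ovs-vsctl get can be used:

```shell
# Print only the policing columns for tap0:
ovs-vsctl get interface tap0 ingress_policing_rate
ovs-vsctl get interface tap0 ingress_policing_burst
```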

Testing

To test the configuration above, make sure netperf is installed and running on the two VMs and on the Measurement Host. netperf consists of a client (netperf) and a server (netserver). In this example we run netserver on the Measurement Host (installing the netperf package usually starts netserver as a daemon, meaning it runs by default).

For this example, we assume the Measurement Host's IP address is 10.0.0.100 and that it is reachable from both VMs.

On vm1, run the following command:

$ netperf -H 10.0.0.100

This measures the rate at which vm1 can send TCP traffic to the Measurement Host. After 10 seconds, it prints its results. We are interested in the "Throughput" value, reported in Mbps (10^6 bits/second). For vm1 this value should be close to 1; running the same command on vm2 should give a result close to 10.
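If you want to script the measurement, the "Throughput" figure can be pulled out of netperf's TCP_STREAM output: it is the fifth field of the final results line. The field position assumes the classic netperf output layout, which may vary between versions:

```shell
#!/bin/sh
# parse_throughput: extract the "Throughput" column (10^6 bits/sec) from
# netperf TCP_STREAM output; it is the 5th field of the final results line.
parse_throughput() {
    tail -n 1 | awk '{print $5}'
}

# Live usage (requires netperf and a reachable netserver):
#   netperf -H 10.0.0.100 | parse_throughput

# Demonstration on a captured sample results line from a netperf run:
printf '%s\n' ' 87380  16384  16384    10.00       0.96' | parse_throughput
# prints "0.96"
```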

Troubleshooting

Open vSwitch uses the Linux traffic-control capability to implement rate limiting. If the configured rate limit appears to have no effect, make sure your kernel was compiled with the "ingress qdisc" option enabled and that the user-space traffic-control utilities (e.g., /sbin/tc) are installed.
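A few quick checks can narrow down where the problem lies. The interface name tap0 and the presence of /proc/config.gz are assumptions about your system; not every kernel exposes its build configuration there:

```shell
# Confirm the tc user-space utility is installed:
command -v tc

# Show the qdiscs attached to the VM's interface; an "ingress" entry should
# appear once policing is configured:
tc qdisc show dev tap0

# If the kernel exposes its build config, check for ingress qdisc support:
zgrep CONFIG_NET_SCH_INGRESS /proc/config.gz
```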

Additional Information

Open vSwitch implements rate limiting with policing, which does not queue packets: it simply drops any packets that exceed the specified rate. Specifying a larger burst size makes the algorithm more forgiving, which is especially important for protocols such as TCP that react severely to dropped packets. Avoid setting the burst size lower than the MTU of the network (e.g., 10 Kb).

For TCP traffic, setting the burst size to a sizable fraction of the overall policing rate (e.g., > 10%) helps a flow come closer to achieving the full rate. If the burst size is set to a very large fraction of the overall rate, the client will actually achieve an average rate slightly higher than the configured policing rate.

For UDP traffic, set the burst size slightly larger than the MTU, and make sure your performance tool does not send packets larger than the MTU (otherwise the packets will be fragmented, resulting in poor performance). For example, you can force netperf to send UDP traffic in 1000-byte packets by running:

$ netperf -H 10.0.0.100 -t UDP_STREAM -- -m 1000
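To remove the limits after testing, set ingress_policing_rate back to 0, which is the documented "no policing" value for the interface:

```shell
# Disable ingress policing on both VM interfaces:
ovs-vsctl set interface tap0 ingress_policing_rate=0 ingress_policing_burst=0
ovs-vsctl set interface tap1 ingress_policing_rate=0 ingress_policing_burst=0
```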

Origin: blog.csdn.net/sinat_20184565/article/details/94654914