k8s-ovs uses openvswitch to provide SDN functionality for Kubernetes, supporting both a single-tenant and a multi-tenant mode

k8s-ovs

==============================

A personal note: I am currently looking for a job. If you are recruiting for Kubernetes-related research and development, please feel free to contact me. My resume can be downloaded from Baidu netdisk: https://pan.baidu.com/s/1jI20TWa. Thank you.

k8s-ovs is a project that uses openvswitch to provide SDN functionality for Kubernetes. It is based on the design principles of the OpenShift SDN. Because the OpenShift SDN is coupled to the OpenShift code itself and cannot serve a standalone Kubernetes cluster as a plugin the way flannel, calico and other network solutions can, I developed k8s-ovs: it provides OpenShift's excellent SDN features while serving a standalone Kubernetes cluster.

Part of the project's code is copied directly, with minor modifications, from OpenShift's pkg/sdn/plugin. If you see any problem with the license terms, please feel free to contact me so I can correct it: [email protected].

If you have any questions about this project, you are welcome to join the QQ group k8s-ovs-sdn (477023854) for discussion.

The following sections describe the features and installation of k8s-ovs in detail. If you want to learn how to configure the individual features, you can jump to admin.md.

k8s-ovs features


k8s-ovs supports both a single-tenant mode and a multi-tenant mode.

  • In single-tenant mode, openvswitch + vxlan joins all Kubernetes PODS into one large layer-2 network, and every POD can communicate with every other POD.
  • Multi-tenant mode also builds the POD network with openvswitch + vxlan, but it allocates a virtual network per Kubernetes NAMESPACE, forming isolated tenant networks: PODS in one NAMESPACE cannot access the PODS and SERVICES of other NAMESPACES.
  • In multi-tenant mode, selected NAMESPACES can be made global, so that their PODS and the PODS and SERVICES of all other NAMESPACES can access each other.
  • In multi-tenant mode, the virtual networks of two NAMESPACES can be joined, allowing their PODS and SERVICES to access each other.
  • Joined NAMESPACE virtual networks can likewise be separated again in multi-tenant mode.
  • Both single-tenant and multi-tenant modes support POD traffic shaping, which ensures PODS on the same host share the NIC bandwidth fairly and prevents one POD's traffic from saturating the NIC and leaving other PODS unable to work properly (see the sketch after this list).
  • Both single-tenant and multi-tenant modes support external load balancing.
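
k8s-ovs borrows its SDN implementation from OpenShift, whose traffic shaping is driven by the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth pod annotations. Assuming k8s-ovs inherits that mechanism (this document does not confirm it; see admin.md for the authoritative configuration), a rate-limited POD might be declared like this, with illustrative limit values:

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
  annotations:
    kubernetes.io/ingress-bandwidth: 10M   # hypothetical ingress cap
    kubernetes.io/egress-bandwidth: 10M    # hypothetical egress cap
spec:
  containers:
  - name: app
    image: nginx
EOF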

Installation


To deploy and install, you need at least 3 servers: one as the Kubernetes master and the other two as nodes. My test environment is CentOS 7.2 with docker 1.12.6 and golang 1.7.1. Every node needs openvswitch-2.5.0 or later installed, and ovsdb-server and ovs-vswitchd must be running on every node.
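
Before continuing, it is worth confirming on each node that Open vSwitch meets these requirements and that both daemons are actually running. A quick check (standard OVS and systemd commands; service names can vary with packaging):

$ ovs-vsctl --version                            # should report 2.5.0 or later
$ systemctl is-active ovsdb-server ovs-vswitchd  # both should print "active"
$ ovs-vsctl show                                 # succeeds only if ovsdb-server is reachable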

K8S cluster installation

Please refer to the Kubernetes installation manual. Installing v1.6.0 or later is recommended, because earlier versions of kubelet have an IP address leak problem when using CNI.

1. During the Kubernetes cluster installation, skip the network deployment step; the network will be deployed by k8s-ovs as described below.

2. During installation, kubelet needs to be configured to use CNI, i.e. kubelet must be started with --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin. If kubelet itself runs in a container, /etc/cni/net.d, /opt/cni/bin and /var/run/ need to be mounted into the kubelet container.
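
For a systemd-managed kubelet, one way to pass these flags is a drop-in file. The sketch below assumes, as with kubeadm's default unit, that kubelet's ExecStart expands a $KUBELET_NETWORK_ARGS variable; the drop-in path and variable name are illustrative and must match your own kubelet unit:

$ cat <<'EOF' > /etc/systemd/system/kubelet.service.d/10-k8s-ovs-cni.conf
[Service]
# drop-in adding the CNI flags to kubelet's startup arguments
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
EOF
$ systemctl daemon-reload
$ systemctl restart kubelet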

3. When the Kubernetes installation completes, the nodes will be in the NotReady state shown below. This is because the network has not been deployed yet, so kubelet finds no CNI configuration file under /etc/cni/net.d/; the nodes will recover once the network has been deployed.

$ kubectl get node
NAME        STATUS     AGE       VERSION
sdn-test1   NotReady   10s       v1.6.4
sdn-test2   NotReady   4m        v1.6.4
sdn-test3   NotReady   6s        v1.6.4

Installing k8s-ovs

There are two ways to install; choose whichever suits you. 1. Quick installation: use a single yaml to deploy k8s-ovs to the Kubernetes cluster in one step, running it as a daemonset. 2. Detailed installation: install each k8s-ovs component step by step, which gives you an understanding of the dependencies between the k8s-ovs components.

Both procedures below assume that you have installed the Kubernetes cluster following the steps above, and that ovsdb-server and ovs-vswitchd are up and running on every node.

Quick Installation

Quick installation requires a Kubernetes cluster of version 1.6 or later; for a 1.5 or 1.4 cluster, download the yaml file and amend it accordingly.

$ kubectl apply -f https://raw.githubusercontent.com/tangle329/k8s-ovs/master/rootfs/k8s-ovs.yaml

After the above command returns successfully, you can get the pod and node status by running the following query to verify that the installation was successful:

$ kubectl get pod --namespace=kube-system | grep k8s-ovs
k8s-ovs-etcd-h0fsc                                   1/1       Running   0          2h
k8s-ovs-node-c27jr                                   1/1       Running   0          2h
k8s-ovs-node-fxwwl                                   1/1       Running   0          2h
k8s-ovs-node-p09jd                                   1/1       Running   0          2h
$ kubectl get node
NAME        STATUS    AGE       VERSION
sdn-test1   Ready     11m       v1.6.4
sdn-test2   Ready     15m       v1.6.4
sdn-test3   Ready     11m       v1.6.4

At this point the k8s-ovs deployment is complete; you can jump to admin.md to configure its features.

Detailed installation

Detailed installation works on Kubernetes clusters from v1.4 onward. The following commands need to be run on every Kubernetes node; alternatively, you can compile the files once on one server and then push the corresponding files to all nodes with a bulk deployment tool. You can also use the RPM SPEC from the k8s-ovs-rpm project to produce RPM packages for this project, and then simply install the RPM package to accomplish everything the commands below do.

$ cd $GOPATH/src/
$ git clone https://github.com/tangle329/k8s-ovs.git
$ cd k8s-ovs
$ go build -o rootfs/opt/cni/bin/k8s-ovs k8s-ovs/cniclient
$ cp rootfs/opt/cni/bin/k8s-ovs /opt/cni/bin/
$ cp rootfs/opt/cni/bin/host-local /opt/cni/bin/
$ cp rootfs/opt/cni/bin/loopback /opt/cni/bin/
$ cp rootfs/etc/cni/net.d/80-k8s-ovs.conf /etc/cni/net.d/
$ go build -o rootfs/usr/sbin/k8s-ovs k8s-ovs
$ cp rootfs/usr/sbin/k8s-ovs /usr/sbin/
$ cp rootfs/usr/sbin/k8s-sdn-ovs /usr/sbin/

The first build, go build -o rootfs/opt/cni/bin/k8s-ovs k8s-ovs/cniclient, produces the k8s-ovs CNI client; kubelet calls it when creating or deleting a POD to configure that POD's networking. The second build, go build -o rootfs/usr/sbin/k8s-ovs k8s-ovs, produces the core of k8s-ovs: it implements all of the features described earlier, and it also acts as the CNI server, accepting and handling the requests of the CNI client above. Note that /opt/cni/bin/ does not need to be added to the PATH environment variable.
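
The authoritative CNI configuration ships with the repository as rootfs/etc/cni/net.d/80-k8s-ovs.conf. As a sketch of its shape only (the field values here are illustrative; use the file from the repository): a CNI network configuration names the plugin binary in its type field, which is how kubelet finds /opt/cni/bin/k8s-ovs:

$ cat /etc/cni/net.d/80-k8s-ovs.conf
{
  "cniVersion": "0.1.0",
  "name": "k8s-ovs",
  "type": "k8s-ovs"
}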

In the usual CNI setup kubelet uses, the Kubernetes node switches to Ready status once cp rootfs/etc/cni/net.d/80-k8s-ovs.conf /etc/cni/net.d/ has been executed. In addition, make sure 80-k8s-ovs.conf is the only file in /etc/cni/net.d/. After running the command above, the Kubernetes node status is:

$ kubectl get node
NAME        STATUS    AGE       VERSION
sdn-test1   Ready     11m       v1.6.4
sdn-test2   Ready     15m       v1.6.4
sdn-test3   Ready     11m       v1.6.4

Setting the k8s-ovs network parameters

Before setting the network parameters, you need to set up an etcd service, or share the etcd service used by the Kubernetes apiserver; every Kubernetes node must be able to reach that etcd service.
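
Assuming an unencrypted etcd v2 endpoint, as used in the rest of this section, you can confirm from each node that etcd is reachable before proceeding (replace ${etcd_ip} with the address of your etcd service):

$ etcdctl --endpoints=http://${etcd_ip}:2379 cluster-health   # all members should report healthy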

Once the etcd service is up, use the following command to set the k8s-ovs network parameters:

$ etcdctl set /k8s.ovs.com/ovs/network/config '{"Name":"k8ssdn", "Network":"172.11.0.0/16", "HostSubnetLength":10, "ServiceNetwork":"10.96.0.0/12", "PluginName":"k8s-ovs-multitenant"}'

Here, Network sets the POD network segment of the whole Kubernetes cluster; HostSubnetLength sets the length of each node's subnet; ServiceNetwork sets the Kubernetes service network segment, and this parameter must match the network specified by the Kubernetes apiserver's --service-cluster-ip-range option; PluginName sets the tenant mode: k8s-ovs-multitenant selects multi-tenant mode and k8s-ovs-subnet selects single-tenant mode.
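
You can read the configuration back to verify it. The subnet arithmetic is worth spelling out (this follows OpenShift SDN semantics, from which k8s-ovs is derived): HostSubnetLength 10 gives every node 10 host bits, i.e. a /22 subnet with about 1022 usable POD IPs, and the /16 Network holds 2^(22-16) = 64 such subnets, so this example supports up to 64 nodes.

$ etcdctl get /k8s.ovs.com/ovs/network/config
{"Name":"k8ssdn", "Network":"172.11.0.0/16", "HostSubnetLength":10, "ServiceNetwork":"10.96.0.0/12", "PluginName":"k8s-ovs-multitenant"}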

Start k8s-ovs

1. Before starting k8s-ovs, set the environment variables used to reach the Kubernetes apiserver on every node; k8s-ovs communicates with the apiserver through these variables. If your Kubernetes cluster does not use encryption, set KUBERNETES_MASTER, replacing the two variables apiserver_vip and apiserver_port below with the ip and port of your own apiserver service:

$ export KUBERNETES_MASTER="${apiserver_vip}:${apiserver_port}"

If your Kubernetes cluster uses encryption, set the KUBECONFIG environment variable instead. Our cluster uses encryption, so we set KUBECONFIG as shown below; every node needs the file /etc/kubernetes/admin.conf, which is generated on the Kubernetes master when the encrypted cluster is deployed, so copy it to each node in turn:

$ export KUBECONFIG="/etc/kubernetes/admin.conf"

2. With the environment variables set, you can run k8s-ovs. It has several important options: --etcd-endpoints specifies the list of ip:port endpoints for reaching the etcd service; for an encrypted etcd service, --etcd-cafile, --etcd-certfile and --etcd-keyfile specify the CA, certificate and key; --etcd-prefix specifies the etcd directory where the k8s-ovs network configuration is stored, and it must be the same directory that the etcdctl set command in the previous section wrote to; --hostname specifies the name of the node that this k8s-ovs instance runs on, which must match the names printed by kubectl get node. Usually --hostname does not need to be given, but some Kubernetes deployment scripts pass --hostname-override to kubelet to override the default node name, in which case --hostname must be set to the same value. Since our environment does not override node names and our etcd is not encrypted, we run the following command:

$ /usr/sbin/k8s-ovs --etcd-endpoints=http://${etcd_ip}:2379 --etcd-prefix=/k8s.ovs.com/ovs/network --alsologtostderr --v=5
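
To keep k8s-ovs running across reboots, you may want to wrap this command in a service. A minimal systemd unit sketch, assuming the encrypted-cluster KUBECONFIG setup from step 1 (the unit name, dependencies and paths are illustrative; substitute ${etcd_ip} before writing the file):

$ cat <<EOF > /etc/systemd/system/k8s-ovs.service
[Unit]
# illustrative unit; adjust dependencies and paths for your environment
Description=k8s-ovs SDN daemon
After=ovs-vswitchd.service

[Service]
Environment="KUBECONFIG=/etc/kubernetes/admin.conf"
ExecStart=/usr/sbin/k8s-ovs --etcd-endpoints=http://${etcd_ip}:2379 --etcd-prefix=/k8s.ovs.com/ovs/network --alsologtostderr --v=5
Restart=always

[Install]
WantedBy=multi-user.target
EOF
$ systemctl daemon-reload
$ systemctl enable k8s-ovs
$ systemctl start k8s-ovs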

At this point the k8s-ovs deployment is complete; you can jump to admin.md to configure its features.

Gitee mirror of the project: https://gitee.com/mirrors/k8s-ovs
