For an introduction to vSphere 7 with WCP (Workload Control Plane), that is, vSphere with integrated Kubernetes, refer to these articles:
- vSphere 7 integrates Kubernetes to build a platform for modern applications
- vSphere 7 with WCP installation

After installing vSphere 7 with WCP, you can see on the Workload Management page that the original vSphere cluster has become a WCP Supervisor cluster, and a resource pool named Namespaces has been created. Three VMs are generated inside the cluster to serve as the Kubernetes master nodes of WCP.
WCP architecture
In WCP, vSphere supports both native vSphere Pods and full Kubernetes clusters. The relationship between them is shown in the following structure diagram. Within the Supervisor cluster, different Supervisor Namespaces can be created, and each Supervisor Namespace can host different kinds of workloads, including VMs, vSphere Pods, Kubernetes clusters, and databases.
For an explanation of the components in the architecture, please refer to:
vSphere with Kubernetes architecture
Supervisor cluster network connection
VMware NSX-T™ Data Center provides network connectivity between objects in the Supervisor cluster and external networks. Networking for the ESXi hosts in the cluster is handled by standard vSphere networking.
Experimental steps
Generate Namespace
As shown in the figure above, a Namespace (Supervisor Namespace) in WCP has a broader meaning than a namespace in Kubernetes. It is created by the IT infrastructure administrator through the graphical interface; developers can only use it, not create it.
Through the WCP platform, the work is divided so that each role uses the method that matches its habits: the graphical interface for administrators, the command line for developers.
- Menu -> Workload Management -> New Namespace to create a Namespace
- Permissions and storage can be set on the Namespace
- Developers using the Namespace can see its summary information in the second step; open the "Link to CLI Tools" at the bottom. The system offers different plug-in download links according to the client operating system. After downloading and installing the plug-in as prompted, you can log in.
Developers use kubectl to work with vSphere Pods
1. Log in to the system
After completing the plug-in installation in the previous step, log in through the command line.
Note that in this version you need to add the parameter --insecure-skip-tls-verify (not mentioned in the prompt on the download page).
The user here is a user from the permission settings of the Namespace. If the same user has been granted access to multiple Namespaces, they are all displayed together on login.
Run kubectl config use-context <namespace-name> to change the current Namespace.
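The login and context-switch steps above can be sketched as shell commands; the server address, user name, and namespace name are placeholders for this lab environment:

```shell
# Log in to the Supervisor cluster with the vSphere plug-in for kubectl;
# in this version --insecure-skip-tls-verify is required.
kubectl vsphere login --server=<supervisor-cluster-ip> \
    --vsphere-username administrator@vsphere.local \
    --insecure-skip-tls-verify

# One context is created per Namespace the user can access;
# list them and switch to the desired one.
kubectl config get-contexts
kubectl config use-context <namespace-name>
```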
2. Use kubectl command
[root@localhost ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.7-2+bfe512e5ddaaaa", GitCommit:"bfe512e5ddaaaa7243d602d5d161fa09a57ecf3c", GitTreeState:"clean", BuildDate:"2020-03-03T03:40:35Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.7-2+bfe512e5ddaaaa", GitCommit:"bfe512e5ddaaaa7243d602d5d161fa09a57ecf3c", GitTreeState:"clean", BuildDate:"2020-03-03T03:37:44Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
You can see that the Kubernetes version is 1.16.7.
[root@localhost ~]# kubectl get nodes
NAME                               STATUS   ROLES    AGE   VERSION
422845fe835f0bfe3ee1d61194b81eed   Ready    master   18d   v1.16.7-2+bfe512e5ddaaaa
42285c4ffd346b9812c373070cb0cf49   Ready    master   18d   v1.16.7-2+bfe512e5ddaaaa
4228e3c8c95df99566f41fa037fc190e   Ready    master   18d   v1.16.7-2+bfe512e5ddaaaa
esx-01a                            Ready    agent    18d   v1.16.7-sph-4d52cd1
esx-02a                            Ready    agent    18d   v1.16.7-sph-4d52cd1
esx-03a                            Ready    agent    18d   v1.16.7-sph-4d52cd1
These are the three masters generated by the system plus the three ESXi hosts; note that the role of each host is agent.
3. Use vSphere Pods to build a simple application
The official website recommends a test Deployment:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: I just deployed a PodVM on the Supervisor Cluster!!
```
In the YAML, the Service type is specified directly as LoadBalancer. The load-balancing function is provided by NSX-T, as shown in the figure below.
The effect can be seen on the web page: by refreshing or reopening the page, you can see the load balancing take effect (above).
At the same time, you can see the generated Pods in the WCP console, which is the administrator's perspective.
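The deployment steps can be sketched as follows, assuming the manifest above is saved as hello-kubernetes.yaml (a placeholder name) and the current context is the Supervisor Namespace:

```shell
# Apply the Service and Deployment manifest.
kubectl apply -f hello-kubernetes.yaml

# Watch the vSphere Pods come up.
kubectl get pods -l app=hello-kubernetes

# The EXTERNAL-IP column shows the virtual IP that NSX-T
# allocated for the LoadBalancer Service.
kubectl get service hello-kubernetes
```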
Provision Tanzu Kubernetes cluster using Tanzu Kubernetes Grid service
In the WCP architecture diagram above, this content corresponds to the green part.
The relationship between the Supervisor cluster and the Tanzu Kubernetes cluster
The Supervisor cluster provides the management layer on which Tanzu Kubernetes clusters are built. The Tanzu Kubernetes Grid service is a custom controller manager that contains a set of controllers belonging to the Supervisor cluster. The purpose of the Tanzu Kubernetes Grid service is to provision Tanzu Kubernetes clusters.
There is a one-to-one relationship between the Supervisor cluster and the vSphere cluster, and a one-to-many relationship between the Supervisor cluster and Tanzu Kubernetes clusters: multiple Tanzu Kubernetes clusters can be provisioned in a single Supervisor cluster. The workload management function provided by the Supervisor cluster lets you control the cluster configuration and life cycle, while helping to stay current with upstream Kubernetes.
The relationship between the content library and the Tanzu Kubernetes cluster
The vSphere content library provides the virtual machine templates for creating Tanzu Kubernetes cluster nodes. For each Supervisor cluster in which Tanzu Kubernetes clusters are to be deployed, a subscribed content library object must be defined to provide the source of the OVAs that the Tanzu Kubernetes Grid service uses to build cluster nodes. The same subscribed content library can be configured for multiple Supervisor clusters. There is no relationship between the subscribed content library and the Supervisor Namespace.
Given this dependency between the content library and the Tanzu Kubernetes cluster, the experiment starts by defining the content library.
Reference:
Create a subscribed content library and associate it with the Supervisor cluster
Tanzu Kubernetes cluster network connection
- In the actual configuration, the first step is to create a content library
Procedure
1. In the vSphere Client, select Menu > Content Libraries.
2. Click the Create a New Content Library icon. The New Content Library wizard opens.
3. On the Name and Location page, enter the identification information: a name for the content library, and, for vCenter Server, the vCenter Server instance on which the Supervisor cluster is configured. Click Next.
4. On the Configure Content Library page, provide the configuration details: select Subscribed content library, enter the URL of the published library in the Subscription URL text box: https://wp-content.vmware.com/v2/latest/lib.json, and for the download content option select Immediately. Click Next.
5. When prompted, accept the SSL certificate thumbprint. The thumbprint is stored in the system until the subscribed content library is deleted from the inventory.
6. On the Add Storage page, select a datastore as the storage location for the library content, then click Next.
7. On the Ready to Complete page, review the details and click Finish.
8. On the Content Libraries page, confirm that the library has synchronized.
9. In the vSphere Client, navigate to Menu > Hosts and Clusters > Cluster > Configure > Namespaces > General.
10. For Content Library, click Edit.
11. Click Add Library.
12. Select the Kubernetes subscribed content library that you created.
13. Click OK to complete the procedure.
In actual operation, I found that the subscription URL https://wp-content.vmware.com/v2/latest/lib.json reports an error.
Using http://wp-content.vmware.com/v2/latest/lib.json works.
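Once the library has synchronized and been attached to the cluster, the node images it provides can be checked from the developer side; a minimal sketch, assuming a logged-in session in the Supervisor Namespace:

```shell
# List the Tanzu Kubernetes node OVA images that were
# synchronized from the subscribed content library.
kubectl get virtualmachineimages
```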
- Write the yaml file
A Tanzu Kubernetes cluster is provisioned with a yaml file. For an example, refer to the official website:
Sample YAML for provisioning a Tanzu Kubernetes cluster
The parameters involved are explained in detail in:
Configuration parameters for provisioning Tanzu Kubernetes cluster
The virtual machine class is specified when provisioning a Tanzu Kubernetes cluster. Each class type reserves a set of resources for the virtual machine, including CPU, memory, and storage.
The yaml file used in the experiment is as follows:
```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-1
  namespace: pkg-vmlab
spec:
  distribution:
    version: v1.16
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: wcp-storage-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: wcp-storage-policy
```
Applying the yaml file generates the Tanzu Kubernetes cluster.
It is worth noting that the cluster is created, maintained, and destroyed entirely by developers; operations staff cannot operate it from the WCP control platform.
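A sketch of provisioning the cluster and then connecting to it; the file name and server address are placeholders, and the cluster login reuses the vSphere plug-in for kubectl with its --tanzu-kubernetes-cluster-* parameters:

```shell
# Provision the cluster from the manifest above.
kubectl apply -f tkg-cluster-1.yaml

# Monitor provisioning until the cluster phase is "running".
kubectl get tanzukubernetesclusters

# Log in to the new cluster and switch to its context.
kubectl vsphere login --server=<supervisor-cluster-ip> \
    --vsphere-username administrator@vsphere.local \
    --insecure-skip-tls-verify \
    --tanzu-kubernetes-cluster-name tkg-cluster-1 \
    --tanzu-kubernetes-cluster-namespace pkg-vmlab
kubectl config use-context tkg-cluster-1
```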
To delete the cluster, switch the configuration context to the Namespace in which the cluster was provisioned, and execute:
kubectl delete tanzukubernetescluster --namespace CLUSTER-NAMESPACE CLUSTER-NAME
Summary
In this first experience with WCP we:
- Discussed the WCP architecture and network topology
- Created a Supervisor Namespace from the administrator's perspective, and used it from the developer's perspective to create vSphere Pods
- Created a Tanzu Kubernetes cluster from the developer's perspective and managed its full life cycle