Install VMware vSphere 7 with the WCP platform

VMware VCF 4.0 and vSphere 7

On April 2, VMware officially released vSphere 7 and VCF 4.0 (VMware Cloud Foundation). vSphere 7 integrates VMs and Kubernetes on one platform and provides network services through NSX-T, one of the VCF components, including routing, switching, firewall, load balancing, and other functions.
For a detailed introduction to vSphere 7 with Kubernetes, please refer to:
vSphere 7 integrates Kubernetes to build a modern application platform

Software list

This lab uses the latest components downloadable from the official site, as follows:

| Name | Version | Build number | Image |
|------|---------|--------------|-------|
| vSphere ESXi | 7.0.0 | 15843807 | VMware-VMvisor-Installer-7.0.0-15843807.x86_64.iso |
| vCenter | 7.0.0 | 15952498 | VMware-VCSA-all-7.0.0-15952498.iso |
| NSX-T | 3.0.0 | 15946739 | nsx-unified-appliance-3.0.0.0.0.15946739.ova |

Hardware configuration

| Name | Configuration | Description |
|------|---------------|-------------|
| Hosts | Three servers, shared between the management and compute clusters | Each provides 2 × 2.20 GHz CPUs, 128 GB memory, 1.92 TB vSAN storage, 2 × 1000BASE-T and 2 × 10GBASE-SX ports |
| Network equipment | 48-port 1/10G switch | Sufficient for host access |

Resource requirements

| Virtual machine | vCPU | Memory (GB) | Storage (GB) |
|-----------------|------|-------------|--------------|
| vCenter vCSA | 4 | 19 | 200 |
| NSX-T Manager | 6 | 24 | 300 |
| NSX-T Edge | 8 | 32 | 200 |

IP address planning

| Name | Address segment | Remarks |
|------|-----------------|---------|
| NSX-T Manager | 192.168.10.40-43/24 | 192.168.10.40 is the VIP |
| NSX-T Edge | 192.168.10.31-32/24 | |
| vCenter | 192.168.10.10/24 | GW 192.168.1.1 |
| vSAN | 192.168.130.0/24 | Reachable within this segment |
| vMotion | 192.168.140.0/24 | Reachable within this segment |
| VTEP IP Pool | 192.168.13.101-199/24 | Reachable within this segment |
| WCP master | 192.168.101-105/24 | |
| Pod CIDR | 172.211.0.0/20 | Internal |
| Services CIDR | 172.96.0.0/23 | Internal |
| Ingress CIDR | 172.208.0.0/23 | Externally routable |
| Egress CIDR | 172.206.0.0/24 | Externally routable |
| External network interface | 10.YX1/28 | Peer address 10.YX13 |

System logic diagram

[Figure: system logic diagram]
Note 1: When preparing hosts, NSX-T 3.0 can use not only the original self-built N-VDS but also a vDS created by vSphere (called CVDS), in both the dedicated-vDS case and the case of a vDS mixed with compute traffic. However, WCP-compatible clusters must be configured with at least vSphere Distributed Switch 7.0, i.e., CVDS.
Note 2: All networks carrying VTEP traffic must have an MTU of at least 1600 (see the sketch after these notes).
Note 3: All components must synchronize with the same NTP source and use unified DNS.
Note 4: The ESXi management port of each host uses a dedicated Gigabit port on a dedicated standard switch.
Note 5: If you use vSAN, the disk format version must be upgraded to 11 or above.
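
A quick way to verify and raise the MTU on standard switches from the ESXi shell. The names vSwitch0 and vmk1 are placeholders for whatever carries overlay traffic in your setup; for a vDS, set the MTU in the vCenter UI instead:

```bash
# Check current MTU values, then raise them to at least 1600.
esxcli network vswitch standard list
esxcli network vswitch standard set -v vSwitch0 -m 1600   # switch MTU
esxcli network ip interface set -i vmk1 -m 1600           # VMkernel NIC MTU
```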

Installation process

1. Install ESXi on the host

2. Install vCenter

  • Import ova file
  • Power on the virtual machine and log in to https://hostname:5480 to configure and install the vCSA.
  • Log in to https://hostname again to enter the vCenter management interface, then configure the host cluster and storage.
  • WCP: Menu → Workload Management prompts you to install NSX-T first. A few concepts to understand:
  • Workload: A collection of applications composed of vSphere Pods, regular VMs, or both.
  • Supervisor Cluster: a cluster enabled for vSphere with Kubernetes.
    CRX (container runtime): from the perspective of hostd and vCenter Server, a CRX instance is similar to a VM. CRX includes a paravirtualized Linux kernel that works with the hypervisor; it uses the same hardware virtualization technology as a VM and has a VM boundary. A direct-boot technique lets the CRX Linux guest start its main init process without full kernel initialization, which makes a vSphere Pod start almost as fast as a container. In use, it is consistent with most commands of a native Pod.
    For workloads that need native Kubernetes clusters, the WCP platform provides Tanzu Kubernetes clusters.
  • Tanzu Kubernetes cluster: a complete open-source Kubernetes distribution that is packaged, signed, and supported by VMware. In a vSphere with Kubernetes environment, the Tanzu Kubernetes Grid Service provisions Tanzu Kubernetes clusters on the Supervisor Cluster. The Tanzu Kubernetes Grid Service API is called declaratively using kubectl and YAML definitions, as sketched below.
    This is the complete vSphere with Kubernetes architecture for Tanzu Kubernetes clusters.
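
A minimal sketch of such a declarative definition, run against a Supervisor namespace. The names (demo-cluster, demo-namespace), the VM class, and the storage class are hypothetical; they must match objects that exist in your environment, and the distribution version is resolved against the Tanzu Kubernetes releases available there:

```bash
# Hypothetical example: declare a Tanzu Kubernetes cluster with kubectl + YAML.
kubectl apply -f - <<'EOF'
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: demo-cluster          # hypothetical cluster name
  namespace: demo-namespace   # an existing Supervisor namespace
spec:
  topology:
    controlPlane:
      count: 1
      class: best-effort-small           # a VM class available in the environment
      storageClass: wcp-storage-policy   # maps to a VM storage policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: wcp-storage-policy
  distribution:
    version: v1.16            # resolved against the available releases
EOF
```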

3. Install NSX-T

Follow the prompts above to install and configure NSX-T.

  • Import OVA file

  • Power on NSX-T Manager and log in. Compared with version 2.5, the advanced networking options have been removed from the top-level menu.

  • Register the VCSA in NSX-T as a compute manager. Note that you must switch on Enable Trust here.
    After completion, the vCenter appears in the compute manager list as registered.
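
For automation or verification, the registration can also be inspected through the NSX-T management API. A hedged sketch, assuming the manager is reachable at nsx-manager.example.com (a placeholder):

```bash
# List the compute managers registered with NSX-T and check their status.
curl -k -u admin:'<password>' \
  https://nsx-manager.example.com/api/v1/fabric/compute-managers
```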

  • Create Transport Zones (do not use the system defaults);
    two Transport Zones are required: an Overlay transport zone and a VLAN transport zone.
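
If you script the setup, transport zones can also be created through the management API. A sketch under the assumption that the names below (nsx-overlay-tz, overlay-switch) match your own naming plan:

```bash
# Create the overlay transport zone (repeat with transport_type VLAN for the VLAN TZ).
curl -k -u admin:'<password>' -X POST \
  -H 'Content-Type: application/json' \
  https://nsx-manager.example.com/api/v1/transport-zones \
  -d '{"display_name": "nsx-overlay-tz",
       "transport_type": "OVERLAY",
       "host_switch_name": "overlay-switch"}'
```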

  • Create the Host/Edge uplink profiles that will be used when creating the Host/Edge transport nodes.

  • Create the IP pool used by the VTEPs.

  • Create the Transport Node Profile; take care to choose the vDS created in vCenter as the transport node switch.

  • Prepare the host transport nodes by applying the profile created in the previous step.
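
Once the profile is applied, each ESXi host should show new VTEP VMkernel interfaces (typically vmk10 and upward) with addresses from the VTEP pool. A quick check from the ESXi shell; the payload size 1572 leaves room for the overlay headers within the 1600 MTU:

```bash
# Verify the VTEP VMkernel NICs and test overlay MTU end to end.
esxcli network ip interface ipv4 get
vmkping ++netstack=vxlan -d -s 1572 <remote-VTEP-IP>   # replace with a peer VTEP address
```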

  • Add the Edge nodes and an Edge cluster. Note that each Edge needs two switches, one for the overlay network and one for the uplink VLAN network; since the Edge is a VM, it must be connected to the vCenter vDS. An example:
    Name: nsxt-edge-01/02
    Host name/FQDN: your edge fqdn
    Form Factor: Large
    Must provide CLI and Root Credentials
    Enable SSH login for CLI and root to facilitate troubleshooting during testing
    IP Assignment: Static
    Management IP:
    Default gateway:
    Management Interface: vPG-management-10
    Search domain names: optional, your DNS domain name
    DNS Servers: your DNS server IP address
    NTP Server: your NTP IP or fqdn
    New Node Switch:
    Edge Switch Name: overlay-switch
    Transport Zone: nsx-overlay-tz
    Uplink Profile: wcp-edge-uplinkprofile
    IP Assignment: Use IP Pool
    IP Pool: VTEP Pool
    uplink1 - vPG-edge-overlay
    New Node Switch:
    Edge Switch Name: vlan-switch
    Transport Zone: nsx-vlan-tz
    Uplink Profile: wcp-t0-uplinkprofile
    uplink1 - vPG-edge-vlan
    Select “Finish”
    The Edge nodes and Edge cluster are now created.

  • Create the segment (logical switch) used for the T0 uplink.

  • Create a T0 gateway and
    configure the IP address of its uplink port and a static route for the whole network (T0 also supports BGP).
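
The same static route can also be expressed through the NSX-T policy API. A hedged sketch, where wcp-t0 and default-route are hypothetical object IDs and the next hop is the peer address from the IP plan above:

```bash
# Add a default static route on the T0 gateway via the policy API.
curl -k -u admin:'<password>' -X PATCH \
  -H 'Content-Type: application/json' \
  https://nsx-manager.example.com/policy/api/v1/infra/tier-0s/wcp-t0/static-routes/default-route \
  -d '{"network": "0.0.0.0/0",
       "next_hops": [{"ip_address": "10.YX13", "admin_distance": 1}]}'
```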

  • Create storage profiles.
    Path on the vCSA: Menu → Policies and Profiles → VM Storage Policies → Create VM Storage Policy.
    The tags used in the policy should be created and assigned in advance.

  • Enable WCP on the compute cluster. The following network parameters must be planned:

- Pod CIDR: an internal CIDR block from which Pod IPs are allocated. It must not overlap with the IPs of the Workload Management components (VC, NSX, ESXi, management DNS, NTP), nor with other data-center IPs used to communicate with the Pods.
- Service CIDR: an internal CIDR block from which Kubernetes ClusterIP services are allocated. The same non-overlap constraints apply as for the Pod CIDR.
- Ingress CIDR: an external CIDR block from which IPs for Kubernetes Ingress and LoadBalancer-type services are allocated; these CIDRs come from the external network's IP space. The Supervisor Kubernetes control plane allocates one public Kubernetes Ingress IP per namespace (all Ingresses in a namespace share one public IP) and one IP per LoadBalancer-type service (each LoadBalancer-type service gets a unique IP). A sketch follows after this list.
- Egress CIDR: an external CIDR block used by SNAT rules to translate internal Pod IPs to external IPs so that Pods can reach external networks; these CIDRs should come from the external network's IP space. The Supervisor Kubernetes control plane allocates one egress IP per namespace.
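
To make the Ingress behavior concrete: once WCP is enabled, a LoadBalancer-type service in a namespace receives its external IP from the Ingress CIDR. A minimal sketch with hypothetical names (demo-lb, demo-namespace, app label):

```bash
# Create a LoadBalancer service and watch NSX-T allocate an Ingress-CIDR IP.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo-lb               # hypothetical service
  namespace: demo-namespace   # an existing Supervisor namespace
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
EOF
kubectl -n demo-namespace get svc demo-lb   # EXTERNAL-IP should fall in 172.208.0.0/23
```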

Choose the cluster to enable.
The corresponding network topology is created automatically. The entire activation process takes 40 to 50 minutes, and after it succeeds you can see the result on the Workload Management interface.

You can see that the Kubernetes version is v1.16.7.
Three master nodes are visible in the Hosts and VMs view.
At this point, the WCP platform has been installed successfully and the system can create namespaces. A quick verification sketch:
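
A minimal sketch for logging in to the new Supervisor cluster with the vSphere plugin for kubectl (downloaded from the link on the namespace page); the server address placeholder is the Supervisor control plane VIP:

```bash
# Log in to the Supervisor cluster and list its nodes.
kubectl vsphere login --server=<supervisor-VIP> \
  --vsphere-username administrator@vsphere.local \
  --insecure-skip-tls-verify
kubectl get nodes   # three SupervisorControlPlaneVM masters, plus the ESXi hosts as agents
```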

Source: blog.csdn.net/weixin_43394724/article/details/105450441