VMware storage: SAN Configuration Basics

Introduction

 

VMware storage involves more than simply mapping a LUN to a physical server. VMware vSphere allows a system administrator to run multiple virtual machines on a single physical host.

 

The underlying hypervisor, vSphere ESXi, presents storage to guest virtual machines from both internal and external storage devices. This article covers the foundations of vSphere SAN storage and the considerations an administrator should keep in mind when deploying shared SAN storage on vSphere.

 

More information

 

VMware storage: SAN foundation

 

vSphere supports internally connected disk devices, including JBODs, hardware RAID arrays, SSDs and PCIe SSD cards. The major drawback of these forms of storage is that they are directly attached to a single server.

 

SAN storage, by contrast, provides a shared, highly available and resilient storage platform that can scale out across a multi-server deployment. In addition, storage vendors have added vSphere support to their products, offering better performance and scalability than local storage deployments.

 

Both NAS and SAN storage can be used in vSphere deployments, but this article covers only SAN, or block, storage, accessed over the iSCSI, Fibre Channel and FCoE protocols.

 

The VMware file system and datastores:

 

One of the important architectural characteristics of vSphere block storage is the use of the VMware File System (VMFS). Just as a traditional server formats a block device with a file system, vSphere formats block LUNs with VMFS and stores virtual machines on top of it.

 

The unit of storage in vSphere is the datastore, which comprises one or more concatenated LUNs. In many vSphere deployments there is a 1:1 correspondence between LUNs and datastores, but this configuration is not required.
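
To see how datastores map onto LUNs in an existing environment, a short script against the vSphere API can enumerate each VMFS datastore and its backing extents. The following is a minimal sketch using the pyVmomi SDK; the vCenter hostname and credentials are placeholders and error handling is omitted.

```python
# Minimal sketch: list VMFS datastores and their backing LUN extents via pyVmomi.
# Hostname and credentials below are placeholders for illustration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        if ds.summary.type != "VMFS":
            continue                            # this article covers block storage only
        info = ds.info                          # vim.host.VmfsDatastoreInfo
        extents = [e.diskName for e in info.vmfs.extent]
        print("%s  VMFS %s  %.1f TB  extents: %s" % (
            ds.name, info.vmfs.version,
            ds.summary.capacity / 1024.0 ** 4, ", ".join(extents)))
    view.DestroyView()
finally:
    Disconnect(si)
```

A datastore spanning several LUNs will simply list more than one extent, which is how the "one or more concatenated LUNs" structure shows up in practice.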

 

As vSphere has changed over several releases, VMFS has been updated and improved along with it; the current ESXi 5.1 release uses VMFS version 5. It has improved scalability and performance, and a single datastore can host multiple virtual machines.

 

Within a datastore, virtual machines are stored as virtual machine disk files (VMDKs). vSphere also allows a LUN to be connected directly to a virtual machine without formatting it with VMFS. These devices are called raw device mappings (raw device mapping, RDM). An RDM gives a virtual machine a direct connection to a LUN in the virtual environment. With RDMs, applications with high I/O overhead can get a significant performance boost, because commands can be issued directly against the existing SAN environment.
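
A quick way to see whether a virtual machine's disks are ordinary VMDKs or RDMs is to inspect each virtual disk's backing object through the API. The sketch below assumes a pyVmomi connection established as in the earlier example; the vm_name argument is a placeholder.

```python
# Sketch: distinguish flat VMDK disks from raw device mappings (RDMs) on one VM.
# Assumes an existing pyVmomi connection; vm_name is a placeholder.
from pyVmomi import vim

def describe_disks(content, vm_name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    view.DestroyView()
    for dev in vm.config.hardware.device:
        if not isinstance(dev, vim.vm.device.VirtualDisk):
            continue
        backing = dev.backing
        if isinstance(backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
            kind = "RDM (%s)" % backing.compatibilityMode   # physicalMode or virtualMode
        elif isinstance(backing, vim.vm.device.VirtualDisk.FlatVer2BackingInfo):
            kind = "VMDK"
        else:
            kind = type(backing).__name__
        print(dev.deviceInfo.label, kind, backing.fileName)
```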

 

RDMs also let the user mount an existing LUN. For example, if Exchange Server is already running against SAN storage, then when virtualising that Exchange Server you can run VMware Converter, or a Microsoft or other third-party conversion product, to convert the physical machine to a virtual machine, convert only the C: drive, and mount the original data in its existing location. The server needs no additional downtime, and there is no need to allocate extra VMDK space and migrate all the data into a VMDK during the conversion.

 

VMware SAN connectivity:

 

vSphere supports the Fibre Channel, FCoE and iSCSI block storage protocols.

 

The Fibre Channel protocol provides a multipathed, highly resilient infrastructure, but it requires additional spending on dedicated storage network infrastructure, such as Fibre Channel switches and HBAs.

 

iSCSI, by contrast, provides a relatively inexpensive option for shared storage, because network interface cards are usually far cheaper than Fibre Channel HBAs and converged network adapters.

 

Before the latest versions of vSphere, multipathing was difficult to configure for iSCSI, although the situation has improved. In addition, iSCSI connection speeds are currently limited to 1Gbps and 10Gbps. Finally, securing iSCSI devices is more complicated for administrators, and its more basic feature set makes it less suited to highly scalable environments.

 

Configuration limits:

 

There are some VMware configuration limits on block storage. They apply to both iSCSI and Fibre Channel:

LUNs per ESXi host – 256

Maximum volume size – 64TB

Maximum file size – 2TB minus 512 bytes

 

These limits are high enough that most users are unlikely to reach them, but the number of shared LUNs could become a problem in large-scale deployments, so planning the number and type of datastores in the vSphere infrastructure is essential.
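
Because the limits are per host and per volume, it can be worth checking how close a deployment is to them. The sketch below, assuming the same pyVmomi connection pattern as the earlier example, counts the SCSI disk devices seen by each host and flags datastores approaching the volume size limit; the 256 and 64TB figures are the limits quoted above.

```python
# Sketch: report LUNs per ESXi host and datastore sizes against the configuration limits above.
from pyVmomi import vim

MAX_LUNS_PER_HOST = 256
MAX_VOLUME_TB = 64

def check_limits(content):
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        luns = [l for l in host.config.storageDevice.scsiLun
                if isinstance(l, vim.host.ScsiDisk)]
        print("%s: %d of %d LUNs" % (host.name, len(luns), MAX_LUNS_PER_HOST))
    hosts.DestroyView()

    stores = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in stores.view:
        size_tb = ds.summary.capacity / 1024.0 ** 4
        if ds.summary.type == "VMFS" and size_tb > 0.9 * MAX_VOLUME_TB:
            print("%s is %.1f TB, close to the %d TB volume limit" %
                  (ds.name, size_tb, MAX_VOLUME_TB))
    stores.DestroyView()
```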

 

Hypervisor features:

 

The vSphere hypervisor contains a number of features for managing external storage.

 

Storage vMotion moves a virtual machine between datastores without any downtime for the virtual machine. This is useful for load balancing or for migrating data off old hardware.
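
In the API, a Storage vMotion is simply a relocate task that specifies only a target datastore. Below is a minimal pyVmomi sketch, assuming the vm and target_ds objects have already been looked up as in the earlier examples.

```python
# Sketch: Storage vMotion -- relocate a running VM's files to another datastore.
# Assumes vm and target_ds are vim.VirtualMachine / vim.Datastore objects already looked up.
from pyVim.task import WaitForTask
from pyVmomi import vim

def storage_vmotion(vm, target_ds):
    spec = vim.vm.RelocateSpec(datastore=target_ds)   # host unchanged => storage-only move
    task = vm.RelocateVM_Task(spec=spec)
    WaitForTask(task)                                  # blocks until the migration finishes
```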

 

Storage DRS (SDRS) provides policy-based storage. New virtual machines can be placed according to service-based policies, such as IOPS and capacity. Moreover, once a virtual machine is deployed and in use, SDRS keeps capacity and performance load-balanced across a group of similar datastores.

 

Storage integration features:

 

The vSphere hypervisor also provides APIs that allow it to integrate with, and offload work to, external storage arrays.

 

vStorage APIs for Array Integration (VAAI) is a set of additional SCSI commands, introduced in ESXi 4.1, that allow the host to offload certain virtual machine and storage management operations to compliant storage hardware. With the storage hardware's help, these operations complete faster and consume less host CPU, memory, and storage and network bandwidth.

 

These features, known as "primitives", are implemented by mapping vSphere operations directly onto new SCSI commands. They include hardware-assisted atomic locking, which provides finer-grained locking of VMFS files; this offers an alternative way of protecting VMFS cluster file system metadata and improves scalability for large ESX server clusters sharing the same datastore. The full copy primitive offloads copy jobs to the array, letting the storage array create complete copies of data internally so that the ESX server no longer has to read the data and write it back. The block zeroing primitive offloads the zeroing work required on thin-provisioned VMFS volumes to the array, allowing the storage array to zero large numbers of blocks quickly and so accelerating virtual machine provisioning. VAAI has since been extended with the SCSI UNMAP command, which lets the hypervisor tell a thin-provisioned storage array to release resources that are no longer in use.
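
Whether a given device actually accepts these offloaded commands is reported per LUN by the host as a hardware-acceleration (VAAI) status. The sketch below reads that status through pyVmomi, assuming the connection pattern from the first example; the vStorageSupport property is how the vSphere API reports supported, unsupported or unknown.

```python
# Sketch: report VAAI (hardware acceleration) status for each SCSI disk on every host.
from pyVmomi import vim

def vaai_status(content):
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        for lun in host.config.storageDevice.scsiLun:
            if not isinstance(lun, vim.host.ScsiDisk):
                continue
            # Expected values: vStorageSupported / vStorageUnsupported / vStorageUnknown
            print(host.name, lun.canonicalName, lun.vStorageSupport)
    hosts.DestroyView()
```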

 

Hardware-assisted locking:

The VMFS file system allows multiple hosts to share concurrent access to the same logical volume, which is a prerequisite for vMotion. VMFS has a built-in safety mechanism to prevent a virtual machine from being run or modified by more than one host at a time. vSphere's traditional file-locking mechanism uses SCSI reservations: whenever a storage-related operation such as growing a disk or taking an incremental snapshot occurs, the SCSI RESERVE command locks the entire logical volume. This prevents conflicts, but it also delays the completion of storage work, because other hosts must wait for the SCSI RELEASE command to unlock the volume before they can continue writing. The Atomic Test and Set (ATS) command is a hardware-assisted locking mechanism that offloads the lock to the storage array, so that individual disk blocks can be locked rather than the entire logical volume. The rest of the logical volume remains accessible to other hosts while the lock is held, which helps avoid performance degradation. It also allows more hosts to be deployed in a cluster on the same VMFS datastore, and more virtual machines to be deployed on the same logical volume.

 

Full copy:

With full copy, virtual machine deployment is greatly accelerated, because the copy work can be done inside the storage array, or between arrays (where the storage vendor supports an XCOPY-style function between arrays); processing that used to take minutes now takes seconds, and the ESX server's CPU load drops because it is no longer involved in moving the data. This benefit is particularly meaningful for virtual desktop infrastructure environments, which are likely to involve deploying hundreds of virtual machines from templates.

Storage vMotion, which migrates virtual machines between datastores, takes advantage of a similar shortcut: the data no longer has to be read up to the ESX server and written back down to the array, which frees a great deal of storage I/O and server CPU cycles.

Full copy saves not only processing time but also server CPU, memory and network bandwidth, as well as storage front-end controller I/O. For most of these metrics, full copy can deliver reductions of up to 95%.

 

Block zeroing:

Having the disk array perform bulk zeroing accelerates the standard initialisation process. One use for block zeroing is creating eager-zeroed thick virtual disks. Without block zeroing, the create command must wait until the disk array has finished the zeroing task, which can take a long time for a large disk. With block zeroing (also referred to as "write same"), the array immediately returns the cursor to the requesting service, as if the zero-writing had already completed, and then finishes zeroing the blocks in the background, rather than holding the cursor until all the work is done.

vStorage APIs for Storage Awareness (VASA) is another set of APIs, which allows the array to expose more of its underlying storage information to vSphere, including, for example, RAID levels, whether a LUN is thin provisioned, and features such as data deduplication. Another thin provisioning issue being addressed is space reclamation. When you delete a file on Windows or Linux, the file is not physically removed from disk; instead, it is marked as deleted and only eventually overwritten as new files are created. In most cases this is not a problem, but for a thin virtual disk sitting on thin-provisioned storage it can cause uncontrolled growth of the thin volume, because the space freed by deleting files is never returned to the storage array.
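
One practical consequence is that it is worth tracking how much space has been promised to virtual machines versus what a datastore actually holds. The datastore summary exposes capacity, free space and uncommitted (promised but not yet written) space, so provisioned space can be derived as capacity - freeSpace + uncommitted. A sketch, again assuming the pyVmomi connection from the first example:

```python
# Sketch: show provisioned vs. actual capacity per datastore to spot thin-provisioning overcommit.
from pyVmomi import vim

def provisioning_report(content):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        if not s.capacity:
            continue                      # skip inaccessible or zero-sized datastores
        provisioned = s.capacity - s.freeSpace + (s.uncommitted or 0)
        print("%-20s capacity %7.1f GB  provisioned %7.1f GB  (%.0f%%)" % (
            ds.name,
            s.capacity / 1024.0 ** 3,
            provisioned / 1024.0 ** 3,
            100.0 * provisioned / s.capacity))
    view.DestroyView()
```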

 

Key steps in deploying SAN storage:

 

Storage administrators deploying SAN storage should consider the following steps:

 

Vendor and feature support

Most, but not all, storage vendors support vSphere's advanced features, such as VAAI and VASA. If you plan to use these features, confirm support carefully. Currently, the vStorage APIs for Array Integration work only with block-based storage arrays (Fibre Channel or iSCSI); NFS storage is not supported. Vendors also vary in their VAAI support: some, such as EMC, were quick to support these functions, while others took much longer to integrate them across all of their storage array models. You can find out which storage arrays support the vStorage API features by checking the VMware storage compatibility list. Using the compatibility list, you can search for your storage array, confirm whether it supports VAAI and, if so, see which other APIs are also supported.

 

HBA support and dedicated iSCSI connections

If the administrator plans to deploy Fibre Channel, the HBAs must be on the VMware Hardware Compatibility List. The number of HBAs per server will depend on the expected workload, but at least two are needed for hardware redundancy. For iSCSI, dedicated network cards should be used, and likewise more than one for redundancy.
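
A quick redundancy check is to count the Fibre Channel and iSCSI adapters each host presents. The sketch below, using the same pyVmomi connection pattern as the earlier examples, flags hosts that have only a single adapter on a path.

```python
# Sketch: count Fibre Channel and iSCSI adapters per host to check the two-adapter redundancy rule.
from pyVmomi import vim

def hba_redundancy(content):
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        adapters = host.config.storageDevice.hostBusAdapter
        fc = [a for a in adapters if isinstance(a, vim.host.FibreChannelHba)]
        iscsi = [a for a in adapters if isinstance(a, vim.host.InternetScsiHba)]
        warn = "  <-- only one adapter on a path" if 1 in (len(fc), len(iscsi)) else ""
        print("%s: %d FC HBAs, %d iSCSI adapters%s" % (
            host.name, len(fc), len(iscsi), warn))
    hosts.DestroyView()
```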

 

Datastore size

Where possible, create datastores as large as practical, within the limits of the storage product, particularly where thin provisioning is used. This reduces the need to move users' data in the future.

 

Datastore type

The datastore is currently the smallest unit at which virtual machine performance can be managed. Administrators should therefore match datastores to workload types; for example, test and development data should be stored on lower-performance datastores. Where datastores map to LUNs, storage administrators should also create separate datastores for LUNs protected by array-based replication.

 

VMware and the future of storage:

 

VMware has sketched out the evolution of vSphere block storage in the form of virtual volumes (vVOLs). Today, a virtual machine is composed of a number of files sitting on a datastore mapped to a physical LUN. vVOLs will abstract a virtual machine's files into a vVOL container, with the aim of opening up quality of service (QoS) capabilities for the virtual machine itself. Currently, QoS can only be applied as an attribute of an entire datastore, which can mean migrating a virtual machine and its data just to ensure it receives the service level its application needs.

 

Alongside VMware's own work, other vendors have developed products specifically for the VMware platform. Tintri is a good example, even though it uses the NFS protocol rather than block storage. The Tintri VMstore platform understands the file types that make up a virtual machine, so quality of service, performance tracking and flash usage can all be targeted precisely at the virtual machine level.

 

 
