VMware storage (1)

Datastore:
A datastore is a logical storage container. It can use disk space on one physical device or span multiple physical devices. Like a file system, it hides the details of the underlying physical storage devices and provides a uniform model for storing virtual machine files. Virtual machine files are saved in dedicated directories in the datastore, which can also be used to store templates and ISO images. vSphere supports the following datastore types:
VMFS: Virtual Machine File System, a clustered file system for block storage
NFS: a file-based storage format presented by NAS (Network Attached Storage)
vSAN: embedded in vSphere; it pools the underlying storage devices of the servers into a virtual resource pool and then redistributes it to the virtual machines on top
vSphere Virtual Volumes: finer-grained storage that can be adjusted dynamically and automatically per virtual machine;
that is, a single virtual machine and its disks, rather than a LUN, become the unit of management in the storage system. Virtual volumes encapsulate virtual disks and other virtual machine files and store them natively on the storage system.
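To make this concrete, here is a minimal pyVmomi sketch (an illustration added to these notes, not part of the original) that lists every datastore and its type; the vCenter hostname and credentials are placeholders:

```python
# Minimal sketch: list datastores and their types via pyVmomi.
# Assumes pyVmomi is installed; hostname/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only: skips cert checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        # s.type is e.g. "VMFS", "NFS", "vsan", or "VVOL"
        print(f"{s.name}: type={s.type}, "
              f"capacity={s.capacity // 2**30} GiB, "
              f"free={s.freeSpace // 2**30} GiB")
finally:
    Disconnect(si)
```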
An ESXi host supports a variety of storage technologies:

  • LUN (Logical Unit Number): a logical storage unit
    (a logical volume carved out of physical devices) in the logical structure of a disk array that the server can identify directly. The number of devices that can be attached to a SCSI bus is limited, typically 8 or 16,
    but each target can have multiple LUNs below it, which extends the number of addressable devices

  • Direct Attached Storage: internal storage, or an external disk array, connected directly to the host rather than through a network

  • FC (Fibre Channel): a high-speed transport protocol used for SANs. In general, Fibre Channel nodes are servers, storage systems, or tape drives.

  • FCoE (Fibre Channel over Ethernet): Fibre Channel frames are encapsulated in Ethernet frames, so the same Ethernet link can carry Ethernet and Fibre Channel traffic at the same time, greatly increasing the utilization of the physical infrastructure. FCoE also reduces the total number of network ports and cables.

  • iSCSI: a SCSI transport protocol that supports access to storage devices over standard cabling and TCP/IP networks. iSCSI maps SCSI block storage onto TCP/IP.

All of the above are LUN-based and are formatted with VMFS, the virtual machine file system, before being supplied to virtual machines.

  • NFS: a file-sharing protocol used with NAS on Linux; the counterpart used on Windows is CIFS

  • NAS: a dedicated data storage server. Underneath it still has the same LUN framework, but the biggest difference is that the back-end storage already has a file system created on it, so no formatting is required. It connects over an Ethernet network
    and is presented to the virtual machines through a dedicated protocol (NFS or CIFS) as file-level sharing

  • vSAN: VMware's self-developed distributed storage software, very widely used in cloud computing and big data environments.
    Distributed storage technology integrates the scattered storage resources of the servers into a shared storage resource pool.
    x86 servers often have drive bays that are not fully used; vSAN uses the disks attached directly inside the servers, so underneath it is a distributed cluster

  • vSphere Virtual Volumes: unique to VMware. The back-end storage container is based on an existing NAS or SAN;
    the resources on the storage device are connected over FC or Ethernet and consumed as virtual disks. This completes the overview of the whole virtual storage stack.

About VMFS:
ESXi hosts version 6.5 and later support VMFS5 and VMFS6.
Features supported by both VMFS5 and VMFS6:

  • They allow concurrent access to shared storage: multiple hosts can access the same datastore at the same time without interfering with each other
  • They can be extended dynamically (dynamic datastore expansion)
  • They use a 1 MB block size, which is well suited to storing virtual machine disk files: a virtual disk many gigabytes in size is packaged into relatively few large blocks,
    whereas with a much smaller block size the same disk would be split into far more blocks to manage
  • They provide on-disk locking for concurrent access to shared storage, so that
    I/O does not conflict. If an ESXi host fails, the disk lock of each of its virtual machines is released, so the virtual machines can be restarted on other ESXi hosts.
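As a quick check of which VMFS version a datastore runs, here is a sketch that continues the pyVmomi session from the first example (the `content` object is assumed to already exist):

```python
# Report the VMFS version and block size of each VMFS datastore.
# Assumes `content` from the earlier pyVmomi connection sketch.
from pyVmomi import vim

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    # Only VMFS datastores carry a `vmfs` volume description.
    if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
        vol = ds.info.vmfs
        print(f"{vol.name}: VMFS {vol.version}, "
              f"blockSize={vol.blockSizeMb} MB, extents={len(vol.extent)}")
```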

Features supported only by VMFS6:

  • Native 4K storage devices
  • Automatic space reclamation:
    on thin-provisioned storage, however much space has been freed can be reclaimed automatically

Services supported on VMFS:
• migration of running virtual machines from one ESXi host to another with no downtime (vMotion)
• automatic restart of a failed virtual machine on a different physical server (vSphere HA)
• clustering of virtual machines across different physical servers

VMFS can be deployed on three kinds of SCSI storage devices:
• direct attached storage
• Fibre Channel storage
• iSCSI storage

The virtual machine does not see VMFS; it still sees the file system used by its guest operating system (such as NTFS or EXT4), which is encapsulated inside VMFS. VMFS exists for the ESXi host, not for the virtual machine.

NFS:
NFS is file-level storage shared across the network. The biggest difference between network-attached storage and VMFS is that with NAS the back end has already been formatted; the NFS datastore is mounted as a storage resource onto the host, so sharing happens at the file-system level.
It runs over TCP/IP; ESXi supports NFS versions 3 and 4.1.
Because the NFS 3 and NFS 4.1 clients do not use the same locking protocol, you cannot mount the same datastore on multiple hosts using different NFS versions. Accessing the same virtual disks from two incompatible clients may result in incorrect behavior and cause data corruption.
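Mounting an NFS export as a datastore can be scripted as well; this sketch assumes the `content` object from the earlier connection example, and the NFS server, export path, and datastore name are placeholders:

```python
# Mount an NFS v3 export as a datastore on the first ESXi host found.
# Assumes `content` from the earlier pyVmomi connection sketch.
from pyVmomi import vim

host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
spec = vim.host.NasVolume.Specification(
    remoteHost="nfs.example.com",    # placeholder NFS server
    remotePath="/exports/vmstore",   # placeholder export path
    localPath="nfs-datastore1",      # datastore name as seen by ESXi
    accessMode="readWrite",
    type="NFS")                      # "NFS" = version 3, "NFS41" = version 4.1
ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print("Mounted:", ds.summary.name)
```

The `type` field is how the NFS version is pinned, which matters because of the locking incompatibility described above.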

vSAN: VMware's distributed storage software, aggregated with the hypervisor in a virtual environment: software-defined storage.
vSAN creates a shared virtual machine datastore by aggregating the disk drives attached to the clustered hosts,
pooling the scattered disk resources into a vSAN resource pool. It can be configured as hybrid or all-flash storage. In the hybrid storage architecture, the SSDs and HDDs attached to the servers are combined to create a distributed shared datastore, abstracting the storage hardware and providing a software-defined storage layer for virtual machines. Flash serves as a read cache / write buffer to accelerate performance, while magnetic disks provide persistent capacity; the servers provide both compute and storage. In the all-flash architecture, one tier of flash devices serves as the write cache while capacity SSDs provide data persistence, giving fast and consistent response times.

VVols:
The back-end storage is a storage container in which many pieces of the internal environment are packaged together, which makes migration very convenient. It replaces the traditional storage units of LUNs or NFS shares; vSphere Virtual Volumes is a storage paradigm that addresses the demands the software-defined data center places on next-generation storage.
 VMDKs are represented natively on SAN/NAS with no central management node (management is per virtual volume, represented logically by the VMDK, rather than per LUN)
 It builds on already existing SAN LUNs, so there is no need to redeploy them

 It creates a new control path for data operations at the VM/VMDK level (finer granularity).

 It supports VM-level snapshots, replication, and other operations offloaded to the external array.

 Service levels are controlled automatically for each VM.

 The back-end storage is discoverable through storage APIs.

 A storage container can span the entire array.

Raw Device Mapping:
An ordinary virtual disk is stored as a descriptor VMDK plus a flat VMDK.
With RDM, what is stored is an rdm.vmdk mapping file, which must reside on a VMFS file system;
the mapping file maps to a raw LUN, a bit like a shortcut or pointer.
RDM allows a virtual machine to access a physical LUN directly.
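For illustration, this is roughly what the RDM backing looks like in the vSphere API; a hypothetical sketch (the LUN device path is a placeholder, and the disk would still need to be attached to a VM through a config spec):

```python
# Sketch of an RDM virtual disk backing. The LUN device path is a
# placeholder; compatibilityMode may be "physicalMode" or "virtualMode".
from pyVmomi import vim

backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
    deviceName="/vmfs/devices/disks/naa.600508b1001c5e5b",  # placeholder LUN
    compatibilityMode="physicalMode",
    diskMode="independent_persistent",
    fileName="")  # the rdm.vmdk mapping file; empty lets vSphere place it
disk = vim.vm.device.VirtualDisk(key=-1, controllerKey=1000,
                                 unitNumber=1, backing=backing)
# `disk` would go into a VirtualMachineConfigSpec deviceChange entry
# with operation="add" to attach the raw LUN to a virtual machine.
```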

Physical storage considerations
vSphere storage planning includes the following:
• LUN sizing
• I/O bandwidth
• IOPS: the number of I/O requests per second a LUN can service
• disk cache parameters
• zoning and masking: how partitions are mapped and how they are shielded
• whether each ESXi host sees the same LUNs
• active-active or active-passive arrays
• NFS datastore export properties

FC:

  • SAN: storage area network

A SAN has a storage side and a host side, connected together through an intermediate switching network.
If they are connected with FC switches, it is called an FC SAN;
if with ordinary switches and iSCSI, an IP SAN.
Hosts can communicate at high speed with remote high-performance storage devices through a private Fibre Channel network.

  • ESXi supports:

32 Gbps Fibre Channel
Fiber Channel over Ethernet (FCoE)
The components, from top to bottom:

  • Disk array: the storage system. The physical disks are formed into a RAID set (physical volume), which is then divided into LUNs (logical storage units)
  • SP: the storage processor, the controller that sends command signals to the disk array
  • FC switch
  • HBA card: the host communicates through it
  • When a host generates I/O, it goes out through the HBA card, over the link to the FC switch, and then through the storage controller

WWN: a globally unique 64-bit address designed to address FC nodes
LUN masking: LUN mapping implemented at the SP level
Zoning: configured at the switch level. To limit host access, a host can only see the LUNs assigned to it:
which SP an HBA card is connected to determines which LUNs it can see, and therefore which LUNs it can access;
different users see different allocations.
Zoning alone cannot enforce precise one-to-one access,
which is why masking is needed, to provide finer-grained control.

Multipath storage technology
Storage multipathing allows the disk array to remain accessible when hardware on a path fails,
and it also supports load balancing.

A host connects through a particular HBA port to an SP port via the FC switches and accesses the disk array.
Under normal circumstances, the host accesses the disk array through one specific path. If this path fails, the disk array can be accessed through another valid path.

• An active-active disk array allows access to the LUNs simultaneously through all storage processors, without significant performance degradation. All paths are active at all times (unless a path fails).
• In an active-passive disk array, one storage processor actively services a given LUN. The other storage processor acts as a backup for that LUN and might be actively servicing other LUN I/O. I/O can be sent only to the active processor. If the primary storage processor fails, one of the secondary storage processors is activated, either automatically or through management intervention.
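Path states can be inspected programmatically; a sketch assuming the `content` object from the earlier pyVmomi examples:

```python
# List each LUN's paths and their states (active/standby/disabled/dead)
# on the first ESXi host found. Assumes `content` from the earlier sketch.
from pyVmomi import vim

host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
mpinfo = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
for lun in mpinfo.lun:
    print(lun.id)
    for path in lun.path:
        print(f"  {path.name}: {path.state}")
```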

If you do not want to use Fibre Channel switches, you can use FCoE on an Ethernet switch fabric, where the Fibre Channel frames are encapsulated inside Ethernet frames.
FCoE comes in software and hardware variants;
the main difference is where the FC adapter driver runs.
Hardware: through a converged network adapter.
Software: done by software on a NIC that is compatible with software FCoE, without having to install a dedicated HBA or third-party FCoE driver on the ESXi host. The NIC used for FCoE must be bound as an uplink to a vSwitch that contains a VMkernel (vmk) port group.
The FCoE protocol encapsulates Fibre Channel frames into Ethernet frames. Therefore, your host can use lossless Ethernet to deliver 10 Gbit Fibre Channel traffic.

Software FCoE configuration:
Step one:
Connect VMkernel networking to the physical FCoE NICs installed on the host.
The VLAN ID (the FCoE network needs to be isolated) and the priority class are discovered during FCoE initialization. The priority class is not configured in vSphere.
ESXi supports a maximum of four network adapter ports for software FCoE.
Step two:
Add the software FCoE adapter.
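The second step can also be scripted; a hedged sketch assuming `content` from the earlier examples, where "vmnic2" is a placeholder name for an FCoE-capable uplink already wired to a VMkernel port group:

```python
# Activate a software FCoE adapter on top of a physical NIC.
# Assumes `content` from the earlier sketch; "vmnic2" is a placeholder.
from pyVmomi import vim

host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
spec = vim.host.FcoeConfig.FcoeSpecification(underlyingPnic="vmnic2")
host.configManager.storageSystem.DiscoverFcoeHbas(fcoeSpec=spec)
```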

Software FCoE multipathing:

Multipathing goes through the vSphere virtual switch:
multiple VMkernel ports connected to multiple physical NICs provide multiple access paths to the FC storage.

iSCSI components:

The ESXi host is configured with an iSCSI initiator. The initiator can be hardware-based, in which case it is an iSCSI HBA; or it can be software-based, known as the iSCSI software initiator.

iSCSI Address:
iSCSI alias: an alternative name for an iSCSI device or port that is easier to manage than the iSCSI name. iSCSI aliases
are not unique; an alias is just a friendly name to associate with a port.
Target: the target storage.
EUI: an alternative naming convention to the IQN format.

Storage device naming conventions:
There are several ways to identify a storage device.
Runtime name: uses the vmhbaN:C:T:L convention. This name is not persistent across reboots.
vmhbaAdapter:CChannel:TTarget:LLUN

vmhbaAdapter is the name of the storage adapter. The name refers to the physical adapter on the host, not to the SCSI controller used by the virtual machine.

Channel is the storage channel number.
Software iSCSI adapters and dependent hardware adapters use the channel number to show multiple paths to the same target.

Target is the target number, determined by the host. It may change if the mappings of targets visible to the host change. Targets shared by different hosts might not have the same target number.

LUN is the LUN number, which shows the position of the LUN within the target. The LUN number is provided by the storage system. If a target has only one LUN, the LUN number is always zero (0).
For example, C1 denotes the second channel and T1 the second target;
the first three parts may change (after a reboot or otherwise), but the LUN number does not change.
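The runtime-name convention is easy to work with in code; a small plain-Python helper, purely for illustration:

```python
# Parse a vmhbaN:C:T:L runtime name into its four components.
import re

def parse_runtime_name(name: str) -> dict:
    m = re.fullmatch(r"(vmhba\d+):C(\d+):T(\d+):L(\d+)", name)
    if not m:
        raise ValueError(f"not a runtime name: {name!r}")
    adapter, channel, target, lun = m.groups()
    return {"adapter": adapter, "channel": int(channel),
            "target": int(target), "lun": int(lun)}

print(parse_runtime_name("vmhba33:C1:T1:L0"))
# -> {'adapter': 'vmhba33', 'channel': 1, 'target': 1, 'lun': 0}
```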

iSCSI adapters:


  • Independent hardware adapter: a specialized iSCSI HBA installed in the host, with the iSCSI initiator and its driver built into the card (independent of ESXi),
    enabling efficient data exchange between the host, the switch, and the storage.

  • Dependent hardware adapter: depends on the host. The card has the iSCSI initiator built in but relies on ESXi for two things: the iSCSI driver and
    a VMkernel port with an IP address for iSCSI. Using a TOE (TCP offload engine) can greatly improve the data transfer rate: the TCP/IP protocol stack functions are performed by the TOE,
    while the iSCSI functional layer is still handled by the host.

  • Pure software initiator: no special card is required; ESXi provides the driver, the TCP/IP stack, and the iSCSI initiator.
    Performance is affected, since the VMkernel overhead is larger; the benefit is that you are not limited by a card.
    Comparing the three methods, the one using an independent iSCSI HBA card gives the best data transfer performance and has the highest price.

ESXi IP storage configuration:

You must create a VMkernel port for the ESXi software iSCSI initiator to access storage.
You can use the same port to access NAS/NFS storage.
To optimize your vSphere network settings, set up the iSCSI network and the NAS/NFS network separately: physical separation is best. If you do not have physical separation, use VLANs.
Software iSCSI network configuration consists of creating a VMkernel port on a virtual switch to handle the iSCSI traffic.
Depending on the number of physical adapters you want to use for iSCSI traffic, the network setup varies:
• If you have one physical network adapter, you need one VMkernel port on a virtual switch.
• If you have two or more physical network adapters for iSCSI, you can use these adapters for host-based multipathing. For performance and security, the best practice is to isolate your iSCSI network from other networks by physically separating them. If the networks cannot be separated physically, logically separate them on a single virtual switch by configuring a separate VLAN for each network.
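Creating the VMkernel port can be scripted; a sketch assuming `content` from the earlier examples, with the port group name and IP settings as placeholders:

```python
# Add a VMkernel port for iSCSI on an existing standard-switch port group.
# Assumes `content` from the earlier sketch; names and IPs are placeholders.
from pyVmomi import vim

host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
nic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False,
                         ipAddress="192.168.50.10",
                         subnetMask="255.255.255.0"))
vmk = host.configManager.networkSystem.AddVirtualNic(
    portgroup="iSCSI-PG",   # placeholder existing port group
    nic=nic_spec)
print("Created VMkernel port:", vmk)   # e.g. "vmk2"
```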

You must activate the software iSCSI adapter so that your host can use it to access iSCSI storage.
Only one software iSCSI adapter can be activated.
Note that if you boot from iSCSI using the software iSCSI adapter, the adapter is enabled and the network configuration is created at first boot.
If you disable the adapter, it is re-enabled each time you boot the host.
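Activation itself is a single API call; a sketch assuming `content` from the earlier examples:

```python
# Enable the software iSCSI initiator on the first ESXi host found.
# Assumes `content` from the earlier pyVmomi connection sketch.
from pyVmomi import vim

host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ss = host.configManager.storageSystem
ss.UpdateSoftwareInternetScsiEnabled(True)
print("software iSCSI enabled:",
      ss.storageDeviceInfo.softwareInternetScsiEnabled)
```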

Discovering iSCSI targets:

The iSCSI adapter discovers storage resources on the network and determines which resources are available for access. ESXi hosts support these discovery methods:
 static
 dynamic, also called SendTargets
A SendTargets response returns the IQNs and all available IP addresses.
• Static discovery: the initiator does not have to perform discovery. The initiator knows in advance all the targets it will contact and uses their IP addresses and domain names to communicate with them.
• Dynamic discovery, or SendTargets discovery: each time the initiator contacts a specified iSCSI server, it sends a SendTargets request to the server. The server responds by providing a list of available targets to the initiator. The names and IP addresses of these targets appear in the vSphere Client as static targets. You can delete a static target that was added by dynamic discovery; if you do, the target might be returned to the list during the next rescan operation, and also if the HBA is reset or the host is restarted.
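Adding a dynamic-discovery address can be scripted as follows; a sketch assuming `content` from the earlier examples, with the adapter name and target address as placeholders:

```python
# Add a SendTargets (dynamic discovery) address to an iSCSI adapter,
# then rescan. Assumes `content`; "vmhba65" and the IP are placeholders.
from pyVmomi import vim

host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ss = host.configManager.storageSystem
target = vim.host.InternetScsiHba.SendTarget(
    address="192.168.50.100",   # placeholder iSCSI server
    port=3260)
ss.AddInternetScsiSendTargets(iScsiHbaDevice="vmhba65", targets=[target])
ss.RescanHba("vmhba65")   # populates the discovered static targets
```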

iSCSI storage multipathing:

For dependent hardware or software iSCSI:

 Use multiple network cards

 Each NIC is connected to a separate VMkernel port

 Bind the VMkernel ports to the iSCSI initiator (see the sketch after this list)

Independent hardware iSCSI uses two or more hardware iSCSI adapters.
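Port binding, the last step in the list above, might look like this; a sketch assuming `content` from the earlier examples, with the adapter and VMkernel device names as placeholders:

```python
# Bind VMkernel ports to the software iSCSI adapter for multipathing.
# Assumes `content`; "vmhba65", "vmk1", "vmk2" are placeholders.
from pyVmomi import vim

host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
im = host.configManager.iscsiManager
for vmk in ("vmk1", "vmk2"):
    im.BindVnic(iScsiHbaName="vmhba65", vnicDevice=vmk)
print(im.QueryBoundVnics(iScsiHbaName="vmhba65"))
```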
