Author: Leic
-
VMware Storage Best Practices
Patrick Carmichael – Escalation Engineer, Global Support Services.
-
ESXi 6 kernel log (dmesg)
Understanding SCSI device/target NMP errors/conditions in ESX/ESXi 4.x and ESXi 5.x/6.0 (1030381)
2016-10-27T12:50:47.496Z cpu7:32798)ScsiDeviceIO: 2651: Cmd(0x439d80358400) 0x1a, CmdSN 0x1d1f2 from world 0 to dev "mpx.vmhba33:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
Host Status = 0x0 = OK
Device Status = 0x2 = Check Condition
Plugin Status = 0x0 = OK
Sense Key = 0x5 = ILLEGAL REQUEST
Additional Sense Code/ASC Qualifier = 0x24/0x0 = INVALID FIELD IN CDB
For a complete list of possible Sense Keys, see SCSI Sense Keys
For a complete list of Additional Sense Code/ASC Qualifier pairings, see ASC-NUM.TXT
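To find entries like this on a live host, a quick filter over the kernel log is enough (a minimal sketch; /var/log/vmkernel.log is the usual log location on ESXi 5.x/6.0):
# Sketch: show the 20 most recent SCSI command failures with their H:/D:/P: status and sense data
grep "ScsiDeviceIO.*failed H:" /var/log/vmkernel.log | tail -n 20
-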
ESXi disk (LUN) property check
Identifying disks when working with VMware ESXi/ESX (1014953)
Run these commands to collect disk and LUN information from ESXi 6:
- Run the esxcli storage core path list command to generate a list of all LUN paths currently connected to the ESXi host.
- Run the esxcli storage core device list command to generate a list of LUNs currently connected to the ESXi host.
- Run the esxcli storage vmfs extent list command to generate a list of extents for each volume and mapping from device name to UUID.
- Run the esxcli storage filesystem list command to generate a compact list of the LUNs currently connected to the ESXi host, including VMFS version.
- Run the ls -alh /vmfs/devices/disks command to list the possible targets for certain storage operations.
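A typical way to chain these commands is to start from the VMFS extent and drill down to the LUN and its paths (a sketch; the naa.* identifier below is a placeholder, not a real device):
esxcli storage vmfs extent list                                            # note the Device Name (naa.*) behind the datastore
esxcli storage core device list -d naa.600508b1001c5a1e0123456789abcdef   # properties of that LUN
esxcli storage core path list -d naa.600508b1001c5a1e0123456789abcdef     # every path to that LUN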
-
How to create/delete/assign RDM in ESXi 6
Create RDM map file:
Log in to the ESXi 6 shell and run rdm.sh (rdm) to create the RDM map file; a vmkfstools sketch of what the script does follows the links below.
Attach / Detach RDM to VM:
Launch the VMware vSphere Client 5.5 to attach or detach the RDM disk (added as an existing disk) to the VM.
Delete RDM map file:
Launch the VMware Web Client to delete the RDM disk map file.
http://blog.zhenglei.net/?p=255651
http://blog.zhenglei.net/?p=255653
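The rdm.sh script is assumed to wrap vmkfstools; a minimal sketch of the create and delete steps done by hand (the t10.* disk ID and datastore paths are placeholders):
# Create a physical-compatibility RDM map file (-z) for a local disk
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/rdm/disk1-rdm.vmdk
# Delete the map file again
vmkfstools -U /vmfs/volumes/datastore1/rdm/disk1-rdm.vmdk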
-
How ESXi identifies disks
Identifying disks when working with VMware ESXi/ESX (1014953)
These are the definitions for some of the identifiers and their conventions:
- naa.<NAA>:<Partition> or eui.<EUI>:<Partition>
NAA stands for Network Addressing Authority identifier. EUI stands for Extended Unique Identifier. The number is guaranteed to be unique to that LUN. The NAA or EUI identifier is the preferred method of identifying LUNs and the number is generated by the storage device. Since the NAA or EUI is unique to the LUN, if the LUN is presented the same way across all ESXi hosts, the NAA or EUI identifier remains the same. For more information on these standards, see the SPC-3 documentation from the InterNational Committee for Information Technology Standards (T10).
The <Partition> represents the partition number on the LUN or Disk. If the <Partition> is specified as 0, it identifies the entire disk instead of only one partition. This identifier is generally used for operations with utilities such as vmkfstools.
- mpx.vmhba<Adapter>:C<Channel>:T<Target>:L<LUN> or mpx.vmhba<Adapter>:C<Channel>:T<Target>:L<LUN>:<Partition>
Some devices do not provide the NAA number described above. In these circumstances, an MPX Identifier is generated by ESXi to represent the LUN or disk. The identifier takes the form similar to that of the canonical name of previous versions of ESXi with the mpx. prefix. This identifier can be used in the exact same way as the NAA Identifier described above.
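Listing /vmfs/devices/disks shows both naming schemes side by side (the output below is illustrative, not from a real host):
ls /vmfs/devices/disks/
# naa.600508b1001c5a1e0123456789abcdef     <- whole LUN (NAA identifier)
# naa.600508b1001c5a1e0123456789abcdef:1   <- partition 1 on that LUN
# mpx.vmhba33:C0:T0:L0                     <- no NAA/EUI available, so ESXi generated an mpx. name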
-
Local Storage as RDM
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1017530
/vmfs/devices/disks for local or SAN-based disks.
/vmfs/devices/lvm for ESXi logical volumes.
/vmfs/devices/generic for generic SCSI devices.
To configure a local device as an RDM disk:
- Open an SSH session to the ESXi/ESX host.
- Run this command to list the disks that are attached to the ESXi host:
# ls -l /vmfs/devices/disks
- From the list, identify the local device you want to configure as an RDM and copy the device name.
Note: The device name is likely to be prefixed with t10. and looks similar to:
t10.F405E46494C4540046F455B64787D285941707D203F45765
- To configure the device as an RDM and output the RDM pointer file to your chosen destination, run this command:
# vmkfstools -z /vmfs/devices/disks/diskname /vmfs/volumes/datastorename/vmfolder/vmname.vmdk
For example:
# vmkfstools -z /vmfs/devices/disks/t10.F405E46494C4540046F455B64787D285941707D203F45765 /vmfs/volumes/Datastore2/localrdm1/localrdm1.vmdk
Note: The size of the newly created RDM pointer file appears to be the same as the raw device it is mapped to; this is a dummy file and does not consume any storage space.
- When you have created the RDM pointer file, attach the RDM to a virtual machine using the vSphere Client:
- Right click the virtual machine you want to add an RDM disk to.
- Click Edit Settings.
- Click Add.
- Select Hard Disk.
- Select Use an existing virtual disk.
- Browse to the directory where you saved the RDM pointer file, select it, and click Next.
- Select the virtual SCSI controller you want to attach the disk to and click Next.
- Click Finish.
- You should now see your new hard disk in the virtual machine inventory as Mapped Raw LUN.
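To double-check the mapping, vmkfstools can query the pointer file and report the device it maps to (a sketch reusing the example path from above):
# Query the RDM attributes of the pointer file
vmkfstools -q /vmfs/volumes/Datastore2/localrdm1/localrdm1.vmdk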
-
VMware RDM
RDM has two compatibility modes:
- Physical compatibility mode
- Virtual compatibility mode
Note: RDM is not available for direct-attached block devices or certain RAID devices (this depends on the controller). You cannot map a disk partition as an RDM; RDMs require the mapped device to be a whole LUN.
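From the shell, the compatibility mode is selected by the vmkfstools flag (a sketch; the device and paths are placeholders):
# Physical compatibility mode (-z): SCSI commands are passed through to the device
vmkfstools -z /vmfs/devices/disks/<device> /vmfs/volumes/datastore1/vm1/vm1-rdmp.vmdk
# Virtual compatibility mode (-r): the RDM behaves like a normal VMDK (snapshots possible)
vmkfstools -r /vmfs/devices/disks/<device> /vmfs/volumes/datastore1/vm1/vm1-rdm.vmdk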
-
Backup ESXi host with Live CD
Use a Live CD/USB to back up and restore the ESXi host disk/partition:
Note:
- Redo Backup: partition-level backup & restore; supports Samba & FTP.
- Clonezilla: file-level backup & restore (but VMFS5); supports NFS.
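If neither tool fits, a plain dd from any Linux live environment can image the boot device as well (a rough sketch; the disk name and NFS export are assumptions):
# Image the ESXi boot disk to a file on an NFS share (adjust /dev/sda and the export to match)
mount -t nfs 192.168.1.10:/backup /mnt
dd if=/dev/sda of=/mnt/esxi-boot-disk.img bs=1M conv=sync,noerror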
-
Backup ESXi virtual machine
Backup:
Copy the whole directory of the virtual machine from VMFS to NFS.
Restore:
Copy it back from NFS to the same VMFS location.
Clone:
Use vmkfstools to clone between different VMFS volumes or virtual machine names. Refer to KB 1027876.
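A minimal sketch of those three operations from the ESXi shell (datastore and VM names are placeholders; the VM should be powered off for a consistent copy):
# Backup: copy the whole VM directory from VMFS to an NFS datastore
cp -r /vmfs/volumes/datastore1/myvm /vmfs/volumes/nfs-backup/myvm
# Restore: copy it back to the original VMFS location
cp -r /vmfs/volumes/nfs-backup/myvm /vmfs/volumes/datastore1/myvm
# Clone: copy a disk to another VMFS volume / VM name with vmkfstools (see KB 1027876)
vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk /vmfs/volumes/datastore2/myvm-clone/myvm-clone.vmdk -d thin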
-
HP P410/Raid5 benchmark
Test Environment
ProLiant MicroServer Gen8
E3-1230 v2 @ 3.3 GHz
16GB RAM
3 × WD Red NAS 4 TB @ HP P410 RAID 5
Software:
HP ESXi 6.0.0U2
Virtual:
Debian 8/X64
8 CPU / 4 GB RAM
Benchmark Tool:
FIO (Flexible I/O Tester)
Google Compute SSD FIO Script
https://wiki.mikejung.biz/Benchmarking#Linux_Benchmarking_Tools
Result:
# Full write pass, 1 process
Run status group 0 (all jobs):
  WRITE: io=20480MB, aggrb=267620KB/s, minb=267620KB/s, maxb=267620KB/s, mint=78363msec, maxt=78363msec
Disk stats (read/write):
  sda: ios=0/41090, merge=0/194, ticks=0/17786888, in_queue=17813224, util=99.70%

# 4 process / Random Read
benchmark: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=128
Run status group 0 (all jobs):
  READ: io=45580KB, aggrb=1500KB/s, minb=1500KB/s, maxb=1500KB/s, mint=30379msec, maxt=30379msec
Disk stats (read/write):
  sda: ios=11312/12, merge=0/7, ticks=4291100/444, in_queue=4331296, util=99.76%

# 4 process / Random Write
benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=128
Run status group 0 (all jobs):
  WRITE: io=113208KB, aggrb=3689KB/s, minb=3689KB/s, maxb=3689KB/s, mint=30684msec, maxt=30684msec
Disk stats (read/write):
  sda: ios=0/28298, merge=0/0, ticks=0/4648448, in_queue=4649856, util=99.76%
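For reference, an fio invocation along these lines (reconstructed from the job lines above; file name, size and runtime are assumptions) reproduces the 4-process random-read case:
# 4 jobs, 4 KiB random reads, libaio, queue depth 128, against a test file
fio --name=benchmark --rw=randread --bs=4k --ioengine=libaio --iodepth=128 \
    --numjobs=4 --direct=1 --size=5G --runtime=30 --group_reporting \
    --filename=/fio-test.file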