29. VMware Recommendations¶
This section offers FreeNAS® configuration recommendations and troubleshooting tips when using FreeNAS® with a VMware hypervisor.
29.1. FreeNAS® as a VMware Guest¶
This section has recommendations for configuring FreeNAS® when it is installed as a Virtual Machine (VM) in VMware.
To create a new FreeNAS® Virtual Machine in VMware, see the VMware ESXi section of this guide.
Configure the FreeNAS® VM to use the VMXNET3 network adapter so the system uses the vmx(4) drivers.
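One way to confirm the vmx(4) driver is in use is to list the network interfaces from the FreeNAS® shell. This is a hedged sketch; the interface number (vmx0) depends on the VM's hardware configuration.

```shell
# Run from the FreeNAS shell. A VMXNET3 adapter handled by the
# vmx(4) driver appears as vmx0, vmx1, and so on, rather than
# em0 (the e1000 driver).
ifconfig vmx0
```

If the command reports that the interface does not exist, the VM is likely configured with an E1000 adapter instead of VMXNET3.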
Network connection errors for plugins or jails inside the FreeNAS® VM can be caused by a misconfigured virtual switch or VMware port group. Make sure MAC address changes and promiscuous mode are allowed, first on the virtual switch and then on the port group the VM is using.
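These security policy settings can be applied from the ESXi host command line with esxcli. The switch name (vSwitch0) and port group name ("VM Network") below are examples only; substitute the names used in the actual environment.

```shell
# Run on the ESXi host. Allow promiscuous mode and MAC address
# changes on the virtual switch:
esxcli network vswitch standard policy security set \
    -v vSwitch0 --allow-promiscuous=true --allow-mac-change=true

# Then apply the same policy to the port group used by the
# FreeNAS VM:
esxcli network vswitch standard portgroup policy security set \
    -p "VM Network" --allow-promiscuous=true --allow-mac-change=true
```

The same settings are available in the vSphere Client under the security policy of the virtual switch and port group.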
29.2. Hosting VMware Storage with FreeNAS®¶
This section has recommendations for configuring FreeNAS® when the system is being used as a VMware datastore.
Make sure guest VMs have the latest version of vmware-tools installed. VMware provides instructions to install VMware Tools on different guest operating systems.
Increase the VM disk timeouts to better survive long disk operations. Set the timeout to a minimum of 300 seconds. See the guest operating system documentation for setting disk timeouts. VMware provides instructions for setting disk timeouts on some specific guest operating systems:
- Windows guest operating system: https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-EA1E1AAD-7130-457F-8894-70A63BD0623A.html
- Linux guests running kernel version 2.6: https://kb.vmware.com/s/article/1009465
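For Linux guests, the disk timeout can be raised from the shell. This is a sketch under assumptions: the device name (sda) and the udev rule filename are examples, and the rule shown matches any SCSI disk rather than only VMware virtual disks.

```shell
# Run inside the Linux guest as root. Temporarily raise the SCSI
# disk timeout to 300 seconds (repeat for each virtual disk):
echo 300 > /sys/block/sda/device/timeout

# Make the change persistent across reboots with a udev rule
# (the filename is a convention, not a requirement):
cat > /etc/udev/rules.d/99-scsi-timeout.rules <<'EOF'
ACTION=="add", SUBSYSTEMS=="scsi", ATTR{timeout}=="?*", ATTR{timeout}="300"
EOF
```

Verify the active value after a reboot by reading the same sysfs file.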
When FreeNAS® is used as a VMware datastore, coordinated ZFS and VMware snapshots can be used.
29.3. VAAI for iSCSI¶
VMware’s vStorage APIs for Array Integration (VAAI) allow storage tasks, such as large data moves, to be offloaded from the virtualization hardware to the storage array. These operations are performed locally on the NAS without transferring bulk data over the network.
VAAI for iSCSI supports these operations:
- Atomic Test and Set (ATS) allows multiple initiators to synchronize LUN access in a fine-grained manner rather than locking the whole LUN and preventing other hosts from accessing the same LUN simultaneously.
- Clone Blocks (XCOPY) copies disk blocks on the NAS. Copies occur locally rather than over the network. This operation is similar to Microsoft ODX.
- LUN Reporting allows a hypervisor to query the NAS to determine whether a LUN is using thin provisioning.
- Stun pauses virtual machines when a pool runs out of space. The space issue can then be fixed and the virtual machines can continue rather than reporting write errors.
- Threshold Warning causes the system to report a warning when a configurable capacity is reached. In FreeNAS®, this threshold is configured at the pool level when using zvols (see Table 13.2.1) or at the extent level (see Table 13.2.6) for both file and device based extents. Typically, the warning is set at the pool level, unless file extents are used, in which case it must be set at the extent level.
- Unmap informs FreeNAS® that the space occupied by deleted files should be freed. Without unmap, the NAS is unaware of freed space created when the initiator deletes files. For this feature to work, the initiator must support the unmap command.
- Zero Blocks or Write Same zeros out disk regions. When allocating virtual machines with thick provisioning, the zero write is done locally, rather than over the network. This makes virtual machine creation and any other zeroing of disk regions much quicker.
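The Unmap operation described above can also be triggered manually. This is a hedged example: the datastore label (datastore1) and the guest mount point are assumptions, and space is only reclaimed when the extent is thin provisioned and unmap support is enabled end to end.

```shell
# Run on the ESXi host to reclaim freed blocks on a VMFS datastore
# backed by a FreeNAS iSCSI extent:
esxcli storage vmfs unmap -l datastore1

# Inside a Linux guest, space freed by deleted files can be
# released with fstrim on a mounted filesystem:
fstrim -v /
```

Recent VMFS versions can also issue unmap automatically in the background, in which case manual runs are rarely needed.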