[Review]: What’s vSphere PMEM
What’s PMEM
Persistent memory (PMEM) is a new technology that has the characteristics of memory but retains data through power cycles. PMEM bridges the gap between DRAM and flash storage.
PMEM offers several advantages over current technologies, including:
- DRAM-like latency and bandwidth
- CPU can use regular load/store byte-addressable instructions
- Persistence of data across reboots and crashes
These characteristics make PMEM attractive for a wide range of applications and scenarios.
Currently, there are two PMEM solutions available in the market:
- NVDIMM-N by DELL EMC and HPE: NVDIMM-N is a type of DIMM that contains both DRAM and NAND-flash modules on the DIMM itself. Data is transferred between the two modules at startup, shutdown, or on any power-loss event. The DIMMs are backed by a battery power source on the mainboard in case of power loss. Currently, both HPE and DELL EMC offer 16 GB NVDIMM-Ns.
- Scalable PMEM by HPE: This combines HPE SmartMemory DIMMs with NVMe drives and battery backup to create logical NVDIMMs. Data is transferred between the DIMMs and the NVMe drives. This technology can be used to build large-scale PMEM systems.
What’s vSphere PMEM
In a vSphere environment, PMEM can be exposed to a VM in two ways:
- vPMEMDisk: vSphere presents PMEM as a regular disk attached to the VM. No guest OS or application change is needed to leverage this mode, so even legacy applications on legacy OSes can use it. Note that the vPMEMDisk configuration is available only in vSphere and not on a bare-metal OS.
- vPMEM: vSphere presents PMEM as an NVDIMM device to the VM. Most of the latest operating systems (for example, Windows Server 2016 and CentOS 7.4) support NVDIMM devices and can expose them to applications as block or byte-addressable devices. Applications can use vPMEM as a regular storage device by going through the thin layer of a direct-access (DAX) file system, or by mapping a region from the device and accessing it directly in a byte-addressable manner (see the sketch after this list). This mode can be used by legacy or newer applications running on newer OSes.
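To illustrate the byte-addressable path, here is a minimal Python sketch that maps a file living on a DAX-mounted file system inside the guest and reads and writes it directly. The mount point and file name (/mnt/pmem/example.dat) are hypothetical and depend on how the guest exposes its vPMEM NVDIMM (for example, mounting /dev/pmem0 with the dax option on a Linux guest).

```python
import mmap
import os

# Hypothetical path on a DAX-mounted file system backed by a vPMEM NVDIMM,
# e.g. after "mount -o dax /dev/pmem0 /mnt/pmem" in a Linux guest.
PMEM_FILE = "/mnt/pmem/example.dat"
SIZE = 4096

# Create/extend the file to the size we want to map.
fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

# Map the region; on a DAX file system, loads and stores bypass the page
# cache and go straight to persistent memory.
buf = mmap.mmap(fd, SIZE)

# Byte-addressable access: write and read back directly.
buf[0:11] = b"hello pmem!"
print(buf[0:11])

buf.flush()   # msync; makes sure the stores reach the persistence domain
buf.close()
os.close(fd)
```

The same code also runs on a non-DAX file system, but in that case writes go through the page cache instead of directly to persistent memory.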
PMEM can be added to a VM in the following ways:
- vPMEMDisk can be added to a VM like any regular disk. Make sure the VM storage policy is set to the Host-local PMem Default Storage Policy.
- vPMEM can be added to a VM as a new NVDIMM device (a configuration sketch follows the note below).
Note that PMEM is currently supported only on vSphere 6.7.
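The following is a minimal pyVmomi sketch of adding an NVDIMM device to an existing VM, assuming vSphere 6.7 and a host with available PMEM capacity. The vCenter address, credentials, VM name, and NVDIMM size are placeholders, and the device classes used (VirtualNVDIMMController, VirtualNVDIMM) are the ones the 6.7 API exposes; verify them against your pyVmomi version before use.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (placeholder address/credentials; certificate checks disabled for brevity).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Locate the target VM by name (placeholder name).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "pmem-test-vm")

# Device changes: an NVDIMM controller plus a 16 GB NVDIMM backed by host PMEM.
# The negative key is a temporary value that ties the NVDIMM to its new controller.
controller_spec = vim.vm.device.VirtualDeviceSpec()
controller_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
controller_spec.device = vim.vm.device.VirtualNVDIMMController()
controller_spec.device.key = -100

nvdimm = vim.vm.device.VirtualNVDIMM()
nvdimm.controllerKey = -100
nvdimm.capacityInMB = 16 * 1024
nvdimm.backing = vim.vm.device.VirtualNVDIMM.BackingInfo()

nvdimm_spec = vim.vm.device.VirtualDeviceSpec()
nvdimm_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
nvdimm_spec.device = nvdimm

spec = vim.vm.ConfigSpec(deviceChange=[controller_spec, nvdimm_spec])
task = vm.ReconfigVM_Task(spec=spec)  # reconfigure the VM; monitor the task as usual

Disconnect(si)
```

Once the reconfiguration task completes, a recent Linux guest typically sees the new NVDIMM as /dev/pmem0, which can then be formatted and mounted with the dax option as in the earlier sketch.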
PMEM Benchmark Results
The results of testing PMEM performance in vSphere as vPMEM are as follows:
- Virtualization overhead of PMEM is less than 3%.
- The vPMEM-aware configuration can deliver up to 8x the bandwidth of an NVMe SSD.
- Latency with vPMEM configurations is less than 1 microsecond.
- The vPMEM-aware configuration can achieve bandwidth close to the device (memory) bandwidth.
HammerDB test results show that performance improved and latency dropped dramatically:
- 35% application performance improvement (HammerDB transactions per minute) with vPMEM
- Up to 4.5x increase in Oracle IOPs
- 1.4x increase in DB reads, 3x in DB writes, and up to ~17x in DB log writes
- Up to a 57x decrease in Oracle DB operation (read/write) latency