As I dig into documents and KB articles, I keep finding more things to like about vSphere 4.1. Today’s find has to do with the PVSCSI driver.
With the release of vSphere 4.0, VMware added a new paravirtualized SCSI driver to the VMware Tools that promised better virtual disk throughput and lower overall CPU utilization than the standard LSI driver for workloads with high I/O demands. Unfortunately, the PVSCSI driver wasn’t supported on virtual machine boot volumes, so folks held off on making it the default SCSI driver for all virtual machines.
With the release of vSphere 4.0 Update 1, VMware lifted that restriction and began supporting the PVSCSI driver on boot volumes. Folks began considering adopting the PVSCSI driver in all virtual machines, similar to how the VMXNET driver has become the standard for nearly all virtual NICs. Soon afterwards, VMware published a knowledge base article stating that virtual machines without heavy I/O demands could actually experience worse performance using the PVSCSI driver, and recommended using it only for workloads with I/O demands in excess of 2,000 IOPS.
With the release of vSphere 4.1 that is no longer a problem and you can use the PVSCSI driver in all circumstances. Want details? Read on!
The VMware KB states the following:
VMware evaluated the performance of PVSCSI and LSI Logic to provide a guideline to customers on choosing the right adapter for different workloads. The experiment results show that PVSCSI greatly improves the CPU efficiency and provides better throughput for heavy I/O workloads. For certain workloads, however, the ESX 4.0 implementation of PVSCSI may have a higher latency than LSI Logic if the workload drives low I/O rates or issues few outstanding I/Os.
In a post on his vPivot blog, former VMware performance guru Scott Drummonds echoed those sentiments and stated that the issue would be resolved in a future release of vSphere. He noted that in those low-I/O scenarios the PVSCSI driver was only slightly slower than the LSI driver, but acknowledged that it offered no efficiency gains to offset the small performance loss.
Thankfully in the same KB noted above VMware now states that the issue has been resolved:
The test results show that PVSCSI is better than LSI Logic, except under one condition–the virtual machine is performing less than 2,000 IOPS and issuing greater than 4 outstanding I/Os. This issue is fixed in vSphere 4.1, so that the PVSCSI virtual adapter can be used with good performance, even under this condition.
Having to maintain two separate templates, one with PVSCSI and one without, never made a lot of sense, so most folks probably either accepted the small performance hit or never used PVSCSI to begin with. Now that the issue has been resolved, I suspect most folks will default to the PVSCSI driver in all templates going forward, regardless of the VM’s I/O demands.
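For anyone updating those templates: the paravirtual adapter is normally selected when editing a VM’s settings in the vSphere Client, but it corresponds to the SCSI controller type in the VM’s .vmx file. A minimal sketch of the relevant entries (the controller number scsi0 here is just an example; your VM may use a different controller):

```
scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"
```

The guest needs VMware Tools installed so the pvscsi driver is available before the boot volume’s controller type is switched; otherwise the VM may fail to boot.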