Applications are becoming increasingly parallel. What used to be done on a single application server is now spread across a cluster of servers operating in parallel. This allows for scaling in multiple dimensions. Need more bandwidth or compute power for your clustered application? Just add more servers to the cluster. Hyperscale customers have been doing this for nearly a decade now, and the number of customers embracing this architecture is growing constantly.
This scale-out approach is also used to scale storage performance and capacity. Traditional all-flash arrays cannot offer enough density or performance to satisfy these applications. Therefore, the servers in the clusters include direct-attached SSDs that are used to store a clustered application's data. When you need to scale storage, you again just add nodes to the cluster with SSDs installed. These SSDs are transitioning from proprietary PCIe cards and SCSI-based SAS/SATA SSDs to standard NVMe SSDs.
SCSI-based SSDs (SAS and SATA) were never designed to deliver the low latency of PCIe. The initial products that did effectively leverage PCIe's low latency were proprietary implementations, most of which are now end-of-life. With NVMe, you get a standards-based SSD with in-box drivers that offers a massive performance boost over SCSI-based standard SSDs. NVMe SSDs also offer parallel rather than serial access to the device by deploying multiple I/O queues, which lowers latency and drives further performance gains for applications. In fact, NVMe is roughly 2,000 times more parallel than SATA and 250 times more parallel than SAS: NVMe allows up to 64K queues with up to 64K commands each, versus a single queue of 32 commands for SATA and a single queue of roughly 256 commands for SAS.
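The arithmetic behind those ratios can be sketched in a few lines of Python, using the per-interface queue limits from the respective specifications (SATA NCQ: one queue of 32 commands; SAS: typically one queue of 256 commands; NVMe: up to 64K queues of 64K commands each):

```python
# Maximum outstanding-command capacity per interface, per spec.
SATA_QUEUES, SATA_DEPTH = 1, 32        # SATA NCQ: one queue, 32 commands
SAS_QUEUES, SAS_DEPTH = 1, 256         # SAS: one queue, ~256 commands
NVME_QUEUES, NVME_DEPTH = 65_535, 65_536  # NVMe: up to 64K queues x 64K commands

def outstanding_commands(queues: int, depth: int) -> int:
    """Maximum commands a host can have in flight at once."""
    return queues * depth

# Even a single NVMe queue already yields the ratios quoted above:
print(NVME_DEPTH // SATA_DEPTH)  # 2048x vs. SATA
print(NVME_DEPTH // SAS_DEPTH)   # 256x vs. SAS
```

With all 64K queues in play, the gap grows by another four orders of magnitude, which is why per-core queue pairs matter so much on many-core application servers.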
Why NVMe over Fabrics?
While NVMe SSDs offer great performance, there are drawbacks when they are deployed as direct-attached storage in large scale-out environments. The issues include under-utilization of flash capacity, a lack of data management features, and the inability to share a pooled storage resource efficiently across servers.
Many vendors are starting to discuss adding NVMe SSDs to their all-flash storage arrays, but this will not deliver the true benefits of NVMe, since they still use SCSI-based protocols between the application servers and the storage array. These protocols require protocol translation and are serial rather than parallel, which squanders the latency and performance benefits of the NVMe SSDs.
NVMe Over Fabrics (NVMeOF) is a storage protocol that delivers the performance benefits of NVMe SSDs across a standard, low-latency Ethernet network. The reason the NVMeOF protocol performs so much better than the older SCSI-based protocols (iSCSI, FC, iSER) is that it uses the same model as local NVMe:
- Parallel, per-core multi-queue connections from the host to the storage target
- No translation required over fabrics, as it uses the same NVMe commands
- NVMeOF adds only about 10 µs of latency between the host and target, as opposed to the hundreds of microseconds added by legacy SCSI-based protocols
Finally, the host simply sees a standard block device that any application can use. It appears just like a local NVMe drive, with the same standard NVMe utilities available to manage it.
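As a sketch of what this looks like in practice with the in-box Linux driver and the standard nvme-cli utility, connecting to a remote target takes only a few commands (the transport address and subsystem NQN below are hypothetical placeholders, not values from a real deployment):

```shell
# Load the fabrics support that ships with the in-box NVMe driver
modprobe nvme-fabrics

# Discover subsystems exported by a target (address is a placeholder)
nvme discover -t rdma -a 10.0.0.10 -s 4420

# Connect to a discovered subsystem by its NQN (hypothetical NQN)
nvme connect -t rdma -a 10.0.0.10 -s 4420 \
    -n nqn.2017-01.com.example:subsys1

# The remote namespace now appears as an ordinary block device
# (e.g. /dev/nvme1n1) and is managed with the same standard utilities
nvme list
```

Nothing here is vendor-specific: the same commands work against any standards-conforming NVMeOF target.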
The performance and features of the Pavilion Storage Platform are a big differentiator for us, and they are covered in other blog posts and on our website. Here, however, I want to discuss our advantages within the overall NVMeOF feature set and ecosystem.
Pavilion is a strong proponent of standards-based software. This is why we have designed our product to deliver performance and features WITHOUT requiring users to install software in the host environment.
The NVMeOF client driver is now delivered in-box with Red Hat Enterprise Linux 7.4 and Ubuntu 16. We have customers who are delighted that our array can be set up in minutes, partly because they don't need to install anything on the host to provision a volume from our array. Many vendors claim to deliver products based upon NVMeOF, but few can leverage the standard in-box drivers; most instead require customers to install custom software on every application server.
We are also working to advance the NVMeOF standard so that it can serve broader needs, by adding enterprise features like multi-path support to the driver. We have contributed our multipath working model design to the open source community under the GPL license. Our design is based on the Asymmetric Namespace Access (ANA) specification, the part of the NVMe specification that supports failover between active-active controllers on the target.
As an example of our commitment to standards, we plan to work with other vendors in the NVMeOF community to develop a standard version of NVMeOF Multi-Path Support. Our goal is to not have any proprietary IP that needs to run in our customers' application servers.
NVMeOF is an exciting standard that will unlock new levels of performance and shared storage benefits for modern parallel applications, and Pavilion is leading the charge into this future!