About a decade ago, new companies formed to build online applications in verticals such as SaaS and social media, which required the ability to scale effortlessly in multiple dimensions to support growth and peaks in demand. These companies and their technologists built a new kind of infrastructure to serve a rapidly growing customer base that required real-time information. They relied on low-latency storage resources installed directly in servers as direct-attached storage (DAS) in order to put the data as close to the CPU as possible. The scale-out database technology that underpinned these applications could manage data across the cluster, avoiding the need to deploy traditional shared storage resources. Examples are shown below:
Storage system vendors have chosen to integrate flash in one of two ways: incorporate standard off-the-shelf SSDs, or design their own flash modules and controllers. Many of the early all-flash array pioneers, such as Violin and TMS, designed their own custom flash modules for what were very sound reasons at the time. The choice to go in one direction or the other revolves around several criteria, most notably performance, time-to-market, and cost. I explore all three as they relate to this subject below:
Several new storage systems have come to market with the goal of delivering shared flash resources as a service to high-scale, distributed applications.
These products take advantage of several recent technology developments in the storage and networking space: standards-based PCIe-connected SSDs, RDMA-capable Ethernet networking at speeds up to 100 GbE, a standard storage protocol designed for PCIe-connected SSDs (NVMe), and a standard protocol for remotely accessing NVMe devices (NVMe-Over-Fabrics, or NVMeOF). Note that Red Hat Enterprise Linux 7.4 and Ubuntu 16 both now include NVMeOF support in-box.
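To make the in-box NVMeOF support concrete, here is a minimal sketch of how a Linux host might attach to a remote NVMe subsystem over RDMA using the standard `nvme-cli` tool. The IP address, port, and subsystem NQN below are placeholders, and the exact device names will vary by system:

```shell
# Load the NVMe-oF RDMA transport module (assumes an RDMA-capable NIC is configured)
modprobe nvme-rdma

# Discover the subsystems exported by a target (address and port are examples)
nvme discover -t rdma -a 192.168.1.100 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder NQN shown)
nvme connect -t rdma -n nqn.2014-08.org.example:subsys1 -a 192.168.1.100 -s 4420

# The remote namespace now appears as a local NVMe block device, e.g. /dev/nvme0n1
nvme list
```

Once connected, the remote flash behaves like a local NVMe device from the application's perspective, which is what allows these shared systems to serve DAS-style latency over a fabric.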