We've all heard about how software-defined networking will let us run applications and custom code on our switches, but there haven't been many examples of exactly what that means in the real world.
In this week's New Tech Forum, Andrew Warfield, co-founder and CTO of Coho Data, takes us through how Coho uses SDN applications to get past the bottlenecks that file-sharing protocols such as NFS impose on flash storage. — Paul Venezia
Data, storage, and SDN: An application example
A lot of the usual discussion around SDN is about how the introduction of more flexible networks is going to solve a bunch of very real problems from a network operations perspective. The rise of server virtualization has made provisioning and isolation of network resources harder than it already was, and SDN promises to make it better. Similarly, large organizations like Microsoft and Google are talking about the wins they're getting in terms of explicit, wide-area traffic management within large-scale enterprise networks.
What's really exciting about these discussions is the idea that new applications will emerge. Fine-grained, data path programmability in the network might actually change how we approach broader issues in application design, and emerging applications might integrate directly with network infrastructure. To date, though, there haven't been many clear examples of application use cases for the exciting new functionalities that SDN switching supposedly offers.
In this article, I'd like to tell you a story of a storage application use case. For the past two years, we've been working on an enterprise storage system that embeds SDN switching hardware directly within our platform. We've worked with OpenFlow and with chip-set APIs to manipulate TCAMs (ternary content-addressable memories) and forwarding tables directly. Although SDN hasn't yet been broadly deployed in many of our customer networks, we're able to extract concrete value from today's SDN switch hardware by using it as an embedded component of our storage system. And it is paying off spectacularly.
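To make the TCAM idea concrete, here is a minimal sketch (illustrative only, not Coho's implementation) of the matching model a TCAM-backed forwarding table provides: each entry stores a value and a mask, where zero bits in the mask are "don't care", and the highest-priority matching entry wins. The entry fields and action labels below are hypothetical.

```python
# Sketch of TCAM-style (ternary) matching: mask bits of 0 are wildcards.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TcamEntry:
    value: int      # bits to compare against
    mask: int       # 1 bits are compared; 0 bits are "don't care"
    action: str     # hypothetical action label, e.g. an output port
    priority: int = 0

def lookup(table: List[TcamEntry], key: int) -> Optional[str]:
    """Return the action of the highest-priority matching entry, if any."""
    best = None
    for e in table:
        if (key & e.mask) == (e.value & e.mask):
            if best is None or e.priority > best.priority:
                best = e
    return best.action if best else None

# Example: match on a 32-bit destination IP.
# 10.0.0.0/24 -> port1; a catch-all default -> port2.
def ip(a, b, c, d):
    return (a << 24) | (b << 16) | (c << 8) | d

table = [
    TcamEntry(ip(10, 0, 0, 0), 0xFFFFFF00, "port1", priority=10),
    TcamEntry(0, 0x00000000, "port2", priority=0),
]
print(lookup(table, ip(10, 0, 0, 42)))    # → port1
print(lookup(table, ip(192, 168, 1, 1)))  # → port2
```

An OpenFlow controller expresses rules in essentially this form (match fields plus an action, at a priority); the switch ASIC evaluates them in hardware at line rate.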
Why the network hasn't (really) mattered until now
Let's start with some background. For the past two decades, vendors have built big boxes full of spinning disks, aggregating them together with techniques like RAID, then exporting some abstraction like a virtual block device or file system over the network.
From a performance perspective, these spinning disks are awful. If you access them sequentially, modern disks will offer you about 100MBps of data — in other words, at best, a single disk can just about saturate a 1Gb link. Unfortunately, that best case rarely happens. Random I/O incurs seeks, and throughput falls through the floor: with random access, that same disk will deliver 2MBps or less. The broad deployment of virtualization has meant that enterprise storage systems serve more concurrent workloads (lots of VMs) and more opaque workloads (virtual hard disks instead of individual documents), a phenomenon broadly known as the "I/O blender" effect.
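The gap between those two numbers falls out of simple arithmetic. A back-of-envelope model (the latency figures below are typical for a 7,200rpm disk, not measurements) shows why paying a seek plus rotational latency on every request destroys throughput:

```python
# Back-of-envelope model of spinning-disk throughput under random I/O.
# Assumed figures: ~8ms average seek, ~4.2ms half-rotation at 7,200rpm,
# ~100MBps sequential media rate.

def random_io_throughput_mbps(block_kb, seek_ms=8.0, rotational_ms=4.2,
                              sequential_mbps=100.0):
    """MBps when every I/O pays a full seek plus half a rotation."""
    transfer_ms = (block_kb / 1024.0) / sequential_mbps * 1000.0
    total_ms = seek_ms + rotational_ms + transfer_ms
    return (block_kb / 1024.0) / (total_ms / 1000.0)

# Sequential: 100 MBps is ~800Mbps, roughly saturating a 1Gb link.
print(100 * 8)  # → 800 (Mbps)

# Random 16KB reads: ~12ms of mechanical latency per request
# leaves only ~1.3 MBps of usable throughput.
print(round(random_io_throughput_mbps(16), 2))
```

With small random requests, well over 99 percent of each I/O's time goes to moving the disk arm and waiting for the platter, which is exactly the "2MBps or less" regime described above.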