
Open Networking Foundation (ONF) Executive Director on the group's achievements, goals

John Dix | Sept. 16, 2014
Dan Pitt explains where the standards stand.

Standards aren't going to be the way we resolve these differences. So it's either going to be all-out war among proprietary solutions, or we create a community where people can leverage code to demonstrate the pros and cons through multiple revs of the different technologies. And this is why in OpenDaylight we have the service abstraction layer, which, I think, has been very misunderstood.

People ask the question, "If OpenFlow is the answer in the future, why would you create an abstraction layer that allows a whole bunch of old protocols and old technology to interoperate with these wonderful new methods?" And the answer is because we don't yet know whether the OpenFlow model is going to work or whether it will be something else.

Are overlays going to win, or are they a transitional niche technology? We don't know. And so what we're trying to do with OpenDaylight is provide a core controller that can work with multiple models, allow them to develop over many years, and leverage them. Because I don't think the world in 15 years will have eight different southbound protocols, 30 controllers, five overlays, and so on and so forth. I think there will be a shake-out. Today, however, we need that brainstorming, and that's why so much investment is going into that abstraction layer.
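To make the abstraction-layer idea concrete, here is a minimal sketch of the pattern being described: a controller core exposes one interface to applications, while pluggable southbound drivers translate requests into whatever protocol each device speaks. All class and method names below are hypothetical illustrations of the pattern, not OpenDaylight's actual API.

```python
# Hypothetical sketch of a "service abstraction layer": applications program
# forwarding policy against one common interface, and per-protocol plugins
# (OpenFlow, NETCONF, etc.) translate it for the devices they manage.
from abc import ABC, abstractmethod


class SouthboundPlugin(ABC):
    """Common interface every southbound protocol driver implements."""

    @abstractmethod
    def push_flow(self, device: str, match: dict, action: str) -> str:
        ...


class OpenFlowPlugin(SouthboundPlugin):
    def push_flow(self, device, match, action):
        # A real driver would encode an OpenFlow flow-mod message;
        # here we just describe the operation.
        return f"OpenFlow flow-mod to {device}: match={match} action={action}"


class NetconfPlugin(SouthboundPlugin):
    def push_flow(self, device, match, action):
        # A real driver would render device-specific config over NETCONF.
        return f"NETCONF edit-config on {device}: match={match} action={action}"


class ControllerCore:
    """Dispatches each policy request to whichever plugin owns the device,
    so applications never need to know the southbound protocol in use."""

    def __init__(self):
        self.plugins = {}          # protocol name -> plugin instance
        self.device_protocol = {}  # device name -> protocol name

    def register(self, protocol: str, plugin: SouthboundPlugin):
        self.plugins[protocol] = plugin

    def add_device(self, device: str, protocol: str):
        self.device_protocol[device] = protocol

    def install_policy(self, device: str, match: dict, action: str) -> str:
        plugin = self.plugins[self.device_protocol[device]]
        return plugin.push_flow(device, match, action)


core = ControllerCore()
core.register("openflow", OpenFlowPlugin())
core.register("netconf", NetconfPlugin())
core.add_device("sw1", "openflow")
core.add_device("rtr1", "netconf")

# The same policy call works regardless of the device's protocol.
print(core.install_policy("sw1", {"dst_ip": "10.0.0.5"}, "forward:port2"))
print(core.install_policy("rtr1", {"dst_ip": "10.0.0.5"}, "forward:port2"))
```

The point of the pattern is the one Pitt makes: old and new protocols can coexist behind the same core while the community works out which models survive.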

So OpenFlow isn't critical to the success of SDN?
Scott Shenker (a professor in the Electrical Engineering and Computer Sciences Department at UC Berkeley) talks about SDN 1.0 and SDN 2.0. And I think OpenFlow, in its first iteration, was a really, really interesting departure from the past. It was a theoretical exercise, in a sense, to turn the world of networking on its head. And I think it did its job brilliantly.

I think the ONF did their job brilliantly. 

On the other hand, it's really, really hard to invent something from scratch and get it right the first time. OpenFlow 1.0, in a sense, works better in theory than it works in practice. It can solve a few narrow problems decently well, but when you start to think about deploying it in production networks, you have to optimize not for one thing but for 15 attributes, all of which are critically important -- performance and QoS and cost and OpEx, etc. -- and what you found was that OpenFlow 1.0 wasn't going to work. Like any 1.0 product or 1.0 solution.

So they revved it again and got to 1.3, and 1.3 solved some problems. But guess what happens? You get those typical regressions: you solve some problems and you create new ones. So I think OpenFlow has great promise, and it serves some really good purposes, but here's what we don't know yet. Will this evolve?
