Major technology platforms tend to last about 25 to 30 years. This gives them time to gather sufficient developer momentum, enable a set of transformational ideas, build those ideas, and form a large industry around them. The platforms then sustain for an additional 5-10 years due to inertia and lock-in. Finally, when the shortcomings of the old platform become too much to bear — as with the Mainframe, Hierarchical Databases, and more recently the PC — a new platform emerges to take the old platform’s place and bring the future forward.
When I was a young man in 1990, I designed and deployed Silicon Graphics’ first wide area network using technology from a little-known networking startup called Cisco Systems. The technologies and techniques that I used would prove to be the basis for the next two decades of networking architecture. Through the years, the platform that I grew up on has served the world extremely well, enabling the massive build-out of the Internet itself. However, certain characteristics of the platform have remained remarkably closed and stagnant over the years.
The hubs and routers I first worked with did little besides forward packets. Switches have become smarter and smarter over time, but the dark side of these advances is that they have been made in an exceptionally proprietary and brittle way. As a result, only a select few companies have any ability to add new functionality, and those that do, do so in a manner that makes customers wary of making even the simplest changes to their networks.
To make matters worse, the brittle nature of today’s networking platform severely hamstrings cloud computing. Amazon’s AWS, the largest cloud offering in the world, runs an embarrassingly feature-poor network in order to maintain enough flexibility to support its service. Networking features that have been around for over twenty years, such as multicasting, traffic isolation, and security isolation, are absent from Amazon’s offering. When people ask me why more enterprises haven’t moved to the cloud, I point out that doing so would require reversing out the last 15 years of networking features from their applications.
This is why the data networking architecture of the past 30 years — characterized by extremely smart but highly proprietary switches that are closed to 99.99% of would-be network developers — has finally run its course.
The Time is Now
In order to overcome the massive inertia associated with a dominant platform technology, two conditions must exist. First, there must be new, overwhelmingly important functionality that the old platform cannot support in a reasonable way. Second, the new platform must be able to coexist and interoperate with the old. Until recently, neither of these pre-conditions held for networking, but in the past few years, changes in the surrounding ecosystem have made both true:
- The Rise of Cloud Computing – The current architecture was developed to deliver packets between two stationary hosts, in a network under unified administrative control. Thus, globally distributed routing protocols along with manual configuration and scripting were largely sufficient. Today’s datacenter networks must cope with multiple tenants, each requiring stringent performance and isolation guarantees for their migrating VMs. The current architecture simply cannot handle these requirements.
- The Rise of Server Virtualization – Previously, introducing new network functionality in a datacenter required a forklift upgrade. In a modern virtualized environment, all traffic passes through a software switch in the hypervisor before entering the physical network. By controlling these software switches at the edge, a new platform can assume control of the network without requiring a customer to change any parts of the physical networking infrastructure. Thus, the new architecture can be introduced without displacing the old platform.
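To make the edge idea concrete, here is a toy sketch of how a software switch in the hypervisor might tunnel tenant traffic across an unmodified physical network. This is purely illustrative — the class, tunnel format, and addresses are invented for this example, not Nicira’s actual implementation:

```python
# Toy model of edge-based network virtualization: the edge switch maps
# (tenant, logical IP) pairs onto tunnels between hypervisors, so tenants
# with overlapping addresses stay isolated on the shared physical network.
# All names and formats here are illustrative assumptions.

class EdgeSwitch:
    def __init__(self, hypervisor_ip):
        self.hypervisor_ip = hypervisor_ip
        # (tenant_id, logical_ip) -> physical IP of the hosting hypervisor
        self.location = {}

    def attach_vm(self, tenant_id, logical_ip, hypervisor_ip):
        self.location[(tenant_id, logical_ip)] = hypervisor_ip

    def forward(self, tenant_id, src_ip, dst_ip):
        """Return the encapsulated packet as the physical network sees it."""
        dest_hv = self.location.get((tenant_id, dst_ip))
        if dest_hv is None:
            return None  # unknown destination: drop, preserving isolation
        # The outer header is plain hypervisor-to-hypervisor addressing;
        # the tenant's logical addresses travel only inside the payload,
        # so the physical switches need no knowledge of any tenant.
        return {"outer_src": self.hypervisor_ip,
                "outer_dst": dest_hv,
                "inner": (tenant_id, src_ip, dst_ip)}

# Two tenants reuse the same logical address without conflict.
sw = EdgeSwitch("192.168.1.10")
sw.attach_vm("tenant_a", "10.0.0.2", "192.168.1.20")
sw.attach_vm("tenant_b", "10.0.0.2", "192.168.1.30")

pkt_a = sw.forward("tenant_a", "10.0.0.1", "10.0.0.2")
pkt_b = sw.forward("tenant_b", "10.0.0.1", "10.0.0.2")
```

The same logical destination, 10.0.0.2, is delivered to different physical machines depending on the tenant, and all the intelligence lives at the edge — exactly why the physical infrastructure can stay untouched.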
Clearly, the time is now.
Enter the Innovator
Martin Casado began his career auditing networks at a government intelligence agency. While trying to secure our nation against terrorist attacks, he became increasingly frustrated that current networking technology prevented him from solving mission-critical problems. He needed a platform so radically different that he knew he’d have to build it himself. To do so, he journeyed to Stanford University, where he connected with like-minded people in Stanford Professor Nick McKeown and UC Berkeley Professor Scott Shenker. McKeown and Shenker were equally frustrated with the current state of the networking art, but for a different reason. They taught network innovation, but it was almost impossible to innovate on the network. You see, real network innovation requires being able to work with real networks – i.e., real production traffic. Sadly, today’s networks are so fragile that no right-minded network administrator would ever allow experimental traffic and programs on her production networks. In fact, even the network administrators at Stanford said “no” to experimental traffic.
Once they joined forces, Casado, McKeown and Shenker sought a platform that:
- Completely virtualized the network, making it every bit as flexible as a virtualized server.
- Decoupled networking software from proprietary hardware, so that any company could add any kind of functionality to the network – not just the major networking companies.
- Was infinitely scalable using commodity hardware.
Building this new platform was a humongous effort – too big an effort to fund at the University level, so Casado, McKeown, and Shenker started a company, Nicira Networks.
Today Nicira Networks publicly unveiled its widely anticipated first product, the Network Virtualization Platform (NVP), along with five major customers – AT&T, eBay, NTT, Rackspace, and Fidelity Investments – who are currently deploying NVP. I am thrilled to be a part of it and welcome all of you to the future of networking.
Disclaimer: Andreessen Horowitz is the major investor in Nicira and I am on the Board of Directors.