Network Virtualization in Multi-tenant Datacenters

This is a paper about network virtualization written by researchers at VMware and UC Berkeley.

What is network virtualization?

No one has proposed a formal definition for it yet, so the paper gives its own explanation:

… a network virtualization layer allows for the creation of virtual networks, each with independent service models, topologies, and addressing architectures, over the same physical network.

Sound familiar? Network virtualization is the network analogue of a hypervisor. While a hypervisor allows the creation of virtual machines with isolated hardware resources, network virtualization provides a similar separation of network topology, addressing, and resources.

The following slide shows the analogy between a hypervisor (the paper calls it a server hypervisor to distinguish it from the network hypervisor) and a network hypervisor. The network hypervisor is implemented on top of standard IP connectivity and offers independent L2, L3, and L4–L7 services for each logical network.

[Slide: the server hypervisor vs. network hypervisor analogy]

(Picture from slides at https://www.cs.northwestern.edu/~ychen/classes/cs450-w15/lectures/nvp_nsdi14.pdf)
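To make the quoted definition concrete, here is a minimal sketch (the tenant names and VNI values are hypothetical, not from the paper) of two logical networks that reuse the same address space over one physical fabric, kept apart by a per-tenant virtual network identifier:

```python
# Hypothetical example: two tenants reuse the same private subnet over
# one physical network. The network hypervisor keys all state on a
# virtual network identifier (VNI), never on tenant addresses alone.
logical_networks = {
    "tenant-a": {"vni": 5001, "subnet": "10.0.0.0/24"},
    "tenant-b": {"vni": 5002, "subnet": "10.0.0.0/24"},  # same subnet, no conflict
}

def resolve(vni, dst_ip):
    """Look up a destination inside one logical network only."""
    for tenant, net in logical_networks.items():
        if net["vni"] == vni:
            return (tenant, dst_ip)  # 10.0.0.9 in tenant-a != tenant-b
    raise KeyError(f"unknown VNI {vni}")

print(resolve(5001, "10.0.0.9"))  # ('tenant-a', '10.0.0.9')
```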

Why do we need network virtualization?

Today, it is difficult for a single physical topology to support the configuration requirements of all of the workloads of an organization.

Since the workloads of different applications in a datacenter vary greatly in their network requirements, an organization may end up building multiple physical networks to satisfy them all. If these network topologies could instead be virtualized on top of the same physical network, operations would be far simpler.

How to implement a network hypervisor?

To virtualize the network, we must provide two abstractions: a packet abstraction and a control abstraction.

For the packet abstraction, we need to handle each packet as if it had been generated and processed in the tenant's own physical network.

This abstraction must enable packets sent by endpoints in the MTD (multi-tenant datacenter) to be given the same switching, routing and filtering service they would have in the tenant’s home network. This can be accomplished within the packet forwarding pipeline model.
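As a toy illustration (the field names and the filter below are my own, not the paper's), a logical packet can be modeled as its headers plus a metadata slot, with a tenant-defined service applied to it exactly as in the home network:

```python
# Toy model of the packet abstraction: a logical packet is just its
# headers plus metadata scratch space, and tenant services act on it
# the same way they would in the tenant's own physical network.
packet = {
    "eth_src": "aa:bb:cc:00:00:01",
    "eth_dst": "aa:bb:cc:00:00:02",
    "ip_src":  "10.0.0.5",
    "ip_dst":  "10.0.0.9",
    "tcp_dst": 22,
    "metadata": {},  # written by earlier pipeline stages, read by later ones
}

def acl_stage(pkt):
    """A tenant-defined filter: drop SSH, pass everything else,
    exactly as the tenant's home firewall would."""
    return None if pkt["tcp_dst"] == 22 else pkt

print(acl_stage(packet))  # None: dropped, same as in the home network
```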

The control abstraction is more challenging because we must allow tenants to define their own logical datapaths, which is not supported in a real physical network. NVP implements it with flow tables.

Each logical datapath is defined by a packet forwarding pipeline interface that, similar to modern forwarding ASICs, contains a sequence of lookup tables, each capable of matching over packet headers and metadata established by earlier pipeline stages.
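Here is a minimal sketch of such a pipeline, under my own simplifications rather than NVP's actual structures: a list of tables, where each table matches on headers and metadata left by earlier stages, the first matching rule's action runs, and actions may write metadata for later stages:

```python
# Simplified match-action pipeline in the spirit of the paper's
# logical datapath: tables are scanned in sequence, first match wins.
def run_pipeline(tables, pkt):
    for table in tables:
        for match, action in table:
            if all(pkt.get(k) == v for k, v in match.items()):
                pkt = action(pkt)
                break
        if pkt is None:            # some stage decided to drop
            return None
    return pkt

def tag_l2(pkt):
    pkt["metadata"]["l2_hit"] = True  # metadata consumed by later stages
    return pkt

tables = [
    [({"eth_dst": "aa:bb:cc:00:00:02"}, tag_l2)],  # stage 1: L2 lookup
    [({"tcp_dst": 22}, lambda p: None),            # stage 2: ACL drops SSH
     ({}, lambda p: p)],                           # empty match = wildcard
]

pkt = {"eth_dst": "aa:bb:cc:00:00:02", "tcp_dst": 80, "metadata": {}}
print(run_pipeline(tables, pkt))  # forwarded, with l2_hit metadata set
```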

In NVP, these forwarding pipelines are implemented at each server's hypervisor. That is, the entire logical network is implemented in software, and every server runs a replica of it! Before leaving its host server, each packet traverses the complete forwarding pipeline; after that, it is tunneled directly to the destination server.

[Slide: the logical datapath is processed entirely at the source hypervisor, then tunneled to the destination]

(Picture from slides at https://www.cs.northwestern.edu/~ychen/classes/cs450-w15/lectures/nvp_nsdi14.pdf)
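To round out the picture, here is a hedged sketch of that final hop. The helper below is hypothetical and uses a plain UDP socket as a stand-in for the kernel tunneling protocols (e.g., STT or GRE) that NVP actually relies on:

```python
import socket

def tunnel_to_host(logical_pkt, vni, dst_hypervisor_ip, port=4789):
    """Hypothetical last step: prepend the virtual network ID and send
    the already-processed packet straight to the destination hypervisor.
    Port 4789 (VXLAN's UDP port) is used here only for flavor."""
    payload = vni.to_bytes(4, "big") + repr(logical_pkt).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (dst_hypervisor_ip, port))
```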

Link for video: https://www.youtube.com/watch?v=6WQKGrNqnc0

Reference: https://www.usenix.org/system/files/conference/nsdi14/nsdi14-paper-koponen.pdf
