Frenetic: A Network Programming Language

With the development of software-defined networking (SDN) architectures, the network has become programmable. For example, OpenFlow allows programmers to specify forwarding rules at the controller, and the controller installs these rules on the switches. Unfortunately, network programming interfaces such as OpenFlow and NOX can be difficult to use.

“… while OpenFlow and NOX now make it possible to implement exciting new network services, they do not make it easy.”

These interfaces offer only low-level abstractions. While programming, the programmer has to reason about switch-level rules, in which several logically distinct tasks are often tangled together.

 

To Make Network Programming Easier

The goal of this paper is to simplify network programming by designing a higher-level programming language. It begins by identifying three main problems with existing network programming interfaces.

Problem 1: Anti-Modular

Here is an example of the anti-modular design of current network programming interfaces.

[Figure: switch-level rules for a repeater, a web-traffic monitor, and their combination]

(picture from slides at http://frenetic-lang.org/publications/frenetic-icfp11-slides.pdf)

When we combine the code of a repeater with that of a web monitor, we have to consider how the two functions interact at the switch level and carefully arrange the order and priority of every rule.
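To make the problem concrete, here is a toy Python sketch (purely illustrative; this is not NOX or Frenetic code) of a priority-based rule table. The two rule sets cannot simply be concatenated: web traffic arriving on port 2 must be both forwarded and counted, so the programmer has to craft a new, higher-priority rule by hand.

    # Toy model: a rule is (priority, predicate, actions); the switch applies
    # only the highest-priority matching rule.
    def best_actions(rules, pkt):
        rule = max((r for r in rules if r[1](pkt)), key=lambda r: r[0], default=None)
        return rule[2] if rule else []

    repeater = [
        (1, lambda p: p["inport"] == 1, ["fwd(2)"]),
        (1, lambda p: p["inport"] == 2, ["fwd(1)"]),
    ]
    monitor = [
        (1, lambda p: p["inport"] == 2 and p["srcport"] == 80, ["count"]),
    ]

    web_pkt = {"inport": 2, "srcport": 80}

    # Naive union: whichever rule wins, the packet is forwarded OR counted, not both.
    print(best_actions(repeater + monitor, web_pkt))    # ['fwd(1)'] -- web traffic is never counted

    # Correct merge: the programmer must add a higher-priority rule that does both.
    merged = repeater + [(2, lambda p: p["inport"] == 2 and p["srcport"] == 80,
                          ["fwd(1)", "count"])]
    print(best_actions(merged, web_pkt))                # ['fwd(1)', 'count']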

Problem 2: Two-Tiered Model

The current SDN architecture is a two-tiered model. When a packet arrives at a switch, the switch checks whether it already has a rule saying how the packet should be forwarded. If so, it forwards the packet without notifying the controller; otherwise, it sends the packet to the controller. This design has a consequence: the controller does not see all the packets in the network. If we want to monitor all traffic, we have to inspect and modify rules at the switch level.

[Figure: the two-tiered programming model, with rules split between the controller and the switches]

(picture from slides at http://frenetic-lang.org/publications/frenetic-icfp11-slides.pdf)
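A toy simulation of this reactive model (simplified assumptions, not real controller code) shows the consequence: once an exact-match rule has been installed for a flow, the controller never sees the rest of that flow's packets.

    # Toy simulation of the two-tiered model.
    flow_table = {}          # exact-match rules installed on the switch
    controller_seen = 0      # packets the controller actually observes

    def controller(pkt):
        global controller_seen
        controller_seen += 1
        flow_table[pkt] = "fwd(1)"     # install an exact-match rule for this flow

    def switch_receive(pkt):
        if pkt in flow_table:
            return flow_table[pkt]     # handled entirely on the switch
        controller(pkt)                # miss: punt the packet to the controller
        return flow_table[pkt]

    web_flow = ("10.0.0.1", "10.0.0.2", 80)
    for _ in range(5):                 # five packets of the same web flow
        switch_receive(web_flow)

    print(controller_seen)             # 1 -- only the first packet reached the controller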

Problem 3: Network Race Conditions

When a switch S sees a packet p1 that it does not know how to forward, it asks the controller for help with the following steps:

  1. S forwards p1 to the controller C
  2. C computes the forwarding rules r for p1 and sends r to S
  3. S installs the rules r received from C
  4. When S receives a new packet p2 that matches r, it uses r to forward p2

However, since the switches and the controller form a distributed system, race conditions can occur. For example, step 4 can happen before step 3 because it takes time for r to be transferred from C to S. In this case, S would forward p2 to C because it has not received the rules for p2 yet.
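The ordering problem can be sketched by treating the control channel as a delayed message queue (again a simplification, not real OpenFlow code):

    # The rule-install message from C is "in flight" when p2 arrives at S,
    # so p2 is punted to the controller even though a rule for it exists.
    flow_table = {}
    in_flight = []                 # control messages sent by C but not yet applied by S
    punted = []

    def controller(pkt):
        in_flight.append((pkt, "fwd(2)"))   # step 2: C sends rule r to S

    def switch_receive(pkt):
        if pkt in flow_table:
            return flow_table[pkt]          # step 4: use r to forward the packet
        punted.append(pkt)                  # step 1: punt to the controller
        controller(pkt)

    def deliver_control_messages():         # step 3: r finally arrives and is installed
        for pkt, action in in_flight:
            flow_table[pkt] = action
        in_flight.clear()

    flow = ("10.0.0.1", "10.0.0.2", 80)
    switch_receive(flow)            # p1
    switch_receive(flow)            # p2 arrives before step 3, so it is punted too
    deliver_control_messages()
    print(len(punted))              # 2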

Separation of Reading and Writing

All three of these problems in current network programming can be attributed to the lack of a clean separation between reading (i.e., monitoring network conditions) and writing (i.e., specifying forwarding rules). Frenetic solves them by decomposing the language into two parts: one for monitoring and one for forwarding.

Frenetic provides a declarative query language for classifying and aggregating network traffic as well as a functional reactive combinator library for describing high-level packet-forwarding policies.

[Figure: overview of Frenetic's query language and forwarding-policy combinators]
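To get a feel for the separation, here is a self-contained toy in the spirit of Frenetic's design (it does not use the real Frenetic API): the forwarding policy and the traffic query are written independently, and a small run time applies both, so neither module needs to know about the other's switch-level rules.

    # "Writing": forwarding policy.  "Reading": traffic queries.
    forwarding_rules = []    # (predicate, action) pairs
    queries = []             # (predicate, callback) pairs

    def register_policy(pred, action):
        forwarding_rules.append((pred, action))

    def register_query(pred, callback):
        queries.append((pred, callback))

    def runtime_process(pkt):
        # Conceptually the run time sees every packet; deciding what actually
        # runs on the switches versus the controller is its job, not the programmer's.
        for pred, callback in queries:
            if pred(pkt):
                callback(pkt)
        for pred, action in forwarding_rules:
            if pred(pkt):
                return action(pkt)

    # Repeater policy, written with no knowledge of the monitor:
    register_policy(lambda p: p["inport"] == 1, lambda p: "fwd(2)")
    register_policy(lambda p: p["inport"] == 2, lambda p: "fwd(1)")

    # Web monitor, written with no knowledge of the repeater:
    web_bytes = []
    register_query(lambda p: p["inport"] == 2 and p["srcport"] == 80,
                   lambda p: web_bytes.append(p["size"]))

    print(runtime_process({"inport": 2, "srcport": 80, "size": 1500}))  # fwd(1)
    print(sum(web_bytes))                                               # 1500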


Reference:

http://frenetic-lang.org/publications/frenetic-icfp11.pdf


Network Virtualization in Multi-tenant Datacenters

This is a paper about network virtualization written by researchers at VMware and UC Berkeley.

What is network virtualization?

No one has proposed a formal definition for it yet, so the paper gives its own explanation:

… a network virtualization layer allows for the creation of virtual networks, each with independent service models, topologies, and addressing architectures, over the same physical network.

Sound familiar? Network virtualization is just the network version of a hypervisor. While a hypervisor allows for the creation of virtual machines with isolated hardware resources, network virtualization provides a similar separation of network architecture and resources.

The following slide shows the analogy between a hypervisor (the paper calls it a server hypervisor to distinguish it from the network hypervisor) and a network hypervisor. The network hypervisor is implemented on top of standard IP connectivity and offers independent L2, L3, and L4-L7 services to each logical network.

[Figure: analogy between a server hypervisor and a network hypervisor]

(Picture from slides at https://www.cs.northwestern.edu/~ychen/classes/cs450-w15/lectures/nvp_nsdi14.pdf)

Why do we need network virtualization?

Today, it is difficult for a single physical topology to support the configuration requirements of all of the workloads of an organization.

Since the workloads of different applications in a data center vary greatly, an organization must build multiple physical networks to meet their different requirements. It would be much more convenient if all of these network topologies could instead be virtualized on top of the same physical network.

How to implement a network hypervisor?

To abstract the network, we need to deal with two aspects of abstraction: packet abstraction and control abstraction.

For packet abstraction, we need to give each packet the same treatment it would receive if it had been sent in the tenant's own physical network.

This abstraction must enable packets sent by endpoints in the MTD to be given the same switching, routing and filtering service they would have in the tenant’s home network. This can be accomplished within the packet forwarding pipeline model.

The control abstraction is more challenging, because we must allow tenants to define their own logical datapaths, which a shared physical network does not support directly. In NVP, this is implemented with flow tables.

Each logical datapath is defined by a packet forwarding pipeline interface that, similar to modern forwarding ASICs, contains a sequence of lookup tables, each capable of matching over packet headers and metadata established by earlier pipeline stages.

In NVP, these forwarding pipelines are implemented at each server's hypervisor. That is, the whole logical network is implemented in software, and each server runs a replica of that software. Before leaving its host server, a packet traverses the entire forwarding pipeline; after that, it is tunneled directly to the destination server.

[Figure: logical datapaths implemented in the hypervisor's virtual switch, with packets tunneled between hosts]

(Picture from slides at https://www.cs.northwestern.edu/~ychen/classes/cs450-w15/lectures/nvp_nsdi14.pdf)
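A rough sketch of the idea (simplified; not NVP's actual code) models a logical datapath as a sequence of lookup tables, each matching on packet headers plus metadata written by earlier stages, with the final stage choosing the tunnel to the destination hypervisor:

    # Conceptual sketch of a logical datapath pipeline running in the hypervisor.
    def l2_table(pkt, meta):
        # Logical L2 lookup: map the destination MAC to a logical egress port.
        meta["logical_egress"] = {"mac-B": "lport-2"}.get(pkt["dst_mac"], "drop")
        return meta

    def acl_table(pkt, meta):
        # Logical ACL stage: e.g. the tenant drops telnet traffic.
        if pkt.get("dst_port") == 23:
            meta["logical_egress"] = "drop"
        return meta

    def tunnel_table(pkt, meta):
        # Map the logical egress port to the physical host running the destination
        # VM, i.e. the tunnel endpoint the packet should be sent to.
        meta["tunnel_to"] = {"lport-2": "hypervisor-17"}.get(meta["logical_egress"])
        return meta

    PIPELINE = [l2_table, acl_table, tunnel_table]

    def process(pkt):
        meta = {}
        for stage in PIPELINE:
            meta = stage(pkt, meta)
            if meta.get("logical_egress") == "drop":
                return "drop"
        return meta["tunnel_to"]

    print(process({"dst_mac": "mac-B", "dst_port": 80}))   # hypervisor-17
    print(process({"dst_mac": "mac-B", "dst_port": 23}))   # drop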

Link for video: https://www.youtube.com/watch?v=6WQKGrNqnc0

Reference: https://www.usenix.org/system/files/conference/nsdi14/nsdi14-paper-koponen.pdf

OpenFlow: Enabling Innovation in Campus Networks

OpenFlow was originally designed for research purposes. It is like a sandbox for the network: it carves out part of the network's traffic for testing new protocols while protecting the rest of the working traffic from interference.

We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches.

The network has become so critical to our daily lives that it is hard to make any changes to the existing network architecture. It is even more challenging to test new protocols and research ideas, since they may accidentally disrupt ongoing production traffic. This white paper aims to solve this problem by designing a virtualized, programmable network in which researchers can run experiments and test their ideas.

The idea of a virtualized programmable network isn't new, but the earlier GENI proposal is far more ambitious: a nationwide research facility that may take years to deploy. Compared to GENI, OpenFlow is more practical and focused. It concentrates on a short-term question:

As researchers, how can we run experiments in our campus networks?

Based on this question, the authors propose four goals that OpenFlow needs to achieve:

  1. Amenable to high-performance and low-cost implementations
  2. Capable of supporting a broad range of research
  3. Assured to isolate experimental traffic from production traffic
  4. Consistent with vendors’ need for closed platforms

1. High-Performance and Low-Cost Implementations

An OpenFlow switch consists of at least three parts: a flow table, a secure channel, and the OpenFlow protocol. The controller uses the OpenFlow protocol to communicate with the switch over the secure channel, and the rules it specifies are stored in the flow table, which determines how each flow is processed.
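A minimal sketch of a flow entry as the whitepaper describes it (header fields to match on, per-entry counters, and an action); the match-field names below are illustrative rather than the exact OpenFlow header fields:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FlowEntry:
        in_port: Optional[int] = None      # None means wildcard
        eth_type: Optional[int] = None
        tcp_dst: Optional[int] = None
        action: str = "DROP"               # e.g. "FORWARD:2", "TO_CONTROLLER", "DROP"
        n_packets: int = 0                 # per-entry statistics
        n_bytes: int = 0

        def matches(self, pkt):
            fields = [("in_port", self.in_port), ("eth_type", self.eth_type),
                      ("tcp_dst", self.tcp_dst)]
            return all(want is None or pkt.get(name) == want for name, want in fields)

    table = [FlowEntry(tcp_dst=80, action="FORWARD:2"),
             FlowEntry(action="TO_CONTROLLER")]    # table miss: punt to the controller

    def lookup(pkt):
        for entry in table:                # first match wins in this simple sketch
            if entry.matches(pkt):
                entry.n_packets += 1
                entry.n_bytes += pkt["size"]
                return entry.action
        return "DROP"

    print(lookup({"in_port": 1, "tcp_dst": 80, "size": 1500}))   # FORWARD:2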

2. Support for a Broad Range of Research

The OpenFlow protocol allows researchers to specify detailed rules on how flows are processed.

3. Assurance to Isolate Experimental Traffic from Production Traffic

OpenFlow-enabled switches support both OpenFlow features and normal Layer 2 and Layer 3 processing. Production traffic not specified in the flow table can be forwarded to the normal Layer 2 and Layer 3 pipeline of the switch.
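A simplified sketch of this split (not vendor code; identifying experimental traffic by a dedicated research VLAN is an assumption for the example): packets in the research VLAN are handled by OpenFlow flow entries, and everything else falls through to the switch's normal pipeline.

    RESEARCH_VLAN = 42
    flow_table = {RESEARCH_VLAN: "FORWARD:experiment_port"}

    def handle(pkt):
        action = flow_table.get(pkt["vlan"])
        if action is not None:
            return action                    # experimental traffic: OpenFlow rules
        return "NORMAL_L2_L3_PROCESSING"     # production traffic: untouched

    print(handle({"vlan": 42}))   # FORWARD:experiment_port
    print(handle({"vlan": 7}))    # NORMAL_L2_L3_PROCESSING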

4. Consistency with Vendors' Need for Closed Platforms

OpenFlow meets this goal because, as the quote above notes, only the flow-table interface is exposed; vendors do not need to reveal the internal workings of their switches.

Reference:

http://pbg.cs.illinois.edu/courses/cs598fa10/readings/mabpprst08.pdf