
Evolution of Network in Data Center vol.3 (Rise of SDN)

Blog post by Akihiro Koizumi, Feb 8, 2016

Challenges in Data Center Networking

 

In the previous post, we covered the challenges in data center networking, especially those that emerged after server virtualization.

There have long been several 'traditional' challenges in data center networking, such as lack of agility, complicated maintenance, and high cost.

 

After server virtualization, the need to tackle these traditional challenges has become even more pressing.

Now that storage and servers are highly virtualized and have gained agility, the network lagging behind is causing trouble.

Server mobility requires more flexibility from the network, and configuration, operation, and maintenance have become even more complicated.

The pressure to lower infrastructure cost is constant and keeps increasing.

Something was needed to address these challenges, and a concept called Software Defined Networking (SDN) has risen to meet it.

 

Table 1: Networking Challenges

No. | Networking Challenge
1   | Lack of agility
2   | Complicated maintenance
3   | Cost

 

 

Open SDN Concept

 

The concept of "Open SDN" was developed and defined in research institutions starting around 2008.

It evolved from prior proposals such as ForCES, 4D, and Ethane, but in practice it was born with OpenFlow.

In that sense, SDN contains a kind of openness from the beginning.

 

Paul Goransson and Chuck Black set out five criteria to define Open SDN in their book "Software Defined Networks: A Comprehensive Approach".

The table below (Table 2) shows them.

 

Table 2: Criteria for Open SDN

No. | Criterion
1   | Plane separation
2   | Simplified device
3   | Centralized control
4   | Network automation and virtualization
5   | Openness

 

Let's look at the criteria one by one.

 

No.1 is plane separation. Roughly speaking, there are two categories of functionality in networking: the Data Plane (D-Plane), which handles actual packet forwarding, and the Control Plane (C-Plane), which holds the intelligence, decides how packets should be treated, and instructs the D-Plane accordingly.

Before SDN, network gear was monolithic and held both functions (C-Plane and D-Plane) in one box: an appliance decided how to modify and forward packets, and the same appliance carried out the actual forwarding. De-coupling the C-Plane from the hardware makes it easier to define how the network should behave from a single piece of software, since all the rules the D-Plane should follow can be pushed down from that software. (There is ongoing discussion about how much of the C-Plane can really be pulled out of the hardware.) The de-coupling also brought about some level of openness, which will be covered under No.5.
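To make the split concrete, here is a toy sketch in Python (my own illustration, not modeled on any specific product): the controller plays the C-Plane with a global view and pushes rules down, while each switch object is a pure D-Plane device that only looks up its flow table.

```python
# Toy model of plane separation: one central controller (C-Plane) programs
# many simple switches (D-Plane) that only do flow-table lookups.
class DataPlaneSwitch:
    """A simplified device with no intelligence, just a flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination address -> output port

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # Pure D-Plane work: look up the rule and forward, or drop.
        return self.flow_table.get(dst, "drop")


class Controller:
    """The C-Plane: one piece of software with a view of the whole network."""
    def __init__(self):
        self.switches = []

    def add_switch(self, switch):
        self.switches.append(switch)

    def program_route(self, dst, out_port):
        # One central decision, pushed into every device's flow table.
        for switch in self.switches:
            switch.install_rule(dst, out_port)


if __name__ == "__main__":
    controller = Controller()
    leaf1, leaf2 = DataPlaneSwitch("leaf1"), DataPlaneSwitch("leaf2")
    controller.add_switch(leaf1)
    controller.add_switch(leaf2)
    controller.program_route("10.0.0.5", out_port="port-2")
    print(leaf1.forward("10.0.0.5"))   # "port-2"
    print(leaf2.forward("10.0.0.99"))  # "drop" (no rule installed)
```

In a real deployment the "push" would travel over a southbound protocol such as OpenFlow, which is covered below.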

 

No.2 is simplified device.

One of the objectives of SDN is to address the high cost of network gear. The idea is that by separating the C-Plane from the D-Plane, the packet-forwarding devices can be simplified, since they only need to handle forwarding (D-Plane).

 

No.3 is centralized control.

As mentioned under No.1 above, the SDN concept is to control SDN-capable switches from a single point. Although each SDN switch only knows what to do with the next packet, together the switches behave correctly as a whole network.

 

No.4 is network automation and virtualization. The term "Software Defined" implicitly includes automation: one of the reasons we use software to define and manage the network, instead of manipulating it manually, is that we want software to work automatically on behalf of human beings. And when software defines the network, it is important that the network be virtualized, because configuring many different types of network gear as-is is a burden for software; some level of abstraction is needed to keep the software simple. With that said, both network automation and virtualization are characteristics of SDN.
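As a tiny illustration of why abstraction helps automation (a sketch of my own; the two vendor syntaxes below are made up), software can work against one abstract intent while per-vendor "drivers" translate it into device-specific configuration:

```python
# Toy example of abstraction for automation: one abstract VLAN intent is
# rendered into (hypothetical) device-specific configuration by drivers.
from dataclasses import dataclass


@dataclass
class VlanIntent:
    vlan_id: int
    name: str


def render_vendor_a(intent: VlanIntent) -> str:
    # Hypothetical CLI syntax for "vendor A"
    return f"vlan {intent.vlan_id}\n name {intent.name}"


def render_vendor_b(intent: VlanIntent) -> str:
    # Hypothetical CLI syntax for "vendor B"
    return f"set vlans {intent.name} vlan-id {intent.vlan_id}"


DRIVERS = {"vendor_a": render_vendor_a, "vendor_b": render_vendor_b}

if __name__ == "__main__":
    intent = VlanIntent(vlan_id=100, name="web-tier")
    for device, vendor in [("leaf1", "vendor_a"), ("leaf2", "vendor_b")]:
        print(f"# {device}\n{DRIVERS[vendor](intent)}\n")
```

The automation software only ever deals with the abstract intent; the messy per-device differences stay inside the drivers.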

 

No.5 is openness. This criterion is a tricky one, since SDN can also be achieved with purely proprietary technologies. However, if we think about the origin of the SDN momentum, we cannot set openness aside. All of the networking challenges (lack of agility, maintenance complexity, and high cost) are, in some way, the result of proprietary, lock-in technologies provided by incumbent vendors.

 

These five criteria are what Paul and Chuck used to distinguish "Open SDN" from other approaches, and OpenFlow is its representative example.

 

OpenFlow®

 

OpenFlow® is the first standard communications interface defined between the control layer and the forwarding layer (D-Plane) of an SDN architecture. It allows direct access to and manipulation of the forwarding plane of network devices such as switches and routers, both physical and virtual (hypervisor-based). (A minimal controller example follows the diagram below.)

 

 

Programmability

  • Enable innovation /differentiation
  • Accelerate new features and services introduction


Centralized Intelligence

  • Simplify provisioning
  • Optimize performance
  • Granular policy management


Abstraction

  • Decouple:
      • Hardware & software
      • Control plane & forwarding
      • Physical & logical config.

[Figure: OpenFlow architecture diagram]

Ref: OpenFlow - Open Networking Foundation
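As a concrete taste of OpenFlow, the sketch below uses the open-source Ryu controller framework (my choice for illustration; it is not mentioned in this post) to install a table-miss rule on every switch that connects, sending unmatched packets up to the controller. This is essentially the canonical "hello world" of OpenFlow 1.3 programming.

```python
# Minimal OpenFlow 1.3 controller app, assuming the Ryu framework is installed.
# Run with: ryu-manager <this_file>.py
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # Called when a switch (D-Plane) connects to the controller (C-Plane).
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything at priority 0: the "table-miss" entry.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        # FlowMod is the OpenFlow message that programs the switch's flow table.
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

Everything the switch does afterwards is determined by the flow entries the controller installs, which is exactly the plane separation and centralized control described above.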

 

OpenFlow fully aligns with the five criteria of Open SDN, and it is therefore often recognized as the 'true' SDN technology.

Yes, it is the true Open SDN, and we do not see much else that can be called Open SDN other than OpenFlow.

However, you might have heard of other technologies, such as VMware NSX and Cisco ACI, marketed as SDN solutions.

Actually, Open SDN is not the only SDN technology in the world. There are alternatives.

 

 

Alternatives to Open SDN

 

SDN via APIs

One approach that differs from Open SDN while accomplishing (at least some of) the SDN capabilities is SDN via APIs.

The idea is to achieve network automation and virtualization by exposing additional APIs for manipulating the network gear.

Compared with the traditional way of configuration, the CLI, having APIs to work with lets us control the network via software.

Centralized control can also be realized through APIs if desired: since each device exposes APIs, preparing centralized control software on top of them lets us manage the network from one place.

However, the remaining Open SDN criteria are largely not met.

For example, simply exposing a device's APIs does not require plane separation, and the device might become even more complicated, since the additional APIs are typically implemented on top of the existing gear.

As for openness, exposing APIs does not necessarily mean the solution is open: the APIs may be proprietary and incompatible with other standards.
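For illustration only, here is what "SDN via APIs" tends to look like from the operator's side: a script drives a device's management REST API instead of a human typing CLI commands. The endpoint, payload fields, and credentials below are hypothetical, not any particular vendor's API.

```python
# Sketch of driving a network device through a (hypothetical) management REST API.
import requests

DEVICE_API = "https://switch1.example.com/api/v1"  # hypothetical endpoint
AUTH = ("admin", "secret")                         # placeholder credentials


def create_vlan(vlan_id: int, name: str) -> None:
    """Create a VLAN through the device's management API (hypothetical schema)."""
    payload = {"vlan-id": vlan_id, "name": name}
    response = requests.post(f"{DEVICE_API}/vlans", json=payload,
                             auth=AUTH, timeout=10)
    response.raise_for_status()


def list_vlans() -> list:
    """Read the VLANs back so automation can reconcile desired vs. actual state."""
    response = requests.get(f"{DEVICE_API}/vlans", auth=AUTH, timeout=10)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    create_vlan(100, "web-tier")
    print(list_vlans())
```

Note that the device keeps its own control plane; the API simply gives software a handle on it, which is why the other Open SDN criteria remain unmet.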

 

SDN via Hypervisor-based Overlays

Another approach is SDN via Hypervisor-based Overlays.

The idea is to deploy the virtualized network as an overlay network.

An overlay network is a way of deploying a virtualized network on top of the existing physical network infrastructure (the underlay) using tunneling technology.

As mentioned in the previous post, with server virtualization the network has moved into the physical server in the form of the virtual switch.

Several implementations deploy the virtual (tunneled) network starting from the virtual switch inside the physical server.

In that case, the existing network infrastructure does not have to change.

At the same time, the essence of the networking is handled by the overlay: actual control of the network is isolated from the physical gear, and the underlay network can be simplified.

The overlay network is managed in a centralized manner, and automation and virtualization come naturally with it.

As for openness, the tunneling technologies such as VXLAN, NVGRE, and STT (as of today, VXLAN dominates the market) are standardized and open.
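To show what the overlay actually does on the wire, here is a minimal sketch (my own, deliberately simplified: no outer Ethernet/IP handling and no control plane for MAC learning) of VXLAN encapsulation per RFC 7348: the hypervisor's VTEP wraps a tenant Ethernet frame in an 8-byte VXLAN header carrying the 24-bit VNI and ships it over UDP port 4789 across the underlay.

```python
# Simplified VXLAN (RFC 7348) encapsulation: wrap a tenant Ethernet frame in a
# VXLAN header and send it over UDP to the remote VTEP across the underlay.
import socket
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN


def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header carrying the 24-bit VNI."""
    flags_word = 0x08 << 24              # "I" flag set: the VNI field is valid
    vni_word = (vni & 0xFFFFFF) << 8     # 24-bit VNI, low byte reserved
    header = struct.pack("!II", flags_word, vni_word)
    return header + inner_frame


def send_overlay_frame(inner_frame: bytes, vni: int, remote_vtep_ip: str) -> None:
    """Send the encapsulated frame to the remote VTEP over the underlay."""
    payload = vxlan_encapsulate(inner_frame, vni)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (remote_vtep_ip, VXLAN_UDP_PORT))


if __name__ == "__main__":
    dummy_frame = bytes(64)  # stand-in for a tenant Ethernet frame
    send_overlay_frame(dummy_frame, vni=5001, remote_vtep_ip="192.0.2.10")
```

The underlay only ever sees ordinary UDP packets between VTEPs, which is why the physical network does not have to change.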

 

The table below (Table 3) summarizes the comparison among the SDN approaches mentioned above against the five criteria of Open SDN.

 

Table 3: SDN solution comparison

[Table 3 image]

 

Since we used the criteria that define Open SDN, the assessed results for Open SDN are, of course, all 'High' (highly compliant). On the other hand, the alternative approaches, SDN via APIs and SDN via Hypervisor-based Overlays, do not meet all the criteria.

However, what matters is not whether an approach complies with the definition of Open SDN, but whether it can solve real-world issues.

So let's look at whether these options can solve the current data center issues.

 

Effectiveness against current data center networking issues

 

Let us look at the three approaches in a little more depth.

 

We have an extended table (Table 4) for SDN solution comparison below.

Six items directly related to the current DC issues have been added to the table (items No.6 through No.11).

Please refer to the next table (Table 5) to review the current DC issues covered in the previous post.

 

Table 4: SDN solution comparison (version 2)

[Table 4 image]

 

Looking at the extended table, Open SDN can again address all of the issues in current data centers.

The other two alternatives can address some of the issues, but not all of them.

 

Table 5: Challenges of networking after server virtualization and countermeasures

[Table 5 image]

 

Sounds like a self-serving argument?

As if we were only looking at the good aspects of Open SDN?

OK then, let's bring up some potential drawbacks of Open SDN.

Does that sound fair?

 

Extension of the comparison table with shortcomings of Open SDN

 

Potential shortcomings of Open SDN are as follows. (They are added to the comparison table to produce version 3.)

 

1. Too much change (No.12 in Table 6)

   Introducing Open SDN means replacing the existing, complicated networking gear. The interfaces used to manage it also change. The magnitude of the change might simply be too big for adoption.

 

2. Single point of failure (No.13 in Table 6)

   Centralized control (No.3) and a single point of failure are two sides of the same coin. Controllers must be deployed with high availability or redundancy.

 

3. Performance and scale (No.14 in Table 6)

   Again, centralized control (No.3) implies that a single (or limited number of) control function plays a critical role, so it might become a bottleneck for performance and scalability.

 

4. Deep packet inspection (No.15 in Table 6)

   Some network applications, such as advanced firewalls or advanced load balancers, may require deep packet inspection (DPI); however, OpenFlow devices cannot examine data in the packet payload.

 

5. Stateful flow awareness (No.16 in Table 6)

   An OpenFlow device basically examines every incoming packet independently, without considering the state of the flow (it is stateless).

 

After adding these potential shortcomings, Open SDN shows some weak points in the comparison table (Table 6).

Interestingly, however, the alternative approaches cannot address all of those issues either.

How does it look now?

 

Table 6: SDN solution comparison (version 3)

[Table 6 image]

 

Now Open SDN shows some red marks (low compliance) on important criteria such as 'too much change'.

SDN via APIs still has a lot of red marks (low compliance).

SDN via Hypervisor-based Overlays has some green marks (high compliance) and more yellow marks (medium compliance). The good news is that it has no red mark at all, so there appears to be no critical obstacle to adopting it.

At this moment, SDN via Hypervisor-based Overlays seems to be becoming the most popular SDN solution in enterprise data centers.

 

Do white-box switches play a role here?

 

To close this post, we would like to bring white-box switches into the discussion.

'White-box switch' is the name we use when a switch has an explicit separation between hardware provided by ODMs (Original Design Manufacturers) and a network operating system (NOS) supplied by software vendors. White-box switches are gaining traction in recent data center deployments.

 

What if we combine white-box switches with SDN via Hypervisor-based Overlay?

We might be able to turn some of Hypervisor-based SDN's yellow marks into green ones.

 

For example, a white-box switch consists of server-like hardware and a NOS running on it. On the hardware side, the Open Compute Project (OCP, an organization that designs and enables the delivery of the most efficient data center hardware for scalable computing) publishes several specifications, and they adopt simple architectures. The NOS is generally Linux-based and supports a basic set of functionality by default. So by utilizing white-box switches as the underlay for Hypervisor-based SDN, you will have more simplified devices in your environment (high compliance on No.2).

 

As mentioned above, white-box switches have multiple open attributes, such as the open architecture from OCP and the Linux-based NOS (high compliance on No.5).

This openness and the white-box switch ecosystem lead to cost reduction (high compliance on No.8), and since the NOS is typically Linux-based, maintenance tools for servers can often be reused for maintaining white-box switches, which makes maintenance easier (high compliance on No.7).
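As a small, hedged example of that "servers and switches managed alike" point (my own sketch; "swp1" is simply a typical white-box port name, and the script assumes a Linux-based NOS with iproute2 available), the same standard Linux tooling used on servers can be scripted against the switch:

```python
# Managing a white-box switch port with ordinary server tooling (iproute2),
# run locally on the Linux-based NOS or over SSH like any other Linux host.
import subprocess


def interface_state(ifname: str) -> str:
    """Return the raw `ip link show` output for one switch port."""
    result = subprocess.run(["ip", "link", "show", ifname],
                            capture_output=True, text=True, check=True)
    return result.stdout


def set_interface_up(ifname: str) -> None:
    """Bring a switch port up exactly as you would a server NIC (needs root)."""
    subprocess.run(["ip", "link", "set", ifname, "up"], check=True)


if __name__ == "__main__":
    print(interface_state("swp1"))
    set_interface_up("swp1")
```

The point is not this particular script but the fact that existing server automation and monitoring practices carry over to the switch with little change.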

 

Now let's look at the table (Table 7) again. Doesn't Hypervisor-based SDN with white-box switches stand out, even though it is not perfect?

 

Table 7: SDN solution comparison (version 4)

[Table 7 image]

In this post, we investigated several approaches to realizing SDN: Open SDN, SDN via APIs, and SDN via Hypervisor-based Overlays.

As the comparison tables show, each approach has its own strong and weak points, and SDN via Hypervisor-based Overlays looks like a moderate approach that takes advantage of the SDN concept without major risk. In addition, if we combine white-box switches with it, we may be able to increase the benefits we get from it.

In fact, we can see more customers adopting VMware NSX, PLUMgrid, and Contrail, all of which take the overlay approach to achieve SDN capability.

The Open SDN concept is great in that it has played an important role in driving the SDN discussion and, as a result, has brought about multiple solutions (alternatives). However, since the change it brings is disruptive, we can expect a high switching cost. (By the way, the pun is not intentional ☹.)

Taking the time to understand a brand-new concept, training engineers on new technology, establishing new support schemes with new partners, and so on: all of these are reasons enough not to adopt something new for the moment.

 

Which is the future?

For the short term (and maybe the mid term), Hypervisor-based Overlay SDN seems set for wide acceptance. It allows a stepwise introduction of SDN while mitigating the magnitude of the change.

If what we gain from a solution is not much different from the other options, the better choice is the one with less impact on the current environment. But if you could come up with big positive differences in major use cases, the story would change.
