Evolution of Network in Data Center vol.1

Blog Post created by Akihiro Koizumi on Jan 11, 2016

Introduction to the blog series

 

Software Defined Networking (SDN) was introduced a few years ago and has been attracting more and more interest ever since.

At the same time, discussions around SDN can sound difficult to follow, or at least confusing.

 

One reason might be that these discussions try to handle multiple topics, each with a different background, at the same time. For example, the change from physical networks to virtual networks is not only a change in the networking field: the physical networking architecture is also changing in step with changes in data center (DC) architecture. Another example is the set of challenges networking faces: alongside traditional challenges such as lack of agility, we are seeing new challenges brought on by the evolution of the DC itself. These threads are often mixed up in SDN-related discussions.

 

Another reason for confusion might be that the network can be found at multiple layers and in multiple places. Even if we use network virtualization technology (e.g. an overlay network), we still need a physical network (sometimes referred to as the underlay network, as opposed to the overlay network). Even though the overlay network is realized at the edge (compute), the underlay network often supports the tunneling technologies that are fundamental to network virtualization. As for the network appearing in multiple places: now that we have the virtual switch, switching gear exists not only outside a server but also inside it, and even more advanced network gear can run inside a virtual machine (NFV). The same network function (e.g. a firewall) can run in different places serving different needs at the same time; in the firewall case, a physical firewall can be deployed at the edge of the DC while a virtual firewall runs on each compute host.
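
As a toy illustration of the overlay/underlay relationship (host and VM names are made up, and no specific tunneling protocol is implied), the idea is simply that overlay traffic between VMs rides inside packets exchanged by the physical hosts:

```python
# Toy illustration of tunneling: the overlay frame (VM to VM) is carried
# as the payload of an underlay packet (physical host to physical host).
# Names are hypothetical; no particular tunneling protocol is implied.

overlay_frame = {"src": "vm-a", "dst": "vm-b", "payload": "app data"}

underlay_packet = {
    "src": "host-1",            # underlay endpoints: the physical hosts
    "dst": "host-2",
    "inner": overlay_frame,     # the encapsulated overlay traffic
}

print(underlay_packet["inner"]["dst"])  # vm-b: invisible to the underlay core
```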

 

In this blog series, I will try to give a step-by-step, unified view of the DC network, SDN, and related topics.

 

Evolution of Network in Data Center

 

This first post is about the evolution of the network in the DC.

 

A while ago, networking in the DC was (and in many DCs may still be) based on what is called a "fat tree".

The top-of-rack (ToR) switch in each rack is connected to one or more aggregation switches, and each aggregation switch is connected to one or more core switches, creating a tree-like topology in the DC.

From a virtualization point of view, the VLAN (Virtual Local Area Network) was the main scheme for separating networks.

So the fat-tree architecture plus segregation of network resources by VLAN was how networking looked in a DC.
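
To make the topology concrete, here is a minimal sketch in Python (the switch names and counts are made-up examples) of the tree described above. It also hints at a property we will come back to later: traffic between racks under different aggregation switches has to cross the core.

```python
# Minimal sketch of a fat-tree-style topology (illustrative sizes only).
# ToR switches uplink to aggregation switches, which uplink to the core.

CORE = "core-1"
TOPOLOGY = {
    "agg-1": ["tor-1", "tor-2"],   # racks 1 and 2
    "agg-2": ["tor-3", "tor-4"],   # racks 3 and 4
}

def path(src_tor, dst_tor):
    """Return the switch path between two ToR switches in the tree."""
    src_agg = next(a for a, tors in TOPOLOGY.items() if src_tor in tors)
    dst_agg = next(a for a, tors in TOPOLOGY.items() if dst_tor in tors)
    if src_agg == dst_agg:                  # same aggregation switch
        return [src_tor, src_agg, dst_tor]
    return [src_tor, src_agg, CORE, dst_agg, dst_tor]   # must cross the core

print(path("tor-1", "tor-2"))  # ['tor-1', 'agg-1', 'tor-2']
print(path("tor-1", "tor-3"))  # ['tor-1', 'agg-1', 'core-1', 'agg-2', 'tor-3']
```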

 

How about the other components in a DC?

If we look at storage, it was virtualized early on; a couple of years later, server virtualization was introduced.

As shown in the figure below, with the rise of server virtualization, instead of using a physical server connected to the ToR switch (SW), users began to use virtual machines (VMs) connected to a virtual switch (vSW) inside the physical server, which in turn connects to the ToR SW outside the server.

Those changes themselves left networking almost unchanged; however, server virtualization went on to change what networking in the DC looks like.

 

[Figure: a physical server connected to the ToR SW vs. VMs connected to a vSW inside the server]

 

The changes that server virtualization brought to networking were:

(1) Networking seeps into server territory (a vSW inside the server)

(2) Rising demand to connect large numbers of VMs

(3) A VM can itself be a piece of network equipment (NFV)

 

Since the virtual switch arrived along with server virtualization, the server administrator has to take care of not only the network interface card (NIC) but also the switch(es) that connect multiple servers (VMs). At the same time, networking no longer terminates at the edge of the physical server but seeps inside it (ref. item no. 1).

When servers are virtualized, users can easily deploy new servers (VMs), and servers become much denser (ref. item no. 2).

The NFV topic (ref. item no. 3) will be discussed in another post.

These are the changes in architecture.
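
To make item no. 1 a little more concrete, here is a minimal sketch of the core job of the switch that now lives inside the server: learning which MAC address sits behind which virtual port and forwarding frames accordingly. This is illustrative only; a real vSW (e.g. Open vSwitch) does far more.

```python
# Minimal sketch of the MAC-learning behavior of a virtual switch (vSW).
# Port names and MAC addresses are made up for illustration.

class VirtualSwitch:
    def __init__(self, ports):
        self.ports = ports       # VM-facing ports plus an uplink to the ToR SW
        self.mac_table = {}      # MAC address -> port it was last seen on

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port      # learn where the source lives
        if dst_mac in self.mac_table:          # known destination:
            return [self.mac_table[dst_mac]]   #   forward out of one port
        return [p for p in self.ports if p != in_port]  # unknown: flood

vsw = VirtualSwitch(["vm1", "vm2", "uplink"])
print(vsw.receive("vm1", "aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"))  # flood
print(vsw.receive("vm2", "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))  # ['vm1']
```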

 

Challenges in Networking

 

Next, let's look at how the challenges have changed.

Even before the virtualization trend started in the DC, networking had several challenges.

The first of the "traditional" challenges in networking is lack of agility. When we wanted to change the connectivity between servers, we had to carefully redesign the architecture, often stop all traffic in the network, and change the configuration of the relevant network gear. If we wanted to introduce a new feature, we had to update firmware or even replace the network gear itself. That limits our ability to introduce better features in a timely manner.

The second is complicated maintenance. Networking was a closed world dominated by a few vendors. Each vendor had proprietary interfaces, and a network engineer had to learn a different CLI for each vendor whose gear they maintained.

The third is cost. Due to the dominance of a couple of vendors, the price of networking gear has remained relatively high compared with DC components that face fierce competition (e.g. servers).

 

In addition to those "traditional" challenges, we started to see new challenges in the DC network after server virtualization.

As servers became denser, the layer 2 (L2) network, in which servers share the same subnet (meaning they can communicate without IP routing), expanded. In an L2 network, addresses (MAC addresses) are tracked individually (whereas IP uses address blocks to identify the next hop), so the number of MAC addresses that switches have to maintain exploded. The MAC address table in networking gear has limited space and cannot keep up with the exploding number of MAC addresses to be learned.
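
A rough back-of-the-envelope calculation (all numbers below are illustrative assumptions, not measurements) shows the scale of the problem:

```python
# How server density inflates the number of MAC addresses to learn.
# Every number here is an illustrative assumption.

racks            = 100
servers_per_rack = 40
vms_per_server   = 20    # a modest consolidation ratio

physical_macs = racks * servers_per_rack        # 4,000
virtual_macs  = physical_macs * vms_per_server  # 80,000

print(physical_macs, virtual_macs)
# A switch whose MAC table holds, say, 32K entries was comfortable with
# 4,000 physical servers but overflows once each server hosts 20 VMs.
```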

 

Another new challenge is the VLAN limit. Because of the VLAN specification, the number of VLANs is technically limited to 4,096 (a 12-bit ID). Since some IDs are reserved for specific purposes, the number of VLANs users can actually use is even smaller. As DC operators began to provide virtualized environments to end users, the available VLANs were no longer enough to accommodate all the tenants.
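
The arithmetic is straightforward. Per the IEEE 802.1Q specification the VLAN ID is a 12-bit field, and IDs 0 and 4095 are reserved (individual platforms often reserve more):

```python
# VLAN ID arithmetic per IEEE 802.1Q.
vlan_id_space = 2 ** 12    # 12-bit field -> 4,096 possible IDs
reserved      = 2          # IDs 0 and 4095 are reserved by the spec
usable        = vlan_id_space - reserved

print(usable)              # 4094 -- far too few for a large multi-tenant DC
```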

 

The last challenge I will mention is inefficient path utilization. As more and more computing workload moved into the DC, East-West traffic (traffic between servers within the same DC) increased considerably. In the fat-tree architecture, East-West traffic basically has to go up to the core SW and come back down to the destination rack, so the core SW becomes a bottleneck while other switches sit with capacity not fully utilized. In addition, to avoid network trouble (L2 loops), a mechanism called STP (Spanning Tree Protocol) was used. STP prevents L2 loops by blocking redundant switch ports, which results in poor path utilization.
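
To see why this hurts, here is a small sketch with hypothetical switch and link counts: a spanning tree over N switches keeps exactly N - 1 links active, so every redundant link in the physical topology is blocked.

```python
# Why STP wastes capacity: a spanning tree over N switches keeps exactly
# N - 1 links active; all remaining links are blocked to prevent L2 loops.
# Switch and link counts below are hypothetical.

switches = 2 + 4 + 20        # 2 core + 4 aggregation + 20 ToR switches
links    = 20 * 2 + 4 * 2    # each ToR dual-homed to 2 aggs,
                             # each agg dual-homed to 2 cores -> 48 links

active  = switches - 1       # 25 links carry traffic
blocked = links - active     # 23 links sit idle

print(f"{blocked} of {links} links blocked ({blocked / links:.0%})")
```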

So after server virtualization, the DC network faces even more challenges.

 

[Figure: traditional networking challenges alongside the new challenges introduced by server virtualization]

 

In this way, I have quickly covered the changes in DCs and how they have affected DC networking. Due to changes in DC architecture (e.g. server virtualization) and traffic patterns (e.g. the shift from North-South to East-West traffic), networking has come to face new challenges on top of the traditional ones.

 

So far, the virtual switch (vSW) has been introduced into the discussion, but other technologies/concepts such as VXLAN and SDN have not. I tried to cover purely the background of the network's evolution.

I think it is important not to look at technologies first. From the users' point of view, they see issues in their environment and seek solutions that address them. Technology is only a building block of a solution, and users don't care about it by itself. (That said, I would also like to state that technology often plays a very important role in making evolutionary or disruptive progress on solutions. I think it is the technologists' responsibility to create or find a good technology and shape it into a useful solution :-) .)

 

In the next post, we will look at a couple of approaches to addressing these challenges and see some more newly introduced technologies/concepts.

Stay tuned.

 

