MEM Fabric – Preface (Part 1)

“Why does my application yield such bad performance?”

What if you are a cloud operator and this is coming from one of your customers? Private or public cloud, it does not really matter; the only difference is whether the complaint comes from an internal or an external customer. Either way, as the operator responsible for the IaaS, you will have to answer to your customers and fix the network as quickly as possible.

SDN, and virtual networks in particular, are great enablers for cloud networking. They have many virtues, but at the same time they bring a new challenge in monitoring, debugging and troubleshooting the network, because a new layer, the virtual network layer, is added on top of the existing physical network. Nothing new here: the more moving parts there are, the more complex the system becomes.

A survey of 150 enterprises conducted by Enterprise Management Associates and CA has shown that SDN in general lacks operational tools to manage and monitor the new shape of the network. This almost empty toolbox poses a real operational challenge and might even hold back production deployments in some cases.

Let’s first describe the problem we are talking about.
Most networking solutions for the cloud are based on overlay technology, which relies on some form of tunneling. We will get to overlays shortly, but let’s first look at the components of the new virtual network layer and describe what it looks like.

The virtual network is all about software. There are no physical boxes, no wires, no power supplies and, most importantly, no cooling fans :) It is so quiet you could hear a pin drop. In this new kingdom, physical switches are replaced by software switches (e.g. Linux bridge, Open vSwitch, VMware Virtual Switch), Network Interface Cards (NICs) are replaced by vNICs emulated by server virtualization technologies (e.g. QEMU), and cables are replaced by tunneling protocols and software pipes (e.g. Linux veth pairs).
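To make this concrete, here is a minimal sketch, using the pyroute2 library (a choice of mine, not something the post relies on; the interface names are made up), of how these software pieces are typically wired together on a Linux host: a bridge standing in for the switch and a veth pair standing in for the cable.

```python
# Sketch only: create a Linux bridge and a veth pair, then plug one veth
# end into the bridge. Requires root privileges; names are arbitrary examples.
from pyroute2 import IPRoute

ipr = IPRoute()

# The "software switch": a plain Linux bridge.
ipr.link("add", ifname="br-demo", kind="bridge")
br_idx = ipr.link_lookup(ifname="br-demo")[0]

# The "cable": a veth pair. One end would face the VM/container,
# the other end is plugged into the bridge.
ipr.link("add", ifname="veth-vm", kind="veth", peer="veth-br")
veth_idx = ipr.link_lookup(ifname="veth-br")[0]
ipr.link("set", index=veth_idx, master=br_idx)
ipr.link("set", index=veth_idx, state="up")

ipr.close()
```

In a real deployment the hypervisor’s agent does this plumbing automatically; the point is simply that everything involved is an ordinary kernel object, not a physical device.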

On every compute node there is at least one software switch to which the hosted virtual entities (VMs or containers) are connected. This software switch is the first piece of networking gear (ignoring for a moment the emulated NIC inside the VM) that a packet sent from a VM or container reaches. Virtual network traffic is not special in terms of forwarding; it is based on ordinary IP and Ethernet headers, as the toy sketch below illustrates. The main difference, though, is the decoupling of the virtual (logical) topology from the physical topology.
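Here is that toy sketch (plain Python, not any vendor’s real code): a software switch reduced to its essence is just a MAC learning table, making the same forwarding decision a physical top-of-rack switch would make. The port numbers and MAC addresses are made up.

```python
# Toy illustration of the forwarding logic inside a software switch.
# "Ports" are just integers standing in for vNIC/veth attachment points.
class ToySoftwareSwitch:
    def __init__(self, num_ports: int) -> None:
        self.num_ports = num_ports
        self.mac_table: dict[str, int] = {}      # learned MAC -> port

    def handle_frame(self, in_port: int, src_mac: str, dst_mac: str) -> list[int]:
        self.mac_table[src_mac] = in_port        # learn where the source lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # known destination: forward
        # Unknown destination: flood to every other port.
        return [p for p in range(self.num_ports) if p != in_port]


sw = ToySoftwareSwitch(num_ports=4)
print(sw.handle_frame(0, "fa:16:3e:00:00:01", "fa:16:3e:00:00:02"))  # flood: [1, 2, 3]
print(sw.handle_frame(1, "fa:16:3e:00:00:02", "fa:16:3e:00:00:01"))  # learned: [0]
```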

In the physical world, network functions (NFs) like routers, load-balancers, IPS, IDS, firewalls, etc. are bumps in the wire. Looking at a physical network diagram, one can easily tell which NFs a packet with source port X and destination port Y would hit, and which links it would traverse. In the virtual network, though, where source and destination are virtual entities and NFs have become Virtual Network Functions (VNFs) that can be hosted on any compute node, understanding a packet’s physical path with the tools at hand today is an almost impossible mission. What makes it even harder is SDN’s service insertion capability: adding and removing VNFs is very easy, which contributes to the network’s dynamic nature and adds a lot of headache when trying to find the actual route a packet took at a given time.

Figure 1: The physical and virtual network layers.

From now on we will use the terms underlay and overlay when referring to physical and virtual networks respectively.

The virtual entities (VMs/containers) do not have IP addresses in the underlay; the only addresses they have are in the overlays. Their overlay addresses are meaningless in the underlay. But the underlay is the actual network that carries and forwards the traffic. So how does that work?

This is one of the things overlay technology solves. To enable communication between entities connected to the overlay, it uses the underlay addresses of the source and destination hypervisors. It essentially sends encapsulated packets (generated by VMs/containers) from the source hypervisor to the destination hypervisor. You are probably familiar with at least one of the tunneling protocols used today by overlays in the datacenter: GRE, VXLAN, NVGRE, STT, GENEVE, etc. The common denominator among all of these protocols is that they use the hypervisors’ identifiers, namely their IP addresses, to forward traffic between virtual entities in the overlay.

This is what an encapsulated packet looks like:

Figure 2: Tunneling.
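To make the figure concrete, here is a minimal sketch of building such a packet with Scapy, using VXLAN as the example protocol. The addresses and VNI are made up for illustration and do not come from any real deployment.

```python
# Sketch only: a VXLAN-encapsulated packet built with Scapy.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner frame: what the VM actually put on its vNIC (overlay addresses).
inner = Ether(src="fa:16:3e:00:00:01", dst="fa:16:3e:00:00:02") / \
        IP(src="10.0.0.5", dst="10.0.0.6")

# Outer headers: added by the source hypervisor's software switch.
# Only the hypervisors' underlay addresses appear here.
pkt = Ether() / \
      IP(src="192.168.1.11", dst="192.168.1.22") / \
      UDP(sport=54321, dport=4789) / \
      VXLAN(vni=5001) / \
      inner

# What a capture in the underlay sees as source/destination:
print(pkt[IP].src, "->", pkt[IP].dst)        # 192.168.1.11 -> 192.168.1.22

# The VM addresses are buried one layer deeper, inside the payload:
inner_ip = pkt.getlayer(IP, 2)
print(inner_ip.src, "->", inner_ip.dst)      # 10.0.0.5 -> 10.0.0.6
```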

We can now understand that searching the underlay for packets carrying the source/destination IP addresses of the virtual entities (i.e. their originators/destinations in the overlays) is not a good start, as those addresses are now part of the payload.

In my next blog posts I will say a little more about this operational headache, and a lot more about the tools Midokura provides for alleviating it.

Alon Harel

About Alon Harel

Alon is a Principal Architect at Midokura who leads the integration between the MidoNet overlay network and the underlay fabric. Before Midokura, Alon held several architecture positions at Marvell, Voltaire and Mellanox in the area of Ethernet switching technologies. Earlier in his career, Alon developed software for embedded systems in the communications industry.
