The advent of virtualisation initiated many changes to infrastructure paradigms. It started with the data centre compute and storage modules. Compute virtualization gave rise to the virtual machine, enabling data centres to host thousands of customers requiring flexible compute capacity. However, the network module lagged severely behind, limiting the agility of cloud data centres. Why deploy compute in seconds if you have to wait weeks for network provisioning? The box-by-box mentality of networking hindered operations, preventing the entire infrastructure from being viewed and managed as one entity.
Finally, the network caught up: overlays enabled network virtualization and provided multitenancy. Network overlays create flexible logical topologies on top of the physical network and its Ethernet connections. Now that all elements of storage, compute, and network are virtualized, the entire data centre becomes more agile. The ability to deliver network and security services in virtualized format brings many advantages to cloud network operations. Physical devices were never born for the cloud; they were designed to support physical endpoints, not virtual ones. Physical devices were forced into the cloud, but they were never meant to be there.
The birth of the cloud gave rise to massive multi-tenant environments. Segregation based on VLANs was never sufficient for multi-tier applications with segments between each tier. VXLAN overlays overcome the VLAN ID limitation, enabling environments to scale. Cloud environments can now support many tenants, with every tenant requiring an isolated view of the network. Each tenant requires an individual island, but more importantly, each tenant must be properly secured.
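The scale difference behind that VLAN ID limitation is easy to quantify: the 802.1Q VLAN ID is a 12-bit field, while the VXLAN Network Identifier ( VNI ) is 24 bits. A quick sketch of the arithmetic:

```python
# Segment ID space: 12-bit 802.1Q VLAN ID vs 24-bit VXLAN VNI.
VLAN_ID_BITS = 12
VXLAN_VNI_BITS = 24

vlan_segments = 2 ** VLAN_ID_BITS    # 4096 (minus reserved IDs 0 and 4095)
vxlan_segments = 2 ** VXLAN_VNI_BITS # 16,777,216 possible segments

print(vlan_segments)   # 4096
print(vxlan_segments)  # 16777216
```

Roughly 4,000 usable segments is nowhere near enough for a cloud where every tenant wants several isolated tiers; 16 million is.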
Tenant A should not be able to communicate with or affect Tenant B's operations unless explicitly configured to do so. A compromised tenant should not be able to move laterally, leak data, or establish a beachhead in another tenant. There must be shaping, policing, and secure segmentation between individual departments or customers. Measures must be in place to detect anomalies deep in the network, not solely at the perimeter edge. Security is one of the most important aspects of cloud environments. Stateful and deep packet inspection service VMs are spun up to secure north-south and east-west traffic flows. There is a big gap in securing east-west traffic, which Intel's Open Security Controller ( OSC ) fills. Putting security services in VM format is only part of the puzzle. The orchestration system acts as the brains of the operation, coordinating the service chain logic.
Traditional Networking & Security
The early days of networking induced a very static approach to security. The network was modular; specific areas had certain roles. This was adequate for protecting north-south traffic, but east-west traffic would trombone to a central perimeter for scrubbing. Traffic tromboning is a zigzagging, suboptimal traffic flow, and many non-optimized leaf-and-spine designs witness an increase in latency as a result. Latency is the real killer for application performance. Within the data center, bandwidth is never a big problem, as we can just throw money at it. But latency is more delicate to solve and occurs in different parts of the network. To reduce latency and enable a scalable model, one should always aim to reduce trombones.
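The latency cost of a trombone is easy to sketch with back-of-the-envelope numbers. All figures below are illustrative assumptions, not measurements from any particular fabric:

```python
# Rough cost of tromboning east-west traffic through a central
# perimeter device. Per-hop and firewall latencies are assumed
# illustrative values, not vendor figures.
PER_HOP_US = 5     # assumed switch forwarding latency per hop (microseconds)
FIREWALL_US = 200  # assumed processing latency at the central firewall

direct_hops = 2    # VM -> ToR leaf -> VM (same rack)
trombone_hops = 6  # VM -> leaf -> spine -> firewall leaf -> spine -> leaf -> VM

direct_latency = direct_hops * PER_HOP_US
trombone_latency = trombone_hops * PER_HOP_US + FIREWALL_US

print(direct_latency)    # 10
print(trombone_latency)  # 230
```

Even with generous assumptions, hair-pinning same-rack traffic through a central scrubbing point adds an order of magnitude of latency, which is why the rest of this piece argues for moving inspection next to the workload.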
An over-congested central firewall or IPS/IDS can have a massive effect on end-to-end latency. A single central security node serving all north-south and east-west traffic becomes one big bottleneck and a single point of failure. Physical security nodes serving multiple types of traffic are configured to support a variety of security zones. To support multi-tenant environments, physical devices are sliced into virtual contexts and become very complicated to design and support. Changing the security paradigm by moving all security services closer to the workload in VM format removes the single point of failure and the kludges physical nodes can bring to networking.
Security should be distributed throughout the data centre, not pinned to individual sections. Introducing Network Function Virtualization ( NFV ) to virtualise the IPS/IDS as a network function deployed to individual racks or compute nodes propagates security and saves on latency. Local east-west traffic routes through the local IPS as it traverses to another VM or container. This offers a linearly scalable solution: adding Nova compute nodes enables the addition of further IPS/IDS instances.
What is Network Function Virtualization ( NFV )?
Network Function Virtualization decouples the physical topology from logical services by placing network and security services in a VM. If you look inside a low-end network appliance from any vendor, it looks very similar to an x86 server. The ASIC does little on low-end appliances and really only comes into play on specialized high-end network equipment. If service appliances look the same as x86 servers, why can't we virtualise them on x86 compute hosts instead of using proprietary hardware?
NFV with service chaining frees networking from physical constraints. The network becomes topologically independent from the physical infrastructure. NFV Virtual Network Function ( VNF ) service VMs are grouped together on compute nodes and service chained according to business logic. Coupled with an orchestration system, the network and security services are deployed in the best-fitting position, reducing latency and traffic trombones.
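Conceptually, a service chain is just an ordered list of VNFs that each packet is steered through in turn. The sketch below models that idea in plain Python; the VNF names and packet representation are illustrative, not a Midokura or OSC API:

```python
# Minimal sketch of service chaining: a packet traverses an ordered
# list of VNFs; any VNF may drop it or annotate it before passing it on.

def firewall(pkt):
    # Stateless filter: only permit traffic to the web tier ports.
    return pkt if pkt["dst_port"] in (80, 443) else None

def ids(pkt):
    # Inspection function: mark the packet as having been inspected.
    pkt["inspected"] = True
    return pkt

def service_chain(pkt, chain):
    for vnf in chain:
        pkt = vnf(pkt)
        if pkt is None:  # a VNF in the chain dropped the packet
            return None
    return pkt

allowed = service_chain({"dst_port": 443}, [firewall, ids])
dropped = service_chain({"dst_port": 22}, [firewall, ids])
print(allowed)  # {'dst_port': 443, 'inspected': True}
print(dropped)  # None
```

The business logic mentioned above is simply the choice and ordering of functions in the chain; the orchestrator's job is to place those functions on compute nodes and steer traffic through them.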
Every device is divided into three planes: control, management, and data. The data plane forwards packets as quickly as possible, while the control and management planes don't necessarily need high forwarding performance. This is where the origins of NFV were first seen: in control plane only devices. An example of a control plane only device would be a BGP route reflector or a LISP mapping database. No data traffic is forwarded through them; they create the controls by which data is forwarded.
Recently, there have been major improvements to forwarding in software, and now we are seeing a lot of data plane services move to VM format. Deep packet inspection was never done in hardware, so why do you need a specialized appliance to perform it?
Benefits of NFV
Disaster recovery and active-active designs are much simpler with NFV. Doing this in the physical world is difficult, as you have to make sure configuration and state are synchronized between locations. Some designs attempt to use clustering products, but these add network complexity, which is always bad for networking. Others share state across locations, but additional hacks are needed to support ingress and egress traffic flows. If the Data Centre Interconnect ( DCI ) link goes down, you could be in a nasty split-brain scenario. It's far better to put services in VM or container format and scale out when needed.
In the physical world, there is no way to migrate a physical server to another rack in any reasonable time. This is OK for a short planned migration but not for disaster recovery and disaster avoidance events. Physical appliances are also heavily over-provisioned: if you want a 10G box, you always buy more capacity just in case. And then there is the question of spares. Sparing physical nodes results in appliances left idle in stock just in case you need them.
NFV overcomes a lot of these challenges, increasing flexibility and agility while lowering provisioning time. Sparing is reduced, as is the number of redundant devices. All these factors, coupled with less vendor lock-in, simplify networking.
Midokura Network Design
Midokura offers an overlay network virtualization solution enabling network services not available in the physical network. The solution scales linearly, matching scale to demand and removing all single points of failure. Simplicity is key to good network design, and Midokura focuses on solid network design for Layer 2 to Layer 4. They decided to focus on what they know best ( Layer 2 to Layer 4 ) and hand off the complexities of anything higher in the stack to trusted companies specializing in that field – companies like Intel and their Open Security Controller platform.
For Layer 2 to Layer 4 services ( source NAT, FWaaS, and load balancing ), all work is carried out by the MidoNet agent, installed locally on the compute hosts. Midokura doesn't use virtual or physical appliances for Layer 2 to Layer 4 services. Instead of moving flows to network appliances where state is held, they move the state around the network. The state is never held in one place, enabling linear scalability. The agent is where all the magic happens, removing the intermediate network hops that cause traffic trombones.
Service insertion and chaining constructs are introduced with MEM 5.0. The Midokura SDN architecture combines with Intel's Open Security Controller to provide a complete DPI solution covering both stateful and stateless services. Cloud security solutions require more than simple packet filtering at Layer 2 to Layer 4; solutions must delve deeper into the application payload to effectively protect workloads.
Stateful and Stateless Services
Packet filtering provides only partial protection. Packet filters usually match on Layer 2 to Layer 4 headers but not on flags such as TCP SYN, making it challenging to match on established sessions. Packet filtering is referred to as a stateless function: no additional analysis of TCP control flags, sequence numbers, or ACK fields is performed. Looking deeper into a packet requires a stateful function. Deep packet inspection requires a different type of security engine, one that tracks the state of a connection.
To check whether a session is new or existing, specific TCP flags are analyzed. This is known as stateful inspection. Stateful devices track the actual connection state and go deeper into the application. For example, stateful services can verify that a packet's sequence numbers fall within the expected TCP window.
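The difference between stateless and stateful handling can be sketched with a minimal connection tracker. The flag values are the real TCP header bits; the tracker itself is an illustrative toy, not any product's engine:

```python
# Minimal sketch of stateful inspection: unlike a stateless packet
# filter, a connection table remembers whether a flow has completed
# the start of the TCP handshake.
SYN, ACK = 0x02, 0x10  # TCP control flag bits

conntrack = {}  # flow 5-tuple -> tracked state

def inspect(flow, tcp_flags):
    state = conntrack.get(flow)
    if state is None and tcp_flags == SYN:
        conntrack[flow] = "SYN_SEEN"  # new connection attempt
        return "new"
    if state == "SYN_SEEN" and tcp_flags & ACK:
        conntrack[flow] = "ESTABLISHED"
        return "established"
    if state == "ESTABLISHED":
        return "established"
    return "invalid"  # e.g. mid-stream packet with no tracked state

flow = ("10.0.0.1", 12345, "10.0.0.2", 80)
print(inspect(flow, SYN))  # new
print(inspect(flow, ACK))  # established
print(inspect(("198.51.100.9", 1, "10.0.0.2", 80), ACK))  # invalid
```

A stateless filter would have no way to return that final "invalid" verdict: without the table, a stray ACK looks exactly like a legitimate mid-session packet.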
Open Security Controller
The MidoNet SDN architecture enables linearly scalable Layer 2 to Layer 4 networking. All networking services are pushed to the edges, and the security paradigm moves closer to the workloads, where it belongs. VM-to-VM and internal application tier traffic no longer flows through a central device. Everything is looking bright again, but what about orchestration? NFV is only an enabler; orchestration provides the brains for the entire operation. Automation and orchestration are mandatory requirements for NFV and service chaining. Without an orchestrator you will struggle; doing everything by hand is impossible.
Open Security Controller from Intel is a software-defined security controller that allows security to be scaled in an automated way. It provides the orchestration and automation of security functions and supports a wide range of network and security services, including Web Application Firewall ( WAF ), IPS/IDS, and Application Delivery Controller. All of these can be deployed in Midokura's network virtualization solution.
As discussed, service chaining and NFV are just enablers. The orchestrator takes care of VNF management, workload placement, failover, and the load balancing of workloads onto service VMs. The OSC does not actually manage the policy but acts as the brain performing the coordination. Environments still require network managers to manage the policy.
MidoNet and Open Security Controller
MidoNet and OSC enable the insertion of security VMs between workloads, offering the ability to detect attacks and network anomalies closer to the workloads. An attack can happen anywhere in a flow: initially, a flow could look innocent, but later on it could become compromised. As a result, steps must be taken throughout the network. For full workload protection, OSC and the MidoNet API deploy security functions and perform service chaining for traffic inspection.
The service function model consists of two interfaces: a numbered management interface and an unnumbered inspection interface. The management interface is assigned a Floating IP, enabling access from outside to inside. The inspection interface receives the traffic of the inspected VMs, and it is here that the VLAN tag and PCP bits are added.
The service chain logic intercepts the packet and adds a VLAN header with a tag that signals the policy. The PCP ( priority code point ) bits are set to inform the service chain logic of packet direction: they state whether a flow is leaving or arriving at the VM. Under normal operations, MidoNet doesn't need to see the PCP bits, but when two VMs are protected by the same service function, packets may traverse it twice. MAC addresses are used to direct traffic between two service VMs in a chain.
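The fields involved here are the standard 802.1Q tag: a 3-bit PCP, a 1-bit DEI, and a 12-bit VLAN ID. The sketch below packs and unpacks that 16-bit tag control field, with the VLAN ID carrying the policy tag and the PCP carrying direction. The specific direction encoding ( 0 = towards the VM, 1 = from the VM ) is an illustrative assumption, not the documented MidoNet/OSC values:

```python
# Sketch of the 802.1Q tag fields used by the service chain.
# TCI layout: | PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits) |
TO_VM, FROM_VM = 0, 1  # assumed direction encoding for illustration

def build_tci(policy_vlan, direction, dei=0):
    assert 0 <= policy_vlan < 4096 and 0 <= direction < 8
    return (direction << 13) | (dei << 12) | policy_vlan

def parse_tci(tci):
    return {"pcp": tci >> 13, "dei": (tci >> 12) & 1, "vid": tci & 0x0FFF}

tci = build_tci(policy_vlan=100, direction=FROM_VM)
print(parse_tci(tci))  # {'pcp': 1, 'dei': 0, 'vid': 100}
```

Reading the PCP on the return pass is what lets the chain logic tell "already inspected, heading out" apart from "arriving for inspection" when the same service function sits in front of both endpoints.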
We now have the ability to detect whether an attacker is behind a source NAT – unmasking the attacker. This is not something you can do with the Neutron API. The solution interacts with an ELK cluster, querying the flow record database's flow history to unmask the attacker.
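The unmasking step amounts to a reverse lookup in the flow record store: given the post-NAT address and port observed at the victim, find the record holding the original pre-NAT source. The Elasticsearch-style query body below is a sketch; the index schema and field names ( `translated_src_ip`, `src_ip`, and so on ) are hypothetical, since a real MidoNet/ELK deployment defines its own mapping:

```python
# Sketch of a flow-history lookup to unmask an attacker behind
# source NAT. Field names are hypothetical placeholders for whatever
# schema the flow record database actually uses.
import json

def unmask_query(nat_ip, nat_port, window="now-1h"):
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"translated_src_ip": nat_ip}},
                    {"term": {"translated_src_port": nat_port}},
                    {"range": {"@timestamp": {"gte": window}}},
                ]
            }
        },
        "_source": ["src_ip", "src_port"],  # the pre-NAT identity we want
    }

body = unmask_query("203.0.113.7", 54321)
print(json.dumps(body, indent=2))
```

The point is the direction of the query: instead of asking "where is this flow going", the operator asks "which inside host produced this translated tuple", which is exactly the mapping a SNAT device discards and a flow history preserves.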
Midokura and Intel recommend integrating the OSC service VMs as a Layer 2 bump in the wire. Services are transparently inserted into the traffic path without changing the current topology. The ability to enable Layer 7 services without altering the current design is a big deployment advantage. All these new technologies and new ways of doing things might look great in PowerPoint, but they still need to be deployed and integrated. A bump-in-the-wire deployment model is easy: the controller installs service modules into the topology based on your specifications – one per rack, or one per compute node.