Service Chaining with OpenStack

Hello, in this blog article we describe two ways of doing service chaining for OpenStack network traffic.

What is service chaining and why should I care?

Service chaining is used by telco providers and by cloud service providers to inspect traffic using a firewall or an IDS appliance.

The appliance will get to see the traffic in two possible ways: traffic is either transparently diverted or cloned.

Inline protection in the context of service chaining means that the traffic is transparently sent to the appliance VM (an IDS, for example) and then forwarded to the receiver.

The VM will usually not notice that the cloud traffic destined for it is running through a ‘bump in the wire’.

The advantage of this approach is full control over what traffic gets through to the VM; the disadvantage is network speed, because throughput is limited by the appliance.

MidoNet 2.0 will support fail-open service chaining, so that traffic is transmitted directly to the receiving VM if the appliance's link fails.

An important aspect of this mode is that the appliance must reinsert the whitelisted traffic into the network.

The overlay will then make sure to deliver it to the virtual machine as the ultimate receiver.

For this it may use the same virtual port or another one, depending on the design of the virtual appliance.

In monitoring mode the traffic that is subject to service chaining will be duplicated (using MidoNet port mirroring) and sent to the appliance.

The appliance is not expected to re-insert the traffic into the network.

The advantage of this approach is the high speed of the network, because traffic does not have to wait for inspection.

The obvious disadvantage is that an attacker gets a small window of network connectivity to reach a vulnerable machine.

This vulnerability of the VM persists until the appliance has analyzed the mirrored traffic and told the network platform to block the attack traffic.


The first approach to service chaining we describe here is possible with the current MEM 1.9.3 stable release.

We will also briefly discuss another, more integrated approach that will be available in MEM 2.0.

We define service chaining, in the scope of this article, as modifying the routing so that traffic destined for one virtual machine is first sent to another virtual machine.

This second virtual machine acts as an appliance, called the service function (or SF) from here on.

In this blog article we will guide you through the following steps to set up service chaining with MidoNet and OpenStack:

Creating the networks and routers

Creating the virtual machines

Removing anti-spoofing rules from the bridge port of the SF

Creating the service chaining using source based routing in MidoNet

Testing the routing and the L3 service insertion

Outlook at MEM 2.0 L2insertion API calls

Creating the networks and routers

The networks and routers are created in OpenStack Horizon. You need three networks and two tenant routers. Make sure the second tenant router has no uplink configured.

networks and routers for service chaining


Please note that in this demo setup the network 10.0.0.0/24 is the public floating IP range.
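If you prefer the command line over Horizon, a rough equivalent of this setup with the classic neutron client could look like the sketch below. All names (net1, subnet-net1, ...) are placeholders we made up for illustration, and the external network is assumed to be called public; in our demo green lives on 192.168.1.0/24, cyan on 192.168.2.0/24 and the SF on 192.168.3.0/24.

# create the three tenant networks and their subnets
neutron net-create net1
neutron subnet-create --name subnet-net1 net1 192.168.1.0/24
neutron net-create net2
neutron subnet-create --name subnet-net2 net2 192.168.2.0/24
neutron net-create net3
neutron subnet-create --name subnet-net3 net3 192.168.3.0/24

# Tenant Router 1: uplink to the public network plus an interface (the .1 address) in each subnet
neutron router-create "Tenant Router 1"
neutron router-gateway-set "Tenant Router 1" public
neutron router-interface-add "Tenant Router 1" subnet-net1
neutron router-interface-add "Tenant Router 1" subnet-net2
neutron router-interface-add "Tenant Router 1" subnet-net3

# Tenant Router 2: no gateway (no uplink); attach it with explicit ports on the .2 addresses,
# because the .1 gateway addresses are already taken by Tenant Router 1
neutron router-create "Tenant Router 2"
neutron port-create --fixed-ip subnet_id=<subnet-net1-id>,ip_address=192.168.1.2 net1
neutron router-interface-add "Tenant Router 2" port=<port-id-from-previous-command>
# ...repeat the port-create/router-interface-add pair for net2 (192.168.2.2) and net3 (192.168.3.2)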

Creating the virtual machines

For this demo you need three virtual machines; we named them green, cyan and SF.

virtual machines
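If you would rather boot the instances from the command line, a rough sketch with the nova client could look like this; the image and flavor are placeholders, and net1/net2/net3 stand for the networks carrying 192.168.1.0/24, 192.168.2.0/24 and 192.168.3.0/24 in our demo.

# boot one instance per network (adjust image, flavor and network IDs to your environment)
nova boot --image <image> --flavor m1.small --nic net-id=<net1-id> green
nova boot --image <image> --flavor m1.small --nic net-id=<net2-id> cyan
nova boot --image <image> --flavor m1.small --nic net-id=<net3-id> SF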


To keep things simple we have set up the security group for these VMs to allow any-any traffic.

security group rules
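From the command line, such an any-any rule could for example be added to the default security group like this (a sketch with the classic neutron client; ingress only, since egress is open by default):

# allow all ingress traffic from anywhere (omitting --protocol means all protocols)
neutron security-group-rule-create --direction ingress --remote-ip-prefix 0.0.0.0/0 default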


Removing anti-spoofing rules from the bridge port of the SF

For the service chaining in the SF to work, you must remove the anti-spoofing rules from the VM port. Otherwise the SF cannot route the traffic back to the original receiver.

This is an example of how to remove the anti-spoofing rules from a port on a bridge in MidoNet.

A bridge in MidoNet is a network in OpenStack Neutron.
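The chain we need to edit is the inbound filter of the SF's Neutron port; as you will see below, its name embeds the port UUID (OS_PORT_<uuid>_INBOUND). One quick way to look up that UUID, assuming the instance is simply named SF, is to list its interfaces:

nova interface-list SF
# the "Port ID" column shows the Neutron port UUID that appears in the chain name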

midonet> cleart
tenant_id: None

midonet> bridge list
bridge bridge0 name Network 2 state up
bridge bridge1 name Network 3 state up
bridge bridge2 name public state up
bridge bridge3 name Network 1 state up

midonet> bridge bridge1 port list
port port0 device bridge1 state up plugged yes infilter chain0 outfilter chain1
port port1 device bridge1 state up plugged yes
port port2 device bridge1 state up plugged yes infilter chain2 outfilter chain3
port port3 device bridge1 state up plugged no peer router0:port0
port port4 device bridge1 state up plugged no peer router1:port0

midonet> bridge bridge1 port port0 infilter show
chain chain0 name OS_PORT_914eed42-895d-41ff-a503-c2a4e2c6861e_INBOUND

midonet> chain chain0 rule list
rule rule0 ethertype 2048 src !192.168.3.105 proto 0 tos 0 fragment-policy any pos 1 type drop
rule rule1 hw-src !fa:16:3e:82:e7:dc proto 0 tos 0 fragment-policy any pos 2 type drop
rule rule2 proto 0 tos 0 flow return-flow fragment-policy unfragmented pos 3 type accept
rule rule3 proto 0 tos 0 fragment-policy unfragmented pos 4 type jump jump-to chain4
rule rule4 ethertype !2054 proto 0 tos 0 fragment-policy any pos 5 type drop

midonet> chain chain0 delete rule rule0
midonet> chain chain0 delete rule rule1

midonet> chain chain0 rule list
rule rule2 proto 0 tos 0 flow return-flow fragment-policy unfragmented pos 1 type accept
rule rule3 proto 0 tos 0 fragment-policy unfragmented pos 2 type jump jump-to chain4
rule rule4 ethertype !2054 proto 0 tos 0 fragment-policy any pos 3 type drop

For a guide on how to use midonet-cli and to learn more about rule chains, please consult the following documentation: http://docs.midokura.com/docs/latest/operation-guide/content/midonet_rule_chain_example.html

This is how the chain rules for this port now look in MidoNet Manager:

chain rules without spoofing protection


For comparison, this is how the chain rules look for a normal port with spoofing protection enabled:

chain rules with spoofing protection


Creating the service chaining using source based routing in MidoNet

Our goal is to put a FW appliance as a service chain into the path of traffic that enters Tenant Router 1 from the VM green and wants to go to the VM cyan.

To achieve this, we add a route on Tenant Router 1 so that traffic for the IP address of the VM cyan is routed via Network 3 to the SF as an L3 next hop.

service chaining route
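The screenshot above shows the route being added in MidoNet Manager. If you would rather add it from midonet-cli, the route could look roughly like the line below; treat this as a sketch, since the exact argument names and required fields of 'add route' may differ between versions, and double-check with the CLI's built-in help. router0 is Tenant Router 1 and router0:port2 is its port on the 192.168.3.0/24 network (both shown further below); the source prefix is green's network, the destination is the cyan VM and the gateway is the SF.

midonet> router router0 add route src 192.168.1.0/24 dst 192.168.2.103/32 type normal weight 100 gw 192.168.3.105 port router0:port2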


The SF will use Tenant Router 2 as the L3 next hop for sending the processed traffic back to the VM cyan. Tenant Router 2 knows how to reach the VM cyan locally through Network 2.

Using Tenant Router 1 for this would be wrong: it would inevitably send the service-chained traffic into a routing loop.

Unless MidoNet implements source-port based routing on top of source-IP based routing, which would allow doing this with a single router, you need two routers in MidoNet for the source-IP based routing to work without a loop.

We need to configure the routing on the SF: the service-chained traffic must not go back to Tenant Router 1, but must go through Tenant Router 2 to reach the original recipient.

root@sf:~# ip route add 192.168.2.0/24 via 192.168.3.2
root@sf:~# ip route show
default via 192.168.3.1 dev eth0
169.254.169.254 via 192.168.3.100 dev eth0
192.168.2.0/24 via 192.168.3.2 dev eth0
192.168.3.0/24 dev eth0 proto kernel scope link src 192.168.3.105

root@sf:~# ping -c1 192.168.2.103
PING 192.168.2.103 (192.168.2.103) 56(84) bytes of data.
64 bytes from 192.168.2.103: icmp_seq=1 ttl=63 time=22.5 ms

Testing the routing and the L3 service insertion

We can run tcpdump on the SF instance to see if the source-based routing works and the VM traffic is service-chained as we want it to be.

For this we send a single ping (ping -c1) to the cyan instance from the green instance.

root@green:~# ping -c1 192.168.2.103
PING 192.168.2.103 (192.168.2.103) 56(84) bytes of data.

At the same time we run tcpdump on eth0 of the SF instance to see if the packet comes in.

root@sf:~# tcpdump -i eth0 -l -nnn -vvv -e -X ! port 22
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
02:10:54.172528 ac:ca:ba:5b:18:fc > fa:16:3e:82:e7:dc, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 58484, offset 0, flags [DF], proto ICMP (1), length 84)
192.168.1.103 > 192.168.2.103: ICMP echo request, id 2157, seq 1, length 64
0x0000: 4500 0054 e474 4000 3f01 d215 c0a8 0167 E..T.t@.?......g
0x0010: c0a8 0267 0800 9742 086d 0001 ae9f ce55 ...g...B.m.....U
0x0020: 0000 0000 1b87 0100 0000 0000 1011 1213 ................
0x0030: 1415 1617 1819 1a1b 1c1d 1e1f 2021 2223 .............!"#
0x0040: 2425 2627 2829 2a2b 2c2d 2e2f 3031 3233 $%&'()*+,-./0123
0x0050: 3435 3637 4567

The MAC address ac:ca:ba:5b:18:fc belongs to the port of Tenant Router 1:

midonet> cleart
tenant_id: None
midonet> router list
router router0 name Tenant Router 1 state up infilter chain0 outfilter chain1
router router1 name MidoNet Provider Router state up
router router2 name Tenant Router 2 state up infilter chain2 outfilter chain3
midonet> router router0 port list
port port0 device router0 state up plugged no mac ac:ca:ba:5b:c3:b2 address 10.0.0.4 net 169.254.255.0/30 peer router1:port0
port port1 device router0 state up plugged no mac ac:ca:ba:71:a5:35 address 192.168.2.1 net 192.168.2.0/24 peer bridge0:port0
port port2 device router0 state up plugged no mac ac:ca:ba:5b:18:fc address 192.168.3.1 net 192.168.3.0/24 peer bridge1:port0
port port3 device router0 state up plugged no mac ac:ca:ba:ab:b3:ff address 192.168.1.1 net 192.168.1.0/24 peer bridge2:port0

The tcpdump has shown the incoming packet, yet the green machine did not see a reply. The reason for this is that we did not enable IP forwarding on the SF.

This way we simulate a service chain in which the traffic is blocked. We have proven that the traffic does not otherwise leak through and reach the cyan VM.
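A real firewall SF would of course not block traffic by simply leaving forwarding disabled; it would enable forwarding and filter selectively, for example with iptables. A minimal, hypothetical sketch using our demo addresses:

# drop ICMP from green to cyan, forward everything else unchanged
root@sf:~# iptables -A FORWARD -s 192.168.1.103 -d 192.168.2.103 -p icmp -j DROP
root@sf:~# iptables -A FORWARD -j ACCEPT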

If we now enable IP forwarding on the SF instance we can see that the routing works and the cyan VM will see the traffic as soon as we forward it through the SF:

root@sf:~# echo 1 > /proc/sys/net/ipv4/ip_forward
root@sf:~# tcpdump -i eth0 -l -nnn -vvv -e -X ! port 22
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
02:14:37.430653 ac:ca:ba:5b:18:fc > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.3.105 tell 192.168.3.1, length 28
0x0000: 0001 0800 0604 0001 acca ba5b 18fc c0a8 ...........[....
0x0010: 0301 0000 0000 0000 c0a8 0369 ...........i
02:14:37.431091 fa:16:3e:82:e7:dc > ac:ca:ba:5b:18:fc, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Reply 192.168.3.105 is-at fa:16:3e:82:e7:dc, length 28
0x0000: 0001 0800 0604 0002 fa16 3e82 e7dc c0a8 ..........>.....
0x0010: 0369 acca ba5b 18fc c0a8 0301 .i...[......
02:14:38.406745 ac:ca:ba:5b:18:fc > fa:16:3e:82:e7:dc, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 1961, offset 0, flags [DF], proto ICMP (1), length 84)
192.168.1.103 > 192.168.2.103: ICMP echo request, id 2158, seq 1, length 64
0x0000: 4500 0054 07a9 4000 3f01 aee1 c0a8 0167 E..T..@.?......g
0x0010: c0a8 0267 0800 74a5 086e 0001 8ea0 ce55 ...g..t..n.....U
0x0020: 0000 0000 5a22 0500 0000 0000 1011 1213 ....Z"..........
0x0030: 1415 1617 1819 1a1b 1c1d 1e1f 2021 2223 .............!"#
0x0040: 2425 2627 2829 2a2b 2c2d 2e2f 3031 3233 $%&'()*+,-./0123
0x0050: 3435 3637 4567
02:14:38.409748 fa:16:3e:82:e7:dc > ac:ca:ba:8f:ad:5a, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 62, id 1961, offset 0, flags [DF], proto ICMP (1), length 84)
192.168.1.103 > 192.168.2.103: ICMP echo request, id 2158, seq 1, length 64
0x0000: 4500 0054 07a9 4000 3e01 afe1 c0a8 0167 E..T..@.>......g
0x0010: c0a8 0267 0800 74a5 086e 0001 8ea0 ce55 ...g..t..n.....U
0x0020: 0000 0000 5a22 0500 0000 0000 1011 1213 ....Z"..........
0x0030: 1415 1617 1819 1a1b 1c1d 1e1f 2021 2223 .............!"#
0x0040: 2425 2627 2829 2a2b 2c2d 2e2f 3031 3233 $%&'()*+,-./0123
0x0050: 3435 3637 4567
02:14:43.423076 fa:16:3e:82:e7:dc > ac:ca:ba:8f:ad:5a, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.3.2 tell 192.168.3.105, length 28
0x0000: 0001 0800 0604 0001 fa16 3e82 e7dc c0a8 ..........>.....
0x0010: 0369 0000 0000 0000 c0a8 0302 .i..........
02:14:43.427709 ac:ca:ba:8f:ad:5a > fa:16:3e:82:e7:dc, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Reply 192.168.3.2 is-at ac:ca:ba:8f:ad:5a, length 28
0x0000: 0001 0800 0604 0002 acca ba8f ad5a c0a8 .............Z..
0x0010: 0302 fa16 3e82 e7dc c0a8 0369 ....>......i
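As a side note, echoing into /proc enables forwarding only until the next reboot. If you want the SF to keep forwarding permanently, the usual way on Linux is via sysctl (assuming a standard /etc/sysctl.conf setup):

root@sf:~# sysctl -w net.ipv4.ip_forward=1
root@sf:~# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf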

So here you have it: this is basically how an inline firewall or an inline IDS does service chaining using a faux L3 hop in MidoNet.

It is a very simple example, but we hope you now get the idea of how to use an L3 hop to form a service chain for traffic going to a VM.

Here is the ping from the green instance:

root@green:~# ping -c1 192.168.2.103
PING 192.168.2.103 (192.168.2.103) 56(84) bytes of data.
64 bytes from 192.168.2.103: icmp_seq=1 ttl=63 time=18.8 ms

— 192.168.2.103 ping statistics —
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 18.823/18.823/18.823/0.000 ms

Here is the corresponding tcpdump from the cyan instance:

root@cyan:~# tcpdump -i eth0 -l -nnn -vvv -e -X ! port 22
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
02:20:29.751024 ac:ca:ba:b1:b2:a8 > fa:16:3e:3d:cc:23, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 61, id 62161, offset 0, flags [DF], proto ICMP (1), length 84)
192.168.1.103 > 192.168.2.103: ICMP echo request, id 2163, seq 1, length 64
0x0000: 4500 0054 f2d1 4000 3d01 c5b8 c0a8 0167 E..T..@.=......g
0x0010: c0a8 0267 0800 a668 0873 0001 eda1 ce55 ...g...h.s.....U
0x0020: 0000 0000 c458 0a00 0000 0000 1011 1213 .....X..........
0x0030: 1415 1617 1819 1a1b 1c1d 1e1f 2021 2223 .............!"#
0x0040: 2425 2627 2829 2a2b 2c2d 2e2f 3031 3233 $%&'()*+,-./0123
0x0050: 3435 3637 4567
02:20:29.754616 fa:16:3e:3d:cc:23 > ac:ca:ba:71:a5:35, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 46235, offset 0, flags [none], proto ICMP (1), length 84)
192.168.2.103 > 192.168.1.103: ICMP echo reply, id 2163, seq 1, length 64
0x0000: 4500 0054 b49b 0000 4001 40ef c0a8 0267 E..T....@.@....g
0x0010: c0a8 0167 0000 ae68 0873 0001 eda1 ce55 ...g...h.s.....U
0x0020: 0000 0000 c458 0a00 0000 0000 1011 1213 .....X..........
0x0030: 1415 1617 1819 1a1b 1c1d 1e1f 2021 2223 .............!"#
0x0040: 2425 2627 2829 2a2b 2c2d 2e2f 3031 3233 $%&'()*+,-./0123
0x0050: 3435 3637 4567

As you can see, the packet comes into the VM through Tenant Router 2:

midonet> router router2 port list
port port0 device router2 state up plugged no mac ac:ca:ba:8f:ad:5a address 192.168.3.2 net 192.168.3.0/24 peer bridge1:port1
port port1 device router2 state up plugged no mac ac:ca:ba:90:49:50 address 192.168.1.2 net 192.168.1.0/24 peer bridge2:port1
port port2 device router2 state up plugged no mac ac:ca:ba:b1:b2:a8 address 192.168.2.2 net 192.168.2.0/24 peer bridge0:port1

However, as you can also see, the ICMP echo reply goes out via Tenant Router 1:

midonet> router router0 port list
port port0 device router0 state up plugged no mac ac:ca:ba:5b:c3:b2 address 10.0.0.4 net 169.254.255.0/30 peer router1:port0
port port1 device router0 state up plugged no mac ac:ca:ba:71:a5:35 address 192.168.2.1 net 192.168.2.0/24 peer bridge0:port0
port port2 device router0 state up plugged no mac ac:ca:ba:5b:18:fc address 192.168.3.1 net 192.168.3.0/24 peer bridge1:port0
port port3 device router0 state up plugged no mac ac:ca:ba:ab:b3:ff address 192.168.1.1 net 192.168.1.0/24 peer bridge2:port0

Asymmetric routing is no problem for the flow controllers implementing the MidoNet distributed virtual edge, because in an overlay all that is installed on the hosts are kernel flows and packet transformations. Unlike in other overlay solutions, based on BGP for example, there is no ‘real’ routing taking place between the hosts; it is all virtual and simulated.

Outlook at MEM 2.0 L2insertion API calls

In MidoNet 2.0, service chaining will be possible using a new API call named L2insertion.

With this API call you will be able to transparently set up a service function (a virtual appliance doing IDS or firewalling) in the same L2 segment.

This also means that everything you saw in this blog article will be automated by MidoNet: you will no longer need to modify the routing tables of your tenant routers or use a faux L3 hop to redirect traffic to an NFV appliance.

When you configure an L2insertion, MidoNet will automatically make sure that traffic entering the virtual switch for a virtual machine is first passed through the L2 appliance.

Wrapping it up

If you made it to the end of this article you surely have a lot of questions now. Please feel free to write to us at info@midokura.com, and let us know your opinion about this blog article via Twitter @midokura.

You will find contact information on our homepage http://www.midokura.com, and you can also leave us a message in the Userlike addon that will pop up on the homepage once you have stayed there for a couple of minutes.

Thank you for reading this article and enjoy your virtual networking!

Alexander Gabert

About Alexander Gabert

Systems Engineer - Midokura. Alexander Gabert spent the last six years doing systems engineering and systems operations projects in large companies throughout Germany, working for inovex in Karlsruhe. During that time he was responsible for running database clusters, big data applications (Hadoop) and high-volume web servers with Varnish caches and CDN infrastructure in front of them. Coming from this background as an infrastructure engineer, he looks at our solution from the viewpoint of our customers. Having worked with several networking departments in big companies as someone responsible for applications and (virtual) servers, he knows from first-hand practical experience what the actual benefits will be when we introduce network virtualization to face today's data center networking challenges.
