Building a MidoNet demo environment

Hello, my name is Alex. I work as a field engineer for Midokura.

This blog post gives an overview of how our field team builds OpenStack MidoNet demos for customers and partners, and an impression of our daily work as systems engineers.

If you like what you see here: we are hiring and would be happy to hear from you!

We will show the OSPF underlay network configuration of the servers, an example active-active switch configuration for two Dell S6000 switches, and the configuration of the demo installer, and go over the BGP setup for the gateways. Finally, we will wrap up with some performance testing and a neat SDN trick that lets us install software in virtual machines before the MidoNet gateways have internet connectivity!


Part 1: diagram of the network solution

[Image: DSC_Frankfurt2 — diagram of the demo network]


Part 2: network configuration of servers

This is the network configuration of all relevant servers.

root@mido86:~# cat /etc/network/interfaces

auto lo
iface lo inet loopback

auto lo:1
iface lo:1 inet static
 address 192.168.2.86
 netmask 255.255.255.255

auto em1
iface em1 inet static
 address 10.204.96.86
 netmask 255.255.255.0
 gateway 10.204.96.1
 dns-nameserver 10.204.0.11 10.204.0.12 8.8.8.8

auto p4p2
iface p4p2 inet static
 address 192.168.1.2
 netmask 255.255.255.252

auto p4p1
iface p4p1 inet static
 address 192.168.1.6
 netmask 255.255.255.252

root@mido86:~# ssh mido87 cat /etc/network/interfaces

auto lo
iface lo inet loopback

# OSPF: use this IP for midonet agents
auto lo:1
iface lo:1 inet static
 address 192.168.2.87
 netmask 255.255.255.255

auto em1
iface em1 inet static
 address 10.204.96.87 
 netmask 255.255.255.0
 gateway 10.204.96.1
 dns-nameserver 10.204.0.11 10.204.0.12 8.8.8.8

auto p5p2
iface p5p2 inet static
 address 192.168.1.10
 netmask 255.255.255.252

auto p5p1
iface p5p1 inet static
 address 192.168.1.14
 netmask 255.255.255.252

root@mido86:~# ssh mido88 cat /etc/network/interfaces

auto lo
iface lo inet loopback

auto lo:1
iface lo:1 inet static
 address 192.168.2.88
 netmask 255.255.255.255

auto em1
iface em1 inet static
 address 10.204.96.88
 netmask 255.255.255.0
 gateway 10.204.96.1
 dns-nameserver 10.204.0.11 10.204.0.12 8.8.8.8

auto p6p2
iface p6p2 inet static
 address 192.168.1.18
 netmask 255.255.255.252

# p6p1 will be used and managed by midonet gateway

root@mido86:~# ssh mido89 cat /etc/network/interfaces

auto lo
iface lo inet loopback

auto lo:1
iface lo:1 inet static
 address 192.168.2.89
 netmask 255.255.255.255

auto em1
iface em1 inet static
 address 10.204.96.89
 netmask 255.255.255.0
 gateway 10.204.96.1
 dns-nameserver 10.204.0.11 10.204.0.12 8.8.8.8

auto p7p2
iface p7p2 inet static
 address 192.168.1.26
 netmask 255.255.255.252

# p7p1 will be managed by midonet gateway

The rest of the configuration of the servers is pretty straightforward: Ubuntu 14.04, normal /etc/hosts, normal /etc/resolv.conf.

Note that the demo installer will update the /etc/hosts of each machine with data about the containers and the hosts.


Part 3: network configuration of switches

This is the configuration of both switches.

Current Configuration ...
! Version 9.8(0.0)
! Last configuration change at Mon Aug  3 21:32:00 2015 by xoxo
! Startup-config last updated at Tue Jul 28 21:25:47 2015 by xoxo
!
boot system stack-unit 0 primary system: A:
boot system stack-unit 0 secondary system: B:
boot system stack-unit 0 default system: A:
!
redundancy auto-synchronize full
!
hardware watchdog stack-unit 0
hardware watchdog stack-unit 1
hardware watchdog stack-unit 2
hardware watchdog stack-unit 3
hardware watchdog stack-unit 4
hardware watchdog stack-unit 5
!
hostname S6000-1
!
username xoxo password 7 xoxo privilege 15
username xoxo password 7 xoxo privilege 15
!
stack-unit 0 quad-port-profile 0,4,16,24,32,36,40,44,48,52,56,60,64,68,72,76,80,84,88,92,100,108,116,124
!
stack-unit 0 provision S6000
!
stack-unit 0 port 0 portmode quad
!
stack-unit 0 port 4 portmode quad
!
interface TenGigabitEthernet 0/0
 ip address 192.168.1.1/30
 mtu 9000
 no shutdown
!
interface TenGigabitEthernet 0/1
 ip address 192.168.1.9/30
 mtu 9000
 no shutdown
!
interface TenGigabitEthernet 0/2
 ip address 192.168.1.17/30
 mtu 9000
 no shutdown
!
interface TenGigabitEthernet 0/3
 ip address 192.168.1.21/30
 mtu 9000
 no shutdown
!
 port-channel-protocol LACP
  port-channel 1 mode active
 no shutdown
!
interface fortyGigE 0/120
 ip address 10.204.27.11/25
 mtu 9000
 no shutdown
!
 port-channel-protocol LACP
  port-channel 1 mode active
 no shutdown
!
interface ManagementEthernet 0/0
 ip address 10.204.96.200/24
 no shutdown
!
interface Loopback 0
 ip address 192.168.2.1/32
 no shutdown
!
interface Port-channel 1
 ip address 192.168.3.1/30
 no shutdown
!
interface Vlan 1
!
router ospf 1
 network 192.168.1.0/24 area 0
 network 192.168.2.0/24 area 0
 network 192.168.3.0/24 area 0
 network 10.204.27.0/25 area 0
 redistribute static
!
router bgp 65821
 network 0.0.0.0/0
 bgp four-octet-as-support
 neighbor 192.168.1.22 remote-as 65788
 neighbor 192.168.1.22 next-hop-self
 neighbor 192.168.1.22 default-originate
 neighbor 192.168.1.22 no shutdown
!
management route 0.0.0.0/0 10.204.96.1
!
ip route 0.0.0.0/0 10.204.27.1
!
ip ssh server enable
!
protocol lldp
!
line console 0
line vty 0
line vty 1
line vty 2
line vty 3
line vty 4
line vty 5
line vty 6
line vty 7
line vty 8
line vty 9
!
reload-type
 boot-type normal-reload
 config-scr-download enable
!
end
Current Configuration ...
! Version 9.8(0.0)
! Last configuration change at Mon Aug  3 21:36:43 2015 by xoxo
! Startup-config last updated at Mon Aug  3 21:36:47 2015 by xoxo
!
boot system stack-unit 0 primary system: A:
boot system stack-unit 0 secondary system: B:
boot system stack-unit 0 default system: A:
!
redundancy auto-synchronize full
!
hardware watchdog stack-unit 0
hardware watchdog stack-unit 1
hardware watchdog stack-unit 2
hardware watchdog stack-unit 3
hardware watchdog stack-unit 4
hardware watchdog stack-unit 5
!
hostname S6000-2
!
username xoxo password 7 xoxo privilege 15
username xoxo password 7 xoxo privilege 15
!
stack-unit 0 quad-port-profile 0,4,16,24,32,36,40,44,48,52,56,60,64,68,72,76,80,84,88,92,100,108,116,124
!
stack-unit 0 provision S6000
!
stack-unit 0 port 0 portmode quad
!
stack-unit 0 port 4 portmode quad
!
interface TenGigabitEthernet 0/4
 ip address 192.168.1.5/30
 mtu 9000
 no shutdown
!
interface TenGigabitEthernet 0/5
 ip address 192.168.1.13/30
 mtu 9000
 no shutdown
!
interface TenGigabitEthernet 0/6
 ip address 192.168.1.25/30
 mtu 9000
 no shutdown
!
interface TenGigabitEthernet 0/7
 ip address 192.168.1.29/30
 mtu 9000
 no shutdown
!
 port-channel-protocol LACP
  port-channel 1 mode active
 no shutdown
!
interface fortyGigE 0/120
 ip address 10.204.27.12/25
 mtu 9000
 no shutdown
!
interface fortyGigE 0/124
 no ip address
 mtu 9000
!
 port-channel-protocol LACP
  port-channel 1 mode active
 no shutdown
!
interface ManagementEthernet 0/0
 ip address 10.204.96.201/24
 no shutdown
!
interface Loopback 0
 ip address 192.168.2.2/32
 no shutdown
!
interface Port-channel 1
 ip address 192.168.3.2/30
 no shutdown
!
interface Vlan 1
!
router ospf 1
 network 192.168.2.0/24 area 0
 network 192.168.1.0/24 area 0
 network 192.168.3.0/24 area 0
 network 10.204.27.0/27 area 0
 redistribute static
!
router bgp 65821
 network 0.0.0.0/0
 bgp four-octet-as-support
 neighbor 192.168.1.30 remote-as 65788
 neighbor 192.168.1.30 next-hop-self
 neighbor 192.168.1.30 default-originate
 neighbor 192.168.1.30 no shutdown
!
management route 0.0.0.0/0 10.204.96.1
!
ip route 0.0.0.0/0 10.204.27.1
!
ip ssh server enable
!
protocol lldp
!
line console 0
line vty 0
line vty 1
line vty 2
line vty 3
line vty 4
line vty 5
line vty 6
line vty 7
line vty 8
line vty 9
!
reload-type
 boot-type normal-reload
 config-scr-download enable
!
end

Passwords have been removed.

As you can see, this is a rather simple setup: we just want to demonstrate active-active paths for MidoNet tunnel traffic and be able to simulate a switch failure without interrupting cloud connectivity.


Part 4: OSPF configuration of servers

This is the interesting part of the demo lab.

While the MidoNet installation itself is based on our demo installer and nothing spectacular, we use OSPF-routed loopback IP addresses for the agents in the tunnel zone.

This way we can share traffic across multiple network cards in each server and run our MidoNet overlay platform on top of an active-active L3 fabric.

Note that for OSPF adjacencies to form, the MTU of the server interfaces must match the MTU configured on the switches.

Run the following commands for all fabric NICs on all servers to adjust their MTU; otherwise the OSPF sessions between the servers and the switches will not establish.

ip link set dev p4p1 mtu 8982
ip link set dev p4p2 mtu 8982
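Where does 8982 come from? On these switches the configured MTU of 9000 counts the full Ethernet frame, including the 14-byte Ethernet header and the 4-byte FCS (an assumption about the switch's MTU accounting that is worth verifying against your switch documentation), so the matching IP MTU on the Linux side works out as:

```shell
# Derive the Linux interface MTU from the switch MTU.
# Assumption: the switch MTU counts the whole Ethernet frame.
switch_mtu=9000
l2_overhead=$((14 + 4))              # Ethernet header (14 bytes) + FCS (4 bytes)
echo $((switch_mtu - l2_overhead))   # prints 8982
```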

Here comes the OSPF configuration for all servers.

root@mido86:~# cat /etc/quagga/daemons 
zebra=yes
bgpd=no
ospfd=yes
ospf6d=no
ripd=no
ripngd=no
isisd=no
babeld=no

root@mido86:~# dpkg -l | grep quagga
ii quagga 0.99.22.4-99mido1 amd64 BGP/OSPF/RIP routing daemon

root@mido86:~# cat /etc/quagga/zebra.conf 
hostname mido86

password zebra
enable password zebra

interface p4p1
 ip address 192.168.1.6/30
 no shutdown

interface p4p2
 ip address 192.168.1.2/30
 no shutdown

root@mido86:~# cat /etc/quagga/ospfd.conf 
hostname mido86

password midokura
enable password midokura

log file /var/log/quagga/ospfd.log
log record-priority

! debug ospf ism
! debug ospf nsm
! debug ospf lsa
! debug ospf zebra
! debug ospf event
! debug ospf packet all detail

router ospf
 router-id 192.168.2.86
 network 192.168.1.0/30 area 0
 network 192.168.1.4/30 area 0
 network 192.168.2.86/32 area 0
 redistribute static

The zebra.conf is similar on all nodes and can be derived from the one shown here; below we only show the ospfd.conf from the remaining servers.
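As an illustration of that derivation, the zebra.conf for mido87 would look like this (reconstructed from the interface addresses in Part 2, not copied from the testbed):

```
hostname mido87

password zebra
enable password zebra

interface p5p1
 ip address 192.168.1.14/30
 no shutdown

interface p5p2
 ip address 192.168.1.10/30
 no shutdown
```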

root@mido86:~# ssh mido87 cat /etc/quagga/ospfd.conf 
hostname mido87

password midokura
enable password midokura

log file /var/log/quagga/ospfd.log
log record-priority

! debug ospf ism
! debug ospf nsm
! debug ospf lsa
! debug ospf zebra
! debug ospf event
! debug ospf packet all detail

router ospf
 router-id 192.168.2.87
 network 192.168.1.8/30 area 0
 network 192.168.1.12/30 area 0
 network 192.168.2.87/32 area 0
 redistribute static

root@mido86:~# ssh mido88 cat /etc/quagga/ospfd.conf 
hostname mido88

password midokura
enable password midokura

log file /var/log/quagga/ospfd.log
log record-priority

! debug ospf ism
! debug ospf nsm
! debug ospf lsa
! debug ospf zebra
! debug ospf event
! debug ospf packet all detail

router ospf
 router-id 192.168.2.88
 network 192.168.1.16/30 area 0
 network 192.168.2.88/32 area 0
 redistribute static

root@mido86:~# ssh mido89 cat /etc/quagga/ospfd.conf 
hostname mido89

password midokura
enable password midokura

log file /var/log/quagga/ospfd.log
log record-priority

! debug ospf ism
! debug ospf nsm
! debug ospf lsa
! debug ospf zebra
! debug ospf event
! debug ospf packet all detail

router ospf
 router-id 192.168.2.89
 network 192.168.1.24/30 area 0
 network 192.168.2.89/32 area 0
 redistribute static

The routing tables of the servers show only the first route learned, but with iperf and nload you can verify that both links are used equally.

root@mido86:~# route -n | grep 192.168.2.8
192.168.2.87 192.168.1.1 255.255.255.255 UGH 21 0 0 p4p2
192.168.2.88 192.168.1.1 255.255.255.255 UGH 21 0 0 p4p2
192.168.2.89 192.168.1.5 255.255.255.255 UGH 21 0 0 p4p1

Also note that this is not a classic unnumbered OSPF setup: we decided to use /30 networks for the L2 links between the server NICs and the switch ports.
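Because every server-to-switch link is a /30, the switch-side peer address can always be derived from the server-side address: the two usable hosts of a /30 are net+1 and net+2. A small helper function (hypothetical, only to illustrate the addressing plan) makes the pattern explicit:

```shell
# Given one usable host address of a /30 link, print the other one.
# Hypothetical helper to illustrate the /30 addressing plan.
peer_of() {
  local last=${1##*.} prefix=${1%.*}
  if [ $((last % 4)) -eq 1 ]; then
    echo "$prefix.$((last + 1))"   # net+1 -> net+2
  else
    echo "$prefix.$((last - 1))"   # net+2 -> net+1
  fi
}

peer_of 192.168.1.6   # mido86 p4p1 -> switch side 192.168.1.5
peer_of 192.168.1.2   # mido86 p4p2 -> switch side 192.168.1.1
```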


Part 5: BGP configuration of MidoNet gateways

Here we want to show the BGP configuration of the MidoNet gateways.

We will show the respective midonet-cli commands as well as screenshots from MidoNet Manager.

This is the port configuration of the MidoNet Provider Router; note the interfaces bound to the MidoNet gateways mido88 and mido89.

[Screenshot: MidoNet Provider Router port list]

Here you can see a screenshot of the advertised routes for a BGP session.

[Screenshot: BGP advertised routes]

You can also see details for a BGP session (the output comes directly from quagga).

[Screenshot: BGP session details]

This is part of the BGP configuration in midonet-cli.

midonet> router list
router router0 name MidoNet Provider Router state up
midonet> router router0 port list
port port2 device router0 state up plugged yes mac ac:ca:ba:8e:ec:82 address 192.168.1.30 net 192.168.1.28/30
port port5 device router0 state up plugged yes mac ac:ca:ba:07:33:c9 address 192.168.1.22 net 192.168.1.20/30
midonet> router router0 port port5 bgp list
bgp bgp0 local-AS 65788 peer-AS 65821 peer 192.168.1.21
midonet> router router0 port port2 bgp list
bgp bgp0 local-AS 65788 peer-AS 65821 peer 192.168.1.29
midonet> router router0 port port5 bgp bgp0 route list
ad-route ad-route0 net 203.0.113.0/24
ad-route ad-route1 net 192.0.2.0/24
ad-route ad-route2 net 198.51.100.0/24
midonet> router router0 port port2 bgp bgp0 route list
ad-route ad-route0 net 203.0.113.0/24
ad-route ad-route1 net 192.0.2.0/24
ad-route ad-route2 net 198.51.100.0/24

This is an example from a different testbed showing how to set up a BGP session with a peer using midonet-cli:

midonet> cleart
midonet> router list name 'MidoNet Provider Router'
midonet> router router0 add port address 192.168.7.22 net 192.168.7.20/30
midonet> router router0 port port0 add bgp local-AS 65700 peer-AS 65424 peer 192.168.7.21
midonet> router router0 port port0 bgp bgp0 add route net 10.0.0.0/24

You first create a port on the router, then create a BGP session on that port, and finally advertise routes through that session.


Part 6: demo installer

This is the configuration of the demo installer. For more information about how it works, consult the website: https://github.com/midonet/orizuru

#
# Dell EMEA DSC Frankfurt/Main
#
# if you have questions: alexander@midokura.com
#
config:

 verbose: True
 debug: False

 domain: midokura.emea.dsc.local

 midonet_repo: MEM

 midonet_mem_version: 1.9
 midonet_mem_openstack_plugin_version: kilo

 openstack_release: kilo

 newrelic_license_key: xxx

 # must be a file under /usr/share/zoneinfo
 timezone: Europe/Berlin

 # used for the vpn between different testbed hosts, not used globally
 vpn_base: 192.168.253

 HEAP_INITIAL: 2048M
 MAX_HEAPSIZE: 2048M
 HEAP_NEWSIZE: 512M

roles:
 zookeeper:
 - mido86
 - mido87
 - mido88

 cassandra:
 - mido86

 midonet_api:
 - mido86

 midonet_manager:
 - mido86

 midonet_cli:
 - mido86

 openstack_rabbitmq:
 - mido86

 openstack_mysql:
 - mido86

 openstack_keystone:
 - mido86

 openstack_glance:
 - mido86

 openstack_neutron:
 - mido86

 openstack_horizon:
 - mido86

 openstack_controller:
 - mido86

 physical_openstack_compute:
 - mido86
 - mido87

 physical_midonet_gateway:
 - mido88
 - mido89

servers:
 mido86:
 ip: 10.204.96.86
 # used for the containers on this host
 dockernet: 192.168.252.0/24

 mido87:
 ip: 10.204.96.87
 # used for the containers on this host
 dockernet: 192.168.251.0/24

 mido88:
 ip: 10.204.96.88
 # used for the containers on this host
 dockernet: 192.168.250.0/24

 mido89:
 ip: 10.204.96.89
 # used for the containers on this host
 dockernet: 192.168.249.0/24

The docker networks are used for hosting the containers that run the OpenStack and MidoNet services on the physical machines.

As you can see, MidoNet gateways and compute nodes are not containerized for this demo.


Part 7: tunnel zone setup

To keep the demo installer generic, we use the hosts' native management IPs for the initial tunnel zone.

However, this limits our throughput to that of the management interface.

By changing the agent IP addresses in the GRE tunnel zone we can direct the MidoNet agents to use the new OSPF-routed IP addresses when building tunnels to other agents.

root@mido86:~# midonet-cli
midonet> tunnel-zone list
tzone tzone0 name gre type gre
tzone tzone1 name vtep type vtep
midonet> tunnel-zone tzone0 member list
zone tzone0 host host0 address 192.168.2.88
zone tzone0 host host1 address 192.168.2.89
zone tzone0 host host2 address 192.168.2.86
zone tzone0 host host3 address 192.168.2.87
zone tzone0 host host4 address 192.168.252.19

As you can see, one of the hosts (the neutron node) is still using its container IP.

To learn how to modify hosts in a tunnel zone, see the excellent operations guide written by jan@midokura.com, which covers all the details: http://docs.midokura.com/docs/latest/operation-guide/content/admitting_a_host.html
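The change itself boils down to removing the tunnel-zone member and re-adding it with the loopback address, roughly like this in midonet-cli (a sketch only: the member shown and the address placeholder are illustrative, and the exact syntax should be checked against the operations guide):

```
midonet> tunnel-zone tzone0 delete member host host4
midonet> tunnel-zone tzone0 add member host host4 address <new-ospf-address>
```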

After changing the agents' addresses to the OSPF IPs we can run some basic performance tests between VMs using larger MTUs:

[Screenshot: iperf client output]

As you can see, we achieve 9.81 Gbits/sec on a 10 Gbit/s link with GRE tunneling between the MidoNet agents.

Further tests with faster hardware have shown that we can even saturate a 40 Gbit/s link using 40 Gb NICs with VXLAN offloading; in this lab, however, we are limited to 10 Gbit/s NICs.

The two virtual machines are hosted on different hypervisors; we typically measure this VM-to-VM speed in our tests of network overlay performance.
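These tunnel overheads also explain why the VM MTU has to stay below the 8982 bytes of the underlay: GRE encapsulation adds an outer IPv4 header plus a GRE header (8 bytes when a tunnel key is carried; the exact header sizes here are assumptions to verify for your setup):

```shell
# Rough overlay MTU budget for GRE over the 8982-byte underlay.
underlay_mtu=8982
outer_ipv4=20   # outer IPv4 header added by the tunnel
gre_header=8    # GRE header including a 4-byte key (assumption)
echo $((underlay_mtu - outer_ipv4 - gre_header))   # prints 8954
```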

Do not forget to open the relevant ports in the OpenStack security groups, otherwise MidoNet will block the iperf traffic.


Part 8: local MidoNet Provider Router port

In this part we will use the management network (which has internet connectivity) and the squid proxy running on the physical OpenStack controller node to install software in virtual machines running in your cloud.

This is useful if you have not yet configured your BGP peers for full internet connectivity but want to install tools like tcpdump, iperf, or nload in your virtual machines for some initial performance testing. Note that regular internet traffic from your virtual machines will still have to go through the MidoNet gateways; this is just a quick networking trick for getting access to Ubuntu packages on some test instances before your network administrators provide full connectivity for your BGP gateways.

Now that we have MidoNet running, we can leverage a capability of our network platform: creating arbitrary virtual router ports.

We can create these on any server running the MidoNet agent, not only on gateways. Here we choose the OpenStack controller to host such a virtual router port for us.

In this case we have installed squid on the mido86 machine and only need to add the following configuration to the virtual machines that should use the proxy:

ubuntu@alex-1:~$ cat /etc/apt/apt.conf.d/squid.conf 
Acquire {
 Retries "0";
 HTTP { Proxy "http://192.168.4.1:3128"; };
};

Now the question remains: how do we reach the IP address where squid is running?

We create a virtual port on the MidoNet Provider Router and allow virtual machines to access the squid proxy.

We achieve this by binding the virtual port to one end of a virtual ethernet pair (veth pair) on the mido86 box. Binding the virtual port to the device connects it to the MidoNet datapath on this machine, so the agent can use it for traffic going into and out of the overlay.

The other end of the veth pair lives in the default namespace of the Linux host and carries an IP address there. This gives us routing from the overlay to this IP and from the host into the overlay.

The MidoNet Provider Router does not care whether a port is backed by a physical NIC on a gateway or by a veth pair.

For performance reasons, of course, production cloud traffic should use physical NICs on real MidoNet gateways.

root@mido86:~# cat /etc/rc.local
#!/bin/bash

# raise the MTU of the fabric NICs to match the switches
ip link set dev p4p1 mtu 8982
ip link set dev p4p2 mtu 8982

# create the veth pair once (idempotent across reboots)
ip a | grep underlay || ip link add underlay type veth peer name overlay

# the underlay end keeps an IP in the default namespace,
# the overlay end gets bound to the MidoNet virtual port
ifconfig underlay 192.168.4.1/30

ifconfig overlay up
ifconfig underlay up

# route the tenant networks towards the virtual router port
route add -net 192.0.2.0/24 gw 192.168.4.2
route add -net 198.51.100.0/24 gw 192.168.4.2
route add -net 203.0.113.0/24 gw 192.168.4.2

# forward traffic between the veth pair and the management network
echo 1 >/proc/sys/net/ipv4/ip_forward

exit 0

This is the overlay and underlay device in the mido86 host:

root@mido86:~# ip addr show overlay
89: overlay: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master midonet state UP group default qlen 1000
 link/ether c6:d4:cb:87:75:52 brd ff:ff:ff:ff:ff:ff
 inet6 fe80::c4d4:cbff:fe87:7552/64 scope link 
 valid_lft forever preferred_lft forever

root@mido86:~# ip addr show underlay
90: underlay: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 8e:1b:e4:dc:d6:81 brd ff:ff:ff:ff:ff:ff
 inet 192.168.4.1/30 brd 192.168.4.3 scope global underlay
 valid_lft forever preferred_lft forever
 inet6 fe80::8c1b:e4ff:fedc:d681/64 scope link 
 valid_lft forever preferred_lft forever

This is the corresponding virtual port and binding in the provider router:

root@mido86:~# midonet-cli
midonet> cleart
tenant_id: None
midonet> router list
router router0 name MidoNet Provider Router state up
midonet> router router0 port list
port port1 device router0 state up plugged yes mac ac:ca:ba:e0:68:07 address 192.168.4.2 net 192.168.4.0/30
midonet> host list
host host2 name mido86 alive true
midonet> host host2 binding list
host host2 interface overlay port router0:port1

The output of these commands has been shortened for brevity.

Using this port and local routing on the mido86 host, the virtual machines in the overlay can use the squid on this host by connecting to the IP address 192.168.4.1.

root@mido86:~# route -n | grep 192.168.4.2
192.0.2.0 192.168.4.2 255.255.255.0 UG 0 0 0 underlay
198.51.100.0 192.168.4.2 255.255.255.0 UG 0 0 0 underlay
203.0.113.0 192.168.4.2 255.255.255.0 UG 0 0 0 underlay

This does not affect the normal routing of traffic coming from the virtual machines and going into the internet through the MidoNet BGP gateways.

Using this port you can also ssh into the virtual machines from the mido86 host without having to go through the MidoNet BGP gateways.

Alexander Gabert

About Alexander Gabert

Systems Engineer, Midokura. Alexander Gabert spent the last six years doing systems engineering and operations projects in large companies throughout Germany, working for inovex in Karlsruhe. During that time he was responsible for running database clusters, big data applications (Hadoop), and high-volume web servers fronted by Varnish caches and CDN infrastructure. Coming from this background as an infrastructure engineer, he looks at our solution from the viewpoint of our customers. Having worked with the networking departments of several big companies as someone responsible for applications and (virtual) servers, he knows from first-hand experience what the actual benefits are when network virtualization is introduced to face today's data center networking challenges.
