Datacenter as code

Hello, today we want to look at a new way of running applications.

You certainly know the phrase ‘infrastructure as code’.

You have probably also heard big vendors coin the phrase ‘software-defined datacenter’.

To a developer who wants to run applications, the datacenter has increasingly become a code abstraction.

And this is a good thing. It enables us to build applications quickly, without having to think about how they will be deployed and operated on server infrastructure.

From the website of Apache Mesos:

Program against your datacenter like it’s a single pool of resources

Apache Mesos abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), enabling fault-tolerant and elastic distributed systems to easily be built and run effectively.

Guess what else is a fault-tolerant and elastic distributed system that can easily be built and run effectively? MidoNet!

So the obvious choice for us was to build a small lab to integrate Mesosphere with MidoNet.

To achieve this goal, we added a small piece of code to Deimos (the original Docker containerizer for Mesos) and built a small lab nested in Midocloud Japan.

This is an overview of the lab, built using virtual machines running Mesosphere and MidoNet:

[Figure: overview diagram of the lab]

We have named the MidoNet containerizer MCP. It is a small Python program, based on Deimos, that runs whenever the Mesos slave needs to set up a container for an application deployed in Marathon.

As you can see, a lot of things in the containerizer are still hardcoded:

[Screenshots: the application in Marathon and the Mesos web UI]
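
The hardcoded values are only fallbacks; the containerizer also reads optional MIDONET_* variables from the task environment (visible in the code excerpt further below). Assuming the env section of a Marathon app definition ends up in that task environment, deploying an application with its own MidoNet wiring parameters could look roughly like this. The Marathon URL, image, addresses and the Deimos-era docker:/// container syntax are illustrative values only:

    # Illustrative sketch: deploy an app whose environment carries the
    # MidoNet wiring parameters that MCP reads.  All values are placeholders;
    # the container syntax is the Deimos-era "docker:///" form and may differ
    # between Marathon versions.
    import requests

    app = {
        "id": "web-midonet",
        "cmd": "python -m SimpleHTTPServer 8080",
        "cpus": 0.25,
        "mem": 128,
        "instances": 1,
        "container": {"image": "docker:///ubuntu:14.04"},
        "env": {
            "MIDONET_BRIDGE_ID": "78488c47-d1de-4d16-a27a-4e6419dc4f88",
            "MIDONET_IP_ADDRESS": "192.168.100.43",
            "MIDONET_DEFAULT_GATEWAY": "192.168.100.1"
        }
    }

    resp = requests.post("http://marathon.example:8080/v2/apps", json=app)
    print(resp.text)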

This is how it looks on the machine:

root@mesos-a4b62b90-3610-4887-a8a3-63c2875e566e:~# mm-dpctl --show-dp midonet
Datapath name : midonet
Datapath index : 5
Datapath Stats:
Flows :0
Hits :3680
Lost :24
Misses:376
Port #0 "midonet" Internal Stats{rxPackets=0, txPackets=0, rxBytes=0, txBytes=0, rxErrors=0, txErrors=0, rxDropped=0, txDropped=0}
Port #1 "tngre-overlay" Gre Stats{rxPackets=0, txPackets=0, rxBytes=0, txBytes=0, rxErrors=0, txErrors=0, rxDropped=0, txDropped=0}
Port #2 "tnvxlan-overlay" VXLan Stats{rxPackets=0, txPackets=0, rxBytes=0, txBytes=0, rxErrors=0, txErrors=0, rxDropped=0, txDropped=0}
Port #3 "tnvxlan-vtep" VXLan Stats{rxPackets=0, txPackets=0, rxBytes=0, txBytes=0, rxErrors=0, txErrors=0, rxDropped=0, txDropped=0}
Port #4 "d0775d57-eth0" NetDev Stats{rxPackets=0, txPackets=0, rxBytes=0, txBytes=0, rxErrors=0, txErrors=0, rxDropped=0, txDropped=0}
Port #5 "veth4" NetDev Stats{rxPackets=0, txPackets=0, rxBytes=0, txBytes=0, rxErrors=0, txErrors=0, rxDropped=0, txDropped=0}
Port #6 "ec9c56ed-eth0" NetDev Stats{rxPackets=0, txPackets=0, rxBytes=0, txBytes=0, rxErrors=0, txErrors=0, rxDropped=0, txDropped=0}
Port #7 "7c347f79-eth0" NetDev Stats{rxPackets=0, txPackets=0, rxBytes=0, txBytes=0, rxErrors=0, txErrors=0, rxDropped=0, txDropped=0}
Port #8 "veth0" NetDev Stats{rxPackets=0, txPackets=0, rxBytes=0, txBytes=0, rxErrors=0, txErrors=0, rxDropped=0, txDropped=0}
Port #9 "veth6" NetDev Stats{rxPackets=0, txPackets=0, rxBytes=0, txBytes=0, rxErrors=0, txErrors=0, rxDropped=0, txDropped=0}
Port #10 "veth8" NetDev Stats{rxPackets=0, txPackets=0, rxBytes=0, txBytes=0, rxErrors=0, txErrors=0, rxDropped=0, txDropped=0}
Port #11 "veth2" NetDev Stats{rxPackets=0, txPackets=0, rxBytes=0, txBytes=0, rxErrors=0, txErrors=0, rxDropped=0, txDropped=0}

The veth ports are for debugging the overlay; the ports ending in -eth0 belong to the Docker containers started by MCP via the mesos-slave.
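
As a small convenience, which datapath ports belong to containers can be checked with a throwaway script that filters the mm-dpctl output above for the -eth0 suffix (just a sketch, nothing MCP-specific):

    # Convenience sketch: list the datapath ports that belong to containers,
    # i.e. the ones whose name ends in "-eth0" in the mm-dpctl output above.
    import re
    import subprocess

    output = subprocess.check_output(
        ["mm-dpctl", "--show-dp", "midonet"]).decode()

    for match in re.finditer(r'Port #(\d+)\s+"([^"]+)"', output):
        index, name = match.groups()
        if name.endswith("-eth0"):
            print("container port %s: %s" % (index, name))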

For the slave to hand container setup over to MCP, it had to be specially configured:

root@mesos-a4b62b90-3610-4887-a8a3-63c2875e566e:~# cat /etc/mesos-slave/containerizer_path
/usr/local/bin/mcp/mcpw
root@mesos-a4b62b90-3610-4887-a8a3-63c2875e566e:~# cat /etc/mesos-slave/isolation
external
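
With isolation set to external, the slave does not launch containers itself; it executes the configured containerizer binary with a subcommand (launch, destroy, and so on) and passes the task details on stdin. Very roughly, and leaving out everything Deimos actually does (protobuf parsing, invoking Docker, state handling), the shape of such a containerizer looks like this, with the MidoNet wiring hooked into the launch path:

    # Rough sketch of an external-containerizer entry point -- not the real
    # Deimos/MCP code.  The mesos-slave calls the binary as, for example,
    #   mcpw launch < task-details-on-stdin
    # and the containerizer acts on the subcommand.
    import sys

    def launch():
        # 1. Read the launch request from stdin (protobuf; parsing omitted).
        # 2. Start the Docker container, leaving networking to MCP rather
        #    than the default Docker bridge.
        # 3. Wire the new container into the MidoNet overlay (see the code
        #    excerpt below).
        pass

    def destroy():
        # Tear down the container and release its MidoNet port.
        pass

    HANDLERS = {"launch": launch, "destroy": destroy}
    # The real protocol also covers wait, update, usage, containers and recover.

    if __name__ == "__main__":
        command = sys.argv[1] if len(sys.argv) > 1 else "usage"
        HANDLERS.get(command, lambda: sys.exit(1))()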

Here you can see the modification of the Deimos containerizer that calls the MidoNet wiring:

                    log.debug("Wiring the container to MidoNet")
                    try:
                        bridge_id = env_dict.get(
                            "MIDONET_BRIDGE_ID",
                            "78488c47-d1de-4d16-a27a-4e6419dc4f88")
                        container_id = state.docker_id
                        ip_addr = env_dict.get(
                            "MIDONET_IP_ADDRESS",
                            "192.168.100.42")
                        default_gw = env_dict.get("MIDONET_DEFAULT_GATEWAY", None)
                        midonet.wire_container_to_midonet(
                            container_id, bridge_id, ip_addr, default_gw)
                        log.debug("Successfully wired the container %s to MidoNet " \
                                  "bridge %s", container_id, bridge_id)

This is how it then looks in MidoNet Manager:

[Screenshot: MidoNet Manager showing the container ports on the bridge]

As you can see, the interesting part is the modification of the Mesos containerizer to wire up the container.

This will later become much easier with the new Docker libnetwork and a vendor plugin that performs the network setup in a more integrated way.

For this small lab, we decided to modify an existing containerizer and show that it works.

This short blog article shows that MidoNet is a network platform that can support a variety of cloud platforms and distributed application platforms.

The future of deploying and operating distributed applications is bright, and we are very glad to be part of this ecosystem!

Alexander Gabert

About Alexander Gabert

Systems Engineer, Midokura. Alexander Gabert spent the last six years doing systems engineering and system operations projects in large companies throughout Germany, working for inovex in Karlsruhe. During that time he was responsible for running clusters of databases, big data applications (Hadoop), and high-volume web servers with Varnish caches and CDN infrastructure in front of them. Coming from this background as an infrastructure engineer, he looks at our solution from the viewpoint of our customers. Having worked with several networking departments in big companies as someone responsible for applications and (virtual) servers, he knows from first-hand practical experience what the actual benefits will be when we introduce network virtualization to face today's data center networking challenges.
