Open Sourcing SAND, Scalingo's Networking Daemon

May 16, 2019 - 10 min read

First introduced by Scalingo in January 2019, SAND is an autonomous service that manages overlay networks and integrates with Docker. SAND (Scalingo Awesome Network Daemon) is designed for web-scale companies like Scalingo that manage thousands of containers every day. SAND features a simple API to create private overlay networks based on VXLAN to link containers together. One of the main advantages of this solution is that SAND is agnostic of the underlying container technology. It is one of the cornerstones of the Software-Defined Networking (SDN) infrastructure of Scalingo's platform.

At Scalingo, SAND is a critical piece of infrastructure powering part of our container networking. It is currently used to run hundreds of database clusters.

We are excited to announce today that we are open sourcing SAND. By allowing others in the software-defined networking community to leverage it with unified schedulers and workload co-location, Scalingo opens the door to more ways to manage networks. Moreover, open sourcing SAND will enable greater industry collaboration and open up the software to feedback and contributions from industry engineers, independent developers, and academics across the world.

Features in the current release

Scalingo has been running SAND in production for about half a year, and it has scaled very well. It provides a way to manage private overlay networks and is written in Go. The API lets you create, read, update, and delete private networks and add any entity (called an endpoint) to a network.
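
To give an idea of the shape of this API, here is a minimal sketch using curl. The host, port, and exact routes below are illustrative assumptions only; the project documentation remains the authoritative reference.

# Illustrative sketch only: host, port and endpoint paths are assumptions
# Create a new overlay network
$ curl -X POST http://sand-host:9999/networks -d '{"name": "my-network"}'

# List the existing networks
$ curl http://sand-host:9999/networks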

The initial release also provides a Docker driver to easily integrate with this container engine.

Benefits of using SAND

Private networks created with SAND are based on the VXLAN (Virtual Extensible LAN) technology. VXLAN isolates different networks just like VLAN would, but encapsulates the layer 2 Ethernet frames into layer 4 UDP datagrams. This makes the approach much more scalable than traditional VLANs.
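
For reference, this is roughly what a VXLAN interface looks like when created by hand with iproute2 (the values are arbitrary examples; SAND creates and manages equivalent interfaces for you, inside dedicated namespaces):

# Example values only: create a VXLAN interface with VNI 13 that encapsulates
# Ethernet frames into UDP datagrams on the standard VXLAN port 4789
$ ip link add vxlan13 type vxlan id 13 dev eth0 dstport 4789
$ ip link set vxlan13 up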

While Docker provides libnetwork, which allows you to define overlay networks, it is strictly bound to Docker. On the contrary, SAND is completely agnostic of the underlying container technology and lets you use something other than Docker.

Design

The SAND network daemon should be installed on every host that will run containers in one of the overlay networks. The network configuration is stored in the etcd key-value store.
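
A minimal deployment could therefore look like the sketch below. The binary name and configuration variable are assumptions made for the sake of the example; refer to the repository README for the actual options.

# Sketch only: binary name and environment variable are assumptions
$ export SAND_ETCD_ENDPOINTS=https://etcd-1:2379,https://etcd-2:2379
$ sand-agent    # start the SAND daemon on each host of the cluster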

Each overlay network can use its own IP range, and there is no problem using the same range for different networks, as they are completely isolated from each other. By default, each overlay network gets IP addresses in 10.0.0.0/24.

Creating a network is a lightweight operation: a unique VXLAN ID (VNI) is allocated and the network configuration is stored in etcd; no interface is created until a first endpoint joins the network.
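
As an illustration only, the stored configuration could be inspected with etcdctl; the key layout and JSON fields below are a guess made for the example, not the actual schema.

# Hypothetical key layout, for illustration only
$ ETCDCTL_API=3 etcdctl get --prefix /sand/networks
/sand/networks/ed80c475-782d-4507-bd80-631b476d9ecc
{"id":"ed80c475-782d-4507-bd80-631b476d9ecc","type":"overlay","ip_range":"10.0.0.0/24","vni":13}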

When a first endpoint is added to a network, the service creates, on the server adding the endpoint, a dedicated network namespace containing the network's VXLAN interface. A pair of virtual Ethernet interfaces (veth) links the targeted namespace (the container network) and the overlay namespace. All the veth interfaces are attached to the VXLAN interface through a bridge. As a consequence, there is no impact on the root namespace of the server running SAND: everything is handled in dedicated namespaces.
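
For readers who want to picture the plumbing, the setup is roughly equivalent to the manual iproute2 commands below (a simplified sketch with illustrative names; SAND performs the equivalent operations itself):

# Simplified sketch with illustrative names; bring-up of the interfaces is omitted
$ ip netns add sc-ns-example                           # dedicated overlay namespace
$ ip link add vxlan13 type vxlan id 13 dstport 4789    # VXLAN interface of the network
$ ip link set vxlan13 netns sc-ns-example
$ ip netns exec sc-ns-example ip link add br0 type bridge
$ ip netns exec sc-ns-example ip link set vxlan13 master br0
$ ip link add veth-host type veth peer name sand0      # veth pair towards the container
$ ip link set veth-host netns sc-ns-example
$ ip netns exec sc-ns-example ip link set veth-host master br0
# sand0 is then moved into the container's namespace and receives its IP address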

When an endpoint is added or removed, every other host with at least one endpoint in the same network adds or removes routes to it by updating the ARP and FDB tables of its VXLAN interface, allowing routing between the different members of the overlay network.
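
Under the hood this relies on standard Linux primitives. The commands below show, with made-up addresses, the kind of entries SAND maintains inside the overlay namespace of each peer host:

# Made-up addresses, run inside the overlay namespace of a peer host:
# forward the remote endpoint's MAC towards the host carrying it, and
# pre-populate the ARP entry for its overlay IP
$ bridge fdb add 02:84:0a:00:00:03 dev vxlan13 dst 192.0.2.10
$ ip neigh add 10.0.0.3 lladdr 02:84:0a:00:00:03 dev vxlan13 nud permanent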

How to use it

Standalone Command Line Interface

SAND provides a command line interface allowing operators to manage and inspect the overlay networks and their members:

# On any member of the cluster
$ sand-agent-cli network-create
New network created:
* id=ed80c475-782d-4507-bd80-631b476d9ecc name=net-sc-ed80c475-782d-4507-bd80-631b476d9ecc type=overlay ip-range=10.0.0.0/24, vni=13

# On the node which is running the container to link
$ sand-agent-cli endpoint-create --network ed80c475-782d-4507-bd80-631b476d9ecc --ns /var/run/docker/netns/73d3c79736d3
New endpoint created:
* [ACTIVE]  ID=6ffc3990-b212-47f2-a5d1-2afe7526e548 networkID=ed80c475-782d-4507-bd80-631b476d9ecc hostname=dev.172.17.0.1.xip.st-sc.fr IP=10.0.0.2/24 NS=/var/run/docker/netns/73d3c79736d3

# The SAND overlay network is now accessible from the targeted namespace
$ nsenter --net=/var/run/docker/netns/73d3c79736d3 ip addr
483: sand0@if484: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:84:0a:00:00:02 brd ff:ff:ff:ff:ff:ff link-netns sc-ns-ed80c475-782d-4507-bd80-631b476d9ecc
        inet 10.0.0.2/24 brd 10.0.0.255 scope global sand0

Create an endpoint for each entity that has to join the overlay network; they will instantly be able to communicate with each other.
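
For instance, with two endpoints created in the same network, connectivity can be checked directly from one of the targeted namespaces (the second endpoint is assumed to have received 10.0.0.3):

# Ping the second endpoint (10.0.0.3 is assumed) from the first namespace
$ nsenter --net=/var/run/docker/netns/73d3c79736d3 ping -c 1 10.0.0.3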

Docker Integration

It is also possible to integrate SAND with Docker using the Docker libnetwork remote plugin mechanism.

The initial network resource should be created using the SAND API or CLI:

$ sand-agent-cli network-create
New network created:
* id=ed80c475-782d-4507-bd80-631b476d9ecc name=net-sc-ed80c475-782d-4507-bd80-631b476d9ecc type=overlay ip-range=10.0.0.0/24, vni=13

# Integrate the SAND network with Docker networks
$ docker network create \
  --driver sand --opt sand-id=ed80c475-782d-4507-bd80-631b476d9ecc \
  --ipam-opt sand --ipam-opt sand-id=ed80c475-782d-4507-bd80-631b476d9ecc \
  my-overlay-network

# Use the Docker network when creating a container; the SAND network will be set up
$ docker run --network my-overlay-network -it ubuntu:latest bash
root@feb4eb485d57:/# ip addr
487: sand0@if488: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:84:0a:00:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 10.0.0.2/24 brd 10.0.0.255 scope global sand0
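
Any other container attached to the same Docker network, on this host or on another host running SAND, can then reach it over the overlay. For instance, using a busybox image (which ships with ping):

# Start a second container on the same network and ping the first one (10.0.0.2)
$ docker run --network my-overlay-network -it busybox ping -c 1 10.0.0.2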

Trying out SAND

We hope you will try out SAND for yourself and send us your feedback! You can learn how we use SAND at Scalingo by reading our blog articles announcing Elasticsearch clusters and Redis clusters.

Future usage

With SAND, we developed the underlying technology to build private networks automatically via an API, i.e. Software-Defined Networking. Currently we are using it to isolate the database clusters of our Database-as-a-Service offering. In the future we envision many fun applications, like Private Spaces: all your apps and databases in the same private network.

Photo by Sumner Mahaffey on Unsplash

Étienne Michon
Étienne Michon is one of the first employees at Scalingo. With a PhD in computer science, Étienne takes care of Research and Development at Scalingo. He also regularly contributes technical articles to this blog.
