Deploy your Docker containers in an Azure VNet

Azure Container Networking has released CNM (libnetwork) and IPAM plugins for Docker (and a CNI plugin for Kubernetes and DC/OS), making containers first-class citizens in your Azure VNet. The days of NATing through the host’s IP address are numbered 🙂 or they would be, if the plugin weren’t still in public preview!

‘Public preview’ is similar to the old “beta” tag Google and others have used on services the public could already consume and which were pretty stable, but where you could expect a few quirks or missing features. It’s very much the case here: the plugin is easy to install and works like a charm, albeit with some manual configuration required. I haven’t yet had the chance to stress-test it and hunt for problems, though.

If you’re reading this, you probably have an interest in deploying your containers inside an Azure VNet, so let’s stop beating around the bush and get our hands dirty.

1- Deploy the Azure VM that will host your containers

In my case I’ve chosen a D2s v3 running CoreOS on its Alpha channel, because who doesn’t like living on the edge?

Make sure you deploy the VM in a VNet 🙂

2- Download and run the plugin’s binary blob

Check the releases on their GitHub repo and substitute v0.9.1 below with the latest and greatest available when you read this. Also change pjperez to your username 🙂

curl -sSL -o /home/pjperez/cnm.tgz

Extract it:

tar xvfz /home/pjperez/cnm.tgz

Then finally run it in the background:

sudo ./azure-vnet-plugin &
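Running the binary with `&` ties it to your SSH session and won’t survive a reboot. If you want it to come back on its own, a minimal systemd unit does the trick. This is just a sketch: the `ExecStart` path is an assumption, so adjust it to wherever you extracted the plugin.

```shell
# Minimal systemd unit for the plugin -- the binary path is an assumption,
# point ExecStart at wherever you extracted azure-vnet-plugin.
sudo tee /etc/systemd/system/azure-vnet-plugin.service <<'EOF'
[Unit]
Description=Azure VNet plugin for Docker (CNM)
Before=docker.service

[Service]
ExecStart=/home/pjperez/azure-vnet-plugin
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Register and start the unit
sudo systemctl daemon-reload
sudo systemctl enable --now azure-vnet-plugin
```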

3- Create a Docker network that uses the plugin

The idea here is that you create a network that relies on the plugin, then you choose which containers are part of this network. Easy enough, right? Let’s do this!

Substitute for the specific subnet address space where your containers will reside. You can also choose to change azure to another name for your network:

sudo docker network create --driver=azure-vnet --ipam-driver=azure-vnet --subnet= azure
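Before attaching anything to it, it’s worth double-checking that the network was created with the right driver. Standard Docker commands are enough for this, nothing plugin-specific assumed:

```shell
# List networks: "azure" should appear with DRIVER azure-vnet
sudo docker network ls

# Inspect it to confirm the IPAM driver and subnet took effect
sudo docker network inspect azure
```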

4- First quirk: Manually add a second IP configuration to your VM’s NIC

You can easily do this from the portal. Here’s mine:


Make sure you add it to the same subnet you chose when creating the Docker network. In my case it’s still

As I took this screenshot after everything was up and running, you can see a second IP address already assigned to the interface. It won’t show up until you finish this tutorial though, so don’t panic if you don’t see an IP address there!

Note: It might seem obvious, but you will need to allocate one IP address per container you’d like to deploy on the VNet.
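If you’d rather script this than click through the portal, the Azure CLI can add the secondary IP configuration too. A sketch, with the resource names (`myRG`, `dockerVNetTestNic`, `myVNet`, `mySubnet`) as placeholders you’d replace with your own:

```shell
# Add a second IP configuration to the VM's NIC
# (resource group, NIC, VNet and subnet names below are placeholders)
az network nic ip-config create \
  --resource-group myRG \
  --nic-name dockerVNetTestNic \
  --name ipconfig2 \
  --vnet-name myVNet \
  --subnet mySubnet
```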

5- Time to rock on! Let’s deploy a container!

Alright folks, the time has arrived! Let’s deploy a container and make it available as a first-class citizen in our Azure VNet. The trick is to specify that the container has to be connected to the Docker network we created in step 3. In our case the network’s name is azure and we’ll use the --net= parameter to specify it.

pjperez@dockerVNetTest ~ $ sudo docker run -it --rm --net=azure alpine:latest
/ #

Well, I got a prompt. Does the container actually have an IP address from the VNet, though? Let’s find out!

/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet scope host lo
 valid_lft forever preferred_lft forever
10: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
 link/ether 7e:da:11:fe:90:c1 brd ff:ff:ff:ff:ff:ff
 inet scope global eth0
 valid_lft forever preferred_lft forever

There you have it: !

Let’s start an iPerf3 container and test it from another VM in the same subnet:

CoreOS container deployment:
pjperez@dockerVNetTest ~ $ sudo docker run -dit --net=azure pedroperezmsft/iperf3 -s
pjperez@dockerVNetTest ~ $ sudo docker ps -a
ed8a2e37a58e pedroperezmsft/iperf3 "iperf3 -s" 2 seconds ago Up 1 second clever_ritchie
iPerf3 installation on an Ubuntu VM:
pjperez@ubuntuVM:~$ sudo apt install iperf3
Setting up iperf3 (3.1.3-1) ...
Processing triggers for libc-bin (2.24-9ubuntu2.2) ...
Starting the test from the Ubuntu VM:
pjperez@ubuntuVM:~$ iperf3 -c
Connecting to host, port 5201
[ 4] local port 48886 connected to port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 83.5 MBytes 701 Mbits/sec 0 916 KBytes
[ 4] 1.00-2.00 sec 109 MBytes 911 Mbits/sec 0 968 KBytes
[ 4] 2.00-3.00 sec 105 MBytes 881 Mbits/sec 0 1.29 MBytes
[ 4] 3.00-4.00 sec 101 MBytes 843 Mbits/sec 0 1.40 MBytes
[ 4] 4.00-5.00 sec 97.5 MBytes 815 Mbits/sec 0 1.47 MBytes
[ 4] 5.00-6.00 sec 104 MBytes 877 Mbits/sec 0 1.47 MBytes
[ 4] 6.00-7.00 sec 108 MBytes 906 Mbits/sec 0 1.54 MBytes
[ 4] 7.00-8.00 sec 109 MBytes 911 Mbits/sec 0 1.54 MBytes
[ 4] 8.00-9.00 sec 108 MBytes 909 Mbits/sec 0 1.54 MBytes
[ 4] 9.00-10.00 sec 108 MBytes 908 Mbits/sec 0 1.54 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 1.01 GBytes 866 Mbits/sec 0 sender
[ 4] 0.00-10.00 sec 1.00 GBytes 863 Mbits/sec receiver
iperf Done.

Let’s pull the logs from the container:

pjperez@dockerVNetTest ~ $ sudo docker logs ed8
Server listening on 5201
Accepted connection from, port 48884
[ 5] local port 5201 connected to port 48886
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 58.8 MBytes 493 Mbits/sec
[ 5] 1.00-2.00 sec 108 MBytes 909 Mbits/sec
[ 5] 2.00-3.00 sec 107 MBytes 900 Mbits/sec
[ 5] 3.00-4.00 sec 97.3 MBytes 816 Mbits/sec
[ 5] 4.00-5.00 sec 102 MBytes 860 Mbits/sec
[ 5] 5.00-6.00 sec 99.2 MBytes 832 Mbits/sec
[ 5] 6.00-7.00 sec 108 MBytes 909 Mbits/sec
[ 5] 7.00-8.00 sec 108 MBytes 910 Mbits/sec
[ 5] 8.00-9.00 sec 108 MBytes 907 Mbits/sec
[ 5] 9.00-10.00 sec 108 MBytes 908 Mbits/sec
[ 5] 10.00-10.20 sec 22.0 MBytes 904 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.20 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-10.20 sec 1.00 GBytes 846 Mbits/sec receiver
Server listening on 5201

Note how we didn’t have to expose any ports when creating our container. This is because we’re not doing NAT on the VM host, so the container is fully exposed to the VNet. As I said: first-class citizen!

6- Second quirk: Containers deployed as above can’t access Internet resources

The real reason is a bit more complex, but it can be summarized (almost accurately) as: “they can’t access Internet resources because your private IP address has no public IP address assigned for NAT when going out to the Internet”.

Solution: Assign a public IP address to your IP configuration.
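Again, the portal works fine, but this can be scripted with the Azure CLI as well. A sketch with placeholder resource names (`myRG`, `containerPIP`, `dockerVNetTestNic`, `ipconfig2`):

```shell
# Create a public IP and attach it to the container's IP configuration
# (all resource names below are placeholders)
az network public-ip create \
  --resource-group myRG \
  --name containerPIP

az network nic ip-config update \
  --resource-group myRG \
  --nic-name dockerVNetTestNic \
  --name ipconfig2 \
  --public-ip-address containerPIP
```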

/ # ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=116 time=2.506 ms
64 bytes from seq=1 ttl=116 time=2.380 ms
64 bytes from seq=2 ttl=116 time=2.088 ms
64 bytes from seq=3 ttl=116 time=2.540 ms
64 bytes from seq=4 ttl=116 time=2.216 ms
--- ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 2.088/2.346/2.540 ms

Yay! Internet access works 🙂

Oh, and don’t forget to secure your containers with NSGs!
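For instance, if the only thing your container should expose is the iPerf3 server, a single NSG rule keeps everything else locked down (the NSG’s default rules deny other inbound traffic). Resource and rule names below are placeholders:

```shell
# Allow only TCP 5201 (iPerf3) inbound; the NSG's default rules deny the rest
# (resource group, NSG and rule names are placeholders)
az network nsg rule create \
  --resource-group myRG \
  --nsg-name containerNSG \
  --name allow-iperf3 \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 5201
```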

Closing notes

We have seen how to leverage Azure Container Networking to deploy containers inside an Azure VNet as first-class citizens. Even though the plugins are still in public preview, we can already use them without major blockers.

The fact that you have to manually add IP addresses to the VM’s interface isn’t great, but if your deployment scripts call Azure’s API to add them programmatically as needed, you should be good to go.
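As a sketch of what that automation could look like (placeholder resource names again, and `ipconfig1` assumed to be the NIC’s primary configuration), a simple loop over the Azure CLI pre-allocates one IP configuration per planned container:

```shell
# Pre-allocate 5 extra IP configurations, one per planned container
# (resource names are placeholders; ipconfig1 is assumed to be the primary)
for i in $(seq 2 6); do
  az network nic ip-config create \
    --resource-group myRG \
    --nic-name dockerVNetTestNic \
    --name "ipconfig$i" \
    --vnet-name myVNet \
    --subnet mySubnet
done
```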

If you find any issues, please report them on GitHub.

