
Release 0.11.0#

2021-03-09

Multi-node labs with VxLAN tunneling#

Most containerlab users are fine with running labs within a single server/VM, and for good reason: a VM with 32GB of RAM can host a very decent lab with dozens of nodes1.

Yet this is not always enough: sometimes resources are scarce, or the workloads themselves are heavyweight. In these scenarios it would be really nice to let containerlab break out of a single server and reach far horizons.

There are several ways of making containerlab nodes reach nodes/systems outside of their container host:

  • Exposing services/ports
  • Exposing management network with routing
  • Connecting remote nodes with bridging
  • Creating VxLAN tunnels across the network

To help you navigate these options we've created a multi-node labs documentation article that explains each approach in detail.

With 0.11.0 specifically, we added the last option in that list, which is the most flexible and far-reaching one - VxLAN tunneling.

To help containerlab users provision VxLAN tunnels, the helper commands vxlan create and vxlan delete have been added.
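For example, building a tunnel towards a remote lab host could look like this (a minimal sketch; the remote address, VNI and link name are placeholders, and the exact flag spelling may differ - see the command reference):

```bash
# create a VxLAN tunnel towards the remote host 10.0.0.20 with VNI 100
# and stitch it with the local interface srl_e1-1 (all values are examples)
containerlab tools vxlan create --remote 10.0.0.20 --id 100 --link srl_e1-1

# clean up the VxLAN interfaces containerlab created
containerlab tools vxlan delete
```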

The multi-node lab article provides step-by-step coverage of how this all plays together nicely and builds a playground to demonstrate the multi-node capabilities.

Openvswitch (ovs) bridges support#

The versatility of containerlab lies in its ability to run anywhere linux runs and to wire up anything that can be packaged in a docker container. But a piece of real hardware is quite hard to package in a container, and more often than not we need to connect devices like traffic generators or physical chassis to containerlab labs.

This was possible before with linux bridges; now we are adding support for ovs bridges, which allow for even more elaborate and flexible connectivity options.
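A rough sketch of how an ovs bridge can appear in a topology (the bridge and interface names are illustrative; the bridge itself is created outside of containerlab, for example with `ovs-vsctl add-br ovsbr1`):

```yaml
topology:
  nodes:
    srl1:
      kind: srl
    ovsbr1:
      kind: ovs-bridge   # references the pre-created ovs bridge named ovsbr1
  links:
    # one side of the veth goes to the node, the other is enslaved to the bridge
    - endpoints: ["srl1:e1-1", "ovsbr1:port1"]
```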

Enhanced topology checks#

We are gradually adding checks that make containerlab fail early if anything in the topology file or on the host system is not right.

In this release containerlab will additionally perform these checks:

  1. if a node name matches an already existing container, the deployment won't start
  2. if vrnetlab nodes are defined but virtualization support is not available, the deployment won't start
  3. if fewer than 2 vCPUs are available, a warning is emitted; some containerized NOSes demand 2 vCPUs, so running with 1 vCPU is discouraged
  4. the deployment is aborted if a host link referenced in the topology file already exists on the host

Shell completions#

Thanks to our contributor @steiler we now have shell completions, so you can navigate the containerlab CLI with guidance and confidence.
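Enabling the completions in bash might look like this (a sketch assuming the common cobra-style `completion` subcommand; consult `containerlab --help` for the exact spelling):

```bash
# load completions for the current shell session
source <(containerlab completion bash)

# or install them persistently (the target path may vary per distribution)
containerlab completion bash | sudo tee /etc/bash_completion.d/containerlab > /dev/null
```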

Manual installation#

All this time we relied on package managers to handle containerlab installation. Having support for deb/rpm packages is enough for most of us, but not for those nerds on Arch and Gentoo 😄.

For them we added a manual installation procedure that works on any linux distribution.
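A sketch of the manual route (the version and archive name below are examples; grab the latest one from the project's GitHub releases page):

```bash
# download a release archive from https://github.com/srl-labs/containerlab/releases
curl -sLO https://github.com/srl-labs/containerlab/releases/download/v0.11.0/containerlab_0.11.0_Linux_amd64.tar.gz

# extract the binary and put it on the PATH
tar -xzf containerlab_0.11.0_Linux_amd64.tar.gz containerlab
sudo mv containerlab /usr/bin/containerlab
```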

Vrnetlab changes#

Vrnetlab version 0.2.0 has been released to freeze the codebase compatible with containerlab 0.11.0.

If your vrnetlab-based nodes stopped working when you upgraded to 0.11.0, re-build their images with vrnetlab 0.2.0.

Boot delay for vrnetlab routers#

To schedule the startup of vrnetlab routers, the BOOT_DELAY env variable may be used. It takes the number of seconds to wait before the node starts to boot.
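For example, staggering the boot of two routers might be expressed like this in the topology file (a sketch; the node names, kind and delay value are illustrative):

```yaml
topology:
  nodes:
    sros1:
      kind: vr-sros
    sros2:
      kind: vr-sros
      env:
        BOOT_DELAY: 30   # this node waits 30 seconds before it starts to boot
```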

Added Arista vEOS support#

Support for virtualized EOS from Arista has been added. Now it is possible to use containerized cEOS and virtualized vEOS.
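A vEOS node definition might look like the snippet below (the `vr-veos` kind name and the image tag follow vrnetlab's naming conventions and are assumptions; check the kinds documentation):

```yaml
topology:
  nodes:
    eos1:
      kind: vr-veos                    # vrnetlab-packaged virtual EOS
      image: vrnetlab/vr-veos:4.25.0F  # image name and tag are examples
```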

SR OS route to management network#

SR OS routers will now have a static route in their Management routing instance to reach the docker management network. This enables SR OS initiated requests to reach services running on the management network.

Miscellaneous#

  1. Default MTU on the management network/bridge is changed from 1450 to 1500. If you run containerlab in an environment where the management network MTU is less than 1500, consider setting the needed MTU in the topology file.
  2. Fixed multiple cEOS links creation.
  3. Add bridge and linux kinds docs.
  4. To allow for multi-node labs and to make it possible to connect nodes with the container host, a new endpoint type was created - host links (see the sketch after this list).
  5. MTU on data links is changed from 1500 to 65000. This allows for safe passage of jumbo frames.
  6. Additional explanations provided for vrnetlab integration and inter-dependency between projects.
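To illustrate item 4, a minimal host link sketch (node and interface names are illustrative):

```yaml
topology:
  nodes:
    srl1:
      kind: srl
  links:
    # the special "host" endpoint pins one side of the veth pair to the
    # container host, exposing interface srl1_e1-1 in the host root namespace
    - endpoints: ["srl1:e1-1", "host:srl1_e1-1"]
```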

  1. especially if memory optimization techniques are enabled