# VM-based routers integration
Containerlab focuses on containers, but many routing products ship only in virtual machine packaging. Leaving containerlab users without the ability to create topologies with both containerized and VM-based routing systems would have been a shame.
Keeping this requirement in mind from the very beginning, we added the `ovs-bridge` kind, which allows bridging your containerized topology with other resources available via a bridged network, such as a VM-based router.
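For illustration, a minimal sketch of such a topology, assuming an OVS bridge named `ovsbr1` has already been created on the host (node and interface names here are illustrative):

```yaml
name: bridged
topology:
  nodes:
    srl:
      kind: srl
      image: ghcr.io/nokia/srlinux
    ovsbr1:
      # the node name must match an existing OVS bridge on the host
      kind: ovs-bridge
  links:
    # srl's e1-1 is patched into the bridge; the VM-based router attaches
    # its interface to the same bridge outside of containerlab
    - endpoints: ["srl:e1-1", "ovsbr1:eth1"]
```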
With this approach, you could bridge VM-based routing systems by attaching their interfaces to the bridge you define in your topology. However, it doesn't allow users to define the VM-based nodes in the same topology file. With vrnetlab integration, containerlab is now capable of launching topologies with VM-based routers defined in the same topology file.
Vrnetlab packages a regular VM inside a container and makes it runnable as if it were a container image. To make this work, vrnetlab provides a set of scripts that build the container image out of a user-provided VM disk. This integration enables containerlab to build topologies that consist of both native containerized NOSes and VMs:
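As a sketch of what such a mixed topology file could look like (the image tags and license path below are placeholders, not prescriptions):

```yaml
name: vm-and-container
topology:
  nodes:
    srl:
      # a natively containerized NOS
      kind: srl
      image: ghcr.io/nokia/srlinux
    sros:
      # a vrnetlab-packaged VM, built with hellt/vrnetlab
      kind: vr-sros
      image: vrnetlab/vr-sros:21.2.R1 # placeholder tag
      license: license.txt            # placeholder path
  links:
    - endpoints: ["srl:e1-1", "sros:eth1"]
```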
Ensure that the VM containerlab runs on has nested virtualization enabled to support vrnetlab-based containers.
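One way to verify this on a Linux host (these sysfs paths exist once the KVM modules are loaded):

```bash
# "Y" or "1" means nested virtualization is enabled for the KVM module
cat /sys/module/kvm_intel/parameters/nested   # Intel CPUs
cat /sys/module/kvm_amd/parameters/nested     # AMD CPUs
```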
To make vrnetlab images work with container-based networking in containerlab, we needed to fork the vrnetlab project and implement the necessary improvements. VM-based routers that you intend to run with containerlab should therefore be built with the hellt/vrnetlab project, and not with the upstream vrnetlab/vrnetlab repository.

Containerlab depends on hellt/vrnetlab, and sometimes features added in containerlab must be implemented in vrnetlab (and vice versa). This leads to a cross-dependency between these projects.
The following table maps the compatible version combinations of the two projects:

| containerlab version | vrnetlab version | Notes |
| --- | --- | --- |
|  |  | Initial release. Images: sros, vmx, xrv, xrv9k |
|  |  | Added vr-veos, support for boot-delay, SR OS will have a static route to docker network, improved XRv startup chances |
| -- |  | Added timeout for SR OS images to allow eth interfaces to appear in the container namespace. Other images are not touched. |
| -- |  | Fixed serial (telnet) access to SR OS nodes |
| -- |  | Set default cpu/ram for SR OS images |
|  |  | Added support for Cisco CSR1000v via vr-csr |
| -- |  | Enhanced SR OS boot sequence |
| -- |  | Fixed SR OS CPU allocation and added Palo Alto PAN support |
|  |  | Added support for Cisco Nexus 9000v via vr-n9kv |
|  |  | Added experimental support for Juniper vQFX via vr-vqfx |
|  |  | Support for IPv6 management for SR OS; support for RouterOS v7+ |
|  |  | Startup-config support for vqfx and vmx |
|  |  | Startup-config support for the rest of the kinds, support for multi-line-card SR OS |
|  |  | Startup-config support for PANOS, ISA support for Nokia VSR-I and MGMT VRF for VMX |
|  |  | Support for IPInfusion OcNOS with vrnetlab |
|  |  | Added support for Juniper vSRX3.0 via vr-vsrx |
|  |  | Added support for Juniper vJunos-switch |
## Building vrnetlab images
To build a vrnetlab image compatible with containerlab, users first need to ensure that the versions of both projects follow the compatibility matrix. Then:

- Clone hellt/vrnetlab and check out a version compatible with your containerlab release (see the sketch after this list)
- Enter the directory for the image of interest
- Follow the build instructions from the README.md file in the image directory
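A minimal sketch of these steps, assuming an SR OS image is being built; `v0.x.y` is a placeholder tag to be replaced with the one listed in the compatibility matrix:

```bash
# clone the containerlab-compatible fork
git clone https://github.com/hellt/vrnetlab
cd vrnetlab

# check out a tag matching your containerlab version
git checkout v0.x.y   # placeholder tag

# enter the directory for the image of interest, e.g. SR OS
cd sros

# copy the vendor-provided VM image into this directory, then build
make
```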
## Supported VM products
Images that work with containerlab will appear in the list below as we implement the necessary integrations.
| Product | Kind | Demo lab | Notes |
| --- | --- | --- | --- |
| Nokia SR OS | vr-sros | SRL & SR OS | When building the SR OS vrnetlab image for use with containerlab, do not provide the license during the image build process. The license shall be provided in the containerlab topology definition file[^1]. |
| Juniper vMX | vr-vmx | SRL & vMX |  |
| Juniper vQFX | vr-vqfx | Coming soon |  |
| Juniper vSRX | vr-vsrx | Coming soon |  |
| Cisco XRv | vr-xrv | SRL & XRv |  |
| Cisco XRv9k | vr-xrv9k | SRL & XRv9k |  |
| Palo Alto PAN | vr-pan |  |  |
| Cisco Nexus 9000v | vr-n9kv |  |  |
Containerlab offers several ways of connecting VM-based routers with the rest of the docker workloads. By default, vrnetlab-integrated routers use the tc backend[^2], which doesn't require any additional packages to be installed on the container host and supports transparent passage of LACP frames.
Any other datapaths?

The tc-based datapath should cover all connectivity requirements; if other bridge-like datapaths are needed, containerlab offers Open vSwitch and Linux bridge modes. Users can plug in those datapaths by specifying the `CONNECTION_MODE` env variable:
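For instance, a sketch of selecting the Linux bridge datapath for a node; the image tag is a placeholder, and `bridge`/`ovs` are the assumed mode values:

```yaml
topology:
  nodes:
    sr1:
      kind: vr-sros
      image: vrnetlab/vr-sros:21.2.R1 # placeholder tag
      env:
        # assumed values: bridge (Linux bridge) or ovs (Open vSwitch);
        # tc is the default and needs no explicit setting
        CONNECTION_MODE: bridge
```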
A simultaneous boot of many qemu nodes may stress the underlying system, which sometimes results in a boot loop or a system halt. If the container host doesn't have enough capacity to bear the simultaneous boot of many qemu nodes, it is still possible to run them successfully by scheduling their boot time.

Delaying the boot process of specific nodes by a user-defined time allows nodes to boot successfully while "gradually" loading the system. The boot delay can be set with the `BOOT_DELAY` environment variable, which supported `vr-xxxx` kinds recognize.

Consider the following example, where the first SR OS node boots immediately, whereas the second node sleeps for 30 seconds and then starts its boot process:
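A sketch of such a topology (image tags are placeholders):

```yaml
topology:
  nodes:
    sr1:
      kind: vr-sros
      image: vrnetlab/vr-sros:21.2.R1 # placeholder tag; boots immediately
    sr2:
      kind: vr-sros
      image: vrnetlab/vr-sros:21.2.R1 # placeholder tag
      env:
        BOOT_DELAY: 30 # sleep 30 seconds before starting the boot
```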
Typically a lab consists of a few types of VMs which are spawned and interconnected with each other. Consider a lab of five interconnected routers: one router uses VM image X, and four routers use VM image Y. Effectively, we run just two types of VMs in that lab, and thus we can apply a memory deduplication technique that drastically reduces the memory footprint of the lab. In Linux, this can be achieved with technologies like UKSM/KSM. Refer to this article, which explains the methodology and provides steps to get UKSM working on Ubuntu/Fedora systems.
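As a sketch, plain KSM can be toggled via sysfs on a stock kernel (UKSM, by contrast, requires a patched kernel; note that KSM only merges memory that processes have marked as mergeable, which qemu does by default):

```bash
# enable kernel samepage merging; the kernel scanner will start
# merging identical anonymous pages marked as mergeable
echo 1 | sudo tee /sys/kernel/mm/ksm/run

# observe the effect: number of shared pages currently in use
cat /sys/kernel/mm/ksm/pages_sharing
```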
[^1]: To have guaranteed compatibility, check out the mentioned tag and build the images.