# Cumulus VX

Cumulus VX is identified with the `cvx` or `cumulus_cvx` kind in the topology file. The `cvx` kind defines a supported feature set and a startup procedure for a `cvx` node.
CVX nodes launched with containerlab come up with:

- the management interface `eth0` configured with IPv4/6 addresses as assigned by either the container runtime or DHCP
- the `root` user created with password `root`
## Mode of operation
CVX supports two modes of operation:

- Using only the container runtime -- this mode runs the Cumulus VX container image directly inside the container runtime (e.g. Docker). Due to the lack of Cumulus VX kernel modules, some features are not supported, the most notable one being MLAG. To use this mode, add `runtime: docker` under the `cvx` node definition (see also this example).
- Using Firecracker micro-VMs -- this mode runs Cumulus VX inside a micro-VM on top of the native Cumulus kernel. This mode uses the `ignite` runtime and is the default way of running CVX nodes.

    **Warning:** This mode was broken in containerlab between v0.27.1 and v0.32.1 due to dependency issues in `ignite`[^2].
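For example, a minimal topology sketch selecting the container runtime explicitly (the lab and node names here are illustrative, not from the original text):

```yaml
# Illustrative topology: run the cvx node directly in Docker
# instead of the default ignite runtime.
name: cvx_docker        # hypothetical lab name
topology:
  nodes:
    sw1:                # hypothetical node name
      kind: cvx
      runtime: docker   # bypasses Firecracker/ignite; some features (e.g. MLAG) become unavailable
```

Omitting the `runtime` knob leaves the node on the default `ignite` runtime.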
**Note:** When running in the default `ignite` runtime mode, the only host OS dependency is `/dev/kvm`[^1], required to support hardware-assisted virtualisation. Firecracker VMs are spun up inside a special "sandbox" container that has all the right tools and dependencies required to run micro-VMs.

Additionally, for nodes running in the `ignite` runtime, containerlab creates a number of directories under `/var/lib/firecracker` to store runtime metadata; these directories are managed by containerlab.
## Managing cvx nodes
A Cumulus VX node launched with containerlab can be managed via the following interfaces:

**Info:** Default user credentials: `root:root`
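For instance, assuming a lab named `cvx_lab` with a node named `cvx` (containerlab names containers `clab-<lab-name>-<node-name>`; the names here follow the example below and are assumptions), the node could be reached like this:

```
# SSH to the node using the default credentials (root:root)
ssh root@clab-cvx_lab-cvx

# For nodes running with runtime: docker, a shell can also be
# obtained directly through the container runtime:
docker exec -it clab-cvx_lab-cvx bash
```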
## User-defined config
It is possible to make `cvx` nodes boot up with a user-defined config by passing any number of files along with their desired mount paths:

```yaml
name: cvx_lab
topology:
  nodes:
    cvx:
      kind: cvx
      binds:
        - cvx/interfaces:/etc/network/interfaces
        - cvx/daemons:/etc/frr/daemons
        - cvx/frr.conf:/etc/frr/frr.conf
```
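The bound files use the standard Cumulus/FRR formats. A minimal illustrative set might look like the following (interface names, addresses, and AS numbers are assumptions, not taken from the original text):

```
# cvx/interfaces -- ifupdown2 interface configuration
auto swp1
iface swp1
    address 192.168.0.1/24

# cvx/daemons -- enable only the FRR daemons you need
bgpd=yes

# cvx/frr.conf -- FRR routing configuration
router bgp 65001
 neighbor 192.168.0.2 remote-as 65002
```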
## Configuration persistency
When running inside the `ignite` runtime, all mount binds work one way -- from the host OS to the `cvx` node, but not the other way around. Currently, it is up to the user to manually update individual files if configuration updates need to be persisted. This will be addressed in future releases.
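Until then, one possible workaround is to copy changed files from the node back to the host-side bind sources by hand; a sketch, reusing the illustrative lab and file names from the user-defined config example (these names are assumptions):

```
# Copy the running FRR config from the node back to the host bind source
scp root@clab-cvx_lab-cvx:/etc/frr/frr.conf cvx/frr.conf
```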
## Lab examples
The following labs feature a CVX node:
- Cumulus and FRR
- Cumulus in Docker runtime and Host
- Cumulus Linux Test Drive
- EVPN with MLAG and multi-homing scenarios
## Known issues or limitations
- CVX in Ignite is always attached to the default docker bridge network
[^1]: this device is already part of the Linux kernel, so this can be read as "no external dependencies are needed for running cvx with the `ignite` runtime".
[^2]: see https://github.com/srl-labs/containerlab/pull/1037 and https://github.com/srl-labs/containerlab/issues/1039