CVX nodes launched with containerlab come up with:

- the management interface `eth0` configured with IPv4/6 addresses as assigned by either the container runtime or DHCP
- `root` user created with a default password
## Mode of operation
CVX supports two modes of operation:
- Using only the container runtime -- this mode runs the Cumulus VX container image directly inside the container runtime (e.g. Docker). Due to the lack of Cumulus VX kernel modules, some features are not supported, the most notable one being MLAG. To use this mode, add `runtime: docker` under the cvx node definition.
- Using Firecracker micro-VMs -- this mode runs Cumulus VX inside a micro-VM on top of the native Cumulus kernel. It uses the `ignite` runtime and is the default way of running CVX nodes.
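To illustrate the two modes, here is a minimal topology sketch; the lab and node names are hypothetical, and only the `runtime: docker` line selects the non-default mode:

```yaml
# cvx-demo.clab.yml -- hypothetical lab name and node names
name: cvx-demo
topology:
  nodes:
    sw1:
      kind: cvx
      runtime: docker   # run directly inside Docker (no MLAG and other kernel-dependent features)
    sw2:
      kind: cvx         # no runtime set -- defaults to ignite (Firecracker micro-VM)
```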
This mode was broken in containerlab between v0.27.1 and v0.32.1 due to dependency issues in `ignite`.
When running in the default `ignite` runtime mode, the only host OS dependency is `/dev/kvm`[^1], required to support hardware-assisted virtualisation. Firecracker VMs are spun up inside a special "sandbox" container that has all the right tools and dependencies required to run micro-VMs.
Additionally, containerlab creates a number of directories under `/var/lib/firecracker` for nodes running in the `ignite` runtime to store runtime metadata; these directories are managed by containerlab.
## Managing cvx nodes
A Cumulus VX node launched with containerlab can be managed via the following interfaces:
Default user credentials:
It is possible to make cvx nodes boot up with a user-defined config by passing any number of files along with their desired mount path:
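As a sketch, a node definition can use the containerlab `binds` property to mount host files into the node; the file names here are illustrative, the in-node paths follow standard Cumulus Linux locations:

```yaml
topology:
  nodes:
    sw1:
      kind: cvx
      binds:
        # host file (relative to the topology file) : path inside the node
        - interfaces:/etc/network/interfaces
        - frr.conf:/etc/frr/frr.conf
```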
When running inside the `ignite` runtime, all mount binds work one way -- from the host OS to the cvx node, but not the other way around. Currently, it's up to the user to manually update individual files if configuration updates need to be persisted. This will be addressed in future releases.
The following labs feature the CVX node:
- Cumulus and FRR
- Cumulus in Docker runtime and Host
- Cumulus Linux Test Drive
- EVPN with MLAG and multi-homing scenarios
## Known issues or limitations
- CVX in Ignite is always attached to the default docker bridge network
[^1]: this device is already part of the Linux kernel, therefore this can be read as "no external dependencies are needed for running cvx with the `ignite` runtime".