# Nokia SR OS
The Nokia SR OS virtualized router is identified with the `vr-nokia_sros` kind in the topology file. It is built using the vrnetlab project and is essentially a Qemu VM packaged in a docker container format.

vr-sros nodes launched with containerlab come up pre-provisioned with SSH, SNMP, NETCONF and gNMI services enabled.
## Managing vr-sros nodes
Containers with SR OS inside will take ~3 minutes to fully boot. You can monitor the progress with `watch docker ps`, waiting till the status changes to `healthy`.
A Nokia SR OS node launched with containerlab can be managed via the following interfaces:

- SSH (CLI)
- SNMP
- NETCONF
- gNMI, using the best-in-class gnmic gNMI client as an example

Default user credentials: `admin:admin`
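As a sketch, gNMI capabilities can be retrieved with gnmic over the default gNMI port 57400, assuming the default `admin:admin` credentials; the container name is a placeholder to substitute:

```
gnmic -a clab-<lab-name>-<node-name>:57400 -u admin -p admin --skip-verify capabilities
```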
The vr-sros container uses the following mapping for its interfaces:

- `eth0` - management interface connected to the containerlab management network
- `eth1` - first data interface, mapped to the first data port of the SR OS line card
- `eth2+` - second and subsequent data interfaces
Interfaces can be defined in a non-sequential way, for example:
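For example, a sketch wiring `eth1` and `eth3` while leaving `eth2` (port 2 of the line card) unconnected; node names are illustrative:

```yaml
topology:
  links:
    - endpoints: ["sros:eth1", "peer:eth1"]
    - endpoints: ["sros:eth3", "peer:eth2"]   # eth2 is skipped; port 2 stays unconnected
```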
When containerlab launches a vr-sros node, it assigns IPv4/6 addresses to the `eth0` interface. These addresses can be used to reach the management plane of the router.

Data interfaces `eth1+` need to be configured with IP addressing manually, using the CLI or management protocols.
## Features and options
The virtual SR OS simulator can be run in multiple hardware variants, as explained in the vSIM installation guide.
vr-sros container images come with pre-packaged SR OS variants, as defined in the upstream repo, and also support custom variant definitions. The pre-packaged variants are identified by the variant name and come up with cards and MDAs already configured. Custom variants, on the other hand, give users total flexibility in the emulated hardware configuration, but cards and MDAs must be configured manually.
To make vr-sros boot in one of the packaged variants, set the type to one of the predefined variant values:
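A minimal sketch; the `sr-1` variant name is one of the pre-packaged values, while the image tag and license path are illustrative:

```yaml
topology:
  nodes:
    sros:
      kind: vr-nokia_sros
      image: vr-sros:23.3.R1   # illustrative image tag
      type: sr-1               # pre-packaged variant name
      license: license.txt
```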
A custom variant can be defined by specifying the TIMOS line for the control plane and line card components:
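A sketch of a distributed custom variant with a single line card (the values mirror the multi-card example later in this section):

```yaml
type: >-
  cp: cpu=2 min_ram=4 chassis=sr-7 slot=A card=cpm5 ___
  lc: cpu=4 min_ram=4 max_nics=6 chassis=sr-7 slot=1 card=iom4-e mda/1=me6-10gb-sfp+
```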
For a distributed chassis, the CPM and IOM are indicated with the `cp:` and `lc:` markers; notice the delimiter string `___` that must be present between the CPM and IOM portions of a custom variant string.
The `max_nics` value must be set in the `lc` part and specifies the maximum number of network interfaces this card will be equipped with. `mem` is provided in GB.
It is possible to define a custom variant with multiple line cards; just repeat the `lc` portion of the type. Note that each line card is a separate VM, increasing pressure on the host running such a node. You may see issues starting multi-line-card nodes due to the VMs being moved between CPU cores, unless a cpu-set is used.
### How to define links in a multi-line-card setup?
When a node uses multiple line cards, users should pay special attention to the way links are defined in the topology file. As explained in the interface mapping section, SR OS nodes use the `ethX` notation for their interfaces, where `X` denotes a port number on a line card/MDA.
Things get a little more tricky when multiple line cards are provided. First, every line card must be defined with a `max_nics` property that serves a simple purpose: identifying the maximum number of ports this line card can bear. In the example below, both line cards are equipped with the same IOM/MDA and can bear 6 ports at most; thus, `max_nics` is set to 6.
Another significant value of a line card definition is the slot position. Line cards are inserted into slots: slot 1 comes before slot 2, and so on.

Knowing the slot number and the maximum number of ports a line card has, users can identify which indexes to use in the `links` portion of a topology to address the right port of the chassis. Let's use the following example topology to explain how this all maps together:
```yaml
topology:
  nodes:
    R1:
      kind: vr-sros
      image: vr-sros:22.7.R2
      type: >-
        cp: cpu=2 min_ram=4 chassis=sr-7 slot=A card=cpm5 ___
        lc: cpu=4 min_ram=4 max_nics=6 chassis=sr-7 slot=1 card=iom4-e mda/1=me6-10gb-sfp+ ___
        lc: cpu=4 min_ram=4 max_nics=6 chassis=sr-7 slot=2 card=iom4-e mda/1=me6-10gb-sfp+
    R2:
      kind: vr-sros
      image: vr-sros:22.7.R2
      type: >-
        cp: cpu=2 min_ram=4 chassis=sr-7 slot=A card=cpm5 ___
        lc: cpu=4 min_ram=4 max_nics=6 chassis=sr-7 slot=1 card=iom4-e mda/1=me6-10gb-sfp+ ___
        lc: cpu=4 min_ram=4 max_nics=6 chassis=sr-7 slot=2 card=iom4-e mda/1=me6-10gb-sfp+
  links:
    - endpoints: ["R1:eth1", "R2:eth3"]
    - endpoints: ["R1:eth7", "R2:eth8"]
```
Starting with the first pair of endpoints, `R1:eth1 <--> eth3:R2`, we see that port 1 of R1 is connected with port 3 of R2. Looking at the slot information and the `max_nics` value of 6, we see that the line card in slot 1 can host at most 6 ports. This means that ports 1 through 6 belong to the line card equipped in slot 1; consequently, interfaces `eth1` through `eth6` address the ports of that line card.
The second pair of endpoints, `R1:eth7 <--> eth8:R2`, addresses the ports on the line card equipped in slot 2. This follows from the fact that the first six interfaces belong to the line card in slot 1, as we just found out. Our second line card, which sits in slot 2 and also has six ports, is therefore addressed by interfaces `eth7` through `eth12`, where `eth7` is port 1 and `eth12` is port 6.
An integrated variant is provided with a simple TIMOS line:
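For example, a sketch of an integrated variant; the chassis/card/MDA names are illustrative, so consult the vSIM installation guide for valid combinations:

```yaml
type: "cpu=2 ram=4 slot=A chassis=ixr-r6 card=cpiom-ixr-r6 mda/1=m6-10g-sfp++4-25g-sfp28"
```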
No `cp:`/`lc:` markers are needed to define an integrated variant.
vr-sros nodes come up with a basic "blank" configuration where only the cards/MDAs are provisioned, as well as management interfaces such as NETCONF, SNMP and gNMI.

SR OS nodes launched with hellt/vrnetlab come up with a basic configuration that sets up the management interfaces, line cards, MDAs and power modules. This configuration is applied right after the node is booted.
Since this initial configuration provides only the bare minimum to make the node operational, users will likely want to apply their own configuration to enable features or configure interfaces. This can be done by providing a user-defined configuration file with the `startup-config` property of the node/kind.
When a user provides a path to a file with a complete configuration for the node, containerlab copies that file to the lab directory of that specific node under the `/tftpboot/config.txt` name and mounts that directory to the container. As a result, this config acts as a startup config for the node:
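A sketch of such a topology; the image tag and license path are illustrative, while `myconfig.txt` is the user's full config file:

```yaml
topology:
  nodes:
    sros:
      kind: vr-nokia_sros
      image: vr-sros:23.3.R1   # illustrative image tag
      license: license.txt
      startup-config: myconfig.txt
```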
With the above configuration, the node boots with the configuration specified in `myconfig.txt`; no other configuration is applied. You have to provision interfaces, cards, power shelves, etc. yourself.
Quite often it is beneficial to apply a partial configuration on top of the default configuration that containerlab applies. For example, users might want to add some services to the default configuration without maintaining a full configuration file. This can be done by providing a partial configuration file that will be applied on top of the default configuration. The partial configuration file must have the `.partial` string in its name, for example, `myconfig.partial.txt`.
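For instance, a sketch referencing such a file (the file name is illustrative):

```yaml
topology:
  nodes:
    sros:
      kind: vr-nokia_sros
      startup-config: myconfig.partial.txt
```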
The partial config can contain configuration in the MD-CLI syntax that is accepted in the configuration mode of SR OS. The partial config is applied by sending lines from the partial config file to SR OS via SSH. A few important things to note:

- Entering the configuration mode is not required; containerlab will do that for you.
- The `edit-config exclusive` mode is used by containerlab.
- The `commit` command must not be included in the partial config file; containerlab will commit for you.
- Both `flat` and regular syntax can be used in the partial config file. For example, the following partial config file adds a static route to the node in the regular CLI syntax:
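A sketch of such a partial file; the prefix and next-hop values are illustrative:

```
configure {
    router "Base" {
        static-routes {
            route 192.168.200.0/24 route-type unicast {
                next-hop "192.168.0.1" {
                    admin-state enable
                }
            }
        }
    }
}
```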
### Remote partial files
It is possible to provide a partial config file that is located on a remote HTTP(S) server by specifying a URL to the file. The URL must start with `http(s)://`, must point to a file accessible from the containerlab host, and must have `.partial` in its name:
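For example (the server URL is illustrative):

```yaml
topology:
  nodes:
    sros:
      kind: vr-nokia_sros
      startup-config: https://example.com/myconfig.partial.txt
```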
### Embedded partial files
Users can also embed the partial config in the topology file itself, making it a hermetic artifact that can be shared with others. This can be done by using multiline string in YAML:
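A sketch of an embedded partial config (the configured value is illustrative):

```yaml
topology:
  nodes:
    sros:
      kind: vr-nokia_sros
      startup-config: |
        configure {
            system {
                location "embedded-partial-demo"
            }
        }
```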
- It is mandatory to use YAML's multiline string syntax to denote that the value is a partial config and not a file path.
Embedded partial configs will persist on containerlab's host and use the same directory as the remote startup-config files.
The containerlab `save` command performs a configuration save for vr-sros nodes via NETCONF. The configuration is saved to the `config.txt` file and can be found in the node's directory inside the lab parent directory:
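Assuming the default lab directory layout, the saved file can be inspected from the containerlab host (the path placeholders are to be substituted):

```
cat clab-<lab-name>/<node-name>/tftpboot/config.txt
```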
## Boot Options File
vr-nokia_sros nodes boot up with a pre-defined "Boot Options File" (BOF). This file includes boot settings such as:
- license file location
- config file location
When the node is up and running, you can make changes to this BOF. One popular example of such changes is the addition of static routes to reach external networks from within the SR OS node. Although you can save the BOF from within the SR OS system, the location the file is written to is not persistent across container restarts. It is also not possible to define a BOF target location.

A workaround for this limitation is to automatically execute a CLI script that configures the BOF once the system boots.
SR OS has an option (introduced in SR OS 16.0.R1) to automatically execute a script upon successful boot. This option is accessible at the `/configure system boot-good-exec` MD-CLI path.
By mounting a script to the SR OS container node and using the `boot-good-exec` option, users can make changes to the BOF the moment the node boots, and thus achieve a somewhat persistent BOF.
As an example, the following SR OS MD-CLI script was created to persist custom static routes in the BOF:
```
########################################
# Configuring static management routes
########################################
/bof private
router "management" static-routes route 10.0.0.0/24 next-hop 172.31.255.29
router "management" static-routes route 10.0.1.0/24 next-hop 172.31.255.29
router "management" static-routes route 192.168.0.0/24 next-hop 172.31.255.29
router "management" static-routes route 172.20.20.0/24 next-hop 172.31.255.29
commit
exit all
```
This script is then placed somewhere on the disk, for example in containerlab's topology root directory, and mounted to the vr-nokia_sros node's tftpboot directory using the `binds` property:
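A sketch of such a bind; the image tag and license path are illustrative, while `post-boot-exec.cfg` is the script file described here:

```yaml
topology:
  nodes:
    sros:
      kind: vr-nokia_sros
      image: vr-sros:23.3.R1   # illustrative image tag
      license: license.txt
      binds:
        - post-boot-exec.cfg:/tftpboot/post-boot-exec.cfg
```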
The `post-boot-exec.cfg` file contains the script referenced above, and it is mounted to the `/tftpboot` directory that is available in the SR OS node.
Once the script is mounted to the node, users need to instruct SR OS to execute the script upon successful boot. This is done by adding the following configuration line on SR OS MD-CLI:
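Assuming the script was mounted to `/tftpboot` as `post-boot-exec.cfg`, the line would look like:

```
/configure system boot-good-exec "tftp://172.31.255.29/post-boot-exec.cfg"
```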
- The tftpboot location is always at the `tftp://172.31.255.29/` address, and the name of the file needs to match the one used in the `binds` instruction.
By combining file binds and the automatic script execution of SR OS, it is possible to create a workaround for persistent BOF settings.
## License

A path to a valid license must be provided for all vr-sros nodes with a `license` directive.
If your SR OS license file is issued for a specific UUID, you can define it with a custom type definition:
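A hedged sketch: the `uuid` parameter shown in the `cp:` part is an assumption about the TIMOS-line format, and the UUID value is a placeholder to replace with the one from your license:

```yaml
type: >-
  cp: uuid=00000000-0000-0000-0000-000000000000 cpu=2 min_ram=4 chassis=sr-7 slot=A card=cpm5 ___
  lc: cpu=4 min_ram=4 max_nics=6 chassis=sr-7 slot=1 card=iom4-e mda/1=me6-10gb-sfp+
```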
When a user starts a lab, containerlab creates a node directory for storing configuration artifacts. For the vr-sros kind, containerlab creates a `tftpboot` directory where the license file is copied.
The following labs feature vr-sros node: