Cisco Nexus 9000v#

The Cisco Nexus 9000v virtualized router is identified with the vr-n9kv or vr-cisco_n9kv kind in the topology file. It is built using the vrnetlab project and is essentially a Qemu VM packaged in a Docker container.

vr-n9kv nodes launched with containerlab come up pre-provisioned with SSH, SNMP, NETCONF, NXAPI and gRPC services enabled.
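A minimal topology file sketch using this kind is shown below; the image name and tag are assumptions, so substitute the image you built with vrnetlab:

```yaml
# sketch of a two-node vr-n9kv topology
# image name/tag is an assumption -- use the tag you built with vrnetlab
name: n9kv-lab

topology:
  nodes:
    n9kv1:
      kind: vr-n9kv
      image: vrnetlab/vr-n9kv:9.3.9
    n9kv2:
      kind: vr-n9kv
      image: vrnetlab/vr-n9kv:9.3.9
  links:
    # connects the first data port of each node back to back
    - endpoints: ["n9kv1:eth1", "n9kv2:eth1"]
```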

Managing vr-n9kv nodes#


Containers with Nexus 9000v inside will take ~8-10 minutes to fully boot.
You can monitor the progress with docker logs -f <container-name>.

Cisco Nexus 9000v node launched with containerlab can be managed via the following interfaces:

to connect to a bash shell of a running vr-n9kv container:

docker exec -it <container-name/id> bash

to connect to the Nexus 9000v CLI:

ssh admin@<container-name/id>

NETCONF server is running over port 830

ssh admin@<container-name> -p 830 -s netconf

gRPC server is running over port 50051


Default user credentials: admin:admin

Interfaces mapping#

A vr-n9kv container can have up to 128 interfaces and uses the following mapping rules:

  • eth0 - management interface connected to the containerlab management network
  • eth1 - first data interface, mapped to first data port of Nexus 9000v line card
  • eth2+ - second and subsequent data interfaces

When containerlab launches a vr-n9kv node, it will assign IPv4/IPv6 addresses to the eth0 interface. These addresses can be used to reach the management plane of the router.

Data interfaces eth1+ need to be configured with IP addressing manually using the CLI or management protocols.
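For example, a minimal NX-OS CLI sketch assigning an address to the first data port; it assumes eth1 maps to Ethernet1/1 (per the mapping rules above) and uses a documentation address:

```
configure terminal
 interface Ethernet1/1
  ! data ports default to switchport mode; make it a routed port first
  no switchport
  ip address 192.0.2.1/30
  no shutdown
 end
copy running-config startup-config
```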

Features and options#

Node configuration#

vr-n9kv nodes come up with a basic configuration where only the admin user and management services such as NETCONF, NXAPI and gRPC are provisioned.
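To go beyond that baseline, containerlab nodes can generally be pointed at a custom startup configuration file; a sketch assuming the generic startup-config node property applies to this kind, with a hypothetical file name:

```yaml
# sketch: supplying a custom startup config to a vr-n9kv node
# (assumes the startup-config property is supported for this kind;
#  n9kv1.cfg is a hypothetical file in the topology directory)
topology:
  nodes:
    n9kv1:
      kind: vr-n9kv
      image: vrnetlab/vr-n9kv:9.3.9
      startup-config: n9kv1.cfg
```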