Clabernetes Quickstart#
The best way to understand how clabernetes works is to walk through a short example where we create a three-node k8s cluster and deploy a lab there.
This quickstart uses kind to create a local kubernetes cluster and then deploys clabernetes into it. Once clabernetes is installed, we deploy a small topology with two SR Linux nodes connected back to back.
Once the lab is deployed, we explain how clabverter & clabernetes work in unison to make the original topology files deployable onto the cluster, with tunnels stitching the lab nodes together to form point-to-point connections between them.
Buckle up!
Creating a cluster#
Clabernetes' goal is to allow users to run networking labs with containerlab's simplicity and ease of use, but with the scaling powers of kubernetes. To simulate the scaling aspect, we'll use kind
to create a local multi-node kubernetes cluster. If you already have a k8s cluster, feel free to use it instead -- clabernetes can run in any kubernetes cluster[^1]!
With the following command we instruct kind to set up a three-node k8s cluster with one control plane and two worker nodes.
kind create cluster --name c9s --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
Don't forget to install kubectl!
When the cluster is ready we can proceed with installing clabernetes.
Installing clabernetes#
Clabernetes is installed into a kubernetes cluster using helm:
We use the alpine/helm container image here instead of installing the tool locally; you can skip this step if you already have helm installed.
alias helm="docker run --network host -ti --rm -v $(pwd):/apps -w /apps \
-v ~/.kube:/root/.kube -v ~/.helm:/root/.helm \
-v ~/.config/helm:/root/.config/helm \
-v ~/.cache/helm:/root/.cache/helm \
alpine/helm:3.12.3"
helm upgrade --install --create-namespace --namespace clabernetes \
clabernetes oci://ghcr.io/srl-labs/clabernetes/clabernetes
A successful installation will result in a clabernetes-manager
deployment of three pods running in the cluster:
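You can verify this by listing the pods in the clabernetes namespace (pod names will differ in your cluster):
kubectl get pods --namespace clabernetes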
Note that clabernetes-manager is installed as a three-replica deployment; two of the pods might stay in the Init state for a little while until the leader election is completed.
We will also need the clabverter CLI to convert containerlab topology files to clabernetes manifests. As per the clabverter installation instructions, we will set up an alias for its latest version:
docker pull ghcr.io/srl-labs/clabernetes/clabverter
alias clabverter="mkdir -p converted && chown -R 65532:65532 converted && \
docker run -v $(pwd):/clabernetes/work --rm \
ghcr.io/srl-labs/clabernetes/clabverter"
Installing Load Balancer#
To get access to the nodes deployed by clabernetes from outside of the k8s cluster we will need a load balancer. Any load balancer will do; we will use kube-vip in this quickstart. If no external access to the nodes is required, the load balancer installation can be skipped altogether.
Following the kube-vip + kind installation instructions, we execute the following commands:
kubectl apply -f https://kube-vip.io/manifests/rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml
kubectl create configmap --namespace kube-system kubevip --from-literal range-global=172.18.1.10-172.18.1.250
Next we set up kube-vip's CLI tool:
KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")
alias kube-vip="docker run --network host --rm ghcr.io/kube-vip/kube-vip:$KVVERSION"
And install the kube-vip load balancer daemonset in ARP mode:
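A minimal variant for this kind cluster might look like the following; the eth0 interface name is an assumption and may need to be adjusted for your nodes:
kube-vip manifest daemonset --services --inCluster --arp --interface eth0 | kubectl apply -f -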
We can check that the kube-vip daemonset pods are running on both worker nodes:
$ kubectl get pods -A -o wide | grep kube-vip
kube-system kube-vip-cloud-provider-54c878b6c5-qwvf5 1/1 Running 0 91s 10.244.0.5 c9s-control-plane <none> <none>
kube-system kube-vip-ds-fj7qp 1/1 Running 0 9s 172.18.0.3 c9s-worker2 <none> <none>
kube-system kube-vip-ds-z8q67 1/1 Running 0 9s 172.18.0.4 c9s-worker <none> <none>
Deploying a topology#
Clabernetes' biggest advantage is that it uses, as much as possible, the same topology file format as containerlab. Understandably though, the original containerlab topology file is not something you can deploy on k8s as is.
We've created a converter tool called clabverter
that takes a containerlab topology file and converts it to kubernetes manifests. The manifests can then be deployed on a k8s cluster.
So how do we do that? Just enter the directory where the original clab.yml
file is located; for the Two SR Linux nodes lab this would look like this:
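Assuming the containerlab repository is cloned locally, that is simply:
cd lab-examples/srl02 #(1)!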
- The path is relative to containerlab repository root.
And let clabverter
do its job:
clabverter --stdout | kubectl apply -f - #(1)!
1. clabverter converts the original containerlab topology to a set of k8s manifests and applies them to the cluster. We will cover what clabverter does in more detail in the user manual later, but if you're curious, you can check the manifests it generates by running clabverter --stdout > manifests.yml and inspecting the manifests.yml file.
In the background, clabverter created a Containerlab
custom resource (CR) in the clabernetes
namespace that defines our topology, and it also created a set of config maps for the startup configs used in the lab.
Verifying the deployment#
Once clabverter is done, the clabernetes controller casts its spell, which in the k8s world is called reconciliation. It takes the spec of the Containerlab
CR and creates the set of deployments, config maps and services that are required to deploy the lab.
Let's run some verification commands to see what we have in our cluster so far.
Starting with listing Containerlab
CRs in the clabernetes
namespace:
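Assuming the CR's plural resource name is containerlabs, the listing could be done with:
kubectl get containerlabs --namespace clabernetes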
Looking into the Containerlab CR we can see that clabverter put the original topology under the spec.config
field. The clabernetes controller, in its turn, took the original topology and split it into sub-topologies that are outlined in the status.configs
section of the resource:
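To inspect both fields, you can dump the CR as YAML; the CR name srl02 is inferred from the lab name and may differ in your setup:
kubectl get containerlabs --namespace clabernetes srl02 -o yaml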
The sub-topologies are then deployed as deployments (which in their turn create pods) in the cluster, and containerlab is then run inside each pod deploying the topology as it would normally do on a single node:
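Listing the pods with wide output shows which worker node each pod was scheduled on:
kubectl get pods --namespace clabernetes -o wide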
Besides the clabernetes-manager
pods, we see two pods (one per lab node in our original topology) running on different worker nodes[^2]. These pods run containerlab inside in docker-in-docker mode, and each of them deploys a subset of the original topology. We can enter a pod and use the containerlab CLI to verify the topology:
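For example, using the pod name from the listing above (the random suffix will differ in your cluster):
kubectl exec -it --namespace clabernetes srl02-srl1-56675cdbfd-7tbk2 -- bash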
And in the pod's shell we swim in the familiar containerlab waters:
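For instance, we can list the lab containerlab has deployed inside this pod:
containerlab inspect --all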
If you do not see any nodes in the inspect
output, give it a few minutes, as containerlab is pulling the image and starting the nodes. The logs of this process can be seen by running tail -f clab.log.
We can cat topo.clab.yaml
to see the subset of a topology that containerlab started in this pod.
Note
It is worth repeating that unmodified containerlab runs inside a pod just as it would run on a standalone Linux system. It has access to the Docker API and schedules nodes in exactly the same way as if no k8s existed.
Accessing the nodes#
There are two common ways to access the lab nodes deployed by clabernetes:
- External access using Load Balancer service.
- Entering the pod's shell and from there logging in to the running NOS. No LB is required.
We are going to show you both options.
Load Balancer#
Adding a load balancer to the k8s cluster makes accessing the nodes almost as easy as when working with containerlab. The kube-vip load balancer that we added a few steps earlier assigns external IPs to the LoadBalancer k8s services that clabernetes creates to expose the lab nodes' ports.
By default, clabernetes exposes[^3] the following ports for each lab node:
| Protocol | Ports |
| --- | --- |
| tcp | 21, 80, 443, 830, 5000, 5900, 6030, 9339, 9340, 9559, 57400 |
| udp | 161 |
The good work that the LB is doing can be seen by listing the services in the clabernetes
namespace:
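For example:
kubectl get services --namespace clabernetes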
The two LoadBalancer services provide external IPs (172.18.1.10 and 172.18.1.11) for the lab nodes. The long list of ports are the ports clabernetes exposes by default, spanning regular SSH as well as other common automation interfaces.
You can immediately SSH into one of the nodes using its External-IP:
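For example, using the first external IP from the service listing and the default SR Linux admin user:
ssh admin@172.18.1.10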
Other services, like gNMI, JSON-RPC and SNMP, are available as well since those ports are already exposed.
gNMI access
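As a sketch, gNMI capabilities can be queried with gnmic against the node's external IP; the credentials below are the SR Linux defaults and may differ in your setup:
gnmic -a 172.18.1.10 -u admin -p 'NokiaSrl1!' --skip-verify capabilities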
Pod Shell#
The Load Balancer makes it easy to get external access to the lab nodes, but don't panic if for whatever reason you can't install one. It is still possible to access the nodes without an LB; it will just be less convenient.
For example, to access the srl1
lab node in our k8s cluster we just need to figure out which pod runs this node.
Since all pods are named after the nodes they are running, we can find the right one by listing all pods in the namespace:
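For example, filtering the pod listing by the node name:
kubectl get pods --namespace clabernetes | grep srl1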
Looking at the pod named srl02-srl1-56675cdbfd-7tbk2
we understand that it runs the srl1
node we specified in the topology. To get shell access to this node we can run:
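A sketch using the pod name from the listing above (your pod's suffix will differ):
kubectl exec -it --namespace clabernetes srl02-srl1-56675cdbfd-7tbk2 -- ssh admin@srl1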
We essentially execute the ssh admin@srl1
command inside the pod, as you'd normally do with containerlab.
Datapath stitching#
One of the challenges associated with distributed labs is enabling connectivity between the nodes as per the user's intent.
Thanks to k8s and accompanying Load Balancer service, the management network access is taken care of. You get access to the management interfaces of each pod out of the box. But what about the non-management links we defined in the original topology file?
In containerlab the links defined in the topology are most often represented by veth pairs between the nodes, but things are a bit more complicated in distributed environments like k8s.
Remember the manifest file we deployed at the beginning of this quickstart? It had a single link between the two nodes, defined in the same way you'd do it in containerlab:
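For reference, the link definition in the srl02 lab looks roughly like this (interface names shown for illustration; check the original clab.yml for the exact values):
links:
  - endpoints: ["srl1:e1-1", "srl2:e1-1"]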
How does clabernetes lay out this link when the lab nodes srl1 and srl2 can be scheduled on different worker nodes? Well, clabernetes takes the original link definition as provided by the user and transforms it into a set of point-to-point VXLAN tunnels[^4] that stitch the nodes together.
The two nodes appear to be connected to each other as if they were connected with a veth pair. We can check that LLDP neighbors are discovered on either side of the link after logging in to the srl1 node:
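For instance, this SR Linux CLI command, run on srl1, should list the peer node as a neighbor:
show system lldp neighbor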
We can also make sure that the startup configuration, provided via external files in the original topology, has been applied correctly, and we can ping between the two nodes:
--{ running }--[ ]--
A:srl1# ping 192.168.0.1 network-instance default -c 2
Using network instance default
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=74.8 ms
64 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=8.82 ms
--- 192.168.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 8.823/41.798/74.773/32.975 ms
VXLAN and MTU
VXLAN tunnels are susceptible to MTU issues. Check the MTU value for vx-*
link in your pod to see what value has been set by the kernel and adjust your node's link/IP MTU accordingly.
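One way to check is from the launcher pod's shell, for example:
ip link show type vxlan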
VM-based nodes?#
In this quickstart we used a natively containerized Network OS - SR Linux - as it is lightweight and publicly available. But what if you want to use a VM-based Network OS like Nokia SR OS, Cisco IOS-XRv or Juniper vMX? Can you do that with clabernetes?
The short answer is yes. Clabernetes should be able to run VM-based nodes as well, but your cluster nodes must support nested virtualization, just as you would need to run VM-based nodes in containerlab.
Also you need to ensure that your VM-based container image is accessible to your cluster nodes, either via a public registry or a private one.
When these considerations are taken care of, you can use the same topology file as you would use with containerlab. The only difference is that you need to specify the image in the topology file as a fully qualified image name, including the registry name.
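For illustration only, a VM-based node definition could then look like this; the kind, registry and image tag below are placeholders rather than values from this quickstart:
topology:
  nodes:
    sros1:
      kind: nokia_sros
      image: registry.example.com/vr-sros:23.10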
[^1]: In general there are no requirements for clabernetes from a kubernetes cluster perspective; however, many device types may have requirements for nested virtualization or specific CPU flags that your nodes would need to support in order to run the device.
[^2]: They may run on the same node; this is up to the kubernetes scheduler, whose job it is to schedule pods on the nodes it deems most appropriate.
[^3]: Default exposed ports can be overwritten by a user via the Containerlab CR.
[^4]: Using containerlab's vxlan tunneling workflow to create tunnels.