# Clabernetes Quickstart
The best way to understand how clabernetes works is to walk through a short example where we deploy a simple but representative lab using clabernetes.
Do you already have a Kubernetes cluster? Great! You can skip the cluster creation step and jump straight to the Installing clabernetes part.
But if you don't have a cluster yet, don't panic, we'll create one together. We are going to use kind to create a local Kubernetes cluster and then install clabernetes into it. Once clabernetes is installed, we deploy a small topology with two SR Linux nodes and two client nodes.
If all goes to plan, the lab will be successfully deployed! Clabverter and clabernetes work in unison to make the original topology files deployable onto the cluster, with tunnels stitching lab nodes together to form point-to-point connections between the nodes.
Let's see how it all works, buckle up!
## Creating a cluster
Clabernetes' goal is to allow users to run networking labs with containerlab's simplicity and ease of use, but with the scaling powers of Kubernetes. Surely, it is best to have the real deal available to you, but for demo purposes we'll use kind v0.22.0 to create a local multi-node Kubernetes cluster. If you already have a k8s cluster, feel free to use it instead -- clabernetes can run in any Kubernetes cluster[^1]!
With the following command we instruct kind to set up a three-node k8s cluster with two worker nodes and one control plane node.
```bash
kind create cluster --name c9s --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".containerd]
      discard_unpacked_layers = false
EOF
```
Don't forget to install kubectl!
Check that the cluster is ready and proceed with installing clabernetes.
```
❯ kubectl get nodes
NAME                STATUS   ROLES           AGE     VERSION
c9s-control-plane   Ready    control-plane   5m6s    v1.29.2
c9s-worker          Ready    <none>          4m46s   v1.29.2
c9s-worker2         Ready    <none>          4m42s   v1.29.2
```
## Installing clabernetes
Clabernetes is installed into a Kubernetes cluster using helm.

Note: we use the alpine/helm container image here instead of installing Helm locally; you can skip this step if you already have helm installed. Also note that GKE clusters require the gke-gcloud-auth-plugin to be available -- make sure you have it installed and mounted into the container.

```bash
helm upgrade --install --create-namespace --namespace c9s \
  clabernetes oci://ghcr.io/srl-labs/clabernetes/clabernetes
```
Note that we install clabernetes in the `c9s` namespace. This is not a requirement, but it is good practice to keep the clabernetes manager deployment in a separate namespace.
A successful installation will result in a `clabernetes-manager` deployment of three pods running in the cluster.

Note that `clabernetes-manager` is installed as a three-replica deployment, and two pods might stay in the Init state for a little while until the leader election is completed.
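To check on the manager pods, you can list the pods in the `c9s` namespace:

```bash
kubectl get pods -n c9s
```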
## Installing Load Balancer
To get access to the nodes deployed by clabernetes from outside the k8s cluster we need a load balancer. Any load balancer will do, but we will use kube-vip in this quickstart.
Note: Load Balancer installation can be skipped if you don't need external access to the lab nodes. You can still access the nodes from inside the cluster by entering the pod's shell and then logging into the node.
Following the kube-vip + kind installation instructions, we execute the following commands:
```bash
kubectl apply -f https://kube-vip.io/manifests/rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml
kubectl create configmap --namespace kube-system kubevip \
  --from-literal range-global=172.18.1.10-172.18.1.250
```
Next we set up the kube-vip CLI:
```bash
KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | \
  jq -r ".[0].name")
alias kube-vip="docker run --network host \
  --rm ghcr.io/kube-vip/kube-vip:$KVVERSION"
```
And install kube-vip load balancer daemonset in ARP mode:
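One possible invocation generates a services-only, ARP-mode daemonset manifest and applies it; the flags and the `eth0` interface name are assumptions here, so verify them against the kube-vip documentation for your version:

```bash
kube-vip manifest daemonset \
  --interface eth0 \
  --inCluster \
  --services \
  --arp | kubectl apply -f -
```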
We can check that the kube-vip daemonset pods are running on both worker nodes:
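A quick way to verify is to list the kube-vip pods in the `kube-system` namespace together with their node placement:

```bash
kubectl -n kube-system get pods -o wide | grep kube-vip
```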
## Clabverter
Clabernetes' motto is "containerlab at scale", and therefore we wanted it to work with the same topology definition file format as containerlab. Understandably though, the original containerlab topology file is not something you can deploy on a Kubernetes cluster as is.
To make sure you have smooth sailing in the clabernetes waters, we've created a clabernetes companion tool called `clabverter`; it takes a containerlab topology file and converts it to several manifests native to Kubernetes and clabernetes. Clabverter can then also apply those manifests to the cluster on your behalf.
Clabverter is not a requirement to run clabernetes; it is a helper tool that converts containerlab topologies to clabernetes resources and Kubernetes objects.
As per clabverter's installation instructions, we will set up an alias that uses the latest available clabverter container image:
```bash
alias clabverter='sudo docker run --user $(id -u) \
  -v $(pwd):/clabernetes/work --rm \
  ghcr.io/srl-labs/clabernetes/clabverter'
```
## Deploying with clabverter
We are now ready to deploy our lab using clabernetes with the help of clabverter. First we clone the lab repository:
```bash
git clone --depth 1 https://github.com/srl-labs/srlinux-vlan-handling-lab.git \
  && cd srlinux-vlan-handling-lab
```
And then, while standing in the lab directory, let `clabverter` do its job:
```bash
clabverter --stdout --naming non-prefixed | \
  kubectl apply -f - #(1)!
```
1. `clabverter` converts the original containerlab topology to a set of k8s manifests and applies them to the cluster.

    We will cover what `clabverter` does in more detail in the user manual some time later, but if you're curious, you can check the manifests it generates by running `clabverter --stdout > manifests.yml` and inspecting the `manifests.yml` file.

    The `non-prefixed` naming scheme instructs clabernetes not to use additional prefixes for the resources, as in this scenario we control the namespace and the resources are not going to clash with other resources in the cluster.
In the background, `clabverter` created the `Topology` custom resource (CR) in the `c9s-vlan`[^5] namespace that defines our topology, and also created a set of config maps for each startup config used in the lab.
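If you want to peek at those objects, listing the config maps in the lab namespace is a quick way to do it (the exact names depend on the lab's startup-config files):

```bash
kubectl get configmaps -n c9s-vlan
```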
## Verifying the deployment
Once clabverter is done, the clabernetes controller casts its spell, known as reconciliation in the k8s world. It takes the spec of the `Topology` CR and creates the set of deployments, config maps and services that are required for the lab's operation.
Let's run some verification commands to see what we have in our cluster so far.
### Namespace
Remember how in Containerlab world if you wanted to run multiple labs on the same host you would give each lab a distinct name and containerlab would use that name to create a unique prefix for containers? In k8s world, we use namespaces to achieve the same goal.
When Clabverter parses the original topology file, it takes the lab name value, prepends it with the `c9s-` string and uses it as a namespace for the lab resources. This way, we can run multiple labs in the same k8s cluster without worrying about resource name clashes.
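Listing the namespaces and filtering on the `c9s` prefix shows what we have:

```bash
kubectl get namespaces | grep c9s
```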
As you can see, we have two namespaces: `c9s` and `c9s-vlan`. The `c9s` namespace is where the clabernetes manager is running, and the `c9s-vlan` namespace is where our lab resources are deployed.
### Topology resource
The main clabernetes resource is called `Topology`, and we should be able to find it in the `c9s-vlan` namespace where all lab resources are deployed:
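A minimal check, assuming the CRD's plural resource name is `topologies` (the singular `topology` should resolve as well):

```bash
kubectl get topologies -n c9s-vlan
```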
Looking into the Topology CR, we can see that the original containerlab topology definition lives under the `spec.definition.containerlab` field of the custom resource. Clabernetes took the original topology and split it into sub-topologies that are outlined in the `status.configs` section of the resource:
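To inspect those fields yourself, you can dump the CR as YAML; the `yq` filter below is optional and assumes you have `yq` installed:

```bash
kubectl get topologies -n c9s-vlan -o yaml | yq '.items[0].status.configs'
```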
If you take a closer look at the sub-topologies, you will see that they are just mini, one-node-each containerlab topologies. Clabernetes deploys these sub-topologies as deployments in the cluster.
### Deployments
The deployment objects created by Clabernetes are the vessels that carry the lab nodes. Let's list those deployments:
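Listing them takes a single command:

```bash
kubectl get deployments -n c9s-vlan
```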
Those deployment names should be familiar as they are named exactly as the nodes in the original topology file.
### Pods
Each deployment consists of exactly one k8s pod.
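Adding `-o wide` to the pod listing also shows which worker node each pod landed on:

```bash
kubectl get pods -n c9s-vlan -o wide
```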
We see four pods running, one pod per lab node of our original containerlab topology. Pods are scheduled on different worker nodes by the k8s scheduler, ensuring optimal resource utilization[^2].
Each pod is a docker-in-docker container with containerlab running inside.
Inside each pod, containerlab runs the sub-topology as if it were running on a standalone Linux system. It has access to the Docker API and schedules nodes in exactly the same way as if no k8s existed.
We can enter the pod's shell and use the containerlab CLI as usual, swimming in the familiar containerlab waters:
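For example, to hop into the pod that runs the `client1` node (targeting the deployment avoids looking up the generated pod name; `bash` is an assumption, the launcher image may offer `sh` instead):

```bash
kubectl -n c9s-vlan exec -it deployment/client1 -- bash

# and then, inside the pod:
containerlab inspect --all
```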
If you do not see any nodes in the `inspect` output, give it a few minutes, as containerlab is pulling the image and starting the nodes. You can monitor this process with `tail -f containerlab.log`.
We can `cat topo.clab.yaml` to see the subset of the topology that containerlab started in this pod:
```
[*]─[client1]─[/clabernetes]
└──> cat topo.clab.yaml
name: clabernetes-client1
prefix: ""
topology:
  defaults:
    ports:
      - 60000:21/tcp
      - 60001:22/tcp
      - 60002:23/tcp
      - 60003:80/tcp
      - 60000:161/udp
      - 60004:443/tcp
      - 60005:830/tcp
      - 60006:5000/tcp
      - 60007:5900/tcp
      - 60008:6030/tcp
      - 60009:9339/tcp
      - 60010:9340/tcp
      - 60011:9559/tcp
      - 60012:57400/tcp
  nodes:
    client1:
      kind: linux
      image: ghcr.io/srl-labs/alpine
      exec:
        - ash -c '/config.sh 1'
      binds:
        - configs/client.sh:/config.sh
      ports: []
  links:
    - endpoints:
        - client1:eth1
        - host:client1-eth1
debug: false
```
It is worth reiterating that unmodified containerlab runs inside a pod as if it were running on a Linux system in standalone mode. It has access to the Docker API and schedules nodes in exactly the same way as if no k8s existed.
## Accessing the nodes
There are two common ways to access the lab nodes deployed with clabernetes:
- Using the external address provided by the Load Balancer service.
- Entering the pod's shell and logging in to the running lab node from there. No load balancer required.
We are going to show you both options and you can choose the one that suits you best.
### Load Balancer
Adding a Load Balancer to the k8s cluster makes accessing the nodes almost as easy as when working with containerlab. The kube-vip load balancer that we added before is going to provide an external IP address for a LoadBalancer k8s service that clabernetes creates for each deployment under its control.
By default, clabernetes exposes[^3] the following ports for each lab node:
| Protocol | Ports |
| --- | --- |
| tcp | `21`, `80`, `443`, `830`, `5000`, `5900`, `6030`, `9339`, `9340`, `9559`, `57400` |
| udp | `161` |
Let's list the services in the `c9s-vlan` namespace (excluding the services for VXLAN tunnels[^6]):
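The exact naming of the VXLAN services may vary, so treat the `grep` filter below as an assumption:

```bash
kubectl get services -n c9s-vlan | grep -v vx
```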
We see four `LoadBalancer` services, one created for each node of our distributed topology. Each service points (using selectors) to the corresponding pod.
The LoadBalancer services (powered by kube-vip) also provide us with the external IPs for the lab nodes. The long list of ports is the set clabernetes exposes by default, which spans regular SSH as well as the other well-known management interfaces and their ports.
For instance, we see that the `srl1` node has been assigned the `172.18.1.10` IP, and we can immediately SSH into it from the outside world using the following command:
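Assuming the default SR Linux `admin` user:

```bash
ssh admin@172.18.1.10
```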
Other services, like gNMI, JSON-RPC and SNMP, are available as well, since those ports are already exposed.
### Pod shell
A load balancer makes it easy to get external access to the lab nodes, but don't panic if for whatever reason you can't have one. It is still possible to access the nodes without the load balancer and external IP addresses using various techniques. One of them is to enter the pod's shell and from there log in to the running lab node.
For example, to access the `srl1` lab node in our k8s cluster, we can leverage the `kubectl exec` command to get to the shell of the pod that runs the `srl1` node.
Note: you may have a stellar experience with the k9s project, which offers a terminal UI to interact with k8s clusters. It is a great tool to have in your toolbox. If a terminal UI is not your jam, take a look at `kubectl` shell completions. They come in super handy; install them if you haven't yet.
Since all pods are named after the nodes they are running, we can find the right one by listing all pods in a namespace:
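A plain pod listing filtered on the node name narrows it down:

```bash
kubectl get pods -n c9s-vlan | grep srl1
```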
Looking at the pod named `srl1-78bdc85795-l9bl4`, we clearly see that it runs the `srl1` node we specified in the topology. To get shell access to this node, we can run:
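A sketch of that command, using the pod name from above (yours will have a different hash suffix); depending on the node's configuration you may need to specify the user explicitly, e.g. `ssh admin@srl1`:

```bash
kubectl -n c9s-vlan exec -it srl1-78bdc85795-l9bl4 -- ssh srl1
```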
We essentially execute the `ssh srl1` command inside the pod, as you'd normally do with containerlab.
## Datapath stitching
One of the challenges associated with the distributed labs is the connectivity between the lab nodes running on different computes.
Thanks to Kubernetes and its services, the management network access is taken care of. You get access to the management interfaces of each pod out of the box. But what about the non-management/datapath links we have in the original topology file?
In containerlab the links defined in the topology are often represented by the veth pairs between the containers running on a single host, but things are a bit more complicated in the distributed environments.
If you rewind back to the beginning of the quickstart where we looked at the Topology CR, you will notice that it has the familiar `links` section in the `spec.definition.containerlab` field:
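It looks roughly like the snippet below; the interface names here are illustrative, check the actual CR in your cluster for the exact values:

```yaml
links:
  - endpoints: ["srl1:e1-1", "srl2:e1-1"]
```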
This link connects the `srl1` node with `srl2`, and as we saw, these nodes are running on different worker nodes in the k8s cluster.
How does clabernetes lay out this link? Well, clabernetes takes the original link definition as provided by the user and transforms it into a set of point-to-point VXLAN tunnels[^4] that stitch the nodes together.
The two nodes appear to be connected to each other as if they were connected with a veth pair. We can check that LLDP neighbors are discovered on either side of the link:
```bash
NS=c9s-vlan POD=srl1; \
kubectl -n $NS exec -it \
  $(kubectl -n $NS get pods | grep ^$POD | awk '{print $1}') -- \
  docker exec $POD sr_cli show system lldp neighbor
```
We can also make sure that the startup configuration, provided as external files in the original topology, was applied in good order, and perform a ping between the two clients:
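A sketch of such a check, following the same pattern as the LLDP command above; `<client2-ip>` stands in for client2's address as configured by the lab's startup scripts:

```bash
NS=c9s-vlan POD=client1; \
kubectl -n $NS exec -it \
  $(kubectl -n $NS get pods | grep ^$POD | awk '{print $1}') -- \
  docker exec $POD ping -c 3 <client2-ip>
```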
With the command above we:

- connected to the `client1` pod that runs the `client1` lab node
- executed the `ping` command inside the `client1` node to ping the `client2` node
- ensured that the datapath stitching is working as expected
## VM-based nodes?
In this quickstart we used native containerized Network OS - SR Linux - as it is lightweight and publicly available. But what if you want to use a VM-based Network OS like Nokia SR OS, Cisco IOS-XRv or Juniper vMX? Can you do that with clabernetes?
The short answer is yes. Clabernetes should be able to run VM-based nodes as well, but your cluster nodes must support nested virtualization, just as you would need to run VM-based nodes in containerlab.
You also need to ensure that your VM-based container image is accessible to your cluster nodes, either via a public registry or a private one.
When these considerations are taken care of, you can use the same topology file as you would use with containerlab. The only difference is that you need to specify the image in the topology file as a fully qualified image name, including the registry name.
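For instance, a node definition in such a topology might reference the image by its full registry path; the registry, image name and tag below are placeholders, not values from this lab:

```yaml
topology:
  nodes:
    sros1:
      kind: nokia_sros
      image: registry.example.com/vr-sros:23.10.R1
```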
## Cleaning up
When you are done with the lab, you may free resources using:
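One way to do that (assuming you no longer need anything in the lab namespace, and that the kind cluster was created only for this quickstart) is:

```bash
# remove the lab and everything clabernetes created for it
kubectl delete namespace c9s-vlan

# optionally, remove the kind cluster altogether
kind delete cluster --name c9s
```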
[^1]: In general there are no requirements for clabernetes from a Kubernetes cluster perspective; however, many device types may have requirements for nested virtualization or specific CPU flags that your nodes would need to support in order to run the device.
[^2]: They may run on the same node; this is up to the Kubernetes scheduler, whose job it is to schedule pods on the nodes it deems most appropriate.
[^3]: Default exposed ports can be overwritten by a user via the Topology CR.
[^4]: Using containerlab's VXLAN tunneling workflow to create tunnels.
[^5]: The namespace name is derived from the name of the lab in the `.clab.yml` file.
[^6]: VXLAN services are used for datapath stitching and are not meant to be accessed from outside the cluster.