srl-labs / containerlab

container-based networking labs

Home Page: https://containerlab.dev

License: BSD 3-Clause "New" or "Revised" License

Go 86.05% Shell 1.91% RobotFramework 10.79% Makefile 0.70% Dockerfile 0.05% Smarty 0.50%
srlinux containers networking network-automation ceos docker-topo crpd networking-labs lab-topologies labs

containerlab's Introduction



With the growing number of containerized Network Operating Systems comes the demand to easily run them in user-defined, versatile lab topologies.

Unfortunately, container orchestration tools like docker-compose are not a good fit for that purpose, as they do not allow a user to easily create the connections between containers that define a topology.

Containerlab provides a CLI for orchestrating and managing container-based networking labs. It starts the containers, builds virtual wiring between them to create lab topologies of the user's choice, and manages the lab's lifecycle.


Containerlab focuses on the containerized Network Operating Systems which are typically used to test network features and designs, such as:

In addition to native containerized NOSes, containerlab can launch traditional virtual machine-based routers using the vrnetlab or boxen integration:

And, of course, containerlab is perfectly capable of wiring up arbitrary Linux containers which can host your network applications, virtual functions or simply act as test clients. With all that, containerlab provides a single IaaC interface to manage labs which can contain all the needed variants of nodes:

This short clip briefly demonstrates containerlab features and explains its purpose:


Features

  • IaaC approach
    Declarative way of defining the labs by means of topology definition (clab) files; a minimal example is shown right after this list.
  • Network Operating Systems centric
    Focus on containerized Network Operating Systems. The sophisticated startup requirements of various NOS containers are abstracted with kinds, which lets the user focus on the use cases rather than infrastructure hurdles.
  • VM based nodes friendly
    With the vrnetlab integration it is possible to get the best of both worlds - running virtualized and containerized nodes alike with the same IaaC approach and workflows.
  • Multi-vendor and open
    Although kick-started by Nokia engineers, containerlab doesn't take sides and supports NOSes from other vendors and open-source projects.
  • Lab orchestration
    Starting the containers and interconnecting them alone is already good, but containerlab packs even more features for managing the lab lifecycle: deploy, destroy, save, inspect and graph operations.
  • Scaled labs generator
    With the generate capabilities of containerlab it is possible to define and launch CLOS-based topologies of arbitrary scale. Just say how many tiers you need and how big each tier is; the rest is done in a split second.
  • Simplicity and convenience
    Starting from frictionless installation and upgrade capabilities and ranging to the behind-the-scenes link wiring machinery, containerlab does its best for you to enjoy the tool.
  • Fast
    A blazing fast way to create container-based labs on any Linux system with Docker.
  • Automated TLS certificates provisioning
    The nodes which require TLS certs will get them automatically on boot.
  • Documentation is a first-class citizen
    We do not leave our users guessing: the documentation is complete, concise and clean.
  • Lab catalog
    The "most-wanted" lab topologies are documented and included with containerlab installation. Based on this cherry-picked selection you can start crafting the labs answering your needs.

Use cases

  • Labs and demos
    Containerlab was meant to be a tool for provisioning networking labs built with containers. It is free, open and ubiquitous. No software apart from Docker is required!
    As with any lab environment it allows the users to validate features, topologies, perform interop testing, datapath testing, etc.
    It is also a perfect companion for your next demo. Deploy the lab fast, with all its configuration stored as code, then destroy it when done. Easily and securely share lab access if needed.
  • Testing and CI
    Thanks to containerlab's single-binary packaging and code-based lab definition files, it has never been easier to spin up a test bed for CI. GitLab CI, GitHub Actions and virtually any other CI system can spin up containerlab topologies with a single command (see the sketch after this list).
  • Telemetry validation
    Coupling modern telemetry stacks with containerlab makes a perfect fit for telemetry use-case validation. Spin up a lab with containerized network functions and a telemetry stack on the side, and run comprehensive telemetry use cases.
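
A minimal sketch of such a CI-friendly workflow, assuming a topology file named demo.clab.yml sits in the repository:

# deploy the lab, run the test suite against it, then tear it down
containerlab deploy -t demo.clab.yml
# ... run tests against the lab nodes ...
containerlab destroy -t demo.clab.yml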

Containerlab documentation is provided at https://containerlab.dev.

containerlab's People

Contributors

alexandrehassan, ankudinov, bclasse, bewing, bjmeuer, bortok, carlmontanari, crankynetman, deepsource-autofix[bot], dependabot[bot], frederic-loui, grigoriymikhalkin, hellt, henderiw, jbemmel, juliopdx, karimra, kellerza, lbaker-esnet, limehat, mabra94, mzagozen, networkop, nlgotz, oshothebig, sc68cal, steiler, sulrich, ubaumann, yutarohayakawa


containerlab's Issues

Graph option - Not just parsing topology file

Tried the Graph option with the example topology of lab-examples/br01

go run . graph -t lab-examples/br01/br01.yml  -d

The truncated log is as follows:

DEBU[0000] [lab-examples br01 br01.yml]br01.yml[br01 yml]br01 
DEBU[0000] File : &{br01.yml br01 br01}                 
INFO[0000] Parsing topology information ...             
DEBU[0000] Prefix: br01                                 
DEBU[0000] DockerInfo: {clab 172.20.20.0/24 172.20.20.1 2001:172:20:20::/80 2001:172:20:20::1} 
DEBU[0000] License key: /home/ubuntu/container-lab/clab-br01/license.key 
DEBU[0000] Config: /home/ubuntu/container-lab/clab-br01/srl1/config/ 
DEBU[0000] Env Config: /home/ubuntu/container-lab/clab-br01/srl1/srlinux.conf 
DEBU[0000] Topology File: /home/ubuntu/container-lab/clab-br01/srl1/topology.yml 
DEBU[0000] License key: /home/ubuntu/container-lab/clab-br01/license.key 
DEBU[0000] Config: /home/ubuntu/container-lab/clab-br01/srl2/config/ 
DEBU[0000] Env Config: /home/ubuntu/container-lab/clab-br01/srl2/srlinux.conf 
DEBU[0000] Topology File: /home/ubuntu/container-lab/clab-br01/srl2/topology.yml 
DEBU[0000] License key: /home/ubuntu/container-lab/clab-br01/license.key 
DEBU[0000] Config: /home/ubuntu/container-lab/clab-br01/srl3/config/ 
DEBU[0000] Env Config: /home/ubuntu/container-lab/clab-br01/srl3/srlinux.conf 
DEBU[0000] Topology File: /home/ubuntu/container-lab/clab-br01/srl3/topology.yml 
FATA[0000] Bridge br-clab is referenced in the endpoints section but was not found in the default network namespace 
exit status 1

Code is being executed that should not run when simply parsing and graphing a topology.

InitVirtualWiring broken

InitVirtualWiring is broken: if you delete one end of the virtual wire, the other end also disappears.

create bridge on-demand

When a link's kind is set to bridge and the bridge does not exist, create the bridge automatically with TCP offload turned off and the default MTU set to 9212.
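
For reference, the equivalent manual steps would look roughly like this (a sketch; the bridge name br-clab is only an example):

ip link add br-clab type bridge          # create the bridge
ip link set br-clab mtu 9212 up          # set the default MTU and bring it up
ethtool --offload br-clab rx off tx off  # turn offloads off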

default mgmt network IP ranges

We should set the default IP ranges only if both ranges are missing.
This will allow users to create networks with only IPv4 or only IPv6 ranges.

Currently, if a user specifies IPv4 only, clab adds the default IPv6 range, which could fail if that range is already in use by another Docker network.
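
For example, a user should be able to define a management network with only an IPv4 range; a sketch using the key names from the topology-file proposal later in this document:

mgmt:
  network: mylab-net
  ipv4_range: 172.100.100.0/24
  # no ipv6_range given - clab should not silently add the default IPv6 range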

do not try to remove docker bridge if it has active endpoints

Currently, the lab topologies reuse the management network provided by the docker network containerlab.
When we destroy the topo, we try to delete this network and its entities, but that errors out, because containers from other labs might still be attached:

INFO[0001] Deleting docker bridge ...
ERRO[0001] Error response from daemon: error while removing network: network containerlab id 1a050256ae16f2f7e73d2c29a336b04b133698a17fc77230fdea34b29bc5a2cd has active endpoints

the proposal is to first check whether the network has active endpoints attached and, only if it doesn't, try to delete it
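
A minimal sketch of such a check with the Docker Go SDK; the network name, wiring and error handling are illustrative only:

package main

import (
    "context"
    "fmt"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
)

// removeMgmtNetIfUnused deletes the named docker network only when no containers are attached to it.
func removeMgmtNetIfUnused(ctx context.Context, cli *client.Client, name string) error {
    res, err := cli.NetworkInspect(ctx, name, types.NetworkInspectOptions{})
    if err != nil {
        return err
    }
    if len(res.Containers) > 0 {
        fmt.Printf("network %s still has %d active endpoints, skipping removal\n", name, len(res.Containers))
        return nil
    }
    return cli.NetworkRemove(ctx, res.ID)
}

func main() {
    cli, err := client.NewClientWithOpts(client.FromEnv)
    if err != nil {
        panic(err)
    }
    if err := removeMgmtNetIfUnused(context.Background(), cli, "containerlab"); err != nil {
        fmt.Println("error:", err)
    }
}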

add a container label which will point to the topology file this container was created from

Currently it's hard to understand, after creation, which topo file was used to spin up the containers.

I propose we add a label that will point to the abs file path of the topology that was used to create those nodes.

I also think we could add a containerlab inspect command which, when used without arguments, would output all containers from all labs along with their topo file paths. This would make it possible to see which labs were deployed on the host and how to destroy them.
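
A sketch of how such a label could be attached and read back with plain Docker; the label name clab-topo-file is purely illustrative here:

# attach the label at container creation time
docker run -d --label clab-topo-file=/home/user/lab1.clab.yml alpine sleep infinity
# read it back later
docker inspect -f '{{ index .Config.Labels "clab-topo-file" }}' <container>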

add a flag to skip certs generation

For the kinds where certs can be generated locally or are not needed, it might be nice to have a config option to skip cert generation at the global/kind/node level.

sort the topology information table on container name

The table that appears once the lab has been created is not sorted:

+---------------------------------+---------------------------------+-------+-------+---------+-----------------+----------------------+
|              Name               |              Image              | Kind  | Group |  State  |  IPv4 Address   |     IPv6 Address     |
+---------------------------------+---------------------------------+-------+-------+---------+-----------------+----------------------+
| containerlab-clos02-spine1      | srlinux                         | srl   |       | running | 172.20.20.14/24 | 2001:172:20:20::e/80 |
| containerlab-clos02-client3     | ghcr.io/hellt/network-multitool | linux |       | running | 172.20.20.15/24 | 2001:172:20:20::f/80 |
| containerlab-clos02-spine2      | srlinux                         | srl   |       | running | 172.20.20.13/24 | 2001:172:20:20::d/80 |
| containerlab-clos02-superspine1 | srlinux                         | srl   |       | running | 172.20.20.11/24 | 2001:172:20:20::b/80 |
| containerlab-clos02-client1     | ghcr.io/hellt/network-multitool | linux |       | running | 172.20.20.12/24 | 2001:172:20:20::c/80 |
| containerlab-clos02-leaf3       | srlinux                         | srl   |       | running | 172.20.20.10/24 | 2001:172:20:20::a/80 |
| containerlab-clos02-leaf2       | srlinux                         | srl   |       | running | 172.20.20.6/24  | 2001:172:20:20::6/80 |
| containerlab-clos02-client4     | ghcr.io/hellt/network-multitool | linux |       | running | 172.20.20.8/24  | 2001:172:20:20::8/80 |
| containerlab-clos02-client2     | ghcr.io/hellt/network-multitool | linux |       | running | 172.20.20.7/24  | 2001:172:20:20::7/80 |
| containerlab-clos02-superspine2 | srlinux                         | srl   |       | running | 172.20.20.5/24  | 2001:172:20:20::5/80 |
| containerlab-clos02-leaf4       | srlinux                         | srl   |       | running | 172.20.20.9/24  | 2001:172:20:20::9/80 |
| containerlab-clos02-spine3      | srlinux                         | srl   |       | running | 172.20.20.4/24  | 2001:172:20:20::4/80 |
| containerlab-clos02-leaf1       | srlinux                         | srl   |       | running | 172.20.20.3/24  | 2001:172:20:20::3/80 |
| containerlab-clos02-spine4      | srlinux                         | srl   |       | running | 172.20.20.2/24  | 2001:172:20:20::2/80 |
+---------------------------------+---------------------------------+-------+-------+---------+-----------------+----------------------+

The proposal is to sort it on the container name
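
A minimal sketch of sorting the rows by container name before rendering the table; the row type and fields are illustrative, not the actual clab structs:

package main

import (
    "fmt"
    "sort"
)

// row is a stand-in for one line of the topology information table.
type row struct {
    Name, Image, Kind, State string
}

func main() {
    rows := []row{
        {"containerlab-clos02-spine1", "srlinux", "srl", "running"},
        {"containerlab-clos02-client3", "ghcr.io/hellt/network-multitool", "linux", "running"},
        {"containerlab-clos02-leaf1", "srlinux", "srl", "running"},
    }
    // sort rows lexicographically by container name
    sort.Slice(rows, func(i, j int) bool { return rows[i].Name < rows[j].Name })
    for _, r := range rows {
        fmt.Println(r.Name, r.Image, r.Kind, r.State)
    }
}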

rework topology definition file

This issue will track the design decisions on the new "topology definition" format clab should adopt.

The recent proposal was:

name: wan-topo
mgmt:
  # network is not mandatory, if not specified it defaults to $name-network
  network: $name-network
  ipv4_range: <ipv4 mgmt range>
  ipv6_range: <ipv6 mgmt range>
topology:
  defaults:
    kind: srl
  kinds:
    srl:
      image: srlinux20.6.1-286
      type: ixr6
    alpine:
      image: henderiw/client-alpine:1.0.0
  
  nodes:
    node1:
    node2:
    client1:
      kind: alpine
    client2:
      kind: alpine
  
  links:
    - endpoints: [ "node1:e1-1", "node2:e1-1"]
    - endpoints: [ "node1:e1-2", "client1:eth1"]
    - endpoints: [ "node2:e1-2", "client2:eth1"]

The areas that we need to have a discussion on:

  1. how to model endpoints/links to allow for data augmentation. For example, when we do config generation for interfaces, how do we add the interface-related information to the endpoints?
  2. where to keep the protocol and other logical-inventory-related data for config generation?

perform "topology check" before creating any links and/or containers

in addition to the existing checks, the following checks need to be added before we attempt to create a lab (a sketch of one such check follows the list):

  • every endpoint appears in the list of links only once (added in #266)
  • ensure that container images are either available locally or pullable (added in #267)
  • referred linux bridges exist in the lab host (addressed in #118 )
  • config files referenced in topo with config are present by the provided paths (added in #268)
  • license files referenced in topo with license element must have a file by that path (added in #268)
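
A minimal sketch of the first check - ensuring every endpoint appears in the links section only once; the data types are illustrative, not the actual clab structs:

package main

import "fmt"

// link is a stand-in for a parsed topology link with two endpoints, e.g. "node1:e1-1".
type link struct {
    Endpoints [2]string
}

// checkEndpointUniqueness returns an error if any endpoint is referenced by more than one link.
func checkEndpointUniqueness(links []link) error {
    seen := map[string]bool{}
    for _, l := range links {
        for _, ep := range l.Endpoints {
            if seen[ep] {
                return fmt.Errorf("endpoint %s appears in the links section more than once", ep)
            }
            seen[ep] = true
        }
    }
    return nil
}

func main() {
    links := []link{
        {Endpoints: [2]string{"node1:e1-1", "node2:e1-1"}},
        {Endpoints: [2]string{"node1:e1-1", "client1:eth1"}}, // duplicate endpoint on purpose
    }
    if err := checkEndpointUniqueness(links); err != nil {
        fmt.Println("topology check failed:", err)
    }
}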

`containerlab version upgrade` must uninstall the package before trying upgrade

two issues here:

  • old version is not detected
  • package must be uninstalled before installing the new one
[root@srl-centos7 tmp]# sudo curl -sL https://github.com/srl-wim/container-lab/raw/master/get.sh | \
> sudo bash
containerlab 0.7.0 is available. Changing from version .
Downloading https://github.com/srl-wim/container-lab/releases/download/v0.7.0/containerlab_0.7.0_linux_amd64.rpm
Preparing to install containerlab 0.7.0 from package
        file /etc/containerlab/templates/srl/srlconfig.tpl from install of containerlab-0:0.7.0-1.x86_64 conflicts with file from package containerlab-0:0.6.1-1.x86_64
        file /usr/local/bin/containerlab from install of containerlab-0:0.7.0-1.x86_64 conflicts with file from package containerlab-0:0.6.1-1.x86_64
Failed to install containerlab
        For support, go to https://github.com/srl-wim/container-lab/issues
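
Until the installer handles this, a manual workaround on an RPM-based host would look roughly like this (a sketch; it simply removes the old package before re-running the installer from the log above):

# remove the old package first, then re-run the installer
sudo yum remove -y containerlab
sudo curl -sL https://github.com/srl-wim/container-lab/raw/master/get.sh | sudo bash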

Common Label in example topologies

I propose using a more general label in the example topologies.
Right now we are using:

  kind_defaults:
    srl:
      image: srlinux:20.6.1-286

I propose to change this to either srlinux:latest or srlinux:containerlab, which would decouple the config from the actual GA release.

do not create certificates if the files already exist

Currently, certificates are generated regardless of whether (SRL) config files exist or not.
In case of subsequent deploys of the same lab (with existing config), this results in SRL certificates not being verifiable with the new rootCA file.

the proposal is to add a few features to be able to avoid this situation.

  • if a rootCA file exists (and nodes certificates? and config files?) ==> skip the certificates generation step altogether
  • add a flag to destroy command to delete lab files (certificates and config)
  • add a flag to deploy command to delete lab files if they exist

provide sysctl config parameter for linux workloads to enable ipv6

By default, Docker disables IPv6 networking for containers unless the Docker bridge is configured with an IPv6 CIDR.

But for the containers which do not connect to the docker bridge we need to provide a runtime parameter to enable ipv6 networking.

Consider the case of the Linux containers we use as test clients connected to SRL containers: for them to have IPv6 networking enabled, we need to launch such a container with --sysctl net.ipv6.conf.all.disable_ipv6=0

I propose we add this parameter for all kind: linux workloads.

background: moby/moby#32433

So for a newly clab-launched Alpine container with a single additional interface, we get the following picture, where IPv6 is disabled on the eth1 interface:

/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
283: eth0@if284: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:14:14:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.20.20.4/24 brd 172.20.20.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:172:20:20::4/80 scope global nodad
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe14:1404/64 scope link
       valid_lft forever preferred_lft forever
300: eth1@if299: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3a:07:48:7b:e8:aa brd ff:ff:ff:ff:ff:ff link-netnsid 1


/ # sysctl -a | grep disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 0
net.ipv6.conf.eth1.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 0
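
For reference, the proposed behaviour maps to the following plain Docker invocation (a sketch; the image and container name are examples only):

# launch a test-client container with IPv6 enabled on all of its interfaces
docker run -d --name client1 \
  --sysctl net.ipv6.conf.all.disable_ipv6=0 \
  alpine sleep infinity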

bind mount directories should be possible to set per kind and node

To allow mounting files into the containerlab nodes, a mounts option has to be added for kinds and nodes.
The use case is to add custom binaries or configs to testing containers (like adding gobgp to a vanilla Alpine image).

kind_defaults:
    my_tools:
      type: custom
      image: pklepikov/ubuntu-tools
      mounts:
        - $(pwd)/exabgp:/home/admin/exabgp
        - $(pwd)/gobgp:/home/admin/gobgp

delay certificates creation

Right now, we generate certificates before the container deployments.
This means the mgmt IP addresses cannot be part of the cert SAN.

What about delaying the certificate generation until after the deployment and using SRL's json-rpc to set the certificates and enable the gnmi-server?
This also solves the issue of overwriting certificates on disk even if they are present in config.

For other nodes, we can check how certificates can be set other than at boot
