arista-netdevops-community / avd-ceos-lab

A repository with playbooks to implement basic EVPN/VXLAN Fabric using Arista AVD and cEOS-Lab

Home Page: https://arista-netdevops-community.github.io/avd-cEOS-Lab/

License: Apache License 2.0

Languages: Shell 57.56%, Makefile 37.84%, Smarty 3.03%, Dockerfile 1.56%
Topics: ceos, ceos-lab, ansible, avd, containerlab, evpn-vxlan


avd-ceos-lab's Issues

stuck during deploy

hi guys,
I'm trying to build this in my lab, but I get stuck at the containerlab deploy stage during the "Creating virtual wire" step; see the log below.

I thought the Ubuntu VM was hitting memory/CPU limits, but even after increasing it to 12 vCPUs / 32 GB I still get stuck:

INFO[0006] Creating container: "spine1"
DEBU[0006] Container "client3" create response: {ID:9734a2e4aea79bdee7c004a614b0bc657c8ca91d4ce40b5b9c985cc3f6fd1407 Warnings:[]}
DEBU[0006] Start container: "clab-avdirb-client3"
DEBU[0006] Container "client1" create response: {ID:088d3570dc0faa31703876973abc218c011d914c5b2b4bd0aedafe881efdfcb0 Warnings:[]}
DEBU[0006] Start container: "clab-avdirb-client1"
DEBU[0006] Container "svc2b" create response: {ID:d60844aef36936500313dd8355380dfb041969ce3ddd69d2e4a0737a4d9f088e Warnings:[]}
DEBU[0006] Start container: "clab-avdirb-svc2b"
DEBU[0006] Container "l2leaf2a" create response: {ID:ffb9604b18f6b0a56314400deaff533a51905dba6b33a04f8bfa0cb6fbb08b85 Warnings:[]}
DEBU[0006] Start container: "clab-avdirb-l2leaf2a"
DEBU[0006] Container "client4" create response: {ID:ff299f57ea7ee927540a407a607eac620cf04aa2bb5fe7fd41b82dbb3e9bf357 Warnings:[]}
DEBU[0006] Container "leaf1a" create response: {ID:ce4d29f8cfaa3e99cd355f4d82ed11b5de9d96b48d9306d065e03ba1ea69ff4a Warnings:[]}
DEBU[0006] Start container: "clab-avdirb-leaf1a"
DEBU[0006] Start container: "clab-avdirb-client4"
DEBU[0006] Container "leaf1b" create response: {ID:6350b0ef2f9b80c41f57d3339dcf5333c48b0df4db6ffa29c772f50e7feeec8b Warnings:[]}
DEBU[0006] Start container: "clab-avdirb-leaf1b"
DEBU[0006] Container "l2leaf2b" create response: {ID:de4caf2ab07b62882a7841e3881756fa0379504c99f02e613f34c836aa2c1181 Warnings:[]}
DEBU[0006] Start container: "clab-avdirb-l2leaf2b"
DEBU[0006] Container "spine1" create response: {ID:e4096c9fec90128c2d93b11058244342d80247df41baedc722334dcfbb6489b2 Warnings:[]}
DEBU[0006] Start container: "clab-avdirb-spine1"
DEBU[0006] Container "spine2" create response: {ID:99e91c718549227dc49df3485d8640afcf9b626eca28a5c1a9a324c25dc4afe0 Warnings:[]}
DEBU[0006] Start container: "clab-avdirb-spine2"
DEBU[0006] Container "client2" create response: {ID:d3f6a6d1f3c1f60da771d487af5acce128e9adf37081d7aca32dead3be4f31d3 Warnings:[]}
DEBU[0006] Start container: "clab-avdirb-client2"
DEBU[0006] Container "svc2a" create response: {ID:d67677395dbe55ecf66e80fda5b162a79a2d48583e6cc513ee79fd44370882e1 Warnings:[]}
DEBU[0006] Start container: "clab-avdirb-svc2a"
ERRO[0006] failed deploy phase for node "l2leaf2a": Error response from daemon: Address already in use
DEBU[0006] Worker 0 terminating...
ERRO[0007] failed deploy phase for node "l2leaf2b": Error response from daemon: Address already in use
DEBU[0007] Worker 4 terminating...
DEBU[0008] Container started: "clab-avdirb-client3"
DEBU[0008] Worker 1 terminating...
DEBU[0008] Container started: "clab-avdirb-client1"
DEBU[0008] Worker 9 terminating...
DEBU[0008] Container started: "clab-avdirb-svc2b"
DEBU[0008] Worker 5 terminating...
DEBU[0008] Container started: "clab-avdirb-spine1"
DEBU[0008] Worker 3 terminating...
DEBU[0009] Container started: "clab-avdirb-spine2"
DEBU[0009] Worker 11 terminating...
DEBU[0009] Container started: "clab-avdirb-svc2a"
DEBU[0009] Worker 10 terminating...
DEBU[0009] Container started: "clab-avdirb-client2"
DEBU[0009] Worker 6 terminating...
DEBU[0009] Container started: "clab-avdirb-client4"
DEBU[0009] Worker 7 terminating...
DEBU[0009] Container started: "clab-avdirb-leaf1a"
DEBU[0009] Worker 8 terminating...
DEBU[0009] Container started: "clab-avdirb-leaf1b"
DEBU[0009] Worker 2 terminating...
DEBU[0009] Link worker 8 received link: link [leaf1a:eth6, client2:eth1]
DEBU[0009] Link worker 6 received link: link [svc2a:eth3, svc2b:eth3]
INFO[0009] Creating virtual wire: svc2a:eth3 <--> svc2b:eth3
DEBU[0009] Link worker 2 received link: link [leaf1a:eth3, leaf1b:eth3]
INFO[0009] Creating virtual wire: leaf1a:eth3 <--> leaf1b:eth3
DEBU[0009] Link worker 11 received link: link [svc2a:eth2, spine2:eth3]
INFO[0009] Creating virtual wire: svc2a:eth2 <--> spine2:eth3
DEBU[0009] Link worker 3 received link: link [svc2b:eth2, spine2:eth4]
INFO[0009] Creating virtual wire: svc2b:eth2 <--> spine2:eth4
INFO[0009] Creating virtual wire: leaf1a:eth6 <--> client2:eth1
DEBU[0009] Link worker 7 received link: link [svc2b:eth1, spine1:eth4]
INFO[0009] Creating virtual wire: svc2b:eth1 <--> spine1:eth4
DEBU[0009] Link worker 5 received link: link [leaf1b:eth5, client1:eth2]
INFO[0009] Creating virtual wire: leaf1b:eth5 <--> client1:eth2
DEBU[0009] Link worker 1 received link: link [leaf1a:eth5, client1:eth1]
INFO[0009] Creating virtual wire: leaf1a:eth5 <--> client1:eth1
DEBU[0009] Link worker 9 received link: link [leaf1b:eth2, spine2:eth2]
INFO[0009] Creating virtual wire: leaf1b:eth2 <--> spine2:eth2
DEBU[0009] Link worker 10 received link: link [svc2a:eth1, spine1:eth3]
INFO[0009] Creating virtual wire: svc2a:eth1 <--> spine1:eth3
DEBU[0009] Link worker 4 received link: link [leaf1a:eth4, leaf1b:eth4]
INFO[0009] Creating virtual wire: leaf1a:eth4 <--> leaf1b:eth4
DEBU[0009] Link worker 0 received link: link [leaf1b:eth1, spine1:eth2]
INFO[0009] Creating virtual wire: leaf1b:eth1 <--> spine1:eth2
DEBU[0009] Link worker 6 received link: link [leaf1a:eth1, spine1:eth1]
INFO[0009] Creating virtual wire: leaf1a:eth1 <--> spine1:eth1
DEBU[0010] Link worker 11 received link: link [leaf1b:eth6, client2:eth2]
INFO[0010] Creating virtual wire: leaf1b:eth6 <--> client2:eth2
DEBU[0010] Link worker 7 received link: link [svc2a:eth4, svc2b:eth4]
INFO[0010] Creating virtual wire: svc2a:eth4 <--> svc2b:eth4
DEBU[0010] Link worker 0 received link: link [leaf1a:eth2, spine2:eth1]
INFO[0010] Creating virtual wire: leaf1a:eth2 <--> spine2:eth1

Any ideas what I'm doing wrong?

containerlab itself works fine with another repo I tested:
https://github.com/arista-netdevops-community/emea-ambassadors-containerlab-aug-2022.git

cheers!
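The "Address already in use" errors on l2leaf2a/l2leaf2b usually point at stale state from a previous (interrupted) run rather than CPU/memory pressure. A cleanup sketch, assuming the lab name `avdirb` from the log above and a topology file named `topology.yml` (the real filename may differ):

```shell
# Tear down any leftover lab state before redeploying.
# The topology filename is an assumption -- substitute your own.
sudo containerlab destroy -t topology.yml --cleanup

# Remove any orphaned containers from the avdirb lab by name prefix.
# `xargs -r` skips the rm entirely when the filter matches nothing.
docker ps -a --filter "name=clab-avdirb" -q | xargs -r docker rm -f

# Redeploy from a clean slate.
sudo containerlab deploy -t topology.yml --reconfigure
```

If the error persists after a clean destroy, checking whether another lab or container already holds the same management IP range would be the next step.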

client configuration is failing

Client configuration does not work. It looks like the interface teaming in the Alpine container entrypoint script is failing.

Messages after running the l3_build.sh script:

[INFO] Configuring clab-avdirb-client1
client1
vconfig: ioctl error for add: No such device
ifconfig: SIOCSIFADDR: No such device
ip: ioctl 0x8913 failed: No such device
ip: can't find device 'team0.110'
ifconfig: team0.110: error fetching interface information: Device not found
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.100.100.1 0.0.0.0 UG 0 0 0 eth0
172.100.100.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
[INFO] Configuring clab-avdirb-client2
client2
vconfig: ioctl error for add: No such device
ifconfig: SIOCSIFADDR: No such device
ip: ioctl 0x8913 failed: No such device
ip: can't find device 'team0.111'
ifconfig: team0.111: error fetching interface information: Device not found
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.100.100.1 0.0.0.0 UG 0 0 0 eth0
172.100.100.0 0.0.0.0 255.255.255.0 U
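The `vconfig`/`ip` errors above all say the same thing: `team0` was never created, so every VLAN subinterface step after it fails. A minimal sketch of a guard that could go in the entrypoint, assuming the interface name and VLAN ID from the log (the IP address is purely illustrative):

```shell
# Only attempt the VLAN subinterface if the team/bond device actually exists.
# team0 and VLAN 110 come from the log; the address below is a made-up example.
if ip link show team0 >/dev/null 2>&1; then
    # modern iproute2 replacement for the failing vconfig call
    ip link add link team0 name team0.110 type vlan id 110
    ip addr add 10.1.110.10/24 dev team0.110
    ip link set team0.110 up
else
    echo "team0 missing: check that the member interfaces (eth1/eth2) reached the container before teaming" >&2
fi
```

Running `docker exec clab-avdirb-client1 ip link` right after deploy would show whether the extra data-plane interfaces were ever attached to the container at all.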

it can't build the topology yaml file.

Hi,

I am trying to use docker-topo to create a topology for Arista, but it couldn't create the topology file.
I installed docker-topo without a virtualenv and got no errors during the installation.

root@debian:/lab# docker --version
Docker version 20.10.13, build a224086

root@debian:/lab# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye

docker-topo --create base_lab.yml
Traceback (most recent call last):
  File "/usr/local/bin/docker-topo", line 842, in <module>
    main()
  File "/usr/local/bin/docker-topo", line 692, in main
    with open(t_file, 'r') as stream:
FileNotFoundError: [Errno 2] No such file or directory: '/lab/base_lab.yml'

Best Regards,
Keyvan
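The traceback is just `open()` failing: docker-topo resolves the topology file relative to the current working directory, and `/lab/base_lab.yml` does not exist. A quick check, using the path from the traceback (the lab directory below is a placeholder):

```shell
# Confirm the file really is where docker-topo is looking.
ls -l /lab/base_lab.yml 2>/dev/null || echo "base_lab.yml is not in /lab"

# Run docker-topo from the directory that actually holds the file
# (LAB_DIR is a placeholder for wherever base_lab.yml was saved):
LAB_DIR=/lab
cd "$LAB_DIR" && docker-topo --create base_lab.yml
```

Passing an absolute path to the yml would work equally well; the error has nothing to do with Docker or the OS version.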

task deploy_eapi failed

Hello Arista,

I'm experimenting with cEOS and AVD. During the deploy task I got the following errors:

TASK [arista.avd.eos_config_deploy_eapi : replace configuration with intended configuration] *******************************************
fatal: [DC1_SPINE2]: FAILED! => changed=false
module_stderr: 'Could not connect to https://172.100.100.3:443/command-api: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:997)'
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
fatal: [DC1_SPINE1]: FAILED! => changed=false
module_stderr: 'Could not connect to https://172.100.100.2:443/command-api: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:997)'
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
fatal: [DC1_LEAF1A]: FAILED! => changed=false
module_stderr: 'Could not connect to https://172.100.100.4:443/command-api: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:997)'
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
fatal: [DC1_LEAF1B]: FAILED! => changed=false
module_stderr: 'Could not connect to https://172.100.100.5:443/command-api: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:997)'
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
fatal: [DC1_LEAF2A]: FAILED! => changed=false
module_stderr: 'Could not connect to https://172.100.100.6:443/command-api: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:997)'
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
fatal: [DC1_LEAF2B]: FAILED! => changed=false
module_stderr: 'Could not connect to https://172.100.100.7:443/command-api: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:997)'
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error

PLAY RECAP *****************************************************************************************************************************
DC1_LEAF1A : ok=4 changed=2 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
DC1_LEAF1B : ok=4 changed=2 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
DC1_LEAF2A : ok=4 changed=2 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
DC1_LEAF2B : ok=4 changed=2 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
DC1_SPINE1 : ok=26 changed=7 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
DC1_SPINE2 : ok=4 changed=2 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0

The basic cEOS configuration is as follows:

management api http-commands
protocol https
no shutdown
!
vrf MGMT
no shutdown

Trying to connect to the eAPI URL directly returns code 405:

$ wget https://172.100.100.2/command-api
--2022-10-16 15:43:44-- https://172.100.100.2/command-api
Connecting to 172.100.100.2:443... connected.
ERROR: cannot verify 172.100.100.2's certificate, issued by ‘CN=self.signed’:
Self-signed certificate encountered.
ERROR: certificate common name ‘self.signed’ doesn't match requested host name ‘172.100.100.2’.
To connect to 172.100.100.2 insecurely, use `--no-check-certificate'.

$ wget https://172.100.100.2/command-api --no-check-certificate
--2022-10-16 15:44:02-- https://172.100.100.2/command-api
Connecting to 172.100.100.2:443... connected.
WARNING: cannot verify 172.100.100.2's certificate, issued by ‘CN=self.signed’:
Self-signed certificate encountered.
WARNING: certificate common name ‘self.signed’ doesn't match requested host name ‘172.100.100.2’.
HTTP request sent, awaiting response... 405 Not Allowed
2022-10-16 15:44:02 ERROR 405: Not Allowed.
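A note on the 405: eAPI only answers JSON-RPC over HTTP POST, so a plain GET from wget returning "405 Not Allowed" actually indicates the endpoint is up. A POST probe sketch (the `admin`/`admin` credentials are an assumption; `-k` skips the self-signed certificate check, matching `--no-check-certificate` above):

```shell
# JSON-RPC "show version" probe against eAPI; adjust credentials to your lab.
curl -sk -u admin:admin https://172.100.100.2/command-api \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"runCmds","params":{"version":1,"cmds":["show version"],"format":"json"},"id":"1"}'
```

If that probe also fails at the TLS layer, the `SSLV3_ALERT_HANDSHAKE_FAILURE` from Ansible would point at a TLS negotiation mismatch between the control host's OpenSSL and the switch's default SSL profile, rather than at the eAPI configuration itself.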

Any suggestions?

Regards
