netenglabs / suzieq
Using network observability to operate and design healthier networks
Home Page: https://www.stardustsystems.net/suzieq/
License: Apache License 2.0
I don't understand the time filtering. I don't know what I'm supposed to put in:
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 655, in parse
ret = self._build_naive(res, default)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1241, in _build_naive
naive = default.replace(**repl)
ValueError: year 1570006401 is out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "pandas/_libs/tslib.pyx", line 610, in pandas._libs.tslib.array_to_datetime
File "pandas/_libs/tslibs/parsing.pyx", line 225, in pandas._libs.tslibs.parsing.parse_datetime_string
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1374, in parse
return DEFAULTPARSER.parse(timestr, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 657, in parse
six.raise_from(ParserError(e.args[0] + ": %s", timestr), e)
File "<string>", line 3, in raise_from
dateutil.parser._parser.ParserError: year 1570006401 is out of range: 1570006401
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "pandas/_libs/tslib.pyx", line 617, in pandas._libs.tslib.array_to_datetime
TypeError: invalid string coercion to datetime
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 655, in parse
ret = self._build_naive(res, default)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1241, in _build_naive
naive = default.replace(**repl)
ValueError: year 1570006401 is out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/home/jpiet/suzieq/suzieq/cli/sqcmds/topcpuCmd.py", line 54, in show
hostname=self.hostname, columns=self.columns, datacenter=self.datacenter
File "/home/jpiet/suzieq/suzieq/sqobjects/basicobj.py", line 125, in get
return self.engine_obj.get(**kwargs)
File "/home/jpiet/suzieq/suzieq/engines/pandas/engineobj.py", line 201, in get
df = self.get_valid_df(self.iobj._table, sort_fields, **kwargs)
File "/home/jpiet/suzieq/suzieq/engines/pandas/engineobj.py", line 126, in get_valid_df
**kwargs
File "/home/jpiet/suzieq/suzieq/engines/pandas/engine.py", line 74, in get_table_df
files = get_latest_files(folder, start, end)
File "/home/jpiet/suzieq/suzieq/utils.py", line 131, in get_latest_files
ssecs = pd.to_datetime(start, infer_datetime_format=True).timestamp() * 1000
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/util/_decorators.py", line 208, in wrapper
return func(*args, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/tools/datetimes.py", line 796, in to_datetime
result = convert_listlike(np.array([arg]), box, format)[0]
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/tools/datetimes.py", line 463, in _convert_listlike_datetimes
allow_object=True,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 1984, in objects_to_datetime64ns
raise e
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 1975, in objects_to_datetime64ns
require_iso8601=require_iso8601,
File "pandas/_libs/tslib.pyx", line 465, in pandas._libs.tslib.array_to_datetime
File "pandas/_libs/tslib.pyx", line 688, in pandas._libs.tslib.array_to_datetime
File "pandas/_libs/tslib.pyx", line 822, in pandas._libs.tslib.array_to_datetime_object
File "pandas/_libs/tslib.pyx", line 813, in pandas._libs.tslib.array_to_datetime_object
File "pandas/_libs/tslibs/parsing.pyx", line 225, in pandas._libs.tslibs.parsing.parse_datetime_string
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1374, in parse
return DEFAULTPARSER.parse(timestr, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 657, in parse
six.raise_from(ParserError(e.args[0] + ": %s", timestr), e)
File "<string>", line 3, in raise_from
dateutil.parser._parser.ParserError: year 1570006401 is out of range: 1570006401
start_time doesn't do anything.
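The root of the traceback is that a Unix epoch value (1570006401 is 2019-10-02 08:53:21 UTC) is handed to `pd.to_datetime` as a free-form string, so dateutil tries to read it as a calendar year. A minimal sketch of the distinction, not Suzieq's actual fix:

```python
import pandas as pd

epoch = 1570006401  # Unix seconds, i.e. 2019-10-02 08:53:21 UTC

# Telling pandas the value is in epoch seconds parses cleanly:
ts = pd.to_datetime(epoch, unit="s")
print(ts)  # 2019-10-02 08:53:21

# Passing the same value as a string is what the traceback shows:
# pd.to_datetime(str(epoch)) hands "1570006401" to dateutil, which tries
# to read it as a date and fails with "year 1570006401 is out of range".
```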
(suzieq) jpiet@t14:/tmp/pycharm_project_1000/suzieq$ python3 /tmp/pycharm_project_1000/suzieq/suzieq/cli/suzieq-cli routes show --format=json
Logging to /tmp/suzieq-or6_d9ro
Error running command: Unsupported UTF-8 sequence length when encoding string
------------------------------------------------------------
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 448, in run_cli
return fn(**kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/RoutesCmd.py", line 68, in show
return self._gen_output(df)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/command.py", line 108, in _gen_output
print(df.to_json(orient="records"))
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/generic.py", line 2424, in to_json
index=index,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/io/json/_json.py", line 78, in to_json
index=index,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/io/json/_json.py", line 135, in write
self.default_handler,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/io/json/_json.py", line 234, in _write
default_handler,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/io/json/_json.py", line 155, in _write
default_handler=default_handler,
OverflowError: Unsupported UTF-8 sequence length when encoding string
------------------------------------------------------------
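The cause of the OverflowError isn't obvious from the trace; it comes from pandas' C JSON serializer rejecting something in the frame. One defensive workaround is to fall back to the slower but more tolerant stdlib encoder. This is a hypothetical helper sketch, not Suzieq's fix:

```python
import json

import pandas as pd


def df_to_json_records(df: pd.DataFrame) -> str:
    """Serialize a DataFrame to JSON records, falling back to the stdlib
    encoder if pandas' C serializer rejects the data."""
    try:
        return df.to_json(orient="records")
    except (OverflowError, ValueError):
        # default=str stringifies anything json can't encode natively
        # (timestamps, numpy scalars, ...).
        return json.dumps(df.to_dict(orient="records"), default=str)


df = pd.DataFrame({"prefix": ["0.0.0.0/0"], "nexthopIps": [["169.254.0.1"]]})
print(df_to_json_records(df))
```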
jpiet> evpnVni show
Traceback (most recent call last):
File "suzieq-cli", line 18, in <module>
sys.exit(shell.run())
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/nubia.py", line 311, in run
return self.start_interactive(args)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/nubia.py", line 211, in start_interactive
io_loop.run()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/interactive.py", line 145, in run
self.parse_and_evaluate(text)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/interactive.py", line 93, in parse_and_evaluate
return self.evaluate_command(cmd, args, input)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/interactive.py", line 124, in evaluate_command
result = cmd_instance.run_interactive(cmd, args, raw)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 227, in run_interactive
instance, remaining_args = self._create_subcommand_obj(args_dict)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 196, in _create_subcommand_obj
return self._fn(**kwargs), remaining
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/EvpnVniCmd.py", line 40, in __init__
self.evpnVniobj = evpnVniObj(context=self.ctxt)
File "/home/jpiet/suzieq/suzieq/sqobjects/evpnVni.py", line 23, in __init__
datacenter, columns, context=context, table='evpnVni')
File "/home/jpiet/suzieq/suzieq/sqobjects/basicobj.py", line 87, in __init__
self.engine_obj = self.engine.get_object(self._table, self)
File "/home/jpiet/suzieq/suzieq/engines/pandas/engine.py", line 191, in get_object
eobj = getattr(module, "{}Obj".format(objname.title()))
AttributeError: module 'suzieq.engines.pandas.evpnVni' has no attribute 'EvpnvniObj'
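The failure is in how `get_object` builds the class name: `str.title()` capitalizes the first letter of each word but lowercases everything after it, so camelCase table names lose their internal capitals. A sketch of the mangling and one possible fix (the fix is an assumption, not necessarily what Suzieq adopted):

```python
objname = "evpnVni"

# str.title() lowercases the rest of each word, losing the camelCase:
print("{}Obj".format(objname.title()))  # EvpnvniObj -- no such class

# Capitalizing only the first character preserves the internal capitals:
fixed = "{}Obj".format(objname[0].upper() + objname[1:])
print(fixed)  # EvpnVniObj
```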
The data I'm using is in tests/data/basic_dual_bgp, and I'm looking at the arpnd table.
The data saved in the CSV was produced with view='', and it has 75 lines. When I run the query with view='latest', I only get 74 lines. The duplicate rows that get merged are:
(Pdb) df_two[54:56]
datacenter hostname ipAddress oif macaddr state offload timestamp
54 dual-bgp server102 172.16.2.1 bond0 52:54:00:9e:4c:92 stale False 2020-01-24 22:28:45.696
55 dual-bgp server102 172.16.2.1 bond0 52:54:00:c5:40:6d stale False 2020-01-24 22:28:45.696
The only difference is in the macaddr, but with view='latest' duplicates are dropped by key_fields, and since macaddr is not a key field one of these rows is dropped even though both have the same timestamp.
Is this what we want?
this is true for all commands
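The behavior described above can be reproduced directly in pandas: with `macaddr` outside the key fields, `drop_duplicates` keeps only one of the two rows. The column names come from the output above; the exact key-field list is an assumption for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "datacenter": ["dual-bgp"] * 2,
    "hostname": ["server102"] * 2,
    "ipAddress": ["172.16.2.1"] * 2,
    "oif": ["bond0"] * 2,
    "macaddr": ["52:54:00:9e:4c:92", "52:54:00:c5:40:6d"],
    "timestamp": ["2020-01-24 22:28:45.696"] * 2,
})

# Assumed key fields for the arpnd table -- macaddr is not among them:
key_fields = ["datacenter", "hostname", "ipAddress", "oif"]
latest = df.drop_duplicates(subset=key_fields, keep="last")
print(len(df), len(latest))  # 2 1
```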
[40 rows x 10 columns]
jpiet> system show view=latest datacenter=single
datacenter hostname ... uptime timestamp
71 single edge01 ... 0 days 00:54:33.048000 2020-01-28 21:43:30.048
164 single exit01 ... 0 days 00:09:50.048000 2020-01-28 21:43:30.048
255 single exit02 ... 0 days 00:09:49.048000 2020-01-28 21:43:30.048
360 single internet ... 0 days 00:54:30.048000 2020-01-28 21:43:30.048
458 single leaf01 ... 0 days 00:09:50.048000 2020-01-28 21:43:30.048
563 single leaf02 ... 0 days 00:09:48.048000 2020-01-28 21:43:30.048
664 single leaf03 ... 0 days 00:09:49.048000 2020-01-28 21:43:30.048
763 single leaf04 ... 0 days 00:09:51.048000 2020-01-28 21:43:30.048
826 single server101 ... 0 days 00:05:38.976000 2020-01-28 21:41:18.976
892 single server102 ... 0 days 00:00:44.048000 2020-01-28 21:43:30.048
956 single server103 ... 0 days 00:00:44.048000 2020-01-28 21:43:30.048
1019 single server104 ... 0 days 00:00:44.048000 2020-01-28 21:43:30.048
1116 single spine01 ... 0 days 00:09:46.048000 2020-01-28 21:43:30.048
1219 single spine02 ... 0 days 00:09:48.048000 2020-01-28 21:43:30.048
1 single 192.168.121.105 ... 18289 days 20:53:15.392000 2020-01-28 20:53:15.392
4 single 192.168.121.118 ... 18289 days 21:10:43.968000 2020-01-28 21:10:43.968
5 single 192.168.121.123 ... 18289 days 20:05:11.808000 2020-01-28 20:05:11.808
7 single 192.168.121.159 ... 18289 days 20:53:15.392000 2020-01-28 20:53:15.392
8 single 192.168.121.172 ... 18289 days 20:07:22.880000 2020-01-28 20:07:22.880
9 single 192.168.121.198 ... 18289 days 20:53:15.392000 2020-01-28 20:53:15.392
10 single 192.168.121.55 ... 18289 days 20:07:22.880000 2020-01-28 20:07:22.880
[21 rows x 10 columns]
jpiet> system show view=all datacenter=single
datacenter hostname ... uptime timestamp
71 single edge01 ... 0 days 00:54:33.048000 2020-01-28 21:43:30.048
164 single exit01 ... 0 days 00:09:50.048000 2020-01-28 21:43:30.048
255 single exit02 ... 0 days 00:09:49.048000 2020-01-28 21:43:30.048
360 single internet ... 0 days 00:54:30.048000 2020-01-28 21:43:30.048
458 single leaf01 ... 0 days 00:09:50.048000 2020-01-28 21:43:30.048
563 single leaf02 ... 0 days 00:09:48.048000 2020-01-28 21:43:30.048
664 single leaf03 ... 0 days 00:09:49.048000 2020-01-28 21:43:30.048
763 single leaf04 ... 0 days 00:09:51.048000 2020-01-28 21:43:30.048
826 single server101 ... 0 days 00:05:38.976000 2020-01-28 21:41:18.976
892 single server102 ... 0 days 00:00:44.048000 2020-01-28 21:43:30.048
956 single server103 ... 0 days 00:00:44.048000 2020-01-28 21:43:30.048
1019 single server104 ... 0 days 00:00:44.048000 2020-01-28 21:43:30.048
1116 single spine01 ... 0 days 00:09:46.048000 2020-01-28 21:43:30.048
1219 single spine02 ... 0 days 00:09:48.048000 2020-01-28 21:43:30.048
1 single 192.168.121.105 ... 18289 days 20:53:15.392000 2020-01-28 20:53:15.392
4 single 192.168.121.118 ... 18289 days 21:10:43.968000 2020-01-28 21:10:43.968
5 single 192.168.121.123 ... 18289 days 20:05:11.808000 2020-01-28 20:05:11.808
7 single 192.168.121.159 ... 18289 days 20:53:15.392000 2020-01-28 20:53:15.392
8 single 192.168.121.172 ... 18289 days 20:07:22.880000 2020-01-28 20:07:22.880
9 single 192.168.121.198 ... 18289 days 20:53:15.392000 2020-01-28 20:53:15.392
10 single 192.168.121.55 ... 18289 days 20:07:22.880000 2020-01-28 20:07:22.880
[21 rows x 10 columns]
jpiet>
it didn't get tested with all the other commands
we should remove the option to filter by engine
jpiet> system show engine=foop
datacenter hostname model version vendor architecture status address uptime timestamp
5 dual 192.168.121.113 dead 18289 days 20:31:24.672000 2020-01-28 20:31:24.672
6 dual 192.168.121.119 dead 18289 days 21:08:32.896000 2020-01-28 21:08:32.896
10 dual 192.168.121.146 dead 18289 days 20:31:24.672000 2020-01-28 20:31:24.672
13 dual 192.168.121.171 dead 18289 days 21:08:32.896000 2020-01-28 21:08:32.896
15 dual 192.168.121.175 dead 18289 days 21:26:01.472000 2020-01-28 21:26:01.472
17 dual 192.168.121.199 dead 18289 days 21:26:01.472000 2020-01-28 21:26:01.472
20 dual 192.168.121.250 dead
jpiet> system show columns=[hostname, uptime]
Error: 1
Traceback (most recent call last):
File "suzieq-cli", line 18, in <module>
sys.exit(shell.run())
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/nubia.py", line 311, in run
return self.start_interactive(args)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/nubia.py", line 211, in start_interactive
io_loop.run()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/interactive.py", line 145, in run
self.parse_and_evaluate(text)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/interactive.py", line 93, in parse_and_evaluate
return self.evaluate_command(cmd, args, input)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/interactive.py", line 124, in evaluate_command
result = cmd_instance.run_interactive(cmd, args, raw)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 227, in run_interactive
instance, remaining_args = self._create_subcommand_obj(args_dict)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 196, in _create_subcommand_obj
return self._fn(**kwargs), remaining
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/systemCmd.py", line 40, in __init__
columns=columns,
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/command.py", line 71, in __init__
self.columns = columns.split()
AttributeError: 'list' object has no attribute 'split'
(suzieq) jpiet@t12:/tmp/pycharm_project_1000/suzieq/suzieq/cli$
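The crash comes from `columns.split()` assuming a string: when the CLI has already parsed `columns=[hostname, uptime]` into a list, there is no `.split()` to call. A defensive sketch (a hypothetical helper, not the actual fix):

```python
def normalize_columns(columns):
    """Accept either a space-separated string or an already-parsed list."""
    if isinstance(columns, str):
        return columns.split()
    return list(columns)


print(normalize_columns("hostname uptime"))       # ['hostname', 'uptime']
print(normalize_columns(["hostname", "uptime"]))  # ['hostname', 'uptime']
```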
jpiet> tables unique columns=all_rows
Empty DataFrame
Columns: []
Index: []
jpiet> tables unique columns=name
Empty DataFrame
Columns: []
Index: []
jpiet>
There is no difference in any of the commands I've run.
Right now, from the command line, if you try to use an engine other than pandas it complains because it's not a valid argument; but in the interactive CLI, adding engine=foo just uses pandas and returns the pandas data. Our other filters return zero rows if you filter by an invalid value.
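One way to make the interactive CLI behave like the other filters is to validate the engine name up front instead of silently falling back to pandas. A hypothetical sketch; `VALID_ENGINES` and `select_engine` are assumed names, not Suzieq's API:

```python
VALID_ENGINES = {"pandas"}  # assumed set of supported engines


def select_engine(name: str) -> str:
    """Reject unknown engine names instead of silently using pandas."""
    if name not in VALID_ENGINES:
        raise ValueError(
            f"unknown engine: {name!r}; valid engines: {sorted(VALID_ENGINES)}"
        )
    return name


print(select_engine("pandas"))  # pandas
```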
We want to serve only valid data, so we don't want to show data that was polled before the last polled system time.
(suzieq) jpiet@t14:/tmp/pycharm_project_1000/suzieq$ python3 /tmp/pycharm_project_1000/suzieq/suzieq/cli/suzieq-cli tables show
Logging to /tmp/suzieq-4oyoeyqw
table first_time latest_time intervals latest rows all rows datacenters devices
0 arpnd 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280 9 74 286 1 14
1 bgp 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280 6 32 318 1 9
2 fs 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280 9 229 1504 1 14
3 ifCounters 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280 9 138 5758 1 14
4 interfaces 2020-03-09 16:54:20.416 2020-03-09 17:18:22.208 7 138 496 1 14
5 lldp 2020-03-09 16:54:20.416 2020-03-09 17:18:22.208 8 44 142 1 10
6 macs 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280 8 39 141 1 5
7 mlag 2020-03-09 16:54:20.416 2020-03-09 17:07:26.848 3 4 10 1 4
8 routes 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280 8 24 1376 1 14
9 system 2020-03-09 16:54:20.416 2020-03-09 17:07:26.848 2 14 28 1 14
10 time 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280 9 14 69 1 14
11 topcpu 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280 9 14 2034 1 14
12 topmem 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280 9 9 1477 1 9
13 vlan 2020-03-09 16:54:20.416 2020-03-09 17:07:26.848 3 16 56 1 4
14 TOTAL 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280 9 789 13695 1 14
Look at topmem: it says there are 9 latest rows,
but the topmem show command says 6:
(suzieq) jpiet@t14:/tmp/pycharm_project_1000/suzieq$ python3 /tmp/pycharm_project_1000/suzieq/suzieq/cli/suzieq-cli topmem show
Logging to /tmp/suzieq-luzfqijc
datacenter hostname timestamp
1 dual-bgp exit02 2020-03-09 17:14:00.064
2 dual-bgp internet 2020-03-09 17:16:11.136
4 dual-bgp leaf02 2020-03-09 17:14:00.064
5 dual-bgp leaf03 2020-03-09 17:09:37.920
6 dual-bgp leaf04 2020-03-09 17:11:48.992
7 dual-bgp spine01 2020-03-09 17:09:37.920
The problem is that in tables show I'm calling get_table_df directly, not get_valid_df, so it never does the merge with the system table.
Here is the system table:
(suzieq) jpiet@t14:/tmp/pycharm_project_1000/suzieq$ python3 /tmp/pycharm_project_1000/suzieq/suzieq/cli/suzieq-cli system show
Logging to /tmp/suzieq-uyzotg20
datacenter hostname model version vendor architecture status address uptime timestamp
1 dual-bgp edge01 vm 16.04.6 LTS Ubuntu x86-64 alive 192.168.121.165 00:17:30.848000 2020-03-09 17:07:26.848
3 dual-bgp exit01 vm 3.7.12 Cumulus x86-64 alive 192.168.121.108 00:17:25.848000 2020-03-09 17:07:26.848
5 dual-bgp exit02 vm 3.7.12 Cumulus x86-64 alive 192.168.121.236 00:17:28.848000 2020-03-09 17:07:26.848
7 dual-bgp internet vm 3.7.12 Cumulus x86-64 alive 192.168.121.237 00:17:30.848000 2020-03-09 17:07:26.848
9 dual-bgp leaf01 vm 3.7.12 Cumulus x86-64 alive 192.168.121.86 00:17:28.848000 2020-03-09 17:07:26.848
11 dual-bgp leaf02 vm 3.7.12 Cumulus x86-64 alive 192.168.121.22 00:17:25.848000 2020-03-09 17:07:26.848
13 dual-bgp leaf03 vm 3.7.12 Cumulus x86-64 alive 192.168.121.132 00:17:29.848000 2020-03-09 17:07:26.848
15 dual-bgp leaf04 vm 3.7.12 Cumulus x86-64 alive 192.168.121.53 00:17:31.848000 2020-03-09 17:07:26.848
17 dual-bgp server101 vm 16.04.6 LTS Ubuntu x86-64 alive 192.168.121.235 00:22:13.848000 2020-03-09 17:07:26.848
19 dual-bgp server102 vm 16.04.6 LTS Ubuntu x86-64 alive 192.168.121.239 00:22:54.848000 2020-03-09 17:07:26.848
21 dual-bgp server103 vm 16.04.6 LTS Ubuntu x86-64 alive 192.168.121.83 00:22:23.848000 2020-03-09 17:07:26.848
23 dual-bgp server104 vm 16.04.6 LTS Ubuntu x86-64 alive 192.168.121.54 00:22:14.848000 2020-03-09 17:07:26.848
25 dual-bgp spine01 vm 3.7.12 Cumulus x86-64 alive 192.168.121.242 00:17:30.848000 2020-03-09 17:07:26.848
27 dual-bgp spine02 vm 3.7.12 Cumulus x86-64 alive 192.168.121.105 00:17:29.848000 2020-03-09 17:07:26.848
So there are three entries in latest topmem that aren't shown because their timestamp is older than the last time the system table was polled; the filter is this line in get_valid_df:
.query("timestamp_x >= timestamp_y")
if I comment out that one line, then I get
Logging to /tmp/suzieq-gtrmtk3f
datacenter hostname timestamp
0 dual-bgp exit01 2020-03-09 16:54:20.416
1 dual-bgp exit02 2020-03-09 17:14:00.064
2 dual-bgp internet 2020-03-09 17:16:11.136
3 dual-bgp leaf01 2020-03-09 16:54:20.416
4 dual-bgp leaf02 2020-03-09 17:14:00.064
5 dual-bgp leaf03 2020-03-09 17:09:37.920
6 dual-bgp leaf04 2020-03-09 17:11:48.992
7 dual-bgp spine01 2020-03-09 17:09:37.920
8 dual-bgp spine02 2020-03-09 16:54:20.416
In other words, look at exit01: it didn't show up in the first topmem show I sent, because exit01 has a timestamp of 16:54 in topmem but a timestamp of 17:07 in system.
I looked more, and the reason the system table has an updated entry is that I restarted the poller, not that the data changed. I don't know why topmem wasn't updated after I restarted the poller.
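The filtering described above can be shown in isolation: after merging a table with the system table on `["datacenter", "hostname"]`, pandas suffixes the clashing `timestamp` columns with `_x`/`_y`, and `.query("timestamp_x >= timestamp_y")` drops every record older than the host's last system poll. A sketch using the timestamps from the output above:

```python
import pandas as pd

topmem = pd.DataFrame({
    "datacenter": ["dual-bgp"] * 2,
    "hostname": ["exit01", "exit02"],
    "timestamp": pd.to_datetime(["2020-03-09 16:54:20.416",
                                 "2020-03-09 17:14:00.064"]),
})
system = pd.DataFrame({
    "datacenter": ["dual-bgp"] * 2,
    "hostname": ["exit01", "exit02"],
    "timestamp": pd.to_datetime(["2020-03-09 17:07:26.848"] * 2),
})

# The merge produces timestamp_x (topmem) and timestamp_y (system):
merged = topmem.merge(system, on=["datacenter", "hostname"])
valid = merged.query("timestamp_x >= timestamp_y")
print(valid["hostname"].tolist())  # ['exit02'] -- exit01 is filtered out
```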
If there is no mlag data it's fine, but when there is data it fails:
jpiet> mlag show
datacenter hostname systemId ... mlagSinglePortsCnt mlagErrorPortsCnt timestamp
0 dual-bgp leaf01 44:39:39:ff:40:94 ... 0 0 2020-01-24 22:28:45.696
1 dual-bgp leaf02 44:39:39:ff:40:94 ... 0 0 2020-01-24 22:28:45.696
2 dual-bgp leaf03 44:39:39:ff:40:95 ... 0 0 2020-01-24 22:28:45.696
3 dual-bgp leaf04 44:39:39:ff:40:95 ... 0 0 2020-01-24 22:28:45.696
[4 rows x 11 columns]
jpiet> mlag show columns=hostname
Error running command: name 'state' is not defined
------------------------------------------------------------
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/scope.py", line 188, in resolve
return self.resolvers[key]
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/collections/__init__.py", line 916, in __getitem__
return self.__missing__(key)  # support subclasses that define __missing__
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/collections/__init__.py", line 908, in __missing__
raise KeyError(key)
KeyError: 'state'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/scope.py", line 199, in resolve
return self.temps[key]
KeyError: 'state'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/MlagCmd.py", line 51, in show
print(df.query('state != "disabled"'))
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 3199, in query
res = self.eval(expr, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 3315, in eval
return _eval(expr, inplace=inplace, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/eval.py", line 322, in eval
parsed_expr = Expr(expr, engine=engine, parser=parser, env=env, truediv=truediv)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 830, in __init__
self.terms = self.parse()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 847, in parse
return self._visitor.visit(self.expr)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 447, in visit_Module
return self.visit(expr, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 450, in visit_Expr
return self.visit(node.value, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 747, in visit_Compare
return self.visit(binop)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 563, in visit_BinOp
op, op_class, left, right = self._maybe_transform_eq_ne(node)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 482, in _maybe_transform_eq_ne
left = self.visit(node.left, side="left")
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 577, in visit_Name
return self.term_type(node.id, self.env, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/ops.py", line 78, in __init__
self._value = self._resolve_name()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/ops.py", line 95, in _resolve_name
res = self.env.resolve(self.local_name, is_local=self.is_local)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/scope.py", line 201, in resolve
raise compu.ops.UndefinedVariableError(key, is_local)
pandas.core.computation.ops.UndefinedVariableError: name 'state' is not defined
------------------------------------------------------------
jpiet>
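The failure mode above is that `MlagCmd.show` runs `df.query('state != "disabled"')` unconditionally, but with `columns=hostname` the returned frame has no `state` column, so `DataFrame.query` raises `UndefinedVariableError`. A guard sketch (a hypothetical helper, not the actual fix):

```python
import pandas as pd


def filter_enabled(df: pd.DataFrame) -> pd.DataFrame:
    """Drop disabled entries, but only if the state column was requested."""
    if "state" in df.columns:
        return df.query('state != "disabled"')
    return df


df = pd.DataFrame({"hostname": ["leaf01", "leaf02"]})
print(len(filter_enabled(df)))  # 2 -- no state column, nothing filtered
```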
I don't know when this happens, but I've seen it several times
Suzieq Verbose OFF Datacenter Hostname StartTime EndTime Engine pandas Query Time 1.6024s
jpiet> system show datacenter=single
datacenter hostname model version vendor architecture status address uptime timestamp
45 single exit01 vm 3.7.11 Cumulus x86-64 alive 192.168.121.5 0 days 00:03:34.984000 2020-01-16 23:28:41.984
48 single exit02 vm 3.7.11 Cumulus x86-64 alive 192.168.121.209 0 days 00:03:32.984000 2020-01-16 23:28:41.984
52 single internet vm 3.7.11 Cumulus x86-64 alive 192.168.121.201 1 days 03:41:44.984000 2020-01-16 23:28:41.984
55 single leaf01 vm 3.7.11 Cumulus x86-64 alive 192.168.121.206 0 days 00:03:34.984000 2020-01-16 23:28:41.984
58 single leaf02 vm 3.7.11 Cumulus x86-64 alive 192.168.121.140 0 days 00:03:32.984000 2020-01-16 23:28:41.984
61 single leaf03 vm 3.7.11 Cumulus x86-64 alive 192.168.121.28 0 days 00:03:39.984000 2020-01-16 23:28:41.984
64 single leaf04 vm 3.7.11 Cumulus x86-64 alive 192.168.121.160 0 days 00:03:36.984000 2020-01-16 23:28:41.984
116 single spine01 vm 3.7.11 Cumulus x86-64 alive 192.168.121.7 0 days 00:03:36.984000 2020-01-16 23:28:41.984
119 single spine02 vm 3.7.11 Cumulus x86-64 alive 192.168.121.222 0 days 00:03:31.984000 2020-01-16 23:28:41.984
1 single 192.168.121.138 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
3 single 192.168.121.140 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
5 single 192.168.121.160 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
7 single 192.168.121.201 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
9 single 192.168.121.206 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
11 single 192.168.121.209 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
13 single 192.168.121.221 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
15 single 192.168.121.222 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
17 single 192.168.121.27 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
19 single 192.168.121.28 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
21 single 192.168.121.5 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
23 single 192.168.121.7 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
25 single 192.168.121.79 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
27 single 192.168.121.96 dead 18278 days 22:11:50.912000 2020-01-17 22:11:50.912
At least routes and ospf are affected; system works fine.
jpiet> route unique columns=timestamp view=all
Error running command: 'datacenter'
------------------------------------------------------------
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/command.py", line 148, in unique
df = self.show(**kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/RouteCmd.py", line 55, in show
datacenter=self.datacenter,
File "/tmp/pycharm_project_1000/suzieq/suzieq/sqobjects/basicobj.py", line 100, in get
return self.engine_obj.get(**kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/routes.py", line 10, in get
df = super().get(**kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 165, in get
df = self.get_valid_df(self.iobj._table, sort_fields, **kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 137, in get_valid_df
table_df.merge(sys_df, on=["datacenter", "hostname"])
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 7349, in merge
validate=validate,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 81, in merge
validate=validate,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 626, in __init__
) = self._get_merge_keys()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 988, in _get_merge_keys
left_keys.append(left._get_label_or_level_values(lk))
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/generic.py", line 1774, in _get_label_or_level_values
raise KeyError(key)
KeyError: 'datacenter'
Are we getting different data for tables show than for other commands?
Take routes as an example:
table first_time latest_time intervals latest rows all rows datacenters devices
0 arpnd 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064 84 74 2010 1 14
1 bgp 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064 66 32 3102 1 9
2 fs 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064 85 229 7459 1 14
3 ifCounters 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064 87 138 60895 1 14
4 interfaces 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064 59 138 5331 1 14
5 lldp 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064 54 44 1242 1 10
6 macs 2020-03-04 20:19:27.872 2020-03-04 23:25:08.992 50 39 636 1 5
7 mlag 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064 31 4 154 1 4
8 routes 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064 78 24 9062 1 14
9 system 2020-03-04 20:19:27.872 2020-03-04 23:25:08.992 25 14 225 1 14
10 time 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064 66 14 523 1 14
11 topcpu 2020-03-04 20:19:27.872 2020-03-04 23:25:08.992 85 14 20831 1 14
12 topmem 2020-03-04 20:19:27.872 2020-03-04 23:25:08.992 86 9 15373 1 9
13 vlan 2020-03-04 20:19:27.872 2020-03-04 23:25:08.992 16 16 220 1 4
14 TOTAL 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064 87 789 127063 1 14
jpiet> routes summarize
Error: 2
datacenter hostname vrf prefix nexthopIps oifs weights protocol source metric timestamp
count 229 229 229 229 229 229 229 229 229 229 229
unique 1 12 3 27 108 114 96 4 35 2 3
top dual-bgp exit02 default 0.0.0.0/0 [] [swp5.4] [1] bgp 20 2020-03-04 23:25:08.992000
freq 229 36 171 22 56 26 134 151 169 219 206
first - - - - - - - - - - 2020-03-04 23:22:57.920000
last - - - - - - - - - - 2020-03-04 23:27:20.064000
jpiet> routes summarize view=all
datacenter hostname vrf prefix nexthopIps oifs weights protocol source metric timestamp
count 56251 56251 56251 56251 56251 56251 56251 56251 56251 56251 56251
unique 1 14 3 27 4031 4035 4017 4 39 2 78
top dual-bgp edge01 default 0.0.0.0/0 [] [eth0] [1] bgp 20 2020-03-04 23:09:51.488000
freq 56251 17200 51709 6439 13441 8728 32465 20642 31479 55542 4120
first - - - - - - - - - - 2020-03-04 20:19:27.872000
last - - - - - - - - - - 2020-03-04 23:27:20.064000
Are some of these filtered by active and some not?
Also, does "latest" mean the same thing in each case?
I'm not sure how best to do this, but the last convergence time is in the "sh ip ospf" output on FRR/Quagga.
If there are multiple events inside our polling interval, we'll lose them.
The routes, macs, and tables command names need to be singularized.
Rather than showing all the columns, it acts as if it were given bad column names.
jpiet> address show columns=*
ipAddressList timestamp ifname hostname datacenter
0 [192.168.121.28/24] 2020-01-24 22:28:45.696 eth0 edge01 dual-bgp
1 [169.254.254.2/30] 2020-01-24 22:28:45.696 eth1.2 edge01 dual-bgp
2 [169.254.254.10/30] 2020-01-24 22:28:45.696 eth1.4 edge01 dual-bgp
4 [169.254.253.2/30] 2020-01-24 22:28:45.696 eth2.2 edge01 dual-bgp
5 [169.254.253.10/30] 2020-01-24 22:28:45.696 eth2.4 edge01 dual-bgp
7 [10.0.0.100/32] 2020-01-24 22:28:45.696 lo edge01 dual-bgp
8 [192.168.121.145/24] 2020-01-24 22:28:45.696 eth0 exit01 dual-bgp
9 [10.0.0.101/32] 2020-01-24 22:28:45.696 internet-vrf exit01 dual-bgp
10 [10.0.0.101/32] 2020-01-24 22:28:45.696 lo exit01 dual-bgp
16 [169.254.254.1/30] 2020-01-24 22:28:45.696 swp5.2 exit01 dual-bgp
17 [169.254.254.9/30] 2020-01-24 22:28:45.696 swp5.4 exit01 dual-bgp
19 [169.254.127.1/31] 2020-01-24 22:28:45.696 swp6 exit01 dual-bgp
20 [192.168.121.199/24] 2020-01-24 22:28:45.696 eth0 exit02 dual-bgp
21 [10.0.0.102/32] 2020-01-24 22:28:45.696 internet-vrf exit02 dual-bgp
22 [10.0.0.102/32] 2020-01-24 22:28:45.696 lo exit02 dual-bgp
28 [169.254.253.1/30] 2020-01-24 22:28:45.696 swp5.2 exit02 dual-bgp
29 [169.254.253.9/30] 2020-01-24 22:28:45.696 swp5.4 exit02 dual-bgp
31 [169.254.127.3/31] 2020-01-24 22:28:45.696 swp6 exit02 dual-bgp
32 [192.168.121.213/24] 2020-01-24 22:28:45.696 eth0 internet dual-bgp
33 [10.0.0.253/32, 172.16.253.1/32] 2020-01-24 22:28:45.696 lo internet dual-bgp
34 [169.254.127.0/31] 2020-01-24 22:28:45.696 swp1 internet dual-bgp
35 [169.254.127.2/31] 2020-01-24 22:28:45.696 swp2 internet dual-bgp
39 [192.168.121.247/24] 2020-01-24 22:28:45.696 eth0 leaf01 dual-bgp
40 [10.0.0.11/32] 2020-01-24 22:28:45.696 lo leaf01 dual-bgp
42 [169.254.1.1/30] 2020-01-24 22:28:45.696 peerlink.4094 leaf01 dual-bgp
50 [172.16.1.1/24] 2020-01-24 22:28:45.696 vlan13 leaf01 dual-bgp
51 [172.16.2.1/24] 2020-01-24 22:28:45.696 vlan24 leaf01 dual-bgp
55 [192.168.121.127/24] 2020-01-24 22:28:45.696 eth0 leaf02 dual-bgp
56 [10.0.0.12/32] 2020-01-24 22:28:45.696 lo leaf02 dual-bgp
58 [169.254.1.2/30] 2020-01-24 22:28:45.696 peerlink.4094 leaf02 dual-bgp
66 [172.16.1.1/24] 2020-01-24 22:28:45.696 vlan13 leaf02 dual-bgp
67 [172.16.2.1/24] 2020-01-24 22:28:45.696 vlan24 leaf02 dual-bgp
71 [192.168.121.7/24] 2020-01-24 22:28:45.696 eth0 leaf03 dual-bgp
72 [10.0.0.13/32] 2020-01-24 22:28:45.696 lo leaf03 dual-bgp
74 [169.254.1.1/30] 2020-01-24 22:28:45.696 peerlink.4094 leaf03 dual-bgp
82 [172.16.3.1/24] 2020-01-24 22:28:45.696 vlan13 leaf03 dual-bgp
83 [172.16.4.1/24] 2020-01-24 22:28:45.696 vlan24 leaf03 dual-bgp
87 [192.168.121.184/24] 2020-01-24 22:28:45.696 eth0 leaf04 dual-bgp
88 [10.0.0.14/32] 2020-01-24 22:28:45.696 lo leaf04 dual-bgp
90 [169.254.1.2/30] 2020-01-24 22:28:45.696 peerlink.4094 leaf04 dual-bgp
98 [172.16.3.1/24] 2020-01-24 22:28:45.696 vlan13 leaf04 dual-bgp
99 [172.16.4.1/24] 2020-01-24 22:28:45.696 vlan24 leaf04 dual-bgp
100 [172.16.1.101/24] 2020-01-24 22:28:45.696 bond0 server101 dual-bgp
101 [192.168.121.225/24] 2020-01-24 22:28:45.696 eth0 server101 dual-bgp
105 [172.16.2.102/24] 2020-01-24 22:28:45.696 bond0 server102 dual-bgp
106 [192.168.121.90/24] 2020-01-24 22:28:45.696 eth0 server102 dual-bgp
110 [172.16.3.103/24] 2020-01-24 22:28:45.696 bond0 server103 dual-bgp
111 [192.168.121.240/24] 2020-01-24 22:28:45.696 eth0 server103 dual-bgp
115 [172.16.4.104/24] 2020-01-24 22:28:45.696 bond0 server104 dual-bgp
116 [192.168.121.243/24] 2020-01-24 22:28:45.696 eth0 server104 dual-bgp
120 [192.168.121.47/24] 2020-01-24 22:28:45.696 eth0 spine01 dual-bgp
121 [10.0.0.21/32] 2020-01-24 22:28:45.696 lo spine01 dual-bgp
129 [192.168.121.155/24] 2020-01-24 22:28:45.696 eth0 spine02 dual-bgp
130 [10.0.0.22/32] 2020-01-24 22:28:45.696 lo spine02 dual-bgp
The issue is that engines/pandas/addr.py doesn't use get_display_field, which is the code that correctly handles '*'; instead it has its own code, starting at line 39:
columns = kwargs.get("columns", [])
if columns:
del kwargs["columns"]
else:
columns = ['default']
if columns != ["default"]:
if addrcol not in columns:
columns.insert(-1, addrcol)
else:
columns = ["datacenter", "hostname", "ifname", "state", addrcol,
"timestamp"]
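A minimal sketch of what centralizing that expansion could look like. The function name, signature, and behavior here are illustrative only, not suzieq's actual API:

```python
# Sketch of a single column-expansion helper that addr.py and the generic
# engine code could share, so '*' is handled in exactly one place.
# Hypothetical names; not the real suzieq implementation.
def get_display_fields(columns, all_fields, default_fields):
    """Expand a user-supplied columns spec into concrete field names."""
    if not columns or columns == ["default"]:
        return list(default_fields)
    if columns == ["*"]:            # wildcard: show every known field
        return list(all_fields)
    # keep only the requested fields that actually exist in the schema
    return [c for c in columns if c in all_fields]
```

With one shared helper, addr.py would only need to append its address column after expansion instead of re-deriving the whole list.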
It shouldn't be available if it isn't used.
It always sets view=all.
with routes view='latest'
datacenter hostname vrf prefix nexthopIps oifs weights protocol source metric timestamp
50 dual-bgp exit01 mgmt IPv4Network('0.0.0.0/0') [] [] [1] 4278198272 2020-01-24 22:28:45.696
51 dual-bgp exit01 mgmt IPv4Network('127.0.0.0/8') [] [mgmt] [1] kernel 127.0.0.1 20 2020-01-24 22:28:45.696
52 dual-bgp exit01 mgmt IPv4Network('192.168.121.0/24') [] [eth0] [1] kernel 192.168.121.145 20 2020-01-24 22:28:45.696
with view='all'
datacenter hostname vrf prefix nexthopIps oifs weights protocol source metric timestamp
51 dual-bgp exit01 mgmt IPv4Network('0.0.0.0/0') [192.168.121.1] [eth0] [1] 20 2020-01-24 22:28:45.696
52 dual-bgp exit01 mgmt IPv4Network('0.0.0.0/0') [] [] [1] 4278198272 2020-01-24 22:28:45.696
53 dual-bgp exit01 mgmt IPv4Network('127.0.0.0/8') [] [mgmt] [1] kernel 127.0.0.1 20 2020-01-24 22:28:45.696
54 dual-bgp exit01 mgmt IPv4Network('192.168.121.0/24') [] [eth0] [1] kernel 192.168.121.145 20 2020-01-24 22:28:45.696
So we don't see the default route with a metric of 20 when view='latest'. The problem is that, going by the keys, the two rows are duplicates, and that one gets dropped. So maybe add 'metric' as a key?
14 dual-bgp exit01 internet-vrf IPv4Network('0.0.0.0/0') [] [] [1] 4278198272 2020-01-24 22:28:45.696
(suzieq) jpiet@t14:/tmp/pycharm_project_304/suzieq$ time python suzieq/cli/suzieq-cli route unique --columns=metric --view=all
Logging to /tmp/suzieq-otal3boh
metric count
0 20 236
1 4278198272 10
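The dedup behavior can be reproduced in a few lines; the frame below is made-up data mirroring the two exit01 default routes above:

```python
import pandas as pd

# Two default routes that differ only in metric, as in the exit01 output.
df = pd.DataFrame({
    "hostname": ["exit01", "exit01"],
    "vrf": ["mgmt", "mgmt"],
    "prefix": ["0.0.0.0/0", "0.0.0.0/0"],
    "metric": [20, 4278198272],
})
keys = ["hostname", "vrf", "prefix"]

# Without 'metric' in the keys, the rows look like duplicates and one is
# dropped -- which is what view='latest' effectively does today.
latest = df.drop_duplicates(subset=keys, keep="last")

# Adding 'metric' to the keys keeps both routes.
latest_with_metric = df.drop_duplicates(subset=keys + ["metric"], keep="last")
```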
jpiet> ospf show
datacenter hostname vrf ifname state peerIP lastChangeTime numChanges timestamp
0 dual-ospf exit01 default swp1 full 10.0.0.21 1584469000622 5 2020-03-17 18:29:30.240
1 dual-ospf exit01 default swp2 full 10.0.0.22 1584469000622 5 2020-03-17 18:29:30.240
2 dual-ospf exit02 default swp1 full 10.0.0.21 1584469006599 5 2020-03-17 18:29:30.240
3 dual-ospf exit02 default swp2 full 10.0.0.22 1584469001599 4 2020-03-17 18:29:30.240
4 dual-ospf leaf01 default swp1 full 10.0.0.21 1584469005600 5 2020-03-17 18:29:30.240
5 dual-ospf leaf01 default swp2 full 10.0.0.22 1584469001600 5 2020-03-17 18:29:30.240
6 dual-ospf leaf02 default swp1 full 10.0.0.21 1584469003631 5 2020-03-17 18:31:41.312
7 dual-ospf leaf02 default swp2 full 10.0.0.22 1584468998631 5 2020-03-17 18:31:41.312
8 dual-ospf leaf03 default swp1 full 10.0.0.21 1584469003634 5 2020-03-17 18:31:41.312
9 dual-ospf leaf03 default swp2 full 10.0.0.22 1584468997634 5 2020-03-17 18:31:41.312
10 dual-ospf leaf04 default swp1 full 10.0.0.21 1584469005623 5 2020-03-17 18:29:30.240
11 dual-ospf leaf04 default swp2 full 10.0.0.22 1584469000623 5 2020-03-17 18:29:30.240
12 dual-ospf spine01 default swp1 full 10.0.0.11 1584469005259 5 2020-03-17 18:27:19.168
13 dual-ospf spine01 default swp2 full 10.0.0.12 1584469006259 5 2020-03-17 18:27:19.168
14 dual-ospf spine01 default swp3 full 10.0.0.13 1584469006259 5 2020-03-17 18:27:19.168
15 dual-ospf spine01 default swp4 full 10.0.0.14 1584469006259 4 2020-03-17 18:27:19.168
16 dual-ospf spine01 default swp5 full 10.0.0.102 1584469006259 5 2020-03-17 18:27:19.168
17 dual-ospf spine01 default swp6 full 10.0.0.101 1584469005259 5 2020-03-17 18:27:19.168
18 dual-ospf spine02 default swp1 full 10.0.0.11 1584469001620 5 2020-03-17 18:29:30.240
19 dual-ospf spine02 default swp2 full 10.0.0.12 1584469001620 5 2020-03-17 18:29:30.240
20 dual-ospf spine02 default swp3 full 10.0.0.13 1584469001620 5 2020-03-17 18:29:30.240
21 dual-ospf spine02 default swp4 full 10.0.0.14 1584469001620 5 2020-03-17 18:29:30.240
22 dual-ospf spine02 default swp5 full 10.0.0.102 1584469001620 5 2020-03-17 18:29:30.240
23 dual-ospf spine02 default swp6 full 10.0.0.101 1584469001620 5 2020-03-17 18:29:30.240
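The lastChangeTime values above are raw epoch milliseconds. A display-time conversion could look like this (a sketch of possible behavior, not what suzieq currently does):

```python
import pandas as pd

# lastChangeTime in the ospf output is epoch milliseconds, e.g.
# 1584469000622; converting with unit="ms" yields a readable timestamp
# that lines up with the 2020-03-17 poll timestamps in the same table.
ts = pd.to_datetime(1584469000622, unit="ms")
```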
There should be a better message telling you that you are missing the config file.
Instead you get:
(suzieq) jpiet@t5:~/suzieq$ python3 sq-poller.py --f -H dual
Traceback (most recent call last):
File "sq-poller.py", line 168, in <module>
logger.setLevel(cfg.get("logging-level", "WARNING").upper())
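A friendlier startup check might look like the sketch below. The default path and the -c/--config flag are assumptions for illustration, not the poller's actual options:

```python
import os
import sys

# Hypothetical sketch: validate the config path up front and print a clear
# message, instead of letting cfg.get(...) crash with a traceback.
def check_config(path="~/.suzieq/suzieq-cfg.yml"):
    cfgfile = os.path.expanduser(path)
    if not os.path.exists(cfgfile):
        print(f"ERROR: cannot find suzieq config file {cfgfile}; "
              "create one or pass a path with -c/--config", file=sys.stderr)
        sys.exit(1)
    return cfgfile
```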
I don't know how to get this code to execute, but this block, starting at line 114, is clearly broken: it tries to use files before it is assigned, and there's no colon after files.
jobs = [
exe.submit(self.read_pq_file, f, fields, query_str)
for f in files
]
It's in a section that starts with "if use_get_files", so we must never be exercising use_get_files. I don't know what it does or how to test it.
jpiet> system show columns=hostname, uptime
Error: 1 Error parsing command
system show columns=hostname, uptime
^
Expected end of text, found ',' (at char 21), (line:1, col:22)
jpiet>
systemCmd works.
addrCmd, macCmd, routesCmd, and vlanCmd have the same errors as when given a bad hostname.
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[addrCmd] - pyarrow.lib.ArrowInvalid: Must pass at least one table
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[arpndCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[bgpCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[interfaceCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[lldpCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[macsCmd] - pyarrow.lib.ArrowInvalid: Must pass at least one table
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[mlagCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[routesCmd] - pandas.core.computation.ops.UndefinedVariableError: name 'prefix' is not defined
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[topcpuCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[topmemCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[vlanCmd] - pyarrow.lib.ArrowInvalid: Must pass at least one table
We need to make it so the tests don't require the specific suzieq config file, so that we can run them on arbitrary hosts and so that a local suzieq config file doesn't interfere with the tests.
For instance, if there is no OSPF, then every polling interval there is currently an ERROR message about the loss of polling.
I kind of understand why this is failing and might even be able to fix it.
However, I don't understand how the tests are passing, and I don't understand why any other commands work.
For most commands, except for routes, topcpu, and topmem, entering a start_time filter doesn't change the output.
jpiet> system show start_time="909090"
Error: 1 Error running command: month must be in 1..12: 909090
------------------------------------------------------------
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 655, in parse
ret = self._build_naive(res, default)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1241, in _build_naive
naive = default.replace(**repl)
ValueError: month must be in 1..12
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "pandas/_libs/tslib.pyx", line 610, in pandas._libs.tslib.array_to_datetime
File "pandas/_libs/tslibs/parsing.pyx", line 225, in pandas._libs.tslibs.parsing.parse_datetime_string
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1374, in parse
return DEFAULTPARSER.parse(timestr, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 657, in parse
six.raise_from(ParserError(e.args[0] + ": %s", timestr), e)
File "<string>", line 3, in raise_from
dateutil.parser._parser.ParserError: month must be in 1..12: 909090
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "pandas/_libs/tslib.pyx", line 617, in pandas._libs.tslib.array_to_datetime
TypeError: invalid string coercion to datetime
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 655, in parse
ret = self._build_naive(res, default)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1241, in _build_naive
naive = default.replace(**repl)
ValueError: month must be in 1..12
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/SystemCmd.py", line 63, in show
datacenter=self.datacenter,
File "/tmp/pycharm_project_1000/suzieq/suzieq/sqobjects/basicobj.py", line 128, in get
return self.engine_obj.get(**kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 173, in get
df = self.get_valid_df(self.iobj._table, sort_fields, **kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 98, in get_valid_df
**kwargs
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engine.py", line 73, in get_table_df
files = get_latest_files(folder, start, end, view)
File "/tmp/pycharm_project_1000/suzieq/suzieq/utils.py", line 183, in get_latest_files
ssecs = pd.to_datetime(start, infer_datetime_format=True).timestamp() * 1000
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/util/_decorators.py", line 208, in wrapper
return func(*args, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/tools/datetimes.py", line 796, in to_datetime
result = convert_listlike(np.array([arg]), box, format)[0]
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/tools/datetimes.py", line 463, in _convert_listlike_datetimes
allow_object=True,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 1984, in objects_to_datetime64ns
raise e
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 1975, in objects_to_datetime64ns
require_iso8601=require_iso8601,
File "pandas/_libs/tslib.pyx", line 465, in pandas._libs.tslib.array_to_datetime
File "pandas/_libs/tslib.pyx", line 688, in pandas._libs.tslib.array_to_datetime
File "pandas/_libs/tslib.pyx", line 822, in pandas._libs.tslib.array_to_datetime_object
File "pandas/_libs/tslib.pyx", line 813, in pandas._libs.tslib.array_to_datetime_object
File "pandas/_libs/tslibs/parsing.pyx", line 225, in pandas._libs.tslibs.parsing.parse_datetime_string
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1374, in parse
return DEFAULTPARSER.parse(timestr, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 657, in parse
six.raise_from(ParserError(e.args[0] + ": %s", timestr), e)
File "<string>", line 3, in raise_from
dateutil.parser._parser.ParserError: month must be in 1..12: 909090
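The root cause is that pd.to_datetime parses strings as calendar dates, so a bare epoch like 909090 or 1570006401 blows up in dateutil. One way the filter could accept epochs (an assumption about the desired behavior, not current suzieq code):

```python
import pandas as pd

# Hypothetical helper: treat all-digit input as epoch seconds, and parse
# anything else as a normal date string.
def parse_time_filter(value):
    s = str(value).strip()
    if s.isdigit():
        return pd.to_datetime(int(s), unit="s")
    return pd.to_datetime(s)
```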
this works
jpiet> system show start_time='2020-01-28 20:53:15.392'
Error: 2
datacenter hostname model version ... status address uptime timestamp
0 dual-bgp edge01 vm 16.04.6 LTS ... alive 192.168.121.28 00:22:34.696000 2020-01-24 22:28:45.696
0 dual-bgp exit01 vm 3.7.11 ... alive 192.168.121.145 00:22:26.696000 2020-01-24 22:28:45.696
0 dual-bgp exit02 vm 3.7.11 ... alive 192.168.121.199 00:22:24.696000 2020-01-24 22:28:45.696
0 dual-bgp internet vm 3.7.11 ... alive 192.168.121.213 00:22:29.696000 2020-01-24 22:28:45.696
0 dual-bgp leaf01 vm 3.7.11 ... alive 192.168.121.247 00:22:27.696000 2020-01-24 22:28:45.696
0 dual-bgp leaf02 vm 3.7.11 ... alive 192.168.121.127 00:22:27.696000 2020-01-24 22:28:45.696
0 dual-bgp leaf03 vm 3.7.11 ... alive 192.168.121.7 00:22:26.696000 2020-01-24 22:28:45.696
0 dual-bgp leaf04 vm 3.7.11 ... alive 192.168.121.184 00:22:29.696000 2020-01-24 22:28:45.696
0 dual-bgp server101 vm 16.04.6 LTS ... alive 192.168.121.225 00:29:01.696000 2020-01-24 22:28:45.696
0 dual-bgp server102 vm 16.04.6 LTS ... alive 192.168.121.90 00:29:10.696000 2020-01-24 22:28:45.696
0 dual-bgp server103 vm 16.04.6 LTS ... alive 192.168.121.240 00:29:15.696000 2020-01-24 22:28:45.696
0 dual-bgp server104 vm 16.04.6 LTS ... alive 192.168.121.243 00:29:23.696000 2020-01-24 22:28:45.696
0 dual-bgp spine01 vm 3.7.11 ... alive 192.168.121.47 00:22:26.696000 2020-01-24 22:28:45.696
0 dual-bgp spine02 vm 3.7.11 ... alive 192.168.121.155 00:22:22.696000 2020-01-24 22:28:45.696
[14 rows x 10 columns]
jpiet>
jpiet> arpnd show columns='*'
Error running command: "None of [Index(['*'], dtype='object')] are in the [columns]"
------------------------------------------------------------
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/ArpndCmd.py", line 69, in show
return self._gen_output(df)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/command.py", line 113, in _gen_output
df = df[self.columns]
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 3001, in __getitem__
indexer = self.loc._convert_to_indexer(key, axis=1, raise_missing=True)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/indexing.py", line 1285, in _convert_to_indexer
return self._get_listlike_indexer(obj, axis, **kwargs)[1]
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/indexing.py", line 1092, in _get_listlike_indexer
keyarr, indexer, o._get_axis_number(axis), raise_missing=raise_missing
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/indexing.py", line 1177, in _validate_read_indexer
key=key, axis=self.obj._get_axis_name(axis)
KeyError: "None of [Index(['*'], dtype='object')] are in the [columns]"
------------------------------------------------------------
I think this might be because of _gen_output, but I'm not sure exactly what broke it, since we don't have testing around it.
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/scope.py", line 188, in resolve
return self.resolvers[key]
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/collections/__init__.py", line 916, in __getitem__
return self.__missing__(key)  # support subclasses that define __missing__
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/collections/__init__.py", line 908, in __missing__
raise KeyError(key)
KeyError: 'prefix'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/scope.py", line 199, in resolve
return self.temps[key]
KeyError: 'prefix'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/routesCmd.py", line 60, in show
datacenter=self.datacenter,
File "/tmp/pycharm_project_1000/suzieq/suzieq/sqobjects/basicobj.py", line 125, in get
return self.engine_obj.get(**kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/routes.py", line 19, in get
df = super().get(**kwargs).query('prefix != ""')
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 3199, in query
res = self.eval(expr, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 3315, in eval
return _eval(expr, inplace=inplace, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/eval.py", line 322, in eval
parsed_expr = Expr(expr, engine=engine, parser=parser, env=env, truediv=truediv)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 830, in __init__
self.terms = self.parse()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 847, in parse
return self._visitor.visit(self.expr)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 447, in visit_Module
return self.visit(expr, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 450, in visit_Expr
return self.visit(node.value, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 747, in visit_Compare
return self.visit(binop)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 563, in visit_BinOp
op, op_class, left, right = self._maybe_transform_eq_ne(node)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 482, in _maybe_transform_eq_ne
left = self.visit(node.left, side="left")
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 577, in visit_Name
return self.term_type(node.id, self.env, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/ops.py", line 78, in __init__
self._value = self._resolve_name()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/ops.py", line 95, in _resolve_name
res = self.env.resolve(self.local_name, is_local=self.is_local)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/scope.py", line 201, in resolve
raise compu.ops.UndefinedVariableError(key, is_local)
pandas.core.computation.ops.UndefinedVariableError: name 'prefix' is not defined
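The UndefinedVariableError fires because routes.py runs .query('prefix != ""') on whatever get() returns, even a frame with no 'prefix' column (e.g. an empty result). A defensive sketch of that call site, not the actual fix:

```python
import pandas as pd

# Stand-in for an empty get() result: no columns at all, so
# .query('prefix != ""') would raise UndefinedVariableError on it.
df = pd.DataFrame()
if "prefix" in df.columns:
    df = df.query('prefix != ""')   # only runs when the column exists
```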
Some of these are valid, but many are confusing, and I think they will confuse anybody else trying to understand what's going on. The first ones, which are run as root, are especially confusing.
In this case, I was using cloud-native-datacenter dual topology with bgp un-numbered. I successfully ran the ping.yml ansible-playbook before starting suzieq.
jpiet@a1:~/cloud-native-data-center-networking/topologies/dual-attach/bgp$ more /tmp/suzieq.log
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node internet
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf01
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf01
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf01
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf04
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf02
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf02
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf02
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node server104
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node server101
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node leaf02
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node leaf02
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node leaf03
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node server101
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node internet
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node exit02
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node exit02
2020-01-28 22:26:28,717 - root - ERROR - Unable to connect to node server103
2020-01-28 22:26:28,717 - root - ERROR - Unable to connect to node server103
2020-01-28 22:26:28,761 - root - ERROR - Unable to connect to node edge01
2020-01-28 22:26:28,773 - root - ERROR - Unable to connect to node edge01
2020-01-28 22:26:28,784 - root - ERROR - Unable to connect to node edge01
2020-01-28 22:26:34,367 - suzieq - ERROR - evpnVni: failed for node leaf02 with 408/
2020-01-28 22:26:34,367 - suzieq - ERROR - evpnVni: failed for node internet with 408/
2020-01-28 22:26:34,367 - suzieq - ERROR - evpnVni: failed for node exit02 with 408/
2020-01-28 22:26:35,743 - suzieq - ERROR - routes: failed for node leaf01 with 408/
2020-01-28 22:26:35,743 - suzieq - ERROR - routes: failed for node leaf02 with 408/
2020-01-28 22:26:35,776 - suzieq - ERROR - routes: failed for node server104 with 408/
2020-01-28 22:26:35,776 - suzieq - ERROR - routes: failed for node server101 with 408/
2020-01-28 22:26:35,809 - suzieq - ERROR - lldp: failed for node server103 with 408/
2020-01-28 22:26:35,815 - suzieq - ERROR - lldp: failed for node leaf02 with 408/
2020-01-28 22:26:35,815 - suzieq - ERROR - lldp: failed for node leaf01 with 408/
2020-01-28 22:26:35,824 - suzieq - ERROR - lldp: failed for node edge01 with 408/
2020-01-28 22:26:36,088 - suzieq - ERROR - topcpu: failed for node leaf04 with 408/
2020-01-28 22:26:36,092 - suzieq - ERROR - topcpu: failed for node internet with 408/
2020-01-28 22:26:36,100 - suzieq - ERROR - topcpu: failed for node leaf01 with 408/
2020-01-28 22:26:36,669 - suzieq - ERROR - ospfNbr: failed for node server103 with 1/
2020-01-28 22:26:36,669 - suzieq - ERROR - ospfNbr: failed for node server101 with 1/
2020-01-28 22:26:36,669 - suzieq - ERROR - ospfNbr: failed for node server104 with 1/
2020-01-28 22:26:36,669 - suzieq - ERROR - ospfNbr: failed for node server102 with 1/
2020-01-28 22:26:36,670 - suzieq - ERROR - system: failed for node edge01 with 408/
2020-01-28 22:26:36,671 - suzieq - ERROR - system: failed for node leaf02 with 408/
2020-01-28 22:26:36,674 - suzieq - ERROR - ospfIf: failed for node server102 with 1/
2020-01-28 22:26:36,674 - suzieq - ERROR - ospfIf: failed for node server104 with 1/
2020-01-28 22:26:36,674 - suzieq - ERROR - ospfIf: failed for node server103 with 1/
2020-01-28 22:26:36,674 - suzieq - ERROR - ospfIf: failed for node server101 with 1/
2020-01-28 22:26:36,696 - suzieq - ERROR - topmem: failed for node leaf02 with 408/
2020-01-28 22:26:36,696 - suzieq - ERROR - topmem: failed for node server101 with 408/
2020-01-28 22:26:36,704 - suzieq - ERROR - topmem: failed for node leaf03 with 408/
2020-01-28 22:26:36,706 - suzieq - ERROR - topmem: failed for node edge01 with 408/
2020-01-28 22:26:36,706 - suzieq - ERROR - topmem: failed for node server103 with 408/
2020-01-28 22:26:36,706 - suzieq - ERROR - topmem: failed for node exit02 with 408/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfIf: failed for node server103 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfIf: failed for node server102 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfIf: failed for node server101 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfIf: failed for node server104 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfNbr: failed for node server102 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfNbr: failed for node server101 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfNbr: failed for node server104 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfNbr: failed for node server103 with 1/
2020-01-28 22:27:16,146 - suzieq - ERROR - ospfNbr: failed for node server101 with 1/
2020-01-28 22:27:16,146 - suzieq - ERROR - ospfNbr: failed for node server104 with 1/
2020-01-28 22:27:16,146 - suzieq - ERROR - ospfNbr: failed for node server103 with 1/
2020-01-28 22:27:16,146 - suzieq - ERROR - ospfNbr: failed for node server102 with 1/
2020-01-28 22:27:16,146 - suzieq - ERROR - ospfIf: failed for node server104 with 1/
2020-01-28 22:27:16,146 - suzieq - ERROR - ospfIf: failed for node server101 with 1/
2020-01-28 22:27:16,147 - suzieq - ERROR - ospfIf: failed for node server102 with 1/
2020-01-28 22:27:16,147 - suzieq - ERROR - ospfIf: failed for node server103 with 1/
2020-01-28 22:27:35,474 - suzieq - ERROR - ospfIf: failed for node server101 with 1/
2020-01-28 22:27:35,474 - suzieq - ERROR - ospfIf: failed for node server102 with 1/
2020-01-28 22:27:35,474 - suzieq - ERROR - ospfIf: failed for node server103 with 1/
2020-01-28 22:27:35,474 - suzieq - ERROR - ospfIf: failed for node server104 with 1/
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2897, in get_loc
return self._engine.get_loc(key)
File "pandas/_libs/index.pyx", line 107, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1607, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1614, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'bootupTimestamp'
During handling of the above exception, another exception occurred:
jpiet>
I don't know what the right argument should be, but none seem to work
jpiet> system show start-time=foop
Unknown argument(s) ['start-time'] were passed
jpiet> system show end-time='foop'
Error: 2
Unknown argument(s) ['end-time'] were passed
jpiet> system show end-time=007373730
Error: 2
Unknown argument(s) ['end-time'] were passed
jpiet>
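From the tracebacks in this report, the start-time value eventually lands in dateutil's parser, which reads `1570006401` as a year and blows up. A minimal sketch of a parser that accepts either epoch seconds or a date string (this is a hypothetical helper for illustration, not suzieq's actual API):

```python
from datetime import datetime, timezone

def parse_time(value):
    """Interpret the value as epoch seconds if it is all digits,
    otherwise as a 'YYYY-MM-DD HH:MM:SS' date string."""
    s = str(value).strip()
    if s.isdigit():
        # dateutil would misread 1570006401 as a year; treat it as epoch seconds
        return datetime.fromtimestamp(int(s), tz=timezone.utc)
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
```

With something like this in front of the filtering code, both `start-time=1570006401` and `start-time='2020-01-24 22:28:45'` would at least parse instead of raising.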
jpiet> address show columns=hostname
Error running command: 'datacenter'
------------------------------------------------------------
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/AddrCmd.py", line 58, in show
datacenter=self.datacenter,
File "/tmp/pycharm_project_1000/suzieq/suzieq/sqobjects/basicobj.py", line 125, in get
return self.engine_obj.get(**kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/addr.py", line 53, in get
**kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 173, in get_valid_df
table_df.merge(sys_df, on=["datacenter", "hostname"])
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 7349, in merge
validate=validate,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 81, in merge
validate=validate,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 626, in __init__
) = self._get_merge_keys()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 988, in _get_merge_keys
left_keys.append(left._get_label_or_level_values(lk))
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/generic.py", line 1774, in _get_label_or_level_values
raise KeyError(key)
KeyError: 'datacenter'
------------------------------------------------------------
jpiet>
jpiet> routes lpm address=10.0.0.101
This command does not support positional arguments
If you don't specify that 10.0.0.101 is '10.0.0.101' (i.e. quote the address as a string), the command doesn't work.
In [68]: ctx.start_time=''
In [69]: s1 = routesCmd.routesCmd().show()
/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/bin/ipython:1: FutureWarning: 'ExtensionArray._formatting_values' is deprecated. Specify 'ExtensionArray._formatter' instead.
#!/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/bin/python3
    datacenter hostname     vrf                           prefix                      nexthopIps  ... weights protocol           source metric               timestamp
0     dual-bgp   edge01 default         IPv4Network('0.0.0.0/0')                 [192.168.121.1]  ...     [1]                               20 2020-01-24 22:28:45.696
1     dual-bgp   edge01 default     IPv4Network('10.0.0.101/32')  [169.254.254.1, 169.254.254.9]  ...  [1, 1]      186       10.0.0.100     20 2020-01-24 22:28:45.696
2     dual-bgp   edge01 default     IPv4Network('10.0.0.102/32')  [169.254.253.1, 169.254.253.9]  ...  [1, 1]      186       10.0.0.100     20 2020-01-24 22:28:45.696
3     dual-bgp   edge01 default      IPv4Network('10.0.0.11/32')  [169.254.253.1, 169.254.254.1]  ...  [1, 1]      186       10.0.0.100     20 2020-01-24 22:28:45.696
4     dual-bgp   edge01 default      IPv4Network('10.0.0.12/32')  [169.254.253.1, 169.254.254.1]  ...  [1, 1]      186       10.0.0.100     20 2020-01-24 22:28:45.696
..         ...      ...     ...                              ...                             ...  ...     ...      ...              ...    ...                     ...
231   dual-bgp  spine02 default     IPv4Network('172.16.3.0/24')      [169.254.0.1, 169.254.0.1]  ...  [1, 1]      bgp                    20 2020-01-24 22:28:45.696
232   dual-bgp  spine02 default     IPv4Network('172.16.4.0/24')      [169.254.0.1, 169.254.0.1]  ...  [1, 1]      bgp                    20 2020-01-24 22:28:45.696
233   dual-bgp  spine02    mgmt         IPv4Network('0.0.0.0/0')                              []  ...     [1]               4278198272       2020-01-24 22:28:45.696
234   dual-bgp  spine02    mgmt        IPv4Network('127.0.0.0/8')                             []  ...     [1]   kernel        127.0.0.1     20 2020-01-24 22:28:45.696
235   dual-bgp  spine02    mgmt  IPv4Network('192.168.121.0/24')                              []  ...     [1]   kernel  192.168.121.155     20 2020-01-24 22:28:45.696
[236 rows x 11 columns]
In [70]: ctx.start_time=1570006401
In [71]: s2 = topcpuCmd.topcpuCmd().show()
datacenter hostname timestamp
0 dual-bgp edge01 2020-01-24 22:28:45.696
1 dual-bgp edge01 2020-01-24 22:28:45.696
2 dual-bgp edge01 2020-01-24 22:28:45.696
3 dual-bgp edge01 2020-01-24 22:28:45.696
4 dual-bgp edge01 2020-01-24 22:28:45.696
.. ... ... ...
122 dual-bgp spine02 2020-01-24 22:30:56.768
123 dual-bgp spine02 2020-01-24 22:30:56.768
124 dual-bgp spine02 2020-01-24 22:30:56.768
125 dual-bgp spine02 2020-01-24 22:30:56.768
126 dual-bgp spine02 2020-01-24 22:30:56.768
[127 rows x 3 columns]
Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/interfaceCmd.py", line 64, in show
type=type.split(),
File "/tmp/pycharm_project_1000/suzieq/suzieq/sqobjects/basicobj.py", line 125, in get
return self.engine_obj.get(**kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 201, in get
df = self.get_valid_df(self.iobj._table, sort_fields, **kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 173, in get_valid_df
table_df.merge(sys_df, on=["datacenter", "hostname"])
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 7349, in merge
validate=validate,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 81, in merge
validate=validate,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 626, in __init__
) = self._get_merge_keys()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 988, in _get_merge_keys
left_keys.append(left._get_label_or_level_values(lk))
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/generic.py", line 1774, in _get_label_or_level_values
raise KeyError(key)
KeyError: 'datacenter'
What's extra weird is that on the CLI, `address show` works just fine and returns an empty dataframe, but the other commands fail.
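The `KeyError: 'datacenter'` above comes from `merge(..., on=["datacenter", "hostname"])` when the left frame never got a `datacenter` column (e.g. after an empty read). A sketch of the failure plus a defensive merge, using plain pandas frames to stand in for suzieq's tables:

```python
import pandas as pd

table_df = pd.DataFrame(columns=["hostname", "ifname"])   # missing 'datacenter'
sys_df = pd.DataFrame(columns=["datacenter", "hostname"])

# merge() raises KeyError when an 'on' column is absent from either frame
try:
    table_df.merge(sys_df, on=["datacenter", "hostname"])
except KeyError as exc:
    print("merge failed:", exc)

# defensive version: only merge on keys both frames actually carry
keys = [c for c in ("datacenter", "hostname")
        if c in table_df.columns and c in sys_df.columns]
merged = table_df.merge(sys_df, on=keys) if keys else table_df
```

This doesn't fix why the column is missing, but it would turn the crash into the empty-dataframe result the CLI path apparently already produces.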
ArrowInvalid Traceback (most recent call last)
in
----> 1 addrCmd.addrCmd(hostname='l').show()
~/suzieq/suzieq/cli/sqcmds/addrCmd.py in show(self, address)
56 columns=self.columns,
57 address=address,
---> 58 datacenter=self.datacenter,
59 )
60 self.ctxt.exec_time = "{:5.4f}s".format(time.time() - now)
~/suzieq/suzieq/sqobjects/basicobj.py in get(self, **kwargs)
123 return(pd.DataFrame(columns=['datacenter', 'hostname']))
124
--> 125 return self.engine_obj.get(**kwargs)
126
127 def summarize(self, **kwargs) -> pd.DataFrame:
~/suzieq/suzieq/engines/pandas/addr.py in get(self, **kwargs)
57
58 df = self.get_valid_df("interfaces", sort_fields, columns=columns,
---> 59 **kwargs)
60
61 # Works with pandas 0.25.0 onwards
~/suzieq/suzieq/engines/pandas/engineobj.py in get_valid_df(self, table, sort_fields, **kwargs)
124 end_time=self.iobj.end_time,
125 sort_fields=sort_fields,
--> 126 **kwargs
127 )
128
~/suzieq/suzieq/engines/pandas/engine.py in get_table_df(self, cfg, schemas, **kwargs)
135 folder, filters=filters or None, validate_schema=False
136 )
--> 137 .read(columns=fields)
138 .to_pandas()
139 .query(query_str)
~/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pyarrow/parquet.py in read(self, columns, use_threads, use_pandas_metadata)
1138 tables.append(table)
1139
-> 1140 all_data = lib.concat_tables(tables)
1141
1142 if use_pandas_metadata:
~/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.concat_tables()
~/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Must pass at least one table
This is not what happens in the CLI, so I'm not sure if this is a bug or if it's just unclear how to change the engine. I think part of the problem is that `engine` sometimes holds the string name of the engine and sometimes the engine object itself, so we should probably go back and rename all the string uses to `engine_name`.
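The `ArrowInvalid: Must pass at least one table` could also be guarded before the concat: when no parquet files match the filters, return an empty frame with the expected columns instead of concatenating nothing. A sketch in plain pandas (the real code path concatenates pyarrow Tables before `.to_pandas()`; `concat_or_empty` is an illustrative name, not suzieq's API):

```python
import pandas as pd

def concat_or_empty(frames, columns):
    """Return an empty DataFrame with the expected columns when no
    data files matched, instead of letting the concat blow up."""
    if not frames:
        return pd.DataFrame(columns=columns)
    return pd.concat(frames, ignore_index=True)
```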
____________________ test_context_engine_filtering[addrCmd] ____________________
setup_nubia = None, svc = 'addrCmd'
@pytest.mark.fast
#@pytest.mark.xfail(reason='bug # ', raises=AttributeError)
@pytest.mark.parametrize('svc', good_svcs)
def test_context_engine_filtering(setup_nubia, svc):
_test_context_filtering(svc, {'engine': 'pandas'})
tests/integration/test_sqcmds.py:182:
tests/integration/test_sqcmds.py:199: in _test_context_filtering
s2 = _test_command(svc, 'show', None, None)
tests/integration/test_sqcmds.py:62: in _test_command
s = execute_cmd(svc, cmd, arg, filter)
tests/integration/test_sqcmds.py:211: in execute_cmd
instance = instance()
suzieq/cli/sqcmds/addrCmd.py:39: in __init__
self.addrobj = addrObj(context=self.ctxt)
suzieq/sqobjects/addr.py:23: in __init__
datacenter, columns, context=context, table='addr')
self = <suzieq.sqobjects.addr.addrObj object at 0x7fba50196950>
engine_name = '', hostname = [], start_time = '', end_time = '', view = 'latest'
datacenter = [], columns = ['default']
context = <suzieq.cli.sq_nubia_context.NubiaSuzieqContext object at 0x7fba501f9bd0>
table = 'addr'
    def __init__(self, engine_name: str = '', hostname: typing.List[str] = [],
                 start_time: str = '', end_time: str = '',
                 view: str = 'latest', datacenter: typing.List[str] = [],
                 columns: typing.List[str] = ['default'],
                 context=None, table: str = '') -> None:
        if context is None:
            self.ctxt = SQContext(engine_name)
        else:
            self.ctxt = context
        if not self.ctxt:
            self.ctxt = SQContext(engine_name)
        self._cfg = self.ctxt.cfg
        self._schemas = self.ctxt.schemas
        self._table = table
        self._sort_fields = []
        self._cat_fields = []
        if not datacenter and self.ctxt.datacenter:
            self.datacenter = self.ctxt.datacenter
        else:
            self.datacenter = datacenter
        if not hostname and self.ctxt.hostname:
            self.hostname = self.ctxt.hostname
        else:
            self.hostname = hostname
        if not start_time and self.ctxt.start_time:
            self.start_time = self.ctxt.start_time
        else:
            self.start_time = start_time
        if not end_time and self.ctxt.end_time:
            self.end_time = self.ctxt.end_time
        else:
            self.end_time = end_time
        self.view = view
        self.columns = columns
        if engine_name:
            self.engine = get_sqengine(engine_name)
        else:
            self.engine = self.ctxt.engine
        if table:
            self.engine_obj = self.engine.get_object(self._table, self)
E       AttributeError: 'str' object has no attribute 'get_object'
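The AttributeError above fires because `self.ctxt.engine` holds the engine *name* (a string) when the object path expects an engine object. One way to make the constructor robust either way is to always resolve through a single helper; a sketch, where `resolve_engine` is hypothetical and `get_sqengine` stands in for suzieq's name-to-engine factory:

```python
def resolve_engine(engine, get_sqengine, default="pandas"):
    """Accept either an engine object or its string name and always
    return an object, so a later .get_object() can never land on a str."""
    if isinstance(engine, str):
        return get_sqengine(engine or default)
    return engine
```

Renaming the string-valued attributes to `engine_name`, as suggested above, would make it obvious at each call site which of the two is in hand.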
(suzieq) jpiet@t12:/tmp/pycharm_project_1000/suzieq/suzieq/cli$ python3 suzieq-cli
Logging to /tmp/suzieq-97go9iho
jpiet> topcpu show
datacenter hostname timestamp
11 single leaf03 2020-01-28 21:43:30.048
jpiet>
Suzieq Verbose OFF Datacenter Hostname StartTime EndTime Engine pandas Query Time 4.79
But when I look at parquet-out there are lots of parquet files, so I don't know why only one host shows up. There is no filtering configured.
(suzieq) jpiet@t12:~/parquet-out$ ls topcpu/datacenter=dual/*/*.parquet | wc
543 543 43712
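To see which hosts actually have data on disk, one can walk the partition directories. A sketch that assumes the layout `<table>/datacenter=<dc>/hostname=<host>/*.parquet`, which is inferred from the paths above rather than verified:

```python
from pathlib import Path

def hosts_with_data(table_dir):
    """Count parquet files per hostname=... partition under a table dir."""
    counts = {}
    for f in Path(table_dir).rglob("*.parquet"):
        host = next((p.split("=", 1)[1] for p in f.parts
                     if p.startswith("hostname=")), "unknown")
        counts[host] = counts.get(host, 0) + 1
    return counts
```

Comparing this against the one-row `topcpu show` output would at least show whether the data or the query is losing hosts.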
I'm not sure where to put this data to investigate. I assume this happens at other times too.
tests/integration/test_sqcmds.py::test_commands[routesCmd-commands7-size7]
/tmp/pycharm_project_1000/suzieq/tests/integration/test_sqcmds.py:72: FutureWarning: 'ExtensionArray._formatting_values' is deprecated. Specify 'ExtensionArray._formatter' instead.
return getattr(instance, cmd)()
I believe it's just for the tables that have IP addresses, which must use ExtensionArray.
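If the warning is only noise in test output until pandas' ExtensionArray API is updated, it can be silenced with a targeted filter rather than a blanket one; a sketch:

```python
import warnings

# suppress only this specific deprecation, not all FutureWarnings
warnings.filterwarnings(
    "ignore",
    message=".*ExtensionArray._formatting_values.*",
    category=FutureWarning,
)
```

The same line could go in a pytest `filterwarnings` ini setting instead of code.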
FileNotFoundError Traceback (most recent call last)
in
----> 1 ospfCmd.ospfCmd().show()
~/suzieq/suzieq/cli/sqcmds/ospfCmd.py in show(self, ifname, vrf, state, type)
70 columns=self.columns,
71 datacenter=self.datacenter,
---> 72 type=type,
73 )
74 self.ctxt.exec_time = "{:5.4f}s".format(time.time() - now)
~/suzieq/suzieq/sqobjects/ospf.py in get(self, **kwargs)
32 raise AttributeError('No analysis engine specified')
33
---> 34 return self.engine_obj.get(**kwargs)
35
36 def summarize(self, **kwargs):
~/suzieq/suzieq/engines/pandas/ospf.py in get(self, **kwargs)
30 del kwargs["type"]
31
---> 32 df = self.get_valid_df(table, sort_fields, **kwargs)
33 return df
34
~/suzieq/suzieq/engines/pandas/engineobj.py in get_valid_df(self, table, sort_fields, **kwargs)
124 end_time=self.iobj.end_time,
125 sort_fields=sort_fields,
--> 126 **kwargs
127 )
128
~/suzieq/suzieq/engines/pandas/engine.py in get_table_df(self, cfg, schemas, **kwargs)
59 folder += "/datacenter={}/".format(v)
60
---> 61 fcnt = self.get_filecnt(folder)
62
63 use_get_files = (
~/suzieq/suzieq/engines/pandas/engine.py in get_filecnt(self, path)
176 def get_filecnt(self, path="."):
177 total = 0
--> 178 for entry in os.scandir(path):
179 if entry.is_file():
180 total += 1
FileNotFoundError: [Errno 2] No such file or directory: './tests/data/basic_dual_bgp/parquet-out/ospfNbr'
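`get_filecnt` could tolerate a table directory that was never written (ospfNbr here) by returning 0 instead of letting `os.scandir` raise. A sketch of a defensive version; whether the original recurses into subdirectories isn't visible in the snippet above, so the recursion here is an assumption:

```python
import os

def get_filecnt(path="."):
    """Count files under path; a missing directory counts as zero files."""
    if not os.path.isdir(path):
        return 0
    total = 0
    for entry in os.scandir(path):
        if entry.is_file():
            total += 1
        elif entry.is_dir():
            # assumed: descend into partition subdirectories
            total += get_filecnt(entry.path)
    return total
```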