smsearcy / mesh-info
Collect and view information about an AREDN mesh network.
License: GNU General Public License v3.0
The data folder defaulted to /usr/local/share based on appdirs, but now that I think about it more, /var is typically where variable data goes - especially as I'm considering the possibility of caching map tiles.
Can the error counts be broken down into types of errors? If so that would be helpful. Even more helpful would be a list of nodes associated with that particular type of error.
Add a Procfile to simplify starting the web and collector processes for local deployment (without a container).
Update the README to highlight this functionality for "getting started".
In the overview page, under Network Statistics, group the development firmware nodes together as "Other".
Stuff to add to documentation:
Use containers for development. This will aid development on Windows clients and (hopefully) pave the way for containers for deployment.
I've seen a couple of nodes where the hostname in the link_info dictionary had a leading period (.), e.g. "hostname": ".N0CALL-HAP". This prevents the link from being saved.
In both cases the API was version 1.7, which is older, but it should be fairly simple to work around by doing .lstrip(".") on the node name.
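The workaround could be as small as a helper like this sketch (the function name is hypothetical):

```python
def clean_hostname(hostname: str) -> str:
    """Strip the leading period reported by some API 1.7 nodes."""
    return hostname.lstrip(".")
```

Applied to the node name before saving the link, ".N0CALL-HAP" becomes "N0CALL-HAP" while already-clean names pass through unchanged.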
Optionally specify starting map coordinates via URL and show current location in the URL.
I think the "Hash" Leaflet plugin can do both.
The newest AREDN firmware (3.22.6.0) added two more channels for 2GHz, -3 and -4 (for those countries where it is legal).
Source: https://www.arednmesh.org/content/aredn-release-32260
Gunicorn uses a single "bind" option (instead of host and port), which enables binding to Unix sockets instead of TCP ports.
Gunicorn also defaults to port 8000 instead of 8080 (for consistency between running the web application via gunicorn and via meshinfo).
New AREDN firmware has been released.
Add CI definitions for GitHub as part of moving primary development to GitHub.
Add a map legend explaining the different icons and link colors.
Currently errors polling nodes are only available in the collector logs, so there isn't a way to review historically or provide more information in the UI (#47). As a first step, store the node's IP address, reverse-DNS name (if available), error type, and response details. This should provide everything we need for #47 and lay a foundation for #17.
With the larger network (~300 nodes) the time to write the updates to the database has dramatically increased (20+ seconds?). Need to investigate further.
I think I observed high CPU usage; it's much faster on my workstation than on the Pi. This is with SQLite. Need to see if I can profile the queries. Maybe too many indexes?
Setup framework for documentation via Sphinx and reStructuredText. Start with installation/setup, future goals would be general end user and development/contributing.
I like the Read the Docs theme.
Currently there are 65 nodes with a blank band. Spot checking, one of the blank ones has ["meshrf"]["status"] = "off". In that case the band really should be "N/A".
Depends on #1
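A hedged sketch of how the band lookup could handle that case; the helper name and dict shape are assumptions, not the actual Mesh Info code:

```python
def band_from_sysinfo(sysinfo: dict) -> str:
    """Return a band label, treating meshrf status "off" as no RF."""
    meshrf = sysinfo.get("meshrf", {})
    if meshrf.get("status") == "off":
        return "N/A"
    # ...fall through to the existing channel -> band lookup here...
    return ""
```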
Make the default database a file in the data directory.
Under the Node list, the Firmware column sort should be a "natural" sort - lowest number first, rather than sorting on the first digit.
I.e., 1866 should be at the end of the sorted list. This would be very helpful.
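In Python terms, a natural-sort key could look like this sketch (the table currently sorts in JavaScript, where the same digit-chunking approach applies):

```python
import re

def natural_key(version: str):
    """Split a version string into text/number chunks so numbers compare numerically."""
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", version)]

firmware = ["3.22.6.0", "1866", "3.20.3.1"]
print(sorted(firmware, key=natural_key))  # "1866" sorts to the end
```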
Since right now all the graphs have fixed periods, does it make more sense to pre-render the graphs after collecting the data?
Pros:
Cons:
Add option for full screen map. I think there is a Leaflet plugin to provide this functionality.
Now that we're saving the node errors we can make a page for reviewing historical errors by node. Could be useful for identifying and diagnosing problematic nodes.
In the interest of cleaning up space, add a purge command that deletes all nodes (and related data - links and associated RRD files) that have not been seen for a specified amount of time (6 months default?).
Other maps have standardized on blue for 3 GHz; grey is reserved for no RF (e.g., hAP AC). Recommend adopting these to reduce confusion.
There have been several requests to filter nodes by SSID. The quickest way to do that right now is to add it to the node table and let the JavaScript search/sort functions take care of the rest.
We might need a better long-term solution that involves a stored preference for filtering, but hopefully this will do for now until the frontend gets some more features.
Realized that two different links can exist between two nodes (e.g. DTD and RF), but the current database schema does not support that.
For newer firmware with link info, identifying the types is straightforward; however, it's not clear how to associate that with the OLSR costs (because all IPs in there are WLAN, right?). So my thought is to sort links by OLSR and by type (DTD -> Tunnel -> RF), then correlate them that way.
There are only a few PostgreSQL specific column types being used so it should be possible to make the database compatible with SQLite. Doing so would greatly simplify deployment/testing (but I should leave Postgres tests in CI).
Reduce/eliminate the padding/margin around the map to maximize the space available.
Add a page to the UI with links to this GitHub page and issue tracker, along with my name and callsign. So that people have a way of contacting me about issues or ideas.
The current import/export procedure doesn't recurse into sub-directories because I was concerned I might have cached information there. But cache belongs in a different directory than the data. And now that I've moved the RRD files into an rrd sub-folder under the data directory, we need to recurse into it for the import/export process.
In order to compare what different nodes/links are doing at a particular time, it would be very convenient to have a page on which the user could select a timeframe and different graphs for nodes and links; then they would all be listed out on the page with explanatory titles.
The input could look like a (filterable) list of nodes, which when selected show checkboxes for the node graphs and that node's links, from which the link graphs could be checked. Clicking compare/submit opens a page with all of those graphs vertically going down the page.
I need to start a change log (and using versions) so that there is a place to document changes and highlight breaking changes (like moving data folders around).
Using pip-tools instead of Poetry will simplify installation instructions and future container creation.
Having all the data live in one folder is convenient for backup; however, RRD files are not cross-platform. Reasons for moving files across platforms include migrating the system, copying data to development systems, and replicating issues.
Therefore, having a dump command that dumps all of the RRD files and zips them up in a tarball with the SQLite database, and a restore command to reverse the process, seems beneficial.
The firmware currently in development (API v1.10) is failing because there is no longer a tunnel_installed key.
I think I remember hearing that with the latest firmware tunneling is built in, hence this change makes sense.
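The parser could tolerate the missing key with something like this sketch; treating an absent key as "built in" is my assumption based on the firmware change, not confirmed behavior:

```python
def tunnel_installed(sysinfo: dict) -> bool:
    """Read tunnel_installed, tolerating API v1.10 where the key was dropped.

    Assumption: a missing key means tunneling is built in; older APIs
    report it explicitly (sometimes as the string "true"/"false").
    """
    value = sysinfo.get("tunnel_installed")
    if value is None:
        return True  # assumed built in on newer firmware
    if isinstance(value, str):
        return value.lower() == "true"
    return bool(value)
```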
In the more recent APIs (1.9+?) with link data in the JSON, I think I saw that they are reporting recent links. Since we have our own tracking of recent links, I don't think those should be saved in Mesh Info.
TODO: Update the ticket with the example JSON
As an alternative to pre-rendering graphs (#9), try Gunicorn with process workers to see if that resolves the issues with Waitress. Currently, since graphv() is not threadsafe, I'm spawning processes to dynamically generate graphs, but it seems to be causing issues. A process-based WSGI server means I won't need to wrap graph generation in a process.
For example, yesterday Waitress was unresponsive and said there were 18 waiting threads. An advantage of dynamically rendering graphs is that the disk space is saved for data and (eventually) caching map tiles. Process workers could be a disadvantage for fetching map tiles in the future (since that will be IO bound), but we'll cross that bridge when we come to it (separate map tile service?).
Rename "pyMeshMap" to "Mesh Info".
Repository: pymeshmap -> mesh-info
Folder/Python: pymeshmap -> meshinfo
Rationale:
Version 1.9 of sysinfo.json added a lot more data to the link_info dictionaries. We should start capturing linkCost from there and always showing it in the graph (regardless of API version).
I'm not sure right now which of the other data points are of value as well. The good news is that I already have a source for this in the RRD files. If we add any more then I'll need to implement the process for rebuilding the RRD files.
The tests are currently failing on Python 3.7 due to being unable to resolve dependencies for importlib-metadata.
Is this affecting installation in a production environment, or is it specifically related to development requirements?
On a related note, if I used setup.cfg for requirements, could I avoid this issue? (Probably not without building the requirements file on Python 3.7.)
Create a basic map view of the network data that Mesh Info has collected.
Maybe a config file or option or something so the map isn't always centered on Medford, OR...
Under Bands -
5GHz: ---> 5 GHz:
2GHz: ---> 2 GHz:
3GHZ: ---> 3 GHz:
(blank): ---> No RF (if that's what it actually means)
Currently, when unable to connect to a node the IP address is logged but no database entry is created, and thus any links to that node are not created (even though the new API provides a fair amount of data about the link).
A better solution would seem to be doing a reverse DNS lookup on the IP address, so that with a name and IP we should have enough information to create a minimal node entry and thus can populate the links from the "good" side of the link.
I have a branch (in GitLab?) where I have most of the reverse DNS code working. Need to polish that up and then incorporate it into the error processing.
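The reverse DNS piece can be a thin wrapper over the standard library, something like this sketch (the function name is hypothetical; the real branch may differ):

```python
import socket
from typing import Optional

def reverse_dns(ip_address: str) -> Optional[str]:
    """Look up the DNS name for an IP, returning None if there's no PTR record."""
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip_address)
    except OSError:  # socket.herror/gaierror subclass OSError
        return None
    return hostname
```

With a name (or None) plus the IP, the error handler would have enough to create a minimal node entry.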
Add breadcrumbs, particularly for the detailed graphs page.
AREDN added channel 131 to the 5 GHz band; it needs to be added to the list of channels so those nodes are properly categorized.
Using a fetch for Grid.js to get JSON for the node table from another endpoint means we could use Pyramid's JSON renderer to simplify generating the node data and make the table HTML much simpler.
Similar to what I've done on the map view.
While Pyramid supports the __json__() method, I think I prefer having the conversion done closer to the view.
Since Bootstrap 5 is now out (and doesn't have a jQuery dependency), I think I should switch to that instead of Bulma, since it is more popular. I also don't need Bulma's "mobile-first" design.
On the other hand, there are less invasive frameworks (e.g. Pico or Picnic) that might have less overhead.
Munin has adjusted date ranges for its output to optimize the graph output to an ideal of 1px per RRA sample.
Also, make sure that colors are optimized for distinctiveness and contrast.
Reference:
https://github.com/munin-monitoring/munin/blob/master/lib/Munin/Master/Graph.pm