daq-tools / kotori

A flexible data historian based on InfluxDB, Grafana, MQTT, and more. Free, open, simple.

Home Page: https://getkotori.org/

License: GNU Affero General Public License v3.0

Makefile 1.09% PHP 1.99% C 5.15% Python 74.02% CSS 9.30% JavaScript 3.01% HTML 2.91% Shell 1.22% C++ 0.17% Dockerfile 1.15%
telemetry sensor-network daq mqtt grafana mosquitto python multi-channel multi-protocol visualization

kotori's Introduction

Kotori

Telemetry data acquisition and sensor networks for humans.



  • Status:

    CI outcome

    Test suite code coverage

    Supported Python versions

    Package version on PyPI

    Project status (alpha, beta, stable)

    Project license

  • Usage:

    PyPI downloads per month

    Docker image pulls for `kotori` (total)

    Docker image pulls for `kotori-standard` (total)

  • Compatibility:

    Supported Mosquitto versions

    Supported Grafana versions

    Supported InfluxDB versions

    Supported MongoDB versions


About

Kotori is a multi-channel, multi-protocol telemetry data acquisition and graphing toolkit for time-series data processing.

It supports a variety of scenarios in scientific environmental monitoring projects, for building and operating distributed sensor networks, and for industrial data acquisition applications.

Details

Kotori takes the role of the data historian component within a SCADA / MDE system, exclusively built upon industry-grade free and open-source software like Grafana, Mosquitto, or InfluxDB. It is written in Python, and uses the Twisted networking library.

The best way to find out what you can do with Kotori is by looking at some outlined scenarios, and by reading how others are using it in the example gallery. To learn more about the technical details, have a look at the technologies used.

Features

  • Multi-channel and multi-protocol data acquisition and storage. Collect and store sensor data from different kinds of devices, data sources, and protocols.
  • Built-in sensor adapters, flexible configuration capabilities, durable database storage, and unattended graph visualization.
  • Based on an infrastructure toolkit assembled from different components suitable for data acquisition, storage, fusion, graphing, and more.
  • Leverage the flexible data acquisition integration framework for building telemetry data acquisition and logging systems, test benches, or sensor networks for environmental monitoring, as well as other kinds of data-gathering and aggregation projects.
  • It integrates well with established hardware, software, and data acquisition workflows through flexible adapter interfaces.

Installation

Kotori can be installed in different ways. You may prefer using a Debian package, install it from the Python Package Index (PyPI), or run it within a development sandbox directly from the Git repository.

Corresponding installation instructions are bundled at https://getkotori.org/docs/setup/.

Synopsis

A compact example of how to submit measurement data on a specific channel using MQTT and HTTP, and how to export it again.

Data acquisition

First, let's define a data acquisition channel:

CHANNEL=amazonas/ecuador/cuyabeno/1

and some example measurement data:

DATA='{"temperature": 42.84, "humidity": 83.1}'

Submit with MQTT:

MQTT_BROKER=daq.example.org
echo "$DATA" | mosquitto_pub -h $MQTT_BROKER -t $CHANNEL/data.json -l

Submit with HTTP:

HTTP_URI=https://daq.example.org/api
echo "$DATA" | curl --request POST --header 'Content-Type: application/json' --data @- $HTTP_URI/$CHANNEL/data

Data export

Measurement data can be exported in a variety of formats.

This is a straightforward example of a CSV data export:

http $HTTP_URI/$CHANNEL/data.csv
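The export endpoint can also be parameterized with a time range. The `from`/`to` parameters below mirror the Grafana-style relative ranges that appear on the `data.png` URLs elsewhere in this document, but treat them as an assumption rather than documented API; this sketch only constructs the URL:

```python
from urllib.parse import urlencode

# Build an export URL for the channel from the synopsis; the "from"/"to"
# time-range parameters are an assumption based on other Kotori endpoints.
base = "https://daq.example.org/api"
channel = "amazonas/ecuador/cuyabeno/1"
params = urlencode({"from": "now-1h", "to": "now"})
url = f"{base}/{channel}/data.csv?{params}"
print(url)
# fetch against a live instance with, e.g., curl or requests.get(url)
```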

Acknowledgements

Thanks a stack to all the contributors who helped to co-create and conceive Kotori in one way or another. You know who you are.

Project information

Contributions

Every kind of contribution, feedback, or patch is very welcome. Create an issue or submit a patch if you think we should include a new feature, or to report or fix a bug.

Development

In order to set up a development environment on your workstation, please head over to the development sandbox documentation. When you see the software tests succeed, you should be ready to start hacking.

Resources

License

The project is licensed under the terms of the GNU AGPL license, see LICENSE.


kotori's Issues

Issue on raspbian stretch

I have this issue running Kotori 0.15.0 on Raspbian Stretch:

2019-03-01 19:51:32,132 [root ] INFO : Root configuration file is /etc/kotori/kotori.ini
2019-03-01 19:51:32,135 [root ] INFO : Requested configuration files: /opt/kotori/.kotori.ini, /etc/kotori/kotori.ini
2019-03-01 19:51:32,156 [root ] INFO : Expanded configuration files: /opt/kotori/.kotori.ini, /etc/kotori/kotori.ini, /etc/kotori/apps-enabled/hives.ini
2019-03-01 19:51:32,166 [root ] INFO : Used configuration files: /etc/kotori/kotori.ini, /etc/kotori/apps-enabled/hives.ini
2019-03-01T19:51:32+0100 [kotori ] INFO: Starting Kotori version 0.15.0
2019-03-01T19:51:32+0100 [kotori ] INFO: Using configuration file /etc/kotori/kotori.ini
2019-03-01T19:51:32+0100 [kotori.core ] INFO: Enabling applications ['hives']
2019-03-01T19:51:32+0100 [kotori.core ] INFO: Starting application "hives"
2019-03-01T19:51:32+0100 [kotori.core ] CRITICAL: Error loading entrypoint "kotori.daq.application.mqttkit:mqttkit_application"
2019-03-01T19:51:32+0100 [kotori.core ] CRITICAL: Application entrypoint "kotori.daq.application.mqttkit:mqttkit_application" for "hives" not loaded
2019-03-01T19:51:32+0100 [kotori.core ] INFO: Enabling vendors []

Can I have help with this problem?

Installation from .deb package on Debian 10 fails

A user of Kotori just reported that running Kotori on Debian 10 fails. It gives:

root@hives1:/home/georges# tail -F /var/log/kotori/kotori.log
    from .compat import unicode
  File "/opt/kotori/lib/python2.7/site-packages/twisted/python/compat.py", line 611, in <module>
    import cookielib
  File "/usr/lib/python2.7/cookielib.py", line 32, in <module>
    import re, urlparse, copy, time, urllib
  File "/usr/lib/python2.7/copy.py", line 52, in <module>
    import weakref
  File "/usr/lib/python2.7/weakref.py", line 14, in <module>
    from _weakref import (
ImportError: cannot import name _remove_dead_weakref

Custom Timestamp

Hello,

First of all thank you for this amazing work.

I need the timestamp from the exact moment that the data is collected; as a consequence, I send this timestamp along with the data value over MQTT. Is there a way to force InfluxDB to record my own timestamp instead of the one it assigns when it receives the data?

Thank you.
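As a sketch of what such a payload could look like: some time-series ingest APIs accept an explicit time field alongside the measured values, so the server can use the device's timestamp instead of the arrival time. Whether Kotori/InfluxDB honors a field named `time` in the payload is an assumption here, not confirmed behavior:

```python
import json
from datetime import datetime, timezone

# Attach the acquisition timestamp to the reading itself; the "time" field
# name is an assumption, not confirmed Kotori behavior.
reading = {
    "time": datetime(2021, 1, 19, 18, 55, 54, tzinfo=timezone.utc).isoformat(),
    "temperature": 45.2,
}
payload = json.dumps(reading)
print(payload)
# publish as in the synopsis:
#   echo "$PAYLOAD" | mosquitto_pub -h daq.example.org -t $CHANNEL/data.json -l
```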

Weewx

I have everything set up per these instructions.
https://getkotori.org/docs/examples/weewx.html

The exception is that I am using an MQTT broker on another server.

The Kotori logs did not show any issue with the broker.

Now all I have is these log lines; I am assuming they relate to the MQTT messages:

2021-06-02T23:33:32+0000 [kotori.daq.services.mig ] INFO : [weewx ] transactions: 0.00 tps
2021-06-02T23:34:32+0000 [kotori.daq.services.mig ] INFO : [weewx ] transactions: 0.00 tps
2021-06-02T23:35:32+0000 [kotori.daq.services.mig ] INFO : [weewx ] transactions: 0.00 tps
2021-06-02T23:36:32+0000 [kotori.daq.services.mig ] INFO : [weewx ] transactions: 0.00 tps

What do I do to get information into the database, and then into Grafana?

matplotlib style-theming leaks into global context

Problem

Kotori yields wiggly grid lines on non-xkcd-style ggplot and matplotlib outputs.

Description

We just found this issue again: After visiting the xkcd-style ggplot endpoint [3] once, the grid lines on the regular ggplot endpoint [2] are also wiggly - and even the plain matplotlib renderer context gets poisoned [1]. This is obviously not intended, so we consider this a bug. Thanks for reporting this, weef.

Background

Somehow, I haven't been able to isolate the matplotlib contexts from each other, or something seems to leak there under the hood. We would probably have to reset the context every time, so that it is clean even if it has become dirty before. I probably left the implementation in an unfavorable, definitely improvable state. Sorry for that.

[1] https://swarm.hiveeyes.org/api/hiveeyes/open_hive/test_statista/default/1/data.png?from=now-5h&to=now
[2] https://swarm.hiveeyes.org/api/hiveeyes/open_hive/test_statista/default/1/data.png?renderer=ggplot&from=now-5h&to=now
[3] https://swarm.hiveeyes.org/api/hiveeyes/open_hive/test_statista/default/1/data.png?renderer=ggplot&theme=xkcd&from=now-5h&to=now
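One way to keep the xkcd styling from leaking is to apply it through a context manager, so the rcParams changes are reverted when the render finishes. This is a minimal sketch of the isolation idea using matplotlib's real `plt.xkcd()` context manager, not the actual Kotori fix:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, as in a server-side renderer
import matplotlib.pyplot as plt

def render(xkcd=False):
    # plt.xkcd() returns a context manager; on exit it restores the
    # previous rcParams, so the wiggly-line style cannot leak out.
    if xkcd:
        with plt.xkcd():
            fig, ax = plt.subplots()
            ax.plot([1, 2, 3])
    else:
        fig, ax = plt.subplots()
        ax.plot([1, 2, 3])
    plt.close(fig)

render(xkcd=True)
# after the context exits, the global style is back to the default;
# "path.sketch" is None when no sketch (xkcd) style is active
print(plt.rcParams["path.sketch"])
```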

"Basic example with MQTT" fails

With apologies if I'm missing something terribly obvious (as I'm very much a n00b to MQTT, InfluxDB, and Grafana), the "basic example with MQTT" from the docs (https://getkotori.org/docs/getting-started/basic-mqtt.html) isn't working for me.

Background

I installed Kotori on a fresh, updated Ubuntu 18 server following the instructions at https://getkotori.org/docs/setup/debian.html. I then went to the "Getting Started" documentation and tried to follow the example with MQTT. The snippet in amazonas.ini looks like this:

[amazonas]
enable      = true
type        = application
realm       = amazonas
mqtt_topics = amazonas/#
application = kotori.daq.application.mqttkit:mqttkit_application

But when I run the mosquitto_pub command, I get an error in the kotori log:

2020-12-07T19:59:39-0500 [kotori.daq.graphing.grafana.manager] INFO    : Provisioning Grafana dashboard "amazonas-ecuador" for database "amazonas_ecuador" and measurement "cuyabeno_1_sensors"
2020-12-07T19:59:39-0500 [kotori.daq.graphing.grafana.api    ] INFO    : Checking/Creating datasource "amazonas_ecuador"
2020-12-07T19:59:40-0500 [kotori.daq.services.mig            ] ERROR   : Grafana provisioning failed for storage={"node": "1", "slot": "data.json", "realm": "amazonas", "network": "ecuador", "database": "amazonas_ecuador", "measurement_events": "cuyabeno_1_events", "label": "cuyabeno_1", "measurement": "cuyabeno_1_sensors", "gateway": "cuyabeno"}, message={u'temperature': 42.84, u'humidity': 83.1}:
	[Failure instance: Traceback: <class 'grafana_api_client.GrafanaUnauthorizedError'>: Unauthorized
	/opt/kotori/lib/python2.7/site-packages/twisted/python/threadpool.py:250:inContext
	/opt/kotori/lib/python2.7/site-packages/twisted/python/threadpool.py:266:<lambda>
	/opt/kotori/lib/python2.7/site-packages/twisted/python/context.py:122:callWithContext
	/opt/kotori/lib/python2.7/site-packages/twisted/python/context.py:85:callWithContext
	--- <exception caught here> ---
	/opt/kotori/lib/python2.7/site-packages/kotori/daq/services/mig.py:269:process_message
	/opt/kotori/lib/python2.7/site-packages/kotori/daq/graphing/grafana/manager.py:129:provision
	/opt/kotori/lib/python2.7/site-packages/kotori/daq/graphing/grafana/manager.py:85:create_datasource
	/opt/kotori/lib/python2.7/site-packages/kotori/daq/graphing/grafana/api.py:104:create_datasource
	/opt/kotori/lib/python2.7/site-packages/grafana_api_client/__init__.py:73:create
	/opt/kotori/lib/python2.7/site-packages/grafana_api_client/__init__.py:64:make_request
	/opt/kotori/lib/python2.7/site-packages/grafana_api_client/__init__.py:171:make_raw_request
	]

I'm sure it's as a result of this that browsing to ip:3000/dashboard/db/ecuador/ fails with a "Dashboard not found" message.

I'd appreciate a pointer in the right direction.
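A GrafanaUnauthorizedError at datasource creation suggests the Grafana admin credentials Kotori uses do not match those of the Grafana instance. A hedged sketch of the relevant kotori.ini section follows; the exact section and option names are assumptions to be checked against your configuration file:

```ini
; kotori.ini -- sketch; verify section/option names against your install
[grafana]
host     = localhost
port     = 3000
username = admin
password = admin
```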

How data is flowing through Kotori

Coming from #12, I have a question about the export of data from InfluxDB in the weewx.ini configuration file.

; ----------------------------------------------------------------------
; Data export
; https://getkotori.org/docs/handbook/export/
; https://getkotori.org/docs/applications/forwarders/http-to-influx.html
; ----------------------------------------------------------------------
[weewx.data-export]
enable = true

type = application
application = kotori.io.protocol.forwarder:boot

realm = weewx
source = http:/api/{realm:weewx}/{network:.}/{gateway:.}/{node:.*}/{slot:(data|event)}.{suffix} [GET]
target = influxdb:/{database}?measurement={measurement}
transform = kotori.daq.intercom.strategies:WanBusStrategy.topology_to_storage,
kotori.io.protocol.influx:QueryTransformer.transform

My question is about the source, target, and transform options.
Does the source option issue an HTTP GET request to the database, or does it receive the JSON payload published on the topic and transform it in order to put it into InfluxDB?

The target option I understand: it defines the database and measurement where we put the data.

The transform option I don't really understand; I just know that you transform the JSON parameters into queries.

Thanks a lot for the help already.
Best Regards.
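For reference, the source route above is a URL template. A concrete HTTP GET export request matching it could look like this; the host and channel names are made up for illustration:

```python
# Fill the route placeholders from the "source" line above with example
# values; the realm is fixed to "weewx" by the pattern.
template = "https://daq.example.org/api/{realm}/{network}/{gateway}/{node}/{slot}.{suffix}"
url = template.format(
    realm="weewx", network="campus", gateway="roof",
    node="station-1", slot="data", suffix="csv",
)
print(url)
```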

Optimize packaging

We've learned from @RuiPinto96 and @Dewieinns through #7, #19 and #22 that the packaging might not be done appropriately.

Within this issue, we will try to walk through any issues observed. Thanks again for your feedback, we appreciate that very much!

Installation on RaspberryPi using Docker

Hi,

lots of Home Automation enthusiasts use a Raspberry Pi (model 3, 3plus or 4) for their computing needs. I made the mistake of perhaps not reading through all the steps, tried the Docker install, and failed.

  • Issue 1 was the Grafana permissions - but resolved.
  • Issue 2: no MongoDB for the Pi.
  • Issue 3: I read that MongoDB is optional, so I tried
    docker run -it --rm daqzilla/kotori kotori --version
    Nope!
$ docker run -it --rm daqzilla/kotori kotori --version
Unable to find image 'daqzilla/kotori:latest' locally
latest: Pulling from daqzilla/kotori
68ced04f60ab: Pull complete 
0f5503414412: Pull complete 
Digest: sha256:ff3d0a569de75fda447ad108a2ec664d8aaf545ded82ecd8c9010fc50817f94b
Status: Downloaded newer image for daqzilla/kotori:latest
standard_init_linux.go:211: exec user process caused "exec format error"
failed to resize tty, using default size

I am a Linux noob, so maybe I am doing it all wrong, or perhaps Kotori is not for the Pi.

Would love to hear from you as I am quite excited with how you have brought MQTT (even Tasmota!!), InfluxDB and Grafana all together.

Cheers and best wishes!

Kotori installation fails on 64bit arm raspberry

Hi Kotori,

My Pi 4 is running in 64-bit mode, and the installation of Kotori fails because it thinks it is on a 32-bit system:

$ sudo dpkg --print-architecture
arm64
$ sudo apt-get install kotori
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'kotori-standard:armhf' instead of 'kotori:armhf'
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 kotori-standard:armhf : Depends: python2.7:armhf but it is not going to be installed
                         Recommends: mongodb:armhf but it is not installable
                         Recommends: python-requests:armhf but it is not installable
                         Recommends: python-openssl:armhf but it is not installable
                         Recommends: python-cryptography:armhf but it is not going to be installed
                         Recommends: python-certifi:armhf but it is not installable
                         Recommends: python-numpy:armhf but it is not going to be installed
                         Recommends: python-scipy:armhf but it is not going to be installed
                         Recommends: python-matplotlib:armhf but it is not going to be installed
                         Recommends: python-tables:armhf but it is not installable
                         Recommends: python-netcdf4:armhf but it is not going to be installed
                         Recommends: libhdf5-100:armhf but it is not installable
                         Recommends: libnetcdf-c++4:armhf but it is not going to be installed
                         Recommends: libnetcdf11:armhf but it is not installable
E: Unable to correct problems, you have held broken packages.

Is there no arm64 package, or where in this process is armhf selected?

Of course, Python is installed, and I have InfluxDB/Grafana set up separately, but I'm hesitant to compile Kotori myself.

Any hints on how to fix this?

Thanks for your help,
Hans

Weather Variables (Davis Vantage Pro2) Weewx

Hello!! I am utilizing a Davis Vantage Pro2 station with a Weewx, Kotori, Mosquitto, InfluxDB, and Grafana architecture to save and display the weather data.
I don't know if anyone can help me... My issue is not really related to Kotori and the whole architecture; that is working fine. My issue is about understanding what some values of the Davis Vantage Pro2 variables mean. For some variables it's obvious what they mean, but for some I have no clue (for example: rxCheckPercent, Sunset, ForecastRule, ForecastIcon, stormStart, etc.).
I looked in the Davis Vantage manuals, but they do not tell that much. I know the issue is not directly related to Kotori, but I thought you might be able to help.

Here's the JSON example from the Kotori website showing all the variables that the console provides:

"windSpeed10_kph": "5.78725803977",
"monthET": "1.32",
"highUV": "0.0",
"cloudbase_meter": "773.082217509",
"leafTemp1_C": "8.33333333333",
"rainAlarm": "0.0",
"pressure_mbar": "948.046280104",
"rain_cm": "0.0",
"highRadiation": "0.0",
"interval_minute": "5.0",
"barometer_mbar": "1018.35464712",
"yearRain_cm": "17.2000000043",
"consBatteryVoltage_volt": "4.72",
"dewpoint_C": "2.07088485785",
"insideAlarm": "0.0",
"inHumidity": "29.0",
"soilLeafAlarm4": "0.0",
"sunrise": "1492489200.0",
"windGust_kph": "9.65608800006",
"heatindex_C": "3.55555555556",
"dayRain_cm": "0.0",
"lowOutTemp": "38.3",
"outsideAlarm1": "0.0",
"forecastIcon": "8.0",
"outsideAlarm2": "0.0",
"windSpeed_kph": "3.95409343049",
"forecastRule": "40.0",
"windrun_km": "1.07449640224",
"outHumidity": "90.0",
"stormStart": "1492207200.0",
"inDewpoint": "45.1231125123",
"altimeter_mbar": "1016.62778614",
"windchill_C": "3.55555555556",
"appTemp_C": "1.26842313302",
"outTemp_C": "3.55555555556",
"windGustDir": "275.0",
"extraAlarm1": "0.0",
"extraAlarm2": "0.0",
"extraAlarm3": "0.0",
"extraAlarm4": "0.0",
"extraAlarm5": "0.0",
"extraAlarm6": "0.0",
"extraAlarm7": "0.0",
"extraAlarm8": "0.0",
"humidex_C": "3.55555555556",
"rain24_cm": "0.88000000022",
"rxCheckPercent": "87.9791666667",
"hourRain_cm": "0.0",
"inTemp_C": "26.8333333333",
"watertemp": "8.33333333333",
"trendIcon": "59.7350993377",
"soilLeafAlarm2": "0.0",
"soilLeafAlarm3": "0.0",
"usUnits": "16.0",
"soilLeafAlarm1": "0.0",
"leafWet4": "0.0",
"txBatteryStatus": "0.0",
"yearET": "4.88",
"monthRain_cm": "2.94000000074",
"UV": "0.0",
"rainRate_cm_per_hour": "0.0",
"dayET": "0.0",
"dateTime": "1492467300.0",
"windDir": "283.55437192",
"stormRain_cm": "1.72000000043",
"ET_cm": "0.0",
"sunset": "1492538940.0",
"highOutTemp": "38.4",
"radiation_Wpm2": "0.0"

Installation from .deb package on Ubuntu 18.04 fails

Hi! I installed the latest Kotori package (0.22.7) for amd64 on my Ubuntu 18.04 LTS machine, but the kotori service fails to start. My machine has the x86_64 architecture.
I would appreciate any help.

โ— kotori.service
   Loaded: not-found (Reason: No such file or directory)
   Active: failed (Result: exit-code) since Thu 2019-05-30 14:56:34 UTC; 19h ago
 Main PID: 15597 (code=exited, status=1/FAILURE)

May 30 14:56:34 igup-be systemd[1]: kotori.service: Main process exited, code=exited, status=1/FAILURE
May 30 14:56:34 igup-be systemd[1]: kotori.service: Failed with result 'exit-code'.
May 30 14:56:34 igup-be systemd[1]: kotori.service: Service hold-off time over, scheduling restart.
May 30 14:56:34 igup-be systemd[1]: kotori.service: Scheduled restart job, restart counter is at 5.
May 30 14:56:34 igup-be systemd[1]: Stopped Kotori data acquisition and graphing toolkit.
May 30 14:56:34 igup-be systemd[1]: kotori.service: Start request repeated too quickly.
May 30 14:56:34 igup-be systemd[1]: kotori.service: Failed with result 'exit-code'.
May 30 14:56:34 igup-be systemd[1]: Failed to start Kotori data acquisition and graphing toolkit.

The logs from kotori are:

from .compat import unicode
  File "/opt/kotori/lib/python2.7/site-packages/twisted/python/compat.py", line 611, in <module>
    import cookielib
  File "/usr/lib/python2.7/cookielib.py", line 32, in <module>
    import re, urlparse, copy, time, urllib
  File "/usr/lib/python2.7/copy.py", line 52, in <module>
    import weakref
  File "/usr/lib/python2.7/weakref.py", line 14, in <module>
    from _weakref import (
ImportError: cannot import name _remove_dead_weakref

Regards

Improve data export interface

Introduction

Together with @MKO1640 of Hiveeyes fame, we are considering using the ESP32-e-Paper-Weather-Display by @G6EJD (thanks, David!) in order to display weather and hive information on a Waveshare e-Paper module [1].

We outlined how to use Arduino to acquire beehive data from the HTTP data export interface of Kotori at [2].

Objectives

Therefore, we would like to improve the HTTP export API [3] appropriately.

  • Optionally export only the most recent reading.
  • Optionally downsample time series data in order to reduce the amount of exported information.

Rationale

We can't know the original data acquisition frequency upfront, but we have to take care of memory usage when data gets exported to embedded devices with constrained memory.
So, we would have to use "pandas.resample()" to aggregate time series data by a new time period (e.g. daily to monthly). See also:


[1] https://community.hiveeyes.org/t/anzeige-der-daten-auf-einem-e-paper-display/3229
[2] https://github.com/hiveeyes/hiveeyes-epaper-display/tree/master/lib/hiveeyes
[3] https://getkotori.org/docs/handbook/export/
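The downsampling objective maps directly onto the "pandas.resample()" call mentioned above. A minimal sketch of aggregating a fine-grained series into coarser means follows; the readings here are made up for illustration:

```python
import pandas as pd

# Six readings at 5-minute intervals, downsampled to 15-minute means,
# shrinking the payload an embedded device would have to hold.
index = pd.date_range("2021-01-01 00:00", periods=6, freq="5min")
series = pd.Series([42.1, 42.3, 42.2, 42.8, 43.0, 42.9], index=index)
downsampled = series.resample("15min").mean()
print(downsampled)
```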

Submit bulk measurements with timestamps using specialized "bulk-json-compact" format

Hello,

I've searched the documentation, but I didn't find anything about this. In my case, I have a telemetry source that sends locally stored data covering a period of time, so each message includes several individual measurements (each one with a timestamp).
As an example, one of my measurements would look like this:

{"1611082554":{"temperature":45.2}, "1611082568":{"temperature":45.3}}

Is it possible for Kotori to handle such a data format, or is the timestamp always set when each individual measurement is received on the server?

Greetings.
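If the ingest side only accepts one reading per message, the map above can be unrolled client-side into individual payloads that carry their timestamp explicitly. Whether Kotori accepts a time field this way is exactly what the question asks, so treat the field name as an assumption:

```python
import json

# The questioner's buffered format: epoch-second keys, field maps as values.
bulk = {"1611082554": {"temperature": 45.2}, "1611082568": {"temperature": 45.3}}

# One payload per reading, with the timestamp promoted into a "time" field
# (field name is an assumption, not confirmed Kotori behavior).
messages = [json.dumps({"time": int(ts), **fields})
            for ts, fields in sorted(bulk.items())]
for m in messages:
    print(m)
# each message would then be published separately, e.g. via mosquitto_pub
```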

Mosquitto authentication fails when using password made of digits

Hello!!
I'm trying to connect Kotori to Mosquitto using authentication, and I'm not able to connect.
It gives me: "Error connecting to MQTT broker but retrying each 5 seconds"

I set allow_anonymous to false and defined a user and password in a file in order to connect to Mosquitto.
Then I changed the user and password in the kotori.ini file and restarted Kotori, but it will not connect to the MQTT broker.

; mqtt bus adapter
[mqtt]
host        = localhost
#port        = 1883
username    = rui
password    = ********

Best Regards
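For comparison, here is a minimal sketch of the corresponding broker-side Mosquitto settings. The password file itself is created with Mosquitto's real mosquitto_passwd utility; paths may differ per distribution:

```
# /etc/mosquitto/conf.d/auth.conf -- sketch
allow_anonymous false
password_file /etc/mosquitto/passwd

# create the entry (prompts for the password):
#   mosquitto_passwd -c /etc/mosquitto/passwd rui
```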

Question about the payload format

@Dewieinns just asked...

Hey Andreas,

I am playing with sending data to my server through Kotori and looking at the automatically generated dashboards in Grafana.

Is there any place where the structure of the data to be sent is documented? I've determined, by trial and error, that for the following 'topic':

databaseName/dev/site_name/device_name/data.json
  • A database called "databaseName_dev" gets automatically generated.
  • A dashboard in Grafana is automatically generated by the same name
  • On that dashboard a panel is generated automatically showing:
    • device=device_name
    • site=site_name
  • data.json tells the server we're sending data vs annotations (event.json)

The data is sent through as key-value pairs as I would expect:

{"key1":123, "key2":456, "key3":789}

But are there any special things I should know about? I haven't explored adding tags or anything yet; is this supported?

-- Andrew
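Andrew's observed topic-to-database mapping can be written down directly. This is a restatement of what he determined by trial and error, not an authoritative description of Kotori's routing:

```python
# topic layout observed above: realm/network/gateway/node/data.json
topic = "databaseName/dev/site_name/device_name/data.json"
realm, network, gateway, node, slot = topic.split("/")

# database (and Grafana dashboard) name observed: "<realm>_<network>"
database = f"{realm}_{network}"
print(database)
```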

Connection refused via MQTT

I've been using Kotori for a few months to log data from my WeeWX system via MQTT, and it's been working well until yesterday afternoon. I rebooted the system running Kotori at that time, and WeeWX hasn't logged any records since then. Sadly, it also doesn't log any error messages.

However, I'm noticing that I get "connection refused" messages for remote MQTT requests:

(mppsolar) root@solar:~# mosquitto_pub -t 'GS/topic' -m 'helloWorld' -h kotori
Error: Connection refused

Mosquitto appears to be running on the kotori system:

root@kotori:/var/log# systemctl status mosquitto
โ— mosquitto.service - Mosquitto MQTT Broker
   Loaded: loaded (/lib/systemd/system/mosquitto.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2021-03-30 10:45:34 EDT; 1h 36min ago
     Docs: man:mosquitto.conf(5)
           man:mosquitto(8)
  Process: 677 ExecStartPre=/bin/chown mosquitto: /var/run/mosquitto (code=exited, status=0/SUCCESS)
  Process: 659 ExecStartPre=/bin/mkdir -m 740 -p /var/run/mosquitto (code=exited, status=0/SUCCESS)
  Process: 657 ExecStartPre=/bin/chown mosquitto: /var/log/mosquitto (code=exited, status=0/SUCCESS)
  Process: 609 ExecStartPre=/bin/mkdir -m 740 -p /var/log/mosquitto (code=exited, status=0/SUCCESS)
 Main PID: 678 (mosquitto)
    Tasks: 1 (limit: 2316)
   CGroup: /system.slice/mosquitto.service
           └─678 /usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf

Mar 30 10:45:34 kotori systemd[1]: Starting Mosquitto MQTT Broker...
Mar 30 10:45:34 kotori systemd[1]: Started Mosquitto MQTT Broker.
root@kotori:/var/log# 

And it appears to be listening on port 1883:

root@kotori:/var/log# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:1883          0.0.0.0:*               LISTEN      678/mosquitto       
tcp        0      0 127.0.0.1:24642         0.0.0.0:*               LISTEN      1312/python         
tcp        0      0 127.0.0.1:2019          0.0.0.0:*               LISTEN      671/caddy           
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      596/mongod          
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      500/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      718/sshd            
tcp        0      0 127.0.0.1:8088          0.0.0.0:*               LISTEN      672/influxd         
tcp6       0      0 :::443                  :::*                    LISTEN      671/caddy           
tcp6       0      0 ::1:1883                :::*                    LISTEN      678/mosquitto       
tcp6       0      0 :::80                   :::*                    LISTEN      671/caddy           
tcp6       0      0 :::8086                 :::*                    LISTEN      672/influxd         
tcp6       0      0 :::22                   :::*                    LISTEN      718/sshd            
tcp6       0      0 :::3000                 :::*                    LISTEN      662/grafana-server  
udp        0      0 127.0.0.53:53           0.0.0.0:*                           500/systemd-resolve 
udp        0      0 192.168.1.68:68         0.0.0.0:*                           445/systemd-network 

Nothing really stands out in the kotori log, though its size is almost 4 GB, so I could easily have missed it.
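One detail worth noting in the netstat output above: Mosquitto is listening on 127.0.0.1:1883 and ::1:1883 only, which would refuse connections from remote hosts. A hedged sketch of a listener binding all interfaces follows, using Mosquitto's real listener directive; whether this is the actual cause here is an assumption:

```
# /etc/mosquitto/mosquitto.conf -- sketch; restart mosquitto afterwards.
# Mosquitto 2.x binds to localhost only unless a listener is configured,
# and then requires explicit authentication settings alongside it.
listener 1883 0.0.0.0
allow_anonymous true
```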

Compatibility with ESPHome

Hi there,

friends of Kotori recently started playing with the excellent ESPHome by @OttoWinter and contributors (cheers!). It would be sweet to add a corresponding page to the documentation which outlines best practices for how both software components can be made to work together, based on ESPHome's mqtt.publish_json action.

It looks pretty straightforward, yet we will have to figure out how to embed the timestamp (Time) into the MQTT payload.

With kind regards,
Andreas.


- mqtt.publish_json:
    topic: the/topic
    payload: |-
      root["key"] = id(my_sensor).state;
      root["greeting"] = "Hello World";

    # Will produce:
    # {"key": 42.0, "greeting": "Hello World"}
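As a sketch of the timestamp question: ESPHome's time component exposes an epoch timestamp on its ESPTime object, which could be placed into the JSON next to the sensor value. The time component id and the receiving-side field name are assumptions:

```yaml
# assumes a configured time component, e.g.:
#   time:
#     - platform: sntp
#       id: my_time
- mqtt.publish_json:
    topic: the/topic
    payload: |-
      root["time"] = id(my_time).now().timestamp;
      root["key"] = id(my_sensor).state;
```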

Kotori server does not start

Good morning. I am new to the installation part of Kotori. I have followed all the steps on your page, but I get to the moment where I have to access the server at http://kotori.example.org:3000/ and it does not work. I've started the server with "systemctl start kotori" and it doesn't work either. When I get to the step of the tutorial where you have to run the command:

mosquitto_pub -t $CHANNEL_TOPIC -m '{"temperature": 42.84, "humidity": 83.1}'

in the terminal I get this: "Error: Connection Refused". I am using Linux Mint Ulyssa as the operating system.

Can someone help me with this please?

Thank you

Spin up a whole DAQ environment using Docker Compose

While we already use Docker for building Debian packages for Kotori [1], we just learned from #16 that @agross is also using it for actually running Kotori.

Thinking about that, it would actually be cool to have a docker-compose.yml file ready for running a complete environment of Mosquitto, InfluxDB, Grafana, and Kotori at your fingertips, right?

Community contributions regarding this are always welcome. Maybe you will be able to volunteer on that, @agross? ;]

[1] https://www.getkotori.org/docs/setup/debian-quickstart.html
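A minimal sketch of what such a docker-compose.yml could look like, using the image names that appear in this thread plus the stock upstream images; ports, versions, and the Kotori configuration mount are assumptions to be adjusted:

```yaml
version: "3"
services:
  mosquitto:
    image: eclipse-mosquitto
    ports: ["1883:1883"]
  influxdb:
    image: influxdb:1.8
    ports: ["8086:8086"]
  grafana:
    image: grafana/grafana
    ports: ["3000:3000"]
  kotori:
    image: daqzilla/kotori
    depends_on: [mosquitto, influxdb, grafana]
    volumes:
      - ./etc/kotori:/etc/kotori   # assumed location of kotori.ini
```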

Installation/Configuration on fresh install of Debian 10 fails.

Hey @amotl, I am starting a new thread to follow up on a post I made where something similar was happening, as I'm experiencing slightly different variations now.

After my reply last evening, I set about working with a new VM. I didn't properly document everything I had done, so this morning I started over completely fresh. I was seeing weird issues where systemctl didn't seem to be running, and the VM I was using had only one CPU core. I assumed this was the reason for the performance issues I was seeing, so I wiped it, made a new VM (on a different drive in my server, even) and set about installing Debian (debian-10.3.0-amd64-netinst.iso).

I opted not to install the GUI and enabled the SSH server.

With Debian successfully installed I installed screen and then started following the Setup on Debian guide again.

Note: When I started following this guide initially (yesterday) it wasn't obvious to me (I'm a n00b) that I needed to install the package source for Debian Stretch (9) OR Debian Buster (10).

This time I added only the package source I needed (Buster) and was good to go... until:

Add GPG key for checking package signatures:
wget -qO - https://packages.elmyra.de/elmyra/foss/debian/pubkey.txt | apt-key add -

Error:
E: gnupg, gnupg2 and gnupg1 do not seem to be installed, but one of them is required for this operation

Fix:
apt-get install gnupg

All is good, and I set about installing Kotori as well as the recommended and suggested packages (14.9 GB - takes about an hour and a half on my slow internet connection).

The next thing I was prompted for was "Configuring jackd2":

Do I want to run with realtime priority? (It explains this may lead to lock-ups.) <-- Selected No

then followed:

Configuring Kerberos Authentication
Default Kerberos version 5 realm:
- Pre-populated with my domain <-- left it as is
Kerberos servers for your realm:
- Nothing pre-populated <-- left it blank
Administrative server for your Kerberos realm:
- Nothing pre-populated <-- left it blank

It then goes through and installs everything - this takes some time - until it gets to the end where there are a couple of errors displayed:

Setting up mh-e (8.5-2.1) ...
ERROR: mh-e is broken - called emacs-package-install as a new-style add-on, but has no compat file.

and then:

Errors were encountered while processing:
 lirc
 lirc-x
E: Sub-process /usr/bin/dpkg returned an error code (1)

At this point I rebooted the VM using systemctl reboot.

After reboot I noticed the VM was SLOW again - mega slow.

I again ran apt update, all packages were up to date.

I then again ran apt install --install-recommends --install-suggests kotori and noticed the following:

0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2 not fully installed or removed.

I hit Y to continue and it looked like everything was successful this time.

I attempted to execute systemctl start influxdb grafana-server but received the following warning again:

System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down

It was after this that I wiped the VM last time thinking I had done something mega wrong...

I then installed open-vm-tools on the VM which allowed me to gracefully reboot the system via the VM Host console.

Upon rebooting, systemd still isn't running as PID 1... at this point I'm kind of at a loss. I got farther than this trying to install it on Ubuntu yesterday.

Sorry for the wall of information, but I wanted to be thorough enough that you may be able to provide some insight into what's going on (hopefully something more than "that's really weird", haha).

If you wish me to get in touch directly I can do this also.

Packages for CentOS

@RuiPinto96 just asked over at #5 (comment):

Hello there again!!
Just asking a question: do you have packages for CentOS 7?
I'm trying to install Kotori on a CentOS machine instead of a Raspberry Pi.

Thank you

How to export data from InfluxDB

Hi,

thank you for creating Kotori; it is really useful.

I have a setup that uses weewx to acquire weather data, saves it to InfluxDB and displays graphs via Grafana.

So far everything works as expected:

[wetter]
enable      = true
type        = application
realm       = wetter
mqtt_topics = wetter/#
application = kotori.daq.application.mqttkit:mqttkit_application

# How often to log metrics
metrics_logger_interval = 60

My question is about data export, which currently yields Connection reset by peer regardless of the requested URL.

[wetter.influx-data-export]
enable          = true

type            = application
application     = kotori.io.protocol.forwarder:boot

realm           = wetter
source          = http:/api/{realm:wetter}/{network:.*}/{gateway:.*}/{node:.*}/{slot:(data|event)}.{suffix} [GET]
target          = influxdb:/{database}?measurement={measurement}
transform       = kotori.daq.intercom.strategies:WanBusStrategy.topology_to_storage,
                  kotori.io.protocol.influx:QueryTransformer.transform
[kotori.io.protocol.forwarder       ] INFO    : Starting ProtocolForwarderService(wetter.influx-data-export-forwarder)
[kotori.io.protocol.forwarder       ] INFO    : Forwarding payloads from http:/api/{realm:wetter}/{network:.*}/{gateway:.*}/{node:.*}/{slot:(data|event)}.{suffix} [GET] to influxdb:/{database}?measurement={measurement}
[kotori.io.protocol.http            ] INFO    : Initializing HttpChannelContainer
[kotori.io.protocol.http            ] INFO    : Connecting to Metadata storage database
[kotori.io.protocol.http            ] INFO    : Starting HTTP service on localhost:24642
[kotori.io.protocol.http.LocalSite  ] INFO    : Starting factory <kotori.io.protocol.http.LocalSite instance at 0x7fcb3495b1e0>
[kotori.io.protocol.http            ] INFO    : Registering endpoint at path '/api/{realm:wetter}/{network:.*}/{gateway:.*}/{node:.*}/{slot:(data|event)}.{suffix}' for methods [u'GET']
[kotori.io.protocol.target          ] INFO    : Starting ForwarderTargetService(wetter-wetter.influx-data-export) for serving address influxdb:/{database}?measurement={measurement} []
$ curl http://localhost:24642/api/wetter/de/ogd/oben_sensors/data.csv
curl: (56) Recv failure: Connection reset by peer

The InfluxDB database is called wetter_de. Grafana runs queries like these: SELECT mean(windSpeed_kph) FROM wetter_de.autogen.ogd_oben_sensors WHERE time >= now() - 5m GROUP BY time(500ms) successfully.

I don't know how I need to construct the data export URL to get data from InfluxDB. As far as I understand from the docs http://localhost:24642/api/wetter/de/ogd/oben_sensors/data.csv is transformed as follows:

  • wetter/de is translated to the wetter_de database,
  • ogd/oben_sensors is translated to the ogd_oben_sensors measurement

Unfortunately, Kotori does not log my HTTP requests and their transformations. InfluxDB also does not log any failing queries. Can you help me find what goes wrong?

I'm also not sure why I need to specify the realm twice in the export ini:

realm           = wetter
source          = http:/api/{realm:wetter}/...
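
For illustration, the translation described above boils down to joining path segments with underscores. This is a sketch only; the real logic lives in WanBusStrategy.topology_to_storage:

```python
def topology_to_storage(realm, network, gateway, node):
    """Map URL path components to InfluxDB database/measurement names.

    Illustrative sketch of the convention described above:
    wetter/de        -> database    "wetter_de"
    ogd/oben_sensors -> measurement "ogd_oben_sensors"
    """
    database = f"{realm}_{network}"
    measurement = f"{gateway}_{node}"
    return database, measurement
```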

Adding tags to the InfluxDB data model

@TheOneWhoKnocks96 asked over at #13:

I already have data in my InfluxDB, but without tags.
Where in the Kotori's code, could i add tags like longitude and latitude?

Thanks for keeping responding.
Best Regards.

Add documentation - Caddy reverse proxy

I see that your docs include a page for setup behind an Nginx reverse proxy (https://getkotori.org/docs/setup/nginx.html), but I think it'd be nice to include instructions for using Caddy as well. I use it in a number of applications, and the big benefits I see are:

  • It automatically manages TLS, including obtaining and renewing certificates from Let's Encrypt, and implements a sensible and secure TLS configuration (the defaults give an A rating using https://github.com/drwetter/testssl.sh).
  • Much shorter and simpler configuration files (the complete server configuration for this application, including all the TLS stuff, can be as short as 10 lines).

The one downside is that in many cases it needs to be built from source, though as it's written in Go, that isn't as big of an issue as it could be.

I've written up a draft of a guide; feel free to adopt it wholesale, modify it as appropriate, link to it, or whatever:
https://www.familybrown.org/dokuwiki/doku.php?id=advanced:kotori_caddy

Improve Docker-based installation

While #17 will make it possible to spin up all required containers together, the DaqZilla stack can already be run on pure Docker, with all parts in distinct Docker containers.

However, the invocation for running Kotori currently needs to be executed from within the repository in order to obtain the configuration file.

docker run \
    --volume="$(pwd)/etc":/etc/kotori \
    --publish=24642:24642 \
    --link mosquitto:mosquitto \
    --link influxdb:influxdb \
    --link grafana:grafana \
    --link mongodb:mongodb \
    -it --rm daqzilla/kotori \
    kotori --config /etc/kotori/docker-mqttkit.ini

Let's improve that situation by adding a kotori make-config subcommand to let Kotori itself create an appropriate configuration file to get started.

Updating provisioned dashboards with Grafana 8

Hi again,

users also just reported this flaw:

The dashboard update somehow does not work with the new Grafana. Kotori only creates a (new) panel after renaming the dashboard and restarting Kotori.

With kind regards,
Andreas.

Data export in hierarchical data formats (HDF5, netCDF) fails when field names contain special characters

Hi again,

when trying to export data in hierarchical data formats, and field names contain special characters like /, the process croaks.

HDF5

ERROR: ValueError: the ``/`` character is not allowed in object names: 'VAR[m/s]'
ERROR: ValueError: the ``/`` character is not allowed in object names: 'VAR[m/s]'

------------------------------------------------------------
Entry point:
Filename:    /opt/kotori/lib/python3.5/site-packages/kotori/io/export/tabular.py
Line number: 60
Function:    render
Code:        df.to_hdf(t.name, group_name, format='table', data_columns=True, index=False)
------------------------------------------------------------
Source of exception:
Filename:    /opt/kotori/lib/python3.5/site-packages/tables/path.py
Line number: 165
Function:    check_name_validity
Code:        "in object names: %r" % name)

Traceback (most recent call last):
  File "/opt/kotori/lib/python3.5/site-packages/kotori/io/export/tabular.py", line 60, in render
    df.to_hdf(t.name, group_name, format='table', data_columns=True, index=False)
  File "/opt/kotori/lib/python3.5/site-packages/pandas/core/generic.py", line 2530, in to_hdf
    pytables.to_hdf(path_or_buf, key, self, **kwargs)
  File "/opt/kotori/lib/python3.5/site-packages/pandas/io/pytables.py", line 278, in to_hdf
    f(store)
  File "/opt/kotori/lib/python3.5/site-packages/pandas/io/pytables.py", line 271, in <lambda>
    f = lambda store: store.put(key, value, **kwargs)
  File "/opt/kotori/lib/python3.5/site-packages/pandas/io/pytables.py", line 959, in put
    self._write_to_group(key, value, append=append, **kwargs)
  File "/opt/kotori/lib/python3.5/site-packages/pandas/io/pytables.py", line 1525, in _write_to_group
    s.write(obj=value, append=append, complib=complib, **kwargs)
  File "/opt/kotori/lib/python3.5/site-packages/pandas/io/pytables.py", line 4214, in write
    self._handle.create_table(self.group, **options)
  File "/opt/kotori/lib/python3.5/site-packages/tables/file.py", line 1066, in create_table
    track_times=track_times)
  File "/opt/kotori/lib/python3.5/site-packages/tables/table.py", line 781, in __init__
    self.description = Description(description, ptparams=parentnode._v_file.params)
  File "/opt/kotori/lib/python3.5/site-packages/tables/description.py", line 541, in __init__
    check_name_validity(k)
  File "/opt/kotori/lib/python3.5/site-packages/tables/path.py", line 165, in check_name_validity
    "in object names: %r" % name)
ValueError: the ``/`` character is not allowed in object names: 'VAR[m/s]'

The issue can be reproduced like this:

http "https://mah.panodata.net/api/mah/testdrive/area-42/node-1/data.hdf5?from=2021-09-21T09:20:00%2B02:00&to=2021-09-21T10:10:00%2B02:00"

netCDF

ERROR: KeyError: 'VAR[m/s]'
ERROR: KeyError: 'VAR[m/s]'

------------------------------------------------------------
Entry point:
Filename:    /opt/kotori/lib/python3.5/site-packages/kotori/io/export/tabular.py
Line number: 76
Function:    render
Code:        df.to_xarray().to_netcdf(path=t.name, format='NETCDF4', engine='netcdf4', group=group_name)
------------------------------------------------------------
Source of exception:
Filename:    /opt/kotori/lib/python3.5/site-packages/xarray/backends/netCDF4_.py
Line number: 66
Function:    get_array
Code:        variable = ds.variables[self.variable_name]

Traceback (most recent call last):
  File "/opt/kotori/lib/python3.5/site-packages/kotori/io/export/tabular.py", line 76, in render
    df.to_xarray().to_netcdf(path=t.name, format='NETCDF4', engine='netcdf4', group=group_name)
  File "/opt/kotori/lib/python3.5/site-packages/xarray/core/dataset.py", line 1540, in to_netcdf
    invalid_netcdf=invalid_netcdf,
  File "/opt/kotori/lib/python3.5/site-packages/xarray/backends/api.py", line 1074, in to_netcdf
    dataset, store, writer, encoding=encoding, unlimited_dims=unlimited_dims
  File "/opt/kotori/lib/python3.5/site-packages/xarray/backends/api.py", line 1120, in dump_to_store
    store.store(variables, attrs, check_encoding, writer, unlimited_dims=unlimited_dims)
  File "/opt/kotori/lib/python3.5/site-packages/xarray/backends/common.py", line 303, in store
    variables, check_encoding_set, writer, unlimited_dims=unlimited_dims
  File "/opt/kotori/lib/python3.5/site-packages/xarray/backends/common.py", line 341, in set_variables
    name, v, check, unlimited_dims=unlimited_dims
  File "/opt/kotori/lib/python3.5/site-packages/xarray/backends/netCDF4_.py", line 514, in prepare_variable
    target = NetCDF4ArrayWrapper(name, self)
  File "/opt/kotori/lib/python3.5/site-packages/xarray/backends/netCDF4_.py", line 39, in __init__
    array = self.get_array()
  File "/opt/kotori/lib/python3.5/site-packages/xarray/backends/netCDF4_.py", line 66, in get_array
    variable = ds.variables[self.variable_name]
KeyError: 'VAR[m/s]'

The issue can be reproduced like this:

http "https://mah.panodata.net/api/mah/testdrive/area-42/node-1/data.nc?from=2021-09-21T09:20:00%2B02:00&to=2021-09-21T10:10:00%2B02:00"

With kind regards,
Andreas.

/cc @mhaberler
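
Until this is fixed, one conceivable mitigation is to sanitize field names before handing the data frame to the HDF5/netCDF writers. Note that sanitize_field_name is a hypothetical helper, not part of Kotori:

```python
import re

def sanitize_field_name(name: str) -> str:
    """Replace characters which HDF5/netCDF object names cannot contain.

    Hypothetical helper: 'VAR[m/s]' becomes 'VAR[m-s]', which PyTables
    and netCDF4 accept as an object name.
    """
    return re.sub(r"/", "-", name)

# Applied before export, e.g.:
# df = df.rename(columns=sanitize_field_name)
# df.to_hdf(t.name, group_name, format='table', data_columns=True, index=False)
```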

Building packages for Debian 11 (Bullseye) fails

Hi,

friends of Kotori have been trying to build .deb packages for Debian 11 (Bullseye). They are on a Linux environment (Intel x86, current Linux kernel, Docker 20.10.11, running within a KVM).

So, what they are looking at, would be to run this command successfully.

make package-debian flavor=full dist=bullseye arch=amd64 version=0.26.12

However, the problem is that the preparation command make package-baseline-images already croaks.

standard_init_linux.go:228: exec user process caused: exec format error
The command '/bin/sh -c apt-get update && apt-get upgrade --yes' returned a non-zero code: 1
Command failed

With kind regards,
Andreas.

Processing MQTT message failed: `'NoneType' object has no attribute 'endswith'`

Hi,

people using Kotori reported this error message to us:

2021-11-20T20:25:51.019107+0100 [kotori.daq.services.mig            ] DEBUG   : Processing message on topic 'workbench/testdrive/area-42/evb-ea-ind-02' with payload '{"bme280_temp":17.9,"bme280_humidity":39.80176,"bme280_pressure":938.2483}'
2021-11-20T20:25:51.019295+0100 [kotori.daq.services.mig            ] DEBUG   : Topology address: {'realm': 'workbench', 'network': 'testdrive', 'gateway': 'area-42', 'node': 'evb-ea-ind-02', 'slot': None}
2021-11-20T20:25:51.019605+0100 [kotori.daq.services.mig            ] ERROR   : Processing MQTT message failed from topic "workbench/testdrive/area-42/evb-ea-ind-02":
        [Failure instance: Traceback: <class 'AttributeError'>: 'NoneType' object has no attribute 'endswith'
[Failure instance: Traceback: <class 'AttributeError'>: 'NoneType' object has no attribute 'endswith'
    /usr/lib/python3.9/threading.py:954:_bootstrap_inner
    /usr/lib/python3.9/threading.py:892:run
    /home/tonke/development/kotori/.venv/lib/python3.9/site-packages/twisted/_threads/_threadworker.py:46:work
    /home/tonke/development/kotori/.venv/lib/python3.9/site-packages/twisted/_threads/_team.py:190:doWork
    --- <exception caught here> ---
    /home/tonke/development/kotori/.venv/lib/python3.9/site-packages/twisted/python/threadpool.py:250:inContext
    /home/tonke/development/kotori/.venv/lib/python3.9/site-packages/twisted/python/threadpool.py:266:<lambda>
    /home/tonke/development/kotori/.venv/lib/python3.9/site-packages/twisted/python/context.py:122:callWithContext
    /home/tonke/development/kotori/.venv/lib/python3.9/site-packages/twisted/python/context.py:85:callWithContext
    /home/tonke/development/kotori/kotori/daq/services/mig.py:135:process_message
    /home/tonke/development/kotori/kotori/daq/services/mig.py:185:decode_message
    /home/tonke/development/kotori/kotori/daq/decoder/__init__.py:27:probe

With kind regards,
Andreas.
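
Judging from the traceback, the topic has only four segments, so the topology address carries slot: None, and the decoder probe then calls .endswith() on it. A defensive guard could look roughly like this; probe_slot is a hypothetical stand-in for the logic in kotori/daq/decoder/__init__.py:

```python
def probe_slot(slot):
    """Decide whether a topology 'slot' component marks a data message.

    Hypothetical stand-in for the decoder probe: guard against slot
    being None (topic too short) before calling string methods on it.
    """
    if slot is None:
        return False
    return slot.endswith("data")
```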


Compatibility with Grafana Trackmap Panel

Hi there,

people using Kotori for tracking moving objects are often avid fans of the TrackMap Panel for Grafana by @pR0Ps and contributors (cheers!).

As originally reported at #31, it appears that grafana-trackmap-panel expects the latitude and longitude attributes to be stored within InfluxDB fields. Currently, Kotori stores them in InfluxDB tags instead, based on a simple heuristic.

if "latitude" in data and "longitude" in data:
    chunk["tags"]["latitude"] = data["latitude"]
    chunk["tags"]["longitude"] = data["longitude"]
    del data['latitude']
    del data['longitude']

We might think about changing that default behavior and making it configurable instead.

With kind regards,
Andreas.
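
A configurable variant of the heuristic above could look roughly like this; route_coordinates, the as_fields flag, and the chunk layout with both "tags" and "fields" dictionaries are assumptions for illustration, not existing Kotori API:

```python
def route_coordinates(data, chunk, as_fields=False):
    """Move latitude/longitude from the payload into the InfluxDB chunk.

    Sketch of a configurable variant of Kotori's heuristic: with
    as_fields=True the coordinates end up in fields, which is what
    grafana-trackmap-panel expects; otherwise they stay in tags.
    """
    if "latitude" in data and "longitude" in data:
        target = chunk["fields"] if as_fields else chunk["tags"]
        target["latitude"] = data.pop("latitude")
        target["longitude"] = data.pop("longitude")
    return chunk

chunk = route_coordinates(
    {"latitude": 48.2, "longitude": 16.4, "temperature": 21.5},
    {"tags": {}, "fields": {}},
    as_fields=True,
)
```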

Add adapter for devices running Tasmota

Tasmota is an alternative firmware for ESP8266-based devices like the iTead Sonoff, which, among other things, can do control and telemetry using MQTT.

People recently asked about data ingest support for the Tasmota MQTT communication style.

[EDIT]: The documentation is available at Tasmota Decoder.
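
For context, Tasmota devices typically publish telemetry as JSON on topics like tele/<device>/SENSOR; a payload may look roughly like this (values invented for illustration):

```json
{
  "Time": "2021-01-01T12:00:00",
  "BME280": {
    "Temperature": 21.5,
    "Humidity": 48.2,
    "Pressure": 1011.3
  },
  "TempUnit": "C"
}
```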

Docker image 0.24.5 not compatible with Grafana 7.4

Hello,

Just tried to upgrade to Grafana 7.4 with the latest Kotori image 0.24.5, but the GrafanaClient API is not compatible and reports an error. I've seen that 0.25.0 has been tagged with Grafana 7.4 support; any chance to give it a try with a Kotori Docker image updated to 0.25.0?
Thanks
Best Regards, PLo
BTW, great piece of software!!

Support Python 3

2020: Python 2 is dead. Long live Python 3!

We should support it.

Kotori is subscribing to empty topic?

Hi again,

we are not sure about the following log output:

2021-11-20T17:53:03.215897+0100 [kotori.daq.intercom.mqtt.paho      ] INFO    : Subscribing to topics []. client=<paho.mqtt.client.Client object at 0x7ff79325c850>
2021-11-20T17:53:03.234887+0100 [kotori.daq.intercom.mqtt.paho      ] INFO    : Subscribing to topics ['workbench/#']. client=<paho.mqtt.client.Client object at 0x7ff7b9ebb7f0>
2021-11-20T17:53:03.235003+0100 [kotori.daq.intercom.mqtt.paho      ] INFO    : Subscribing to topic 'workbench/#'

Does that mean Kotori subscribes to an empty topic?

With kind regards,
Andreas.

Kotori using Weewx (MQTT) - ERROR: Processing MQTT message failed from topic "weewx//loop":

Hello!!

I have a weather station with a Vantage Pro 2 console, and I'm trying to use Kotori and weewx to publish data into Grafana and store the weather data in InfluxDB.
I'm using an MQTT broker (Mosquitto) to send and receive the data from weewx to Kotori.
But I have a problem processing MQTT messages from the topic. The topic that I used is 'weewx/#'.

I am getting this error from kotori.log:

ERROR   : Processing MQTT message failed from topic "weewx//loop"

If anyone could please help...

This is the error (kotori.log):

    2019-05-22T15:24:17+0100 [kotori.daq.services.mig            ] ERROR   : Processing MQTT message failed from topic "weewx//loop":

    [Failure instance: Traceback: <type 'exceptions.AttributeError'>: 'dict' object has no attribute 'slot'
/usr/lib/python2.7/threading.py:801:__bootstrap_inner
/usr/lib/python2.7/threading.py:754:run
/opt/kotori/lib/python2.7/site-packages/twisted/_threads/_threadworker.py:46:work
/opt/kotori/lib/python2.7/site-packages/twisted/_threads/_team.py:190:doWork
--- <exception caught here> ---
/opt/kotori/lib/python2.7/site-packages/twisted/python/threadpool.py:250:inContext
/opt/kotori/lib/python2.7/site-packages/twisted/python/threadpool.py:266:<lambda>
/opt/kotori/lib/python2.7/site-packages/twisted/python/context.py:122:callWithContext
/opt/kotori/lib/python2.7/site-packages/twisted/python/context.py:85:callWithContext
/opt/kotori/lib/python2.7/site-packages/kotori/daq/services/mig.py:234:process_message
/opt/kotori/lib/python2.7/site-packages/kotori/daq/services/mig.py:92:topology_to_storage
/opt/kotori/lib/python2.7/site-packages/kotori/daq/intercom/strategies.py:85:topology_to_storage
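
The double slash in weewx//loop appears to be the culprit: splitting the topic yields an empty segment, so the address no longer lines up with the expected realm/network/gateway/node/slot layout. A quick illustration:

```python
topic = "weewx//loop"

# The empty string between the two slashes shifts all following
# components, so the topology mapping no longer lines up.
segments = topic.split("/")
print(segments)  # -> ['weewx', '', 'loop']
```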

Question about the data flow for data acquisition

Hello again!! Sorry to bother you... I have a question about how Kotori exports data to InfluxDB.
I'm writing a thesis about my system for automatic acquisition of meteorological data from the Davis Vantage Pro2 station, and I'm using your system.
I would like to know more about how Kotori exports data to InfluxDB. Is it through HTTP? Through GET requests? What does Kotori do to the JSON that comes from MQTT?

I kind of know how it works, but I want to be certain.
Best Regards.

Use pool-based multithreading when serving HTTP responses from data frames

When reading some code again, we just found that some HTTP response handling is currently performed synchronously:

# Synchronous, the worker-threading is already on the HTTP layer
return response.render()

We should improve this situation either by doing it asynchronously or by diverting the computation to worker threads from a thread pool, as we already introduced the other day for processing messages from MQTT:

# Perform MQTT message processing using a different thread pool
self.threadpool = ThreadPool()
self.thimble = Thimble(reactor, self.threadpool, self, ["process_message"])

The implementation is based on thimble, which wraps objects that have a blocking API with a non-blocking, Twisted-friendly Deferred API by means of thread pools.

Support Open Vehicle Monitoring System (OVMS)

Dear @markwj and @dexterbg,

thanks for conceiving and maintaining OVMS. It appears that, starting with OVMS 3, the system sends out metrics over MQTT. It would be nice to add an appropriate decoder to Kotori.

So, can I humbly ask you to point out some documentation, or otherwise tell me what the metric topics and payloads actually look like?

Thanks already and with kind regards,
Andreas.
