mcci-catena / docker-iot-dashboard
A complete IoT server for LoRaWAN IoT projects: node-red + influxdb + grafana + ssl + let's encrypt using docker-compose.

License: MIT License


docker-iot-dashboard's Introduction

Dashboard example for Internet of Things (IoT)

This repository contains a complete example that grabs device data from an IoT network server, stores it in a database, and then displays the data using a web-based dashboard.

You can set this up on a Docker droplet from Digital Ocean (or on an Ubuntu VM from DreamCompute, or on an "Ubuntu + Docker" VM from the Microsoft Azure store) with minimal effort. You should set up this service to run all the time so it captures the data from your devices; you then access the data at your convenience using a web browser.

Introduction

SETUP.md explains the application server installation and setup. Docker and Docker Compose are used to make installation and setup easier.

This dashboard uses docker-compose to set up a group of eight primary docker containers, backed by two auxiliary containers:

  1. An instance of Nginx, which proxies the other services, handles access control, gets SSL certificates from Let's Encrypt, and faces the outside world.

  2. An instance of Node-RED, which processes the data from the individual nodes, and puts it into the database.

  3. An instance of InfluxDB, which stores the data as time-series measurements with tags and provides backup support for the databases.

  4. An instance of Grafana, which gives a web-based dashboard interface to the data.

  5. An instance of Mosquitto (MQTT), which provides lightweight messaging using a publish/subscribe model.

  6. An instance of Apiserver, which runs the following APIs:

    • DNCserver: the back-end component of Generic DNC, developed in Node.js. It provides RESTful APIs for the Generic DNC user interface, the Grafana Influx Plugin, and the DNC Standard Plugin; it stores user data in MongoDB and queries InfluxDB for database and measurement names.

    • DNCgiplugin: a back-end component of Generic DNC, developed in Node.js. It provides a RESTful API service that interfaces the Grafana UI with the Generic DNC back end: it handles the Influx queries from Grafana, replaces the Influx tags with the DNC tags, queries InfluxDB, and sends the response back to Grafana.

    • DNCstdplugin: a back-end component of Generic DNC, developed in Node.js. It provides a RESTful API service for the customized Excel Plugin UI: it receives requests from the Excel Plugin, communicates with the DNC server and InfluxDB, and sends the response back to the Excel Plugin UI.

    • version: lists version details for the system and for the Docker-IoT-Dashboard.

  7. An instance of MongoDB, a document-oriented NoSQL database used for high-volume data storage. Instead of the tables and rows of a traditional relational database, it uses collections and documents; documents consist of key-value pairs (the basic unit of data in MongoDB), and a collection contains a set of documents, the equivalent of a relational table. MongoDB's data records are stored in JSON (JavaScript Object Notation) format. DNC uses MongoDB to store its data.

  8. An instance of Expo, a framework and platform for universal React applications. The DNC user interface is built with React Native; Expo runs the dncui application, which provides a simple user interface for managing all the DNC components.

The auxiliary containers are:

  1. Postfix, which (if configured) handles outbound mail services for the containers (for now, Influxdb, Node-red, cron-backup and Grafana).

  2. Cron-backup, which provides backup support for the Nginx, Node-red, Grafana, Mongodb and Mqtts containers and pushes the backed-up data to S3-compatible storage.

To make things more specific, most of the description here assumes use of Digital Ocean. However, this was also tested with no issues on Ubuntu 20.04 (apart from the additional complexity of configuring apt-get to fetch Docker, and the need to install docker-compose manually), on DreamCompute, and on Microsoft Azure. It should work on any Linux or Linux-like platform that supports docker and docker-compose. Note: it has not yet been tested on Raspberry Pi.
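
If apt-get cannot provide a suitable docker-compose, one option is to install the binary directly from the docker/compose releases. The following is a minimal sketch; the version number is only an example, so substitute a current release.

# Install a specific docker-compose release on the host (1.29.2 is just an example version).
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version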

Definitions

  • The host system is the system that runs Docker and Docker-compose.

  • A container is one of the virtual systems running under Docker on the host system.

  • A file on the host is a file present on the host system (typically not visible from within the container(s)).

  • A file in container X (or a file in the X container) is a file present in a file-system associated with container X (and typically not visible from the host system).

Security

All communication with the Nginx server is encrypted using SSL with auto-provisioned certificates from Let's Encrypt. Grafana is the primary point of access for most users, and Grafana's login is used for that purpose. Access to Node-RED and InfluxDB is via special URLs (base/node-red/ and base/influxdb:8086/, where base is the URL served by the Nginx container). These URLs are protected by Nginx htpasswd file entries. These entries are files in the Nginx container, and must be edited manually by an administrator.
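
For example, a user entry can be appended to one of those htpasswd files from the host, working in the authdata directory that the Nginx container mounts. The file name node-red.htpasswd below is only an assumed example; use whatever file name your nginx configuration actually references.

# Append a basic-auth user; openssl prompts for the password and emits an Apache-style MD5 hash.
source .env
printf 'newuser:%s\n' "$(openssl passwd -apr1)" | sudo tee -a "${IOT_DASHBOARD_DATA}docker-nginx/authdata/node-red.htpasswd"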

The initial administrator's login password for Grafana must be initialized prior to starting; it's stored in .env. (When the Grafana container is started for the first time, it creates grafana.db in the Grafana container and stores the password at that time. If grafana.db already exists, the password in .env is ignored.)

Note: Microsoft Azure, by default, does not open any ports to the outside world, so you will need to open port 443 for SSL access to Nginx (and port 80, which Let's Encrypt needs for its HTTP-01 certificate challenge).

For concreteness, the following table assumes that base is “dashboard.example.com”.

User Access

| To access | Open this link | Notes |
| --- | --- | --- |
| Node-RED | https://dashboard.example.com/node-red/ | Port number is not needed and shouldn't be used. Note the trailing '/' after node-red. |
| InfluxDB API queries | https://dashboard.example.com/influxdb:8086/ | Port number is needed. Also note the trailing '/' after influxdb. |
| Grafana | https://dashboard.example.com/ | Port number is not needed and shouldn't be used. |
| Mqtts | wss://dashboard.example.com/mqtts/ | An MQTT client is needed; it can be tested via an MQTT web portal. |
| Apiserver (DNC Server) | https://dashboard.example.com/dncserver | API calls start with this URL. |
| Apiserver (DNC Standard Plugin) | https://dashboard.example.com/dncstdplugin | API calls start with this URL. |
| Apiserver (DNC Grafana-Influx Plugin) | https://dashboard.example.com/dncgiplugin | API calls start with this URL. |
| Apiserver (version info) | https://dashboard.example.com/version | Version details can be viewed in any web browser. |
| Expo | https://dashboard.example.com/dncui | Port number is not needed and shouldn't be used. |

To access MongoDB externally

(via Nginx SSL Termination mode)

  • server: dashboard.example.com
  • port: 27020
  • username: the value of IOT_DASHBOARD_MONGO_INITDB_ROOT_USERNAME from .env
  • password: the value of IOT_DASHBOARD_MONGO_INITDB_ROOT_PASSWORD from .env
  • Authentication DB: admin
  • Connect to database: admin
  • SSL support: yes
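
For example, with the mongosh shell (or the legacy mongo shell) installed on a workstation, a connection might look like the sketch below; the credentials are the IOT_DASHBOARD_MONGO_INITDB_ROOT_* values from .env.

# Connect to MongoDB through the Nginx SSL termination on port 27020.
mongosh "mongodb://dashboard.example.com:27020/admin" --tls --username "<root-username>" --password "<root-password>" --authenticationDatabase admin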

This can be visualized as shown in the figures below:

[Figure: Docker connection and user access]

[Figure: Connection architecture using SSH]

Assumptions

  • The host system must have docker-compose version 1.9 or later (see https://github.com/docker/compose). Be aware that apt-get normally doesn't provide this; when it does, it frequently provides an out-of-date version.

  • The environment variable IOT_DASHBOARD_DATA, if set, points to the common directory for the data. If not set, docker-compose will quit at start-up. (This is by design!)

    • ${IOT_DASHBOARD_DATA}node-red will have the local Node-RED data.

    • ${IOT_DASHBOARD_DATA}influxdb will have the local InfluxDB data (this should be backed-up)

    • ${IOT_DASHBOARD_DATA}grafana will have all the dashboards

    • ${IOT_DASHBOARD_DATA}docker-nginx will have the .htpasswd credentials folder (authdata) and the Let's Encrypt certificates folder (letsencrypt).

    • ${IOT_DASHBOARD_DATA}mqtt/credentials will have the user credentials

    • ${IOT_DASHBOARD_DATA}apiserver/dncserver will have the source data required to run the dncserver API.

    • ${IOT_DASHBOARD_DATA}apiserver/dncstdplugin will have the source data required to run the dncstdplugin API.

    • ${IOT_DASHBOARD_DATA}apiserver/dncgiplugin will have the source data required to run the dncgiplugin API.

    • ${IOT_DASHBOARD_DATA}mongodb/mongodb_data will have the local MongoDB data (this should be backed up).

    • ${IOT_DASHBOARD_DATA}expo/dncui will have the source data required to run the dncui application.

Composition and External Ports

Within the containers, the individual programs use their usual ports, but these are isolated from the outside world, except as specified by the docker-compose.yml file.

In docker-compose.yml, the following ports on the docker host are connected to the individual programs.

  • Nginx runs on 80/tcp and 443/tcp. (All connections to port 80 are redirected to 443 using SSL).

  • MQTTS (Mosquitto) runs on:

    • 443/tcp for MQTT over Nginx proxy
    • 8883/tcp for MQTT over TLS/SSL
    • 8083/tcp for WebSockets over TLS/SSL
    • 1883/tcp for MQTT over TCP (not secure); disabled by default

The ports below are exposed only for inter-container communication; they cannot be accessed externally.

  • Grafana runs on 3000/tcp.

  • Influxdb runs on 8086/tcp.

  • Node-red runs on 1880/tcp.

  • Postfix runs on 25/tcp.

  • Apiserver runs on the following ports:

    • DNC-server runs on 8891/tcp.

    • DNC-Std-Plugin runs on 8892/tcp.

    • DNC-GI-Plugin runs on 8893/tcp.

  • Expo runs on 19006/tcp.

  • Mongodb runs on 27017/tcp.

Remember, if the server is running on a cloud platform like Microsoft Azure or AWS, one needs to check the firewall and confirm that the ports are open to the outside world.
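
A quick way to confirm which ports are actually published on the host (as opposed to being reachable only on the internal Docker network) is to compare the docker-compose view with the host's listening sockets; the exact output depends on which services are enabled.

# Show each service and the ports it publishes on the host.
docker-compose ps
# List listening sockets on the host; only the published ports (80, 443, 8883, 8083, 27020, ...) should appear here.
sudo ss -tlnp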

Data Files

When designing this collection of services, there were two choices to store the data files:

  • we could keep them inside the docker containers, or

  • we could keep them in locations on the host system.

The advantage of the former is that everything is reset when the Docker images are rebuilt; the disadvantage is that all the data can be lost when they're rebuilt. On the other hand, keeping things on the host adds another level of indirection, as the files reside at different locations on the host and in the Docker containers.

Because IoT data is generally persistent, we decided that the extra level of indirection was required. To help find things, consult the following table. Data files are kept in the following locations by default.

| Component | Data file location on host | Location in container |
| --- | --- | --- |
| Node-RED | ${IOT_DASHBOARD_DATA}node-red | /data |
| InfluxDB | ${IOT_DASHBOARD_DATA}influxdb | /var/lib/influxdb |
| Grafana | ${IOT_DASHBOARD_DATA}grafana | /var/lib/grafana |
| Mqtt | ${IOT_DASHBOARD_DATA}mqtt/credentials | /etc/mosquitto/credentials |
| Nginx | ${IOT_DASHBOARD_DATA}docker-nginx/authdata | /etc/nginx/authdata |
| Let's Encrypt certificates | ${IOT_DASHBOARD_DATA}docker-nginx/letsencrypt | /etc/letsencrypt |
| DNCserver | ${IOT_DASHBOARD_DATA}apiserver/dncserver | /apiserver/dncserver |
| DNCgiplugin | ${IOT_DASHBOARD_DATA}apiserver/dncgiplugin | /apiserver/dncgiplugin |
| DNCstdplugin | ${IOT_DASHBOARD_DATA}apiserver/dncstdplugin | /apiserver/dncstdplugin |
| Mongodb | ${IOT_DASHBOARD_DATA}mongodb/mongodb_data | /data/db |
| Expo | ${IOT_DASHBOARD_DATA}expo/dncui | /expo/dncui |

As shown, one can easily change the locations on the host (e.g. for testing) by setting the environment variable IOT_DASHBOARD_DATA to the absolute path (with trailing slash) of the containing directory before calling docker-compose up. The above paths are appended to the value of IOT_DASHBOARD_DATA. Directories are created as needed.

Normally, this is done by an appropriate setting in the .env file.

Consider the following example:

$ grep IOT_DASHBOARD_DATA .env
IOT_DASHBOARD_DATA=/dashboard-data/
$ docker-compose up -d

In this case, the data files are created in the following locations:

Table: Data Location Examples

| Component | Data file location |
| --- | --- |
| Node-RED | /dashboard-data/node-red |
| InfluxDB | /dashboard-data/influxdb |
| Grafana | /dashboard-data/grafana |
| Mqtt | /dashboard-data/mqtt/credentials |
| Nginx | /dashboard-data/docker-nginx/authdata |
| Let's Encrypt certificates | /dashboard-data/docker-nginx/letsencrypt |
| DNCserver | /dashboard-data/apiserver/dncserver |
| DNCgiplugin | /dashboard-data/apiserver/dncgiplugin |
| DNCstdplugin | /dashboard-data/apiserver/dncstdplugin |
| Mongodb | /dashboard-data/mongodb/mongodb_data |
| Expo | /dashboard-data/expo/dncui |

Reuse and removal of data files

Since data files on the host are not removed between runs, the data is preserved as long as you do not remove the files yourself.

Sometimes this is inconvenient, and it is necessary to remove some or all of the data. For a variety of reasons, the data files and directories are created owned by root, so the sudo command must be used to remove the data files. Here's an example of how to do it:

source .env
sudo rm -rf ${IOT_DASHBOARD_DATA}node-red
sudo rm -rf ${IOT_DASHBOARD_DATA}influxdb
sudo rm -rf ${IOT_DASHBOARD_DATA}grafana
sudo rm -rf ${IOT_DASHBOARD_DATA}mqtt
sudo rm -rf ${IOT_DASHBOARD_DATA}docker-nginx
sudo rm -rf ${IOT_DASHBOARD_DATA}apiserver
sudo rm -rf ${IOT_DASHBOARD_DATA}mongodb
sudo rm -rf ${IOT_DASHBOARD_DATA}expo

Node-RED and Grafana Examples

This version requires that you set up Node-RED, the Influxdb database and the Grafana dashboards manually, but we hope to add a reasonable set of initial files in a future release.
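
As a starting point, the initial InfluxDB database can be created from the host with the influx CLI inside the influxdb container (assuming an InfluxDB 1.x image; the database name mydata is only a placeholder):

# Create a database for Node-RED to write into, then confirm it exists.
docker-compose exec influxdb influx -execute 'CREATE DATABASE mydata'
docker-compose exec influxdb influx -execute 'SHOW DATABASES'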

Connecting to InfluxDB from Node-RED and Grafana

There is one point that is somewhat confusing about the connections from Node-RED and Grafana to InfluxDB. Even though InfluxDB is running on the same host, it is logically running on its own virtual machine (created by Docker). Because of this, Node-RED and Grafana cannot use localhost when connecting to InfluxDB. Docker provides a special name instead: influxdb. Note that there's no DNS suffix. If influxdb is not used as the hostname, Node-RED and Grafana will not be able to connect.

Logging in to Grafana

On the login screen, the initial username is "admin". The initial password is given by the value of the variable IOT_DASHBOARD_GRAFANA_ADMIN_PASSWORD in .env. Note that if you change the password in .env after the first time you launch the Grafana container, the admin password does not change. If you somehow lose the previous value of the admin password, and you don't have another admin login, it's very hard to recover; the easiest fix is to remove grafana.db and start over.
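
If you do decide to start over, a minimal sketch (using the data-file locations from the table above) is:

# WARNING: this removes all Grafana users, dashboards and settings.
source .env
docker-compose stop grafana
sudo rm "${IOT_DASHBOARD_DATA}grafana/grafana.db"
docker-compose up -d grafana    # recreates grafana.db with the password from .env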

Data source settings in Grafana

  • Set the URL (under HTTP Settings) to http://influxdb:8086.

  • Select the database.

  • Leave the username and password blank.

  • Click "Save & Test".
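
The same data source can also be created programmatically through Grafana's HTTP API; the sketch below assumes the admin credentials from .env and a database named mydata, both of which you should adjust to your setup.

# Create an InfluxDB data source via the Grafana API.
source .env
curl -s -u "admin:${IOT_DASHBOARD_GRAFANA_ADMIN_PASSWORD}" \
  -H 'Content-Type: application/json' \
  -X POST https://dashboard.example.com/api/datasources \
  -d '{"name":"influxdb","type":"influxdb","access":"proxy","url":"http://influxdb:8086","database":"mydata"}'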

MQTTS Examples

Mqtts can be accessed in the following ways:

| Method | Hostname/Path | Port | Credentials |
| --- | --- | --- | --- |
| MQTT over Nginx proxy | wss://dashboard.example.com/mqtts/ | 443 | Username/password from mosquitto's configuration (password_file) |
| MQTT over TLS/SSL | dashboard.example.com | 8883 | Username/password from mosquitto's configuration (password_file) |
| WebSockets over TLS/SSL | wss://dashboard.example.com/ | 8083 | Username/password from mosquitto's configuration (password_file) |
| MQTT over TCP (not secure) | dashboard.example.com | 1883 | Username/password from mosquitto's configuration (password_file) |
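
For a quick test with the standard Mosquitto command-line clients (the topic name is only an example, and the credentials are those configured in mosquitto's password_file):

# Subscribe over MQTT/TLS on port 8883, trusting the system CA store for the Let's Encrypt certificate.
mosquitto_sub -h dashboard.example.com -p 8883 --capath /etc/ssl/certs -u '<username>' -P '<password>' -t 'test/#' -v
# Publish a test message from another shell.
mosquitto_pub -h dashboard.example.com -p 8883 --capath /etc/ssl/certs -u '<username>' -P '<password>' -t 'test/hello' -m 'hello'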

Integrating Data Normalization Control(DNC) Support

What is DNC?

DNC is a logical data server, designed to achieve location-based data measurement by providing customized tag mapping on top of a general database in which sensor data is organized by device ID (hardware device ID).

About DNC

The DNC server controls the visibility of a client's data server. Users make data queries based on the tags provided in the DNC device mapping; the DNC server removes the customized DNC tags, adds the required tags from the mapping, and sends the converted query to the client's data server. It then receives the response, swaps the data-server tags back for the corresponding DNC tags, and returns the response to the user.
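
As an illustration only (the measurement, tag values and device identifier below are invented for the example), the rewrite performed by the DNC server looks roughly like this:

# A query arriving from the user, expressed with DNC tags, such as:
#   SELECT mean("tWater") FROM "sensor_data" WHERE "Site" = 'PlantA' AND "DeviceName" = 'Tank1'
# is converted, using the device mapping, into the query actually sent to the client's InfluxDB:
#   SELECT mean("tWater") FROM "sensor_data" WHERE "devEUI" = '<mapped devEUI>'
# and the tags in the response are translated back to DNC tags before it is returned to the user.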

Advantages

  • Location-based data measurement
  • A device is treated as a logical, unique object, loosely coupled with the hardware
  • A device's change of location can easily be mapped
  • No data loss due to device change/replacement
  • Users can give their devices convenient names

Application Architecture

[Figure: Application architecture]

DNC Components

Client

In DNC, a client is like a profile. Each client is created with a set of tags, and it requires database credentials to query data from the data server.

| Field | Description |
| --- | --- |
| Name | Name of the client |
| TagsList | List of customized tags. Example 1: Tag1: Country, Tag2: State, Tag3: City, Tag4: Street, Tag5: Device Name. Example 2: Tag1: Site, Tag2: Pile, Tag3: Location, Tag4: DeviceName |
| DBData | Database server credentials (in most cases, InfluxDB): 1. DB URL, 2. UserName, 3. Password, 4. Database Name |

A client can be created only by the Master Admin. Once a client profile is created, Admin and User accounts can be created for that client. The client admin can add new devices under the client profile.

Device Registry

This is the gateway for adding devices to the DNC server. This record contains the entries for all devices, irrespective of client.

| Fields | Description |
| --- | --- |
| Client Name | The device will be assigned to this client |
| Hardware ID | ID of the hardware, as printed on the PCB |
| deviceID | ID received from TTN/Sigfox; must already exist as a tag in the InfluxDB database |
| devID | ID received from TTN/Sigfox; must already exist as a tag in the InfluxDB database |
| devEUI | ID received from TTN/Sigfox; must already exist as a tag in the InfluxDB database |
| Measurement Name | Where the device data is logged in the database server (in InfluxDB, the measurement name) |
| Field Data Name | Name of the data field required by the client, e.g. tWater, rh or vBat; must already exist as a field in the InfluxDB database |
| Date of Installation | The date the device was installed in the field |
| Date of Removal | The date the device was removed from the field; not requested when adding a device, required only when removing/replacing one |

Only the Master Admin can manage this record; no one else can make changes to it. Note: deviceID, devID and devEUI are individually optional, but at least one of them must be provided.

Devices

This is the gateway for adding devices under a client, with the customized tag details. All tag details are optional; the admin can add a device with or without tag values.

| Fields | Description |
| --- | --- |
| Hardware ID | Select from the master record; direct entry is not allowed |
| Tag 1 (optional) | Value for the tag |
| Tag 2 ... Tag N (optional) | Value for the tag |
| Latitude | Location coordinate; required when showing data on a world map |
| Longitude | Location coordinate; required when showing data on a world map |
| Installation Date | Taken from the master record |
| Removal Date | The date the device was removed from the field; automatically updated in the master record |

Plugins

Plugins provide user interfaces for the DNC application; they communicate with the DNC engine using the Plugin API.

Standard Plugin API

This API receives requests from the Excel or Google Sheets plugin, communicates with the DNC server and InfluxDB, and sends the response back to the plugin.

Grafana Influx Plugin API

This API receives requests from the Grafana application, which are InfluxDB-style queries. In the Grafana UI, all requests are redirected to the Grafana Influx Plugin, which communicates with the DNC server and InfluxDB and then sends the response back to the Grafana UI.

DNC Server Architecture

[Figure: DNC server architecture]

Setup Instructions

Please refer to SETUP.md for detailed set-up instructions.

Please create a new Discussion on this repository to get support for DNC.

Influxdb Backup and Restore

Please refer to influxdb/README.md.

Release History

  • v3.0.2 has the following changes.

    • Updated DNC-Std-Plugin - Added Token Validation & new Status code
  • v3.0.1 has the following changes.

    • Updated DNC-server - Modified Login response and Query response
    • Updated DNC-Std-Plugin - Added support for DNC Mapping
  • v3.0.0 has the following changes.

    • Included Apiserver and Expo Containers for DNC support.
    • Documented the process behind the DNC support.
    • Provided backup support for Mongodb container's data
  • v2.0.0 includes the following changes

    • Included an auxiliary backup container (cron-backup) that provides backup support for the Nginx, Node-red, Grafana, and Mqtts containers.
    • Updated the base images used in all Dockerfiles from bionic to focal.
    • Added the Mosquitto (MQTT) Ubuntu PPA repository to install the latest version, and fixed an ownership issue when accessing Let's Encrypt certs.
    • Added TLS/SSL based SMTP authentication support in Postfix container.
    • Some minor changes in the following files: Dockerfile, docker-compose.yml, setup.md and shell scripts.
  • v1.0.0 has the following changes

    • Influxdb:

      1. Backup script updated to back up online (live) databases and push the backup to an Amazon S3 bucket.
      2. Crontab was set for automatic backup.
      3. Supports sending email for backup alerting.
    • Nginx:

      1. The Apache setup is migrated to Nginx.
      2. Updated proxying of the services (influxdb, grafana, node-red, mqtts over proxy).
    • Node-red:

      1. supports data flowing via MQTT channel and HTTPS Endpoint
      2. supports sending email.
    • MQTTS:

      1. supports different connections as below:
        1. Mqtt Over Nginx proxy.
        2. Mqtt over TCP (disabled by default)
        3. Mqtt over TLS/SSL
        4. Mqtt over Websockets (WSS)
    • Postfix:

      1. Configured to Relay mails via External SMTP Auth (Tested with Gmail and Mailgun).
      2. Mails generated from Containers like Grafana, Influxdb and Node-red will be relayed through Postfix container.

Contributors

johanstokking, mukeshbharath, nvishnu86, oliv3, terrillmoore, wnasich


docker-iot-dashboard's Issues

Grafana can't send mail

Grafana needs additional setup in order to be able to send mail. Invitations don't go out.

[postfix] not sending Grafana alerts

Hi, can someone help me troubleshoot this issue?
I'm running the latest code from master; everything works fine except sending mail.
I set up a Grafana channel to send alerts to my address, but when I try to send a test message from Grafana I don't receive anything (nothing reaches my mail server).
I'm not a Postfix expert, and troubleshooting Postfix running in a container is even harder...

Here's the .env file I'm using (replacing my domain with foobar.net):

TTN_DASHBOARD_DATA=/ttn/
TTN_DASHBOARD_APACHE_FQDN=ttn.foobar.net
TTN_DASHBOARD_CERTBOT_FQDN=ttn.foobar.net
[email protected]
TTN_DASHBOARD_GRAFANA_ADMIN_PASSWORD=xxxxxxxxx
[email protected]
TTN_DASHBOARD_GRAFANA_INSTALL_PLUGINS=grafana-worldmap-panel,grafana-clock-panel,grafana-piechart-panel
TTN_DASHBOARD_INFLUXDB_INITIAL_DATABASE_NAME=demo
TTN_DASHBOARD_MAIL_HOST_NAME=ttn.foobar.net
TTN_DASHBOARD_MAIL_DOMAIN=foobar.net
TTN_DASHBOARD_MYSQL_PASSWORD=xxxxxxxxx
TTN_DASHBOARD_MYSQL_ROOT_PASSWORD=xxxxxxxxx

Going into the container with some

# docker exec -it ttn_dashboard_postfix_1_d8c37ddb4aa0 bash
root@d97675a38e1b:/# postqueue -p

(...)
C568C138AB2     1384 Wed Jan 23 20:08:44  MAILER-DAEMON
                                                (unknown mail transport error)
                                         [email protected]

But here my Postfix skills stop... I have no clue how to debug this.
Sending a mail to myself from inside the container using the mail command doesn't work either, so I think it's probably Postfix going wrong rather than Grafana...

Thanks in advance,

Certbot needs Port 80 accessible on Docker host

For the Certbot HTTP based verification to work, port 80 also needs to be open.

> docker logs iotdash_nginx_1
...
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for iotdash.XXXX.com
nginx: [error] invalid PID number "" in "/run/nginx.pid"
Waiting for verification...
Cleaning up challenges
Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/default
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/default

Port 80 must be open to the NGINX host. e.g. validate the Network Security Group if using Microsoft Azure.

Good tip for Setup docs. Add to list of required ports.
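
For example, on a host that uses ufw as its firewall, both ports can be opened with:

# Allow HTTP (needed for the Let's Encrypt HTTP-01 challenge) and HTTPS through the host firewall.
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# On Microsoft Azure, the Network Security Group also needs inbound rules for ports 80 and 443.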

Bug: nginx /etc/nginx/sites-available/default on a host without IPv6

I have found a bug related to /etc/nginx/sites-available/default.
Here is the original container file:

##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# https://www.nginx.com/resources/wiki/start/
# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
# https://wiki.debian.org/Nginx/DirectoryStructure
#
# In most cases, administrators will remove this file from sites-enabled/ and
# leave it as reference inside of sites-available where it will continue to be
# updated by the nginx packaging team.
#
# This file will automatically load configuration files provided by other
# applications, such as Drupal or Wordpress. These applications will be made
# available underneath a path with that package name, such as /drupal8.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

# Default server configuration
#
server {
        listen 80 default_server;
        listen [::]:80 default_server;

        # SSL configuration
        #
        # listen 443 ssl default_server;
        # listen [::]:443 ssl default_server;
        #
        # Note: You should disable gzip for SSL traffic.
        # See: https://bugs.debian.org/773332
        #
        # Read up on ssl_ciphers to ensure a secure configuration.
        # See: https://bugs.debian.org/765782
        #
        # Self signed certs generated by the ssl-cert package
        # Don't use them in a production server!
        #
        # include snippets/snakeoil.conf;

        root /var/www/html;

        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;

        server_name _;

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }

        # pass PHP scripts to FastCGI server
        #
        #location ~ \.php$ {
        #       include snippets/fastcgi-php.conf;
        #
        #       # With php-fpm (or other unix sockets):
        #       fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        #       # With php-cgi (or other tcp sockets):
        #       fastcgi_pass 127.0.0.1:9000;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #       deny all;
        #}
}


# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
#       listen 80;
#       listen [::]:80;
#
#       server_name example.com;
#
#       root /var/www/example.com;
#       index index.html;
#
#       location / {
#               try_files $uri $uri/ =404;
#       }
#}

Here is the first error message:

nginx_1        | *** Running /etc/my_init.d/setup.sh...
nginx_1        | Saving debug log to /var/log/letsencrypt/letsencrypt.log
nginx_1        | Error while running nginx -c /etc/nginx/nginx.conf -t.
nginx_1        |
nginx_1        | nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx_1        | nginx: [emerg] socket() [::]:80 failed (97: Address family not supported by protocol)
nginx_1        | nginx: configuration file /etc/nginx/nginx.conf test failed
nginx_1        |
nginx_1        | The nginx plugin is not working; there may be problems with your existing configuration.
nginx_1        | The error was: MisconfigurationError('Error while running nginx -c /etc/nginx/nginx.conf -t.\n\nnginx: the configuration file /etc/nginx/nginx.conf syntax is ok\nnginx: [emerg] socket() [::]:80 failed (97: Address family not supported by protocol)\nnginx: configuration file /etc/nginx/nginx.conf test failed\n')
nginx_1        | *** /etc/my_init.d/setup.sh failed with status 4
nginx_1        |
nginx_1        | *** Killing all processes...

This seems to be linked to IPv6.
Workaround: remove the IPv6 listen directive by inserting the following line at line 17 of the nginx Dockerfile:
RUN sed -i 's/listen \[::\]:80 default_server;/#listen [::]:80 default_server;/g' /etc/nginx/sites-available/default

Add at least minimal CI testing

As people start using this repo more, we need minimal CI testing.

As a bare minimum, CI testing should prove that we can deploy a server without errors.

There may have to be a hack to deal with Let's Encrypt; we won't have a public address on most CI frameworks, unless we use drone.io and test on a dedicated instance.

Perhaps we can do minimal API tests.

As a stretch goal, we'd test the migration and setup scripts.

With Node-RED 1.1 and later, the Admin API can be used to install flows; so we could potentially mock all the data injection, and then use the InfluxDB and grafana APIs to test data recording.

Error doing clean docker-compose build

With the latest changes, we get a build failure.

$ docker-compose build
ERROR: Invalid interpolation format for "build" option in service "postfix": "${TTN_DASHBOARD_MAIL_RELAY_IP:-}"

This is coming from the Dockerfile for postfix; it appears that the substitution is not done in the way we think. Or I have an old docker-compose / docker (seems very likely on that target). Filing a bug anyway in case others hit this.

not functional on raspberry Pi 3B + -- exec user process caused "exec format error"

Hi, can't seem to log an issue on my own fork of your repo, and I guess you'll be interested in this anyway. Cortex A53, ARM v8 architecture I believe.

Issue likely just that the Phusion base image is compiled for x64 not Arm variants (32 or 64 bit)

using HypriotOS v1.9.0

$ docker-compose build
Building postfix
Step 1/17 : FROM phusion/baseimage
latest: Pulling from phusion/baseimage
281a73dee007: Pull complete
2aea1b77cff7: Pull complete
59a714b7d8bf: Pull complete
0218064da0a9: Pull complete
ebac621dcea3: Pull complete
a3ed95caeb02: Pull complete
b580731643cc: Pull complete
faa5fbdba239: Pull complete
Digest: sha256:29479c37fcb28089eddd6619deed43bcdbcccf2185369e0199cc51a5ec78991b
Status: Downloaded newer image for phusion/baseimage:latest
---> 166cfc3f6974
Step 2/17 : RUN apt-get update && apt-get install -y iputils-ping net-tools debconf-utils
---> Running in 72aa1fac7ca8
standard_init_linux.go:190: exec user process caused "exec format error"
ERROR: Service 'postfix' failed to build: The command '/bin/sh -c apt-get update && apt-get install -y iputils-ping net-tools debconf-utils' returned a non-zero code: 1
HypriotOS/armv7: pirate@black-pearl in ~/docker-ttn-dashboard
$

Tips appreciated, I'm thinking of trying using https://hub.docker.com/r/armv7/armhf-baseimage/ which is based on Phusion, but it's pretty old.

Default for allow_sign_up has changed in Grafana 5

MCCI runs a live demo of power data at ithaca-power.mcci.com. We require users to sign in, but we allow users to sign up freely. The Grafana 4 setup enabled this by default. However, Grafana 5 has (in the [users] section) allow_sign_up = false as the default.

This meant that users couldn't sign up on our site anymore.

I will push a change that restores the default behavior, and adds a setting in the .env file so installations can adjust this to meet requirements.

As a quick workaround, you can add the following to docker-compose.yml:

diff --git a/docker-compose.yml b/docker-compose.yml
index e9a1a51..86aa5cd 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -210,6 +210,7 @@ services:
       GF_LOG_MODE: "${TTN_DASHBOARD_GRAFANA_LOG_MODE:-console,file}"
       GF_LOG_LEVEL: "${TTN_DASHBOARD_GRAFANA_LOG_LEVEL:-info}"
       GF_INSTALL_PLUGINS: "${TTN_DASHBOARD_GRAFANA_INSTALL_PLUGINS:-}"
+      GF_USERS_ALLOW_SIGN_UP: "true"
     # grafana opens ports on influxdb and postfix, so it needs to be able to talk to it.
     links:
       - influxdb

Allow user to install extra nodes without editing Dockerfiles.

See #17.

It would be nice to be able to push extra nodes through an environment variable -- something that iterates using bash, something like this:

RUN /bin/bash -c 'for iPkg in "$@" ; do npm install "$iPkg" || { echo "couldnt install: $iPkg" ; exit 1 ; } ; done' -- ${TTN_NODERED_PKGLIST}

Of course, I'd have to look up the other env var names and get the style right. And we'd need a test case.

To disable DNC support

In the current setup (commit b317bd5), we have not yet added the DNC source files, but we plan to add them soon. Since the Docker services required for DNC support look for their source files in the respective paths when initializing the containers, we suggest disabling DNC support for now, to avoid using system resources unnecessarily until the DNC source files are added.

Steps to disable DNC support:

  1. Go to the cloned path and run the following commands:

       root@testing-in:/fresh/dnc# mv nginx/proxy-apiserver.conf nginx/proxy-apiserver.conf.block
       root@testing-in:/fresh/dnc# mv nginx/proxy-expo.conf nginx/proxy-expo.conf.block
  2. Edit the nginx/setup.sh file as below

    Comment out line 62 with a hash (#):

     	#grep '27020' /etc/nginx/nginx.conf || $(sed -i "s/domain/$CERTBOT_DOMAINS/g" /root/mongo.txt && sed -i $'/http {/{e cat /root/mongo.txt\n}' /etc/nginx/nginx.conf)
    

    save the changes

  3. Edit the docker-compose.yml file as below

    Comment out lines 231 and 232 with a hash (#), as below.

     	226     links:
     	227       - grafana
     	228       - node-red
     	229       - influxdb
     	230       - mqtts
     	231         #- apiserver
     	232         #- expo
    

    Then uncomment the line below (remove the #) under the docker services apiserver (at line 381), mongodb (at line 401) and expo (at line 414).

     	profiles: ['dnc']
    

    save the changes

  4. Then run the following commands:

    	docker-compose down
    	docker-compose up -d --build

Prepare for release v2.0.0

Please release version v2.0.0; it has been tested and is working fine.

  • v2.0.0 includes the following changes

    • Included auxiliary backup container (cron-backup) for providing backup support for Nginx, Node-red, Grafana and Mqtts containers.
    • Updated the base images used in all Dockerfile from bionic to focal.
    • Added Mosquitto (MQTT client) Ubuntu ppa repository to install the latest version and fixed ownership issue when accessing Let's Encrypt certs.
    • Added TLS/SSL based SMTP authentication support in Postfix container.
    • Some minor changes in the following files: Dockerfile, docker-compose.yml, setup.md and shell scripts.

Please merge dev_v1 onto main branch

At this point, it is clear that the main branch is seriously out of date for new uses. Someone from TTN NY tried to install and we discovered that the tags for phusion are out of date. We need to update to dev_v1, and publish v1.0.0. @MuruganChandrasekar, please submit a PR. Thanks!

--Terry

Latest Docker-compose hates timezone setting

Uh-oh, although this was working, the latest docker-compose update catches an error:

https://github.com/mcci-catena/docker-ttn-dashboard/blob/bf851e9d76f8aaa2d1774148e9a8b6a341f0efd3/docker-compose.yml#L161-L162

The leading - is spurious, and new versions of docker-compose fail with:

$ docker-compose --version
docker-compose version 1.23.2, build 1110ad01
$ docker-compose up -d
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.node-red.environment contains {"TZ": "GMT"}, which is an invalid type, it should be a string

Removing the - fixes things (and makes it like the other environment groups).

Prepare for release

Please release the HEAD.

HEAD includes the following changes

  • Included auxiliary backup container(cron-backup) for providing backup support for Nginx, Node-red and Grafana containers.

  • Updated the base images used in all Dockerfile from bionic to focal.

  • Added Mosquitto(MQTT client) Ubuntu ppa repository to install the latest version and fixed ownership issue when accessing Let's encrypt certs.

  • Added TLS/SSL based SMTP authentication support in Postfix container.

  • Some minor changes in the following files: Dockerfile, docker-compose.yml, setup.md and shell scripts.

Document how to backup influxdb

It turns out that (for reasons that are not clear to me) influxd does not like to run backups using localhost (or 127.0.0.1) as the host name. Instead, when running under docker-compose, you must explicitly specify the host to back up as influxdb.

# the following assumes that influxdb data is in the docker-ttn-dashboard `data/influxdb` directory, 
# as it normally is, and that the current directory has `.env` and `docker-compose.yml` properly 
# configured.
DBS=$(ls data/influxdb/data | grep -v _internal)
BACKUP=$(date -I)
for d in $DBS ; do \
    docker-compose run influxdb influxd backup -database "$d" -host influxdb:8088 /var/lib/influxdb/backup/${BACKUP} ; \
done

If you omit the -host influxdb:8088, you will get the following error message, and the backup will not be performed:
backup: dial tcp 127.0.0.1:8088: getsockopt: connection refused

Action required: Let's Encrypt certificate renewals

Hi,

Just received this email from letsencrypt.org:

Hello,

Action is required to prevent your Let's Encrypt certificate renewals from breaking.

Your Let’s Encrypt client used ACME TLS-SNI-01 domain validation to issue a certificate in the past 60 days.

TLS-SNI-01 validation is reaching end-of-life and will stop working on February 13th, 2019.

You need to update your ACME client to use an alternative validation method (HTTP-01, DNS-01 or TLS-ALPN-01) before this date or your certificate renewals will break and existing certificates will start to expire.

If you need help updating your ACME client, please open a new topic in the Help category of the Let's Encrypt community forum:

https://community.letsencrypt.org/c/help

Please answer all of the questions in the topic template so we can help you.

For more information about the TLS-SNI-01 end-of-life please see our API announcement:

https://community.letsencrypt.org/t/february-13-2019-end-of-life-for-all-tls-sni-01-validation-support/74209

Thank you,
Let's Encrypt Staff

My site has a cron job to renew, which pretty much looks like this one:

https://github.com/mcci-catena/docker-ttn-dashboard/blob/d667fe127ac9bde18c47aea5751178e3e0a216bb/apache/certbot_cron.sh#L8

From what I read on their forum, using certbot renew --preferred-challenges http would solve this issue.

What do you think? I can send a PR if this looks OK to you.

Thanks,

Sort out authorization issues for Apache>Nginx migration

We need to switch to Nginx in v1, because it allows mosquitto, etc., to function cleanly.

Unfortunately, AAA works differently. Apache has .htaccess and .htgroup. We use .htaccess to define the logins for all users, and .htgroup to limit access (e.g. for API keys). Nginx only supports .htaccess.

It should be possible to create one Nginx .htaccess file for each controlled service (by using the .htgroup for that service and extracting the relevant entries from the Apache .htaccess file). Right now we have a group for Node-RED and a group for InfluxDB access. We therefore want to create two Nginx .htaccess files, one for Node-RED and the other for InfluxDB. We would look in the Apache .htgroup file for the NodeRed group, find the users, and extract those user records from the Apache .htaccess and put them in the Nginx NodeRed .htaccess. Similarly, we would look in the Apache .htgroup file for the InfluxDB group (whatever we called it), find the users, and extract those user records from the Apache .htaccess and put them in the Nginx InfluxDB .htaccess.

This needs to be done with a script for people upgrading.

There also needs to be a script for people setting up API keys and node-red access; but this is more straightforward, because it doesn't have to do the conversion.

Build crashes while installing postfix

The build routine crashes out while building postfix:

Step 13/17 : run postmap /etc/postfix/generic
---> Running in 677f4c63c3f8
postmap: fatal: bad string length 0 < 1: mydomain =
ERROR: Service 'postfix' failed to build: The command '/bin/sh -c postmap /etc/postfix/generic' returned a non-zero code: 1

Any assistance appreciated. Thank you.

Lots of Node-RED security warnings when building

The security audits complain when building node-red latest. Looks like node-red-contrib-ttn is not up-to-date on core-js. Not sure what has to be done for that, either, because that repo is marked "archived" (read-only).

npm audit fix takes care of the influxdb vulnerability.

Here's the log.

Step 3/10 : RUN npm install node-red-contrib-influxdb
 ---> Running in dec6e7b649ce
npm notice created a lockfile as package-lock.json. You should commit this file.
+ [email protected]
added 3 packages from 6 contributors and audited 1299 packages in 2.543s
found 1 high severity vulnerability
  run `npm audit fix` to fix them, or `npm audit` for details
Removing intermediate container dec6e7b649ce
 ---> df34c3998f06
Step 4/10 : RUN npm install node-red-contrib-ttn
 ---> Running in e57b6726bc5d
npm WARN deprecated [email protected]: core-js@<3.0 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3.

> [email protected] install /usr/src/node-red/node_modules/grpc
> node-pre-gyp install --fallback-to-build --library=static_library

node-pre-gyp WARN Using request for node-pre-gyp https download
[grpc] Success: "/usr/src/node-red/node_modules/grpc/src/node/extension_binary/node-v64-linux-x64-musl/grpc_node.node" is installed via remote

> [email protected] postinstall /usr/src/node-red/node_modules/core-js
> node postinstall || echo "ignore"

Thank you for using core-js ( https://github.com/zloirock/core-js ) for polyfilling JavaScript standard library!

The project needs your help! Please consider supporting of core-js on Open Collective or Patreon:
> https://opencollective.com/core-js
> https://www.patreon.com/zloirock

Also, the author of core-js ( https://github.com/zloirock ) is looking for a good job -)

+ [email protected]
added 117 packages from 133 contributors and audited 2086 packages in 11.498s
found 1 high severity vulnerability
  run `npm audit fix` to fix them, or `npm audit` for details
Removing intermediate container e57b6726bc5d
 ---> a8d8fae80f04
Step 5/10 : ARG NODERED_INSTALL_PLUGINS
 ---> Running in e70f436c5f0e
Removing intermediate container e70f436c5f0e
 ---> 53b297a28874
Step 6/10 : RUN /bin/bash -c 'for iPkg in "$@" ; do echo "npm install $iPkg" ; npm install "$iPkg" || { echo "couldnt install: $iPkg" ; exit 1 ; } ; done' -- ${NODERED_INSTALL_PLUGINS}
 ---> Running in 93a8e3c67a5c
Removing intermediate container 93a8e3c67a5c
 ---> 609372191067
Step 7/10 : RUN npm audit fix
 ---> Running in 59d5bf55cdf2
up to date in 1.939s
fixed 0 of 1 vulnerability in 2086 scanned packages
  1 vulnerability required manual review and could not be updated

Update README.md for v1.0.0

There are architectural changes (NGINX, MQTTS via Mosquitto, etc.) in the new release, allowing us to support multiple networks, etc. This needs to be documented. Maybe add the MCCI engineering report (or pointer to same) in the README, but we should make sure the README is a good introduction to the package.
