hadesarchitect / grafanacassandradatasource
Apache Cassandra Datasource for Grafana
License: MIT License
If the plugin is built from the main branch without TLS support, no connection to a plain, non-TLS-secured Cassandra instance is possible:
Unable to establish connection with the database, gocql: unable to create session: unable to discover protocol version: tls: first record does not look like a TLS handshake
To reproduce: build the plugin from the main branch WITHOUT TLS, then try to connect to a non-TLS database.
"[DEBUG] cassandra-backend-datasource: Parsed queries: 1" logger=plugins.backend pluginId=hadesarchitect-cassandra-datasource
"[DEBUG] cassandra-backend-datasource: Query type: connection" logger=plugins.backend pluginId=hadesarchitect-cassandra-datasource
"[DEBUG] cassandra-backend-datasource: Connecting to cassandra:9042..." logger=plugins.backend pluginId=hadesarchitect-cassandra-datasource
"[ERROR] cassandra-backend-datasource: Unable to establish connection with the database, gocql: unable to create session: unable to discover protocol version: tls: first record does not look like a TLS handshake" logger=plugins.backend pluginId=hadesarchitect-cassandra-datasource
It would be extremely beneficial to be able to set up alerts on price movements, just as you would on an Influx-based panel.
Add support for dashboard variables: https://grafana.com/docs/grafana/latest/dashboards/variables/
Currently, insecure login with self-signed certificates is enabled by default, and it can only be disabled by building the plugin from sources. That is inconvenient for users; we should add a checkbox to make disabling it possible.
Hi!
First of all, thanks for your effort creating this plugin.
Currently I have a working Cassandra cluster being monitored with Prometheus and Grafana and wanted to try your plugin. I installed it following the instructions, but I can't make it connect to Cassandra. Do I need to set up anything else in Cassandra? I can connect normally with cqlsh and haven't had problems trying out Presto (so I assume the connection can be established).
When running sudo tail /var/log/grafana/grafana.log, I can't see anything suspicious; it only says "Request Completed".
I'm using the latest Grafana available (7.4.2).
It looks like we have only one backend instance for all datasources added to Grafana, so we need to keep information about all connections separately.
To reproduce: configure an instance of the plugin, press "Test", then try to test a connection to another source. The result will be the same as in the previous attempt.
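A possible shape of the fix, as a minimal sketch (the registry and session types below are hypothetical stand-ins, not the plugin's actual code): keep one session per datasource ID instead of a single shared backend connection.

```go
package main

import (
	"fmt"
	"sync"
)

// session stands in for *gocql.Session; the real backend would hold
// one live Cassandra session per configured datasource.
type session struct{ host string }

// registry keeps a separate connection per Grafana datasource ID, so
// testing one datasource can no longer reuse another one's session.
type registry struct {
	mu       sync.Mutex
	sessions map[int64]*session
}

// get returns the cached session for the datasource, reconnecting
// when the configured host has changed.
func (r *registry) get(id int64, host string) *session {
	r.mu.Lock()
	defer r.mu.Unlock()
	if s, ok := r.sessions[id]; ok && s.host == host {
		return s
	}
	s := &session{host: host}
	r.sessions[id] = s
	return s
}

func main() {
	r := &registry{sessions: map[int64]*session{}}
	a := r.get(1, "cassandra-a:9042")
	b := r.get(2, "cassandra-b:9042")
	fmt.Println(a.host, b.host)
}
```

With a map keyed by datasource ID, "Save & Test" on one datasource would create or refresh only its own session.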
Thank you so much for providing the plugin. Using it, I am able to connect to my local Cassandra, but when I try to connect to a remote Cassandra instance, I get the error "Unable to establish connection with the database". I can make a cqlsh connection to that remote server from a terminal and can connect to it from a Java program as well. I should also mention that we connect to that server via SSH tunnelling, without a username or password.
I started Grafana in debug mode, and in the logs I see the following error:
[ERROR] cassandra-backend-datasource: Unable to establish connection with the database, gocql: unable to create session: unable to discover protocol version: gocql: no response to connection startup within timeout" logger=plugins.backend pluginId=hadesarchitect-cassandra-datasource
Can you please guide me on how to resolve this?
The default Consistency Level is neither set nor required, but it's impossible to connect without a CL:
Failed to apply option: invalid consistency ""
grafana_1 | t=2021-02-05T15:13:34+0000 lvl=dbug msg="2021-02-05T15:13:34.311Z [DEBUG] cassandra-backend-datasource: Parsed queries: 1" logger=plugins.backend pluginId=hadesarchitect-cassandra-datasource
grafana_1 | t=2021-02-05T15:13:34+0000 lvl=dbug msg= logger=plugins.backend pluginId=hadesarchitect-cassandra-datasource
grafana_1 | t=2021-02-05T15:13:34+0000 lvl=dbug msg="2021-02-05T15:13:34.311Z [DEBUG] cassandra-backend-datasource: Query type: connection" logger=plugins.backend pluginId=hadesarchitect-cassandra-datasource
grafana_1 | t=2021-02-05T15:13:34+0000 lvl=dbug msg= logger=plugins.backend pluginId=hadesarchitect-cassandra-datasource
grafana_1 | t=2021-02-05T15:13:34+0000 lvl=dbug msg="2021-02-05T15:13:34.311Z [DEBUG] cassandra-backend-datasource: Connecting to cassandra:9042..." logger=plugins.backend pluginId=hadesarchitect-cassandra-datasource
grafana_1 | t=2021-02-05T15:13:34+0000 lvl=dbug msg= logger=plugins.backend pluginId=hadesarchitect-cassandra-datasource
grafana_1 | t=2021-02-05T15:13:34+0000 lvl=dbug msg="2021-02-05T15:13:34.311Z [ERROR] cassandra-backend-datasource: Failed to apply option: invalid consistency \"\"" logger=plugins.backend pluginId=hadesarchitect-cassandra-datasource
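A minimal sketch of one possible fix (the helper name below is made up; the real plugin passes the configured string on to gocql): fall back to a sane default when the configured consistency level is empty, instead of failing with invalid consistency "".

```go
package main

import (
	"fmt"
	"strings"
)

// consistencyOrDefault returns a usable consistency level string,
// falling back to QUORUM when the datasource config leaves it empty.
func consistencyOrDefault(s string) string {
	s = strings.TrimSpace(s)
	if s == "" {
		return "QUORUM"
	}
	return strings.ToUpper(s)
}

func main() {
	fmt.Println(consistencyOrDefault(""))
	fmt.Println(consistencyOrDefault("local_one"))
}
```

gocql's ParseConsistencyWrapper could then validate the result and return an error with a clear message instead of panicking on invalid input.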
Currently we do not support limits and offsets at the Cassandra level, which leads to the whole data batch being loaded each time the plugin is used. This should be eliminated by adding time filtering (interval, from->to).
This issue relates to the query configurator, not the raw query mode.
At the moment, the only way to try the plugin is to download it as an archive (the user needs Grafana with administrator access) or build it locally (which needs Docker, a lot of build steps, etc.). We have to provide a simple "one-button" way to get it running.
Options:
https://github.com/grafana/plugin-workflows (specifically https://github.com/grafana/plugin-workflows/blob/master/release.yml)
Work in progress.
Since the plugin is not signed, Grafana will not load it by default.
To work around this issue, add the following to the plugins section of /etc/grafana/grafana.ini:
[plugins]
# Enter a comma-separated list of plugin identifiers to identify plugins that are allowed to be loaded even if they lack a valid signature.
allow_loading_unsigned_plugins = "hadesarchitect-cassandra-datasource"
This could be added to the README.md with little effort.
At the moment, "Allow Filtering" is enabled by default, which is considered bad practice. "Allow Filtering" must be configurable.
FE: TBD
BE: https://github.com/HadesArchitect/grafana-cassandra-source/blob/master/backend/datasource.go#L95
Couldn't we provide a solid docker image people could use, instead of all these commands?
khunter-rmbp15:src kirstenhunter$ docker run --rm -v ${PWD}:/opt/gcds -w /opt/gcds node:12 npm install
Unable to find image 'node:12' locally
12: Pulling from library/node
2587235a7635: Pull complete
953fe5c215cb: Pull complete
d4d3f270c7de: Pull complete
ed36dafe30e3: Pull complete
00e912dd434d: Pull complete
dd25ee3ea38e: Pull complete
2a9b744d457d: Pull complete
cc5d09c61fdf: Pull complete
2f2248a9e475: Pull complete
Digest: sha256:9c5e64d867035cd2b08dbc4a537dbd638c8d761be627c85a00e585309489d6e6
Status: Downloaded newer image for node:12
npm WARN saveError ENOENT: no such file or directory, open '/opt/gcds/package.json'
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN enoent ENOENT: no such file or directory, open '/opt/gcds/package.json'
npm WARN gcds No description
npm WARN gcds No repository field.
npm WARN gcds No README data
npm WARN gcds No license field.
up to date in 0.363s
found 0 vulnerabilities
khunter-rmbp15:src kirstenhunter$
khunter-rmbp15:src kirstenhunter$ docker run --rm -v ${PWD}:/opt/gcds -w /opt/gcds node:12 node node_modules/webpack/bin/webpack.js
internal/modules/cjs/loader.js:818
throw err;
^
Error: Cannot find module '/opt/gcds/node_modules/webpack/bin/webpack.js'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:815:15)
at Function.Module._load (internal/modules/cjs/loader.js:667:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
The Query Configurator (with its dropdowns) could be greatly enhanced by pre-querying the Cassandra datasource and giving the users options to choose from (keyspaces, tables, fields, etc).
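One way to implement this, sketched below: pre-query the cluster's system_schema metadata tables (standard since Cassandra 3.0) and feed the results into the dropdowns. The constant names are illustrative only.

```go
package main

import "fmt"

// CQL statements a configurator could run to populate its dropdowns;
// system_schema is the standard metadata keyspace in Cassandra 3.0+.
const (
	keyspacesQuery = "SELECT keyspace_name FROM system_schema.keyspaces"
	tablesQuery    = "SELECT table_name FROM system_schema.tables WHERE keyspace_name = ?"
	columnsQuery   = "SELECT column_name, type FROM system_schema.columns WHERE keyspace_name = ? AND table_name = ?"
)

func main() {
	// In the plugin these would be executed with the existing gocql
	// session; here we only show the statements themselves.
	fmt.Println(keyspacesQuery)
	fmt.Println(tablesQuery)
	fmt.Println(columnsQuery)
}
```

The column types returned by the last query would also let the configurator offer only timestamp columns for "Time Column" and numeric columns for "Value Column".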
Wiki
On slow, inefficient queries (for example, with ALLOW FILTERING) we run into the GoCQL timeout period.
Suggested solutions:
Possible improvement:
It's really great work, but unfortunately I have a legacy Cassandra table that uses "bigint" as the only "Time Column", and it seems the "Time Column" only supports the "timestamp" type.
It would be great if it could support "$__unixEpochFrom" and "$__unixEpochTo" like other database datasources do.
Thank You.
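Support for a bigint time column could look roughly like this, a sketch under the assumption that the column stores epoch milliseconds (the helper name is made up):

```go
package main

import (
	"fmt"
	"time"
)

// epochMillisToTime converts a bigint column assumed to hold epoch
// milliseconds into the time.Time a timestamp column already yields.
func epochMillisToTime(ms int64) time.Time {
	return time.Unix(ms/1000, (ms%1000)*int64(time.Millisecond)).UTC()
}

func main() {
	fmt.Println(epochMillisToTime(1585740119001))
}
```

The $__unixEpochFrom/$__unixEpochTo macros would then substitute the panel's time range as epoch values, as other SQL datasources do.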
It's not really clear what I'm doing here. The Docker steps don't leave me with a Docker container; do I then follow the instructions in the other sections? We need to have a clear path (do one of these).
Docker Way (Recommended)
docker run --rm -v ${PWD}:/opt/gcds -w /opt/gcds node:12 npm install
docker run --rm -v ${PWD}:/opt/gcds -w /opt/gcds node:12 node node_modules/webpack/bin/webpack.js
docker run --rm -v ${PWD}:/go/src/github.com/ha/gcp -w /go/src/github.com/ha/gcp/backend golang go mod vendor
docker run --rm -v ${PWD}:/go/src/github.com/ha/gcp -w /go/src/github.com/ha/gcp golang go build -i -o ./dist/cassandra-plugin_linux_amd64 ./backend
Locally
npm install
webpack
cd backend && go mod vendor
go build -i -o ./dist/cassandra-plugin_linux_amd64 ./backend
Building the backend with TLS support [OPTIONAL]
go get -u github.com/go-bindata/go-bindata/... - download the go-bindata package
Place your TLS certificate and TLS key into the ./backend/creds folder
cd ./backend && go-bindata -o assets.go ./creds && cd .. - embed the credential files as a .go file
go build -i -ldflags "-X main.CertPath=/creds/cert_file_name -X main.KeyPath=/creds/key_file_name -X main.InsecureSkipVerify=true" -o ./dist/cassandra-plugin_linux_amd64 ./backend - build the binary with the required variables filled in. If you'd like to use a rootCA, run go build -i -ldflags "-X main.RootCA=/creds/root_ca_file_name" -o ./dist/cassandra-plugin_linux_amd64 ./backend instead.
We should update our plugin from the legacy implementation to the new Grafana Plugin SDK:
https://grafana.com/tutorials/build-a-data-source-backend-plugin/
Did I start a cassandra_1 container? I don't think I started a cassandra_1 container:
khunter-rmbp15:GrafanaCassandraDatasource kirstenhunter$ docker-compose exec cassandra cqlsh -u cassandra -p cassandra -f ./test_data.cql
ERROR: No container found for cassandra_1
The paths through the instructions need to be crystal clear. Right now there's no clue which things I should or shouldn't do.
khunter-rmbp15:GrafanaCassandraDatasource kirstenhunter$ docker-compose up -d
Starting grafanacassandradatasource_cassandra_1 ...
Starting grafanacassandradatasource_cassandra_1 ... done
khunter-rmbp15:GrafanaCassandraDatasource kirstenhunter$ docker-compose exec cassandra cqlsh -u cassandra -p cassandra -f ./test_data.cql
ERROR: No container found for cassandra_1
A warning created by dependabot
The latest possible version that can be installed is 2.1.2 because of the following conflicting dependencies:
- [email protected] requires serialize-javascript@^2.1.2
- [email protected] requires serialize-javascript@^2.1.2 via [email protected]
The earliest fixed version is 3.1.0.
Hey!
I just installed this plugin on my Grafana installation. When I try to connect it to my Cassandra cluster, it fails with "metric request error". The fields are correct (I'm almost 100% sure).
Should I change anything in the Cassandra config before trying to connect these two tools? Thanks in advance.
Unable to add the datasource; getting "Metric request error". Please advise.
I've set up plugin 0.4.2 with Grafana 7.4.1. I'm using it with ScyllaDB (which should behave the same as Cassandra). I was not getting any points for my table, so I tried the example:
CREATE TABLE IF NOT EXISTS temperature (
sensor_id uuid,
registered_at timestamp,
temperature int,
location text,
PRIMARY KEY ((sensor_id), registered_at)
);
insert into temperature (sensor_id, registered_at, temperature, location) values (99051fe9-6a9c-46c2-b949-38ef78858dd0, 2020-04-01T11:21:59.001+0000, 18, "kitchen");
insert into temperature (sensor_id, registered_at, temperature, location) values (99051fe9-6a9c-46c2-b949-38ef78858dd0, 2020-04-01T11:22:59.001+0000, 19, "kitchen");
insert into temperature (sensor_id, registered_at, temperature, location) values (99051fe9-6a9c-46c2-b949-38ef78858dd0, 2020-04-01T11:23:59.001+0000, 20, "kitchen");
I had to make minor changes (quotes) to make it work; otherwise I was getting errors:
insert into temperature (sensor_id, registered_at, temperature, location) values (99051fe9-6a9c-46c2-b949-38ef78858dd0, '2020-04-01T11:21:59.001+0000', 18, 'kitchen');
insert into temperature (sensor_id, registered_at, temperature, location) values (99051fe9-6a9c-46c2-b949-38ef78858dd0, '2020-04-01T11:22:59.001+0000', 19, 'kitchen');
insert into temperature (sensor_id, registered_at, temperature, location) values (99051fe9-6a9c-46c2-b949-38ef78858dd0, '2020-04-01T11:23:59.001+0000', 20, 'kitchen');
{
"request": {
"url": "api/tsdb/query",
"method": "POST",
"data": {
"queries": [
{
"queryType": "query",
"target": "select timestamp, value from keyspace.table where id=123e4567;",
"refId": "A",
"rawQuery": false,
"type": "timeserie",
"datasourceId": 2,
"filtering": "",
"keyspace": "test",
"table": "temperature",
"columnTime": "registered_at",
"columnValue": "temperature",
"columnId": "sensor_id",
"valueId": "99051fe9-6a9c-46c2-b949-38ef78858dd0"
}
],
"from": "1613529163648",
"to": "1613550763648"
},
"hideFromInspector": false
},
"response": {
"results": {
"A": {
"refId": "A",
"series": [
{
"name": "99051fe9-6a9c-46c2-b949-38ef78858dd0",
"points": []
}
],
"tables": null,
"dataframes": null
}
}
}
}
I tried with a raw query as well, but the points are always empty []; I tried with my own table too.
How can I debug this issue?
Setup
create keyspace test with replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
create table test.employee ( emp_id text, emp_name text, login_time timestamp, PRIMARY KEY (emp_id));
insert into test.employee (emp_id, emp_name, login_time) values ('id1', 'Empl1', '2021-02-14T18:54:13+00:00');
insert into test.employee (emp_id, emp_name, login_time) values ('id2', 'Empl2', '2021-02-14T17:54:13+00:00');
insert into test.employee (emp_id, emp_name, login_time) values ('id3', 'Empl3', '2021-02-14T16:54:13+00:00');
Query
SELECT emp_id, COUNT(*), login_time FROM test.employee WHERE emp_id IN ('emp1', 'emp2', 'emp3') AND login_time > $__timeFrom and login_time < $__timeTo allow filtering
Logs
[DEBUG] cassandra-backend-datasource: Executing CQL query: 'SELECT emp_id, COUNT(*), login_time FROM test.employee WHERE emp_id IN ('emp1', 'emp2', 'emp3') AND login_time > 1613307990119 and login_time < 1613329590119 allow filtering' ..."
[ERROR] cassandra-backend-datasource: Error while processing a query: can not unmarshal bigint into *float64"
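The error means gocql returned an int64 (CQL bigint, which COUNT(*) produces) while the plugin scans values into *float64. A query-side workaround that may help is CAST(COUNT(*) AS double) (CAST is available in Cassandra 3.x+); a plugin-side fix could widen the numeric types, as in this sketch (hypothetical helper name):

```go
package main

import "fmt"

// toFloat64 widens the numeric Go types gocql may return for a value
// column (e.g. int64 for bigint, int for int, float64 for double)
// into the float64 used for data points; ok is false for non-numerics.
func toFloat64(v interface{}) (f float64, ok bool) {
	switch n := v.(type) {
	case float64:
		return n, true
	case float32:
		return float64(n), true
	case int64:
		return float64(n), true
	case int:
		return float64(n), true
	}
	return 0, false
}

func main() {
	f, ok := toFloat64(int64(3))
	fmt.Println(f, ok)
}
```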
Greetings.
As I can see, the plugin initially worked with Cassandra only, and the ability to work with AWS Keyspaces was added in this PR.
Once the appropriate PR was merged, I built the code myself (with the necessary Amazon certificate included), as no official release with the necessary commits has been created yet.
The built plugin installed successfully.
With the help of the cqlsh tool, I can successfully connect to AWS Keyspaces, so I wouldn't expect any problems doing the same with the plugin if the same certificate is in use.
Yet a datasource that uses the plugin can't connect to AWS Keyspaces.
There are no logs in Grafana, and I can't find any way to enable the DEBUG level to see what exactly is wrong with my connection.
Does anyone already have a positive experience connecting this plugin to AWS Keyspaces?
Thank you.
Hi, in the current Cassandra Grafana plugin implementation (as of Feb 2021), only time-series data can be visualized in Grafana.
It would be great if the plugin would allow free form CQL in order to display for example reports in Table format like other SQL Grafana datasources allow (under "Format As", we should be able to choose Timeseries or Table). We should still have access to $__timeFrom and $__timeTo constructs in order to have a time filter in the free form query.
Is it possible to add this to the "to do" list?
Thank you!
When creating line charts from Cassandra queries, multiple lines can be put on the same panel. In that case, the "ID value" field ends up in the legend both times, even if I used the same ID value and wanted to query two different fields.
With Influx-based queries I could always rename these lines using aliases.
Hi,
I installed Grafana 7.2.1 and also set up your Cassandra datasource as a plugin.
I tried to create a query for a panel, providing the keyspace, table name, ID, etc.
But the data is not getting fetched.
Thank you.
Regards
Arabinda
Cassandra table structure - MONITOR_DATA(resourceId text, paramName text, interfaceType text, type text, value_datatype text, value_of_resource text, unit text, timeStamp timestamp, PRIMARY KEY((resourceId, paramName), timeStamp)) with clustering order by (timeStamp desc)
I am trying to create a query on the Cassandra dashboard in the Grafana plugin; below are the fields I am filling in:
keyspace - mykeyspace, table - monitor_data,
Time Column - timestamp, Value Column - value_of_resource,
ID Column - paramname, ID Value -
What am I supposed to fill in for the ID Value? It expects some kind of column UUID as input, and because of this the query fails when I check the Query Inspector. How can I get the column UUID?
error message in query inspector = message:"Invalid UUID constant (0f770291-90c3-4952-98f8-324ffd81eac4) for "paramname" of type text"
(I tried filling the UUID with some random value).
Any help would be appreciated.
Thanks.
DataStax Cassandra-as-a-Service requires a special connection.
Currently, the plugin has proved to work on Linux and OS X, but not on Windows.
Todo:
test_data.cql
After installation (unzipping grafana-cassandra-source-master.zip to /var/lib/grafana/plugin/cassandra), I do see the datasource under Plugins in the web interface.
When I click "Apache Cassandra" under Configuration, I get a continuous "loading" message. Also, when I click "Apache Cassandra" from datasources, I get the error message in the image below:
Unexpected token '<' Evaluating http://localhost:3000/public/plugins/hadesarchitect-cassandra-datasource/module.js Loading plugins/hadesarchitect-cassandra-datasource/module
Any assistance would be appreciated. We are trying to use Grafana on top of Cassandra.
Similar to #33 but relates to the raw query mode, not the query configurator
"Currently we are not supporting limits and offsets on cassandra level, leading to the whole data batch unloading each time we are using this plugin. This should be eliminated by adding time filtering (interval, from->to)"
The design is to be discussed first.
Grafana v7.0 released: New plugin architecture