michelin / snowflake-grafana-datasource

Snowflake grafana datasource plugin allows Snowflake data to be visually represented in Grafana dashboards.

License: Apache License 2.0

Languages: JavaScript 5.45%, Go 58.60%, TypeScript 35.00%, Dockerfile 0.96%
Topics: snowflake, grafana, datasource, plugin, alerting, snowflake-plugin, series, time-series

Contributors: alexnederlof, benesch, devnied, heanlan, inacionery, rumbin

snowflake-grafana-datasource's Issues

TypeError: t.map is not a function

With plugin version 1.3.0 running on Grafana v9.2.2, once I had created some Snowflake dashboards I started to get errors like this one:

An unexpected error happened
Details
TypeError: t.map is not a function

    at Io (https://ds-dashboards.example.com/public/build/DashboardPage.addf7c66abb17ffa5168.js:363:1276)
    at div
    at div
    at https://ds-dashboards.example.com/public/build/3593.50eee21d2f7b3f7d0151.js:2941:48
    at div
    at https://ds-dashboards.example.com/public/build/DashboardPage.addf7c66abb17ffa5168.js:426:1562
    at div
    at div
    at n (https://ds-dashboards.example.com/public/build/DashboardPage.addf7c66abb17ffa5168.js:99:6538)
    at div
    at n (https://ds-dashboards.example.com/public/build/DashboardPage.addf7c66abb17ffa5168.js:99:8487)
    at div
    at n (https://ds-dashboards.example.com/public/build/DashboardPage.addf7c66abb17ffa5168.js:99:6538)
    at div
    at n (https://ds-dashboards.example.com/public/build/DashboardPage.addf7c66abb17ffa5168.js:99:8487)
    at Zt (https://ds-dashboards.example.com/public/build/DashboardPage.addf7c66abb17ffa5168.js:99:14074)
    at div
    at div
    at div
    at b (https://ds-dashboards.example.com/public/build/3593.50eee21d2f7b3f7d0151.js:5742:1111)
    at Si (https://ds-dashboards.example.com/public/build/DashboardPage.addf7c66abb17ffa5168.js:443:1519)
    at g (https://ds-dashboards.example.com/public/build/4713.9debbfba7033186a58bf.js:2:999564)
    at WithTheme(Connect(Si))
    at Na (https://ds-dashboards.example.com/public/build/DashboardPage.addf7c66abb17ffa5168.js:566:7310)
    at DashboardPage
    at g (https://ds-dashboards.example.com/public/build/4713.9debbfba7033186a58bf.js:2:999564)
    at Suspense
    at c (https://ds-dashboards.example.com/public/build/3593.50eee21d2f7b3f7d0151.js:1194:301)
    at fc (https://ds-dashboards.example.com/public/build/3593.50eee21d2f7b3f7d0151.js:8749:362)
    at t (https://ds-dashboards.example.com/public/build/4713.9debbfba7033186a58bf.js:2:1014882)
    at t (https://ds-dashboards.example.com/public/build/4713.9debbfba7033186a58bf.js:2:1016903)
    at main
    at Ql (https://ds-dashboards.example.com/public/build/3593.50eee21d2f7b3f7d0151.js:8710:13067)
    at t (https://ds-dashboards.example.com/public/build/4713.9debbfba7033186a58bf.js:2:1011500)
    at div
    at o (https://ds-dashboards.example.com/public/build/3593.50eee21d2f7b3f7d0151.js:2035:4306)
    at t.KBarProvider (https://ds-dashboards.example.com/public/build/4713.9debbfba7033186a58bf.js:2:261503)
    at l (https://ds-dashboards.example.com/public/build/3593.50eee21d2f7b3f7d0151.js:6088:19180)
    at c (https://ds-dashboards.example.com/public/build/3593.50eee21d2f7b3f7d0151.js:1194:301)
    at u (https://ds-dashboards.example.com/public/build/3593.50eee21d2f7b3f7d0151.js:1194:855)
    at y (https://ds-dashboards.example.com/public/build/4713.9debbfba7033186a58bf.js:2:2892432)
    at tc (https://ds-dashboards.example.com/public/build/3593.50eee21d2f7b3f7d0151.js:8746:21930)
    at l (https://ds-dashboards.example.com/public/build/4713.9debbfba7033186a58bf.js:2:997609)
    at qc (https://ds-dashboards.example.com/public/build/3593.50eee21d2f7b3f7d0151.js:8762:115)

I've already tried deleting and recreating the dashboard, but I still get these errors. Before installing and using the plugin I never observed this behaviour, so I suspect the problem is related to it.

Include Sample Dashboards

It would be great if you could include a folder with sample dashboards that we could import to demo this plugin.

I noticed the screenshots included in your blog post on michelin.io titled "Embracing Kaizen Time..."

Would you be able to provide dashboards for monitoring the Snowflake account usage, consumption, and other basic metrics?

Thanks!

Time series: pivoting of label column generates phantom values

Severity

This is pretty severe, since the chart is lying.

Description

In a time series chart on a table in long/narrow format (EAV, entity-attribute-value), the pivoting operation on the attribute/label column causes the values of previous data points to be forward-filled whenever there is no data for the given attribute/label.

How to reproduce

Minimal example:

select
    ts::timestamp_ntz as ts
    , label
    , val
from (values
    (1709050594, 'state_1', 1)
    , (1709050794, 'state_2', 2)
    , (1709050994, 'state_3', 3)
    , (1709051000, 'state_3', 3)
) s(ts, label, val)
order by
    ts

Create a time series chart out of this query.
In the following screenshot I am using ${__field.labels.LABEL} as the Display name formatter:

image

Table view:

image

Expected behavior

Only the data points returned by the query should be present in the table view and in the time series chart; missing values should remain null.
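If the pivot happens via the long-to-wide conversion in the Grafana plugin SDK (an assumption; the plugin's exact call site is not shown in this report), the fill mode passed to that conversion would explain the forward-filling. A minimal sketch of the relevant knob:

package pivot

import "github.com/grafana/grafana-plugin-sdk-go/data"

// toWide pivots a long-format frame (time, label, value) into a wide frame.
// With FillModeNull, timestamps that have no value for a given label stay
// null; with FillModePrevious the SDK forward-fills the previous value,
// which would produce exactly the phantom points described above.
func toWide(long *data.Frame) (*data.Frame, error) {
	return data.LongToWide(long, &data.FillMissing{Mode: data.FillModeNull})
}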

%*p characters in password causing problem

Our Snowflake password looks like
xxxxxxx%*pxxxxxxxxx

If we try to authenticate using this password, we get an error: Validation query error : invalid URL escape "%*p"
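A sketch of a likely root cause and workaround, assuming the plugin builds a user:password@account DSN string that is later parsed as a URL (the raw % in the password then reads as a malformed percent-escape). Percent-encoding the password first, or letting the gosnowflake driver build the DSN from a Config struct, avoids the parse error:

package main

import (
	"fmt"
	"log"
	"net/url"

	sf "github.com/snowflakedb/gosnowflake"
)

func main() {
	password := `xxxxxxx%*pxxxxxxxxx` // same shape as the reported password

	// Option 1: percent-encode before embedding the password in a DSN string.
	fmt.Println(url.QueryEscape(password))

	// Option 2: let the driver assemble the DSN; no manual escaping needed.
	// Account and user values here are hypothetical.
	dsn, err := sf.DSN(&sf.Config{
		Account:  "myaccount",
		User:     "grafana",
		Password: password,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(dsn)
}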

queryresultmeta object comes as custom object

When I run a time series query to fetch the server memory used by a group of hosts (or all hosts), the Grafana query result meta always shows as a custom object with rowCount: 1. Because of this, the flowcharting panel can't pick up the right metric; it always returns "Apply to column" table metadata instead of the host column values.

image

image

Whereas using the MySQL connector shows timeseries-wide:

image

Can we sign the Snowflake plug-in

This is not necessarily an issue, but I wasn't sure how else to reach out to the contributors here...

Is it possible to sign this data source?

Thanks,

Kurt

Bug: plugin requires `TIMESTAMP_TYPE_MAPPING = TIMESTAMP_NTZ`

Description

Currently this connector uses the generic to_timestamp() functions, which depend on the account/user/session parameter TIMESTAMP_TYPE_MAPPING.

This is an issue if this mapping is defined as, e.g., TIMESTAMP_TYPE_MAPPING = TIMESTAMP_LTZ for the whole Snowflake account:

SQL compilation error: Function TIME_SLICE does not support TIMESTAMP_LTZ(9) argument type

Suggested solutions

Solution 1

The connector issues the following query after initializing the session, before running charts' queries:

alter session set TIMESTAMP_TYPE_MAPPING = TIMESTAMP_NTZ;

Solution 2

We replace all to_timestamp() (and related) function calls with the explicit timestamp_ntz versions, e.g. to_timestamp_ntz(), try_to_timestamp_ntz()...
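A minimal sketch of Solution 1, assuming the backend opens its connection through database/sql with the gosnowflake driver (the DSN here is a placeholder):

package main

import (
	"database/sql"
	"log"

	_ "github.com/snowflakedb/gosnowflake"
)

func main() {
	db, err := sql.Open("snowflake", "user:pass@account/db") // hypothetical DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Force NTZ mapping before any chart query runs, so TIME_SLICE and
	// friends always receive TIMESTAMP_NTZ regardless of account defaults.
	if _, err := db.Exec("ALTER SESSION SET TIMESTAMP_TYPE_MAPPING = TIMESTAMP_NTZ"); err != nil {
		log.Fatal(err)
	}
}

One caveat with this approach: database/sql pools connections, so the ALTER SESSION would have to run once per pooled connection rather than once per process.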

The plugin crashes when query returns no results.

The plugin crashes when a query returns no results. Is there a way to prevent that?
I'm using the plugin to monitor Snowflake tasks, and when everything is healthy the result has no rows, which causes a plugin error.

Here are the logs:

Nov 17 09:40:47 affluent-hub grafana[25533]: logger=plugin.michelin-snowflake-datasource t=2023-11-17T09:40:47.751451605Z level=info msg="Query config" config="map[queryText:SELECT DATABASE_NAME, SCHEMA_NAME, NAME, STATE, ERROR_MESSAGE, MAX(SCHEDULED_TIME) AS last_seen, COUNT(*) AS count\n  FROM SNOWFLAKE.account_usage.task_history\n WHERE STATE NOT IN ('CANCELLED', 'SKIPPED', 'SUCCEEDED')\n   AND $__timeFilter(SCHEDULED_TIME)\nGROUP BY DATABASE_NAME, SCHEMA_NAME, NAME, STATE, ERROR_MESSAGE\nHAVING count > 1\nORDER BY count DESC; queryType:table timeColumns:[time]]"
Nov 17 09:40:47 affluent-hub grafana[25533]: logger=plugin.michelin-snowflake-datasource t=2023-11-17T09:40:47.751568117Z level=info msg=Query finalQuery="SELECT DATABASE_NAME, SCHEMA_NAME, NAME, STATE, ERROR_MESSAGE, MAX(SCHEDULED_TIME) AS last_seen, COUNT(*) AS count\n  FROM SNOWFLAKE.account_usage.task_history\n WHERE STATE NOT IN ('CANCELLED', 'SKIPPED', 'SUCCEEDED')\n   AND CONVERT_TIMEZONE('UTC', 'UTC', SCHEDULED_TIME) > '2023-11-17T03:40:47.514Z' AND CONVERT_TIMEZONE('UTC', 'UTC', SCHEDULED_TIME) < '2023-11-17T09:40:47.514Z'\nGROUP BY DATABASE_NAME, SCHEMA_NAME, NAME, STATE, ERROR_MESSAGE\nHAVING count > 1\nORDER BY count DESC"
Nov 17 09:40:47 affluent-hub grafana[25533]: logger=plugin.michelin-snowflake-datasource t=2023-11-17T09:40:47.753939979Z level=info msg="Query config" config="map[queryText:SELECT SCHEDULED_TIME AS date, ERROR_MESSAGE, COUNT(*) AS count\n  FROM SNOWFLAKE.account_usage.task_history\n WHERE STATE NOT IN ('CANCELLED', 'SKIPPED', 'SUCCEEDED')\n   AND $__timeFilter(scheduled_time) \nGROUP BY date, ERROR_MESSAGE\nORDER by date ; queryType:time series timeColumns:[date]]"
Nov 17 09:40:47 affluent-hub grafana[25533]: logger=plugin.michelin-snowflake-datasource t=2023-11-17T09:40:47.754032368Z level=info msg=Query finalQuery="SELECT SCHEDULED_TIME AS date, ERROR_MESSAGE, COUNT(*) AS count\n  FROM SNOWFLAKE.account_usage.task_history\n WHERE STATE NOT IN ('CANCELLED', 'SKIPPED', 'SUCCEEDED')\n   AND CONVERT_TIMEZONE('UTC', 'UTC', scheduled_time) > '2023-11-17T03:40:47.514Z' AND CONVERT_TIMEZONE('UTC', 'UTC', scheduled_time) < '2023-11-17T09:40:47.514Z' \nGROUP BY date, ERROR_MESSAGE\nORDER by date  LIMIT 1656"
Nov 17 09:40:47 affluent-hub grafana[25533]: logger=plugin.michelin-snowflake-datasource t=2023-11-17T09:40:47.819506031Z level=info msg="Query config" config="map[queryText:SELECT LAST_LOAD_TIME, FIRST_ERROR_MESSAGE, SUM(ERROR_COUNT)\n  FROM snowflake.account_usage.copy_history\n WHERE $__timeFilter(LAST_LOAD_TIME) \nGROUP BY LAST_LOAD_TIME, FIRST_ERROR_MESSAGE\nORDER BY LAST_LOAD_TIME ASC; queryType:time series timeColumns:[date]]"
Nov 17 09:40:47 affluent-hub grafana[25533]: logger=plugin.michelin-snowflake-datasource t=2023-11-17T09:40:47.819616075Z level=info msg=Query finalQuery="SELECT LAST_LOAD_TIME, FIRST_ERROR_MESSAGE, SUM(ERROR_COUNT)\n  FROM snowflake.account_usage.copy_history\n WHERE CONVERT_TIMEZONE('UTC', 'UTC', LAST_LOAD_TIME) > '2023-11-17T03:40:47.514Z' AND CONVERT_TIMEZONE('UTC', 'UTC', LAST_LOAD_TIME) < '2023-11-17T09:40:47.514Z' \nGROUP BY LAST_LOAD_TIME, FIRST_ERROR_MESSAGE\nORDER BY LAST_LOAD_TIME ASC LIMIT 1656"
Nov 17 09:40:47 affluent-hub grafana[25533]: logger=plugin.michelin-snowflake-datasource t=2023-11-17T09:40:47.991167458Z level=info msg="Query config" config="map[queryText:SELECT PIPE_CATALOG_NAME AS DATABASE_NAME, PIPE_SCHEMA_NAME AS SCHEMA_NAME, PIPE_NAME, FIRST_ERROR_MESSAGE, SUM(ERROR_COUNT) AS ERROR_COUNT\n  FROM snowflake.account_usage.copy_history\n WHERE ERROR_COUNT > 0\n   AND $__timeFilter(LAST_LOAD_TIME)\n   GROUP BY DATABASE_NAME, SCHEMA_NAME, PIPE_NAME, FIRST_ERROR_MESSAGE; queryType:table timeColumns:[time]]"
Nov 17 09:40:47 affluent-hub grafana[25533]: logger=plugin.michelin-snowflake-datasource t=2023-11-17T09:40:47.991294673Z level=info msg=Query finalQuery="SELECT PIPE_CATALOG_NAME AS DATABASE_NAME, PIPE_SCHEMA_NAME AS SCHEMA_NAME, PIPE_NAME, FIRST_ERROR_MESSAGE, SUM(ERROR_COUNT) AS ERROR_COUNT\n  FROM snowflake.account_usage.copy_history\n WHERE ERROR_COUNT > 0\n   AND CONVERT_TIMEZONE('UTC', 'UTC', LAST_LOAD_TIME) > '2023-11-17T03:40:47.514Z' AND CONVERT_TIMEZONE('UTC', 'UTC', LAST_LOAD_TIME) < '2023-11-17T09:40:47.514Z'\n   GROUP BY DATABASE_NAME, SCHEMA_NAME, PIPE_NAME, FIRST_ERROR_MESSAGE"
Nov 17 09:40:51 affluent-hub grafana[25533]: logger=plugin.michelin-snowflake-datasource t=2023-11-17T09:40:51.984634716Z level=error msg="Could not convert long frame to wide frame" err="can not convert to wide series, input fields have no rows"
Nov 17 09:40:51 affluent-hub grafana[25533]: logger=context userId=1 orgId=1 uname=admin t=2023-11-17T09:40:51.990688787Z level=error msg="Internal server error" error="[plugin.downstreamError] client: failed to query data: Failed to query data: rpc error: code = Unavailable desc = error reading from server: EOF" remote_addr=163.116.162.123 traceID=
Nov 17 09:40:51 affluent-hub grafana[25533]: logger=context userId=1 orgId=1 uname=admin t=2023-11-17T09:40:51.99078775Z level=error msg="Internal server error" error="[plugin.downstreamError] client: failed to query data: Failed to query data: rpc error: code = Unavailable desc = error reading from server: EOF" remote_addr=163.116.162.123 traceID=
Nov 17 09:40:51 affluent-hub grafana[25533]: logger=context userId=1 orgId=1 uname=admin t=2023-11-17T09:40:51.990810508Z level=error msg="Request Completed" method=POST path=/api/ds/query status=500 remote_addr=163.116.162.123 time_ms=4381 duration=4.381258906s size=116 referer="https://grafana.afflu.net/d/d4d6589b-97d0-4ff8-83c6-1a64a74501ed/snowflake-monitoring?orgId=1" handler=/api/ds/query
Nov 17 09:40:51 affluent-hub grafana[25533]: logger=context userId=1 orgId=1 uname=admin t=2023-11-17T09:40:51.990863974Z level=error msg="Request Completed" method=POST path=/api/ds/query status=500 remote_addr=163.116.162.123 time_ms=4376 duration=4.376350174s size=116 referer="https://grafana.afflu.net/d/d4d6589b-97d0-4ff8-83c6-1a64a74501ed/snowflake-monitoring?orgId=1" handler=/api/ds/query
Nov 17 09:40:51 affluent-hub grafana[25533]: logger=context userId=1 orgId=1 uname=admin t=2023-11-17T09:40:51.990688783Z level=error msg="Internal server error" error="[plugin.downstreamError] client: failed to query data: Failed to query data: rpc error: code = Unavailable desc = error reading from server: EOF" remote_addr=163.116.162.123 traceID=
Nov 17 09:40:51 affluent-hub grafana[25533]: logger=context userId=1 orgId=1 uname=admin t=2023-11-17T09:40:51.991104203Z level=error msg="Request Completed" method=POST path=/api/ds/query status=500 remote_addr=163.116.162.123 time_ms=4078 duration=4.078323286s size=116 referer="https://grafana.afflu.net/d/d4d6589b-97d0-4ff8-83c6-1a64a74501ed/snowflake-monitoring?orgId=1" handler=/api/ds/query
Nov 17 09:40:51 affluent-hub grafana[25533]: logger=context userId=1 orgId=1 uname=admin t=2023-11-17T09:40:51.99121181Z level=error msg="Internal server error" error="[plugin.downstreamError] client: failed to query data: Failed to query data: rpc error: code = Unavailable desc = error reading from server: EOF" remote_addr=163.116.162.123 traceID=
Nov 17 09:40:51 affluent-hub grafana[25533]: logger=context userId=1 orgId=1 uname=admin t=2023-11-17T09:40:51.991281956Z level=error msg="Request Completed" method=POST path=/api/ds/query status=500 remote_addr=163.116.162.123 time_ms=4174 duration=4.17457895s size=116 referer="https://grafana.afflu.net/d/d4d6589b-97d0-4ff8-83c6-1a64a74501ed/snowflake-monitoring?orgId=1" handler=/api/ds/query
Nov 17 09:40:51 affluent-hub grafana[25533]: logger=plugin.michelin-snowflake-datasource t=2023-11-17T09:40:51.992052969Z level=error msg="plugin process exited" path=/var/lib/grafana/plugins/michelin-snowflake-datasource/gpx_snowflake-datasource_linux_amd64 pid=2563 error="exit status 2"
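The last plugin log line before the crash ("can not convert to wide series, input fields have no rows") matches the error returned by the SDK's long-to-wide conversion, so a guard before that call would likely prevent the exit. A sketch, assuming the plugin converts its frames with data.LongToWide:

package frames

import "github.com/grafana/grafana-plugin-sdk-go/data"

// safeLongToWide returns an empty frame untouched instead of converting it,
// since data.LongToWide errors out (and the plugin then dies) on zero rows.
func safeLongToWide(frame *data.Frame) (*data.Frame, error) {
	if frame.Rows() == 0 {
		return frame, nil
	}
	return data.LongToWide(frame, nil)
}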

No data in response.

Hi,

I installed this on a test grafana instance at work and hooked it up to snowflake.

I can see the query going out to Snowflake and returning results, but no data is being plotted on the graph.

I've attached the panel JSON and the query JSON from the query inspector.

panel.txt
query.txt

Timeseries-wide group by 2 columns

Grafana Version: 9.1.1

Executed Query:

SELECT $__timeGroup(DATETIME,$__interval) as Time, HOST as metric, (max(USED/1024*1024)/max(TOTAL/1024*1024)*100) AS used_percent FROM SERVER_MEMORY WHERE $__timeFilter(DATETIMEUTC,$__interval) AND SERVICE='service1' GROUP BY 1,2 ORDER BY 1

Using MySQL:

image

Whereas using the Snowflake plugin:

image

Expected result: we should see a separate column for each HOST (metric) in a given time series.

Metric Request Error on simple recursive query

Grafana Version: v7.5.5 (b5190ee547)
Plugin Version: 1.2.0

I have the following query

WITH RECURSIVE recurseruns AS (
 SELECT p.id as pr, p.created_at as pr_create, runs.upstream_run_id, runs.run_id, runs.job_name, runs.started, runs.completed
 FROM "FIVETRAN_DATABASE"."FB_CI_METRICS_PUBLIC".runs
 JOIN "FIVETRAN_DATABASE"."FB_CI_METRICS_PUBLIC".premerge_stats p
 ON p.run_id = runs.run_id
  WHERE p.run_id IS NOT NULL
  AND CONVERT_TIMEZONE('UTC', 'UTC', p.created_at::TIMESTAMP_NTZ) < '2022-10-11T17:18:15.49Z' AND CONVERT_TIMEZONE('UTC', 'UTC', p.created_at::TIMESTAMP_NTZ) > '2022-10-11T11:18:15.49Z'
  UNION ALL
    SELECT rt.pr as pr, rt.pr_create as pr_create, r.upstream_run_id, r.run_id, r.job_name, r.started, r.completed
    FROM "FIVETRAN_DATABASE"."FB_CI_METRICS_PUBLIC".runs r
    JOIN recurseruns rt ON rt.run_id = r.upstream_run_id
  )
SELECT
  rr.pr_create as "time",
  SUM(CASE WHEN (rr.job_name LIKE 's3_%') THEN TIMEDIFF('seconds', rr.started, rr.completed) END) as s3_machine_time
FROM recurseruns rr
group by (rr.pr, "time")
order by "time";

The error

{
  "request": {
    "url": "api/ds/query",
    "method": "POST",
    "data": {
      "queries": [
        {
          "refId": "A",
          "key": "Q-56419b51-e0d2-40db-8641-8828c213e003-0",
          "queryText": "WITH RECURSIVE recurseruns AS (\n SELECT p.id as pr, p.created_at as pr_create, runs.upstream_run_id, runs.run_id, runs.job_name, runs.started, runs.completed\n FROM \"FIVETRAN_DATABASE\".\"FB_CI_METRICS_PUBLIC\".runs\n JOIN \"FIVETRAN_DATABASE\".\"FB_CI_METRICS_PUBLIC\".premerge_stats p\n ON p.run_id = runs.run_id\n  WHERE p.run_id IS NOT NULL\n  AND CONVERT_TIMEZONE('UTC', 'UTC', p.created_at::TIMESTAMP_NTZ) < '2022-10-11T17:18:15.49Z' AND CONVERT_TIMEZONE('UTC', 'UTC', p.created_at::TIMESTAMP_NTZ) > '2022-10-11T11:18:15.49Z'\n  UNION ALL\n    SELECT rt.pr as pr, rt.pr_create as pr_create, r.upstream_run_id, r.run_id, r.job_name, r.started, r.completed\n    FROM \"FIVETRAN_DATABASE\".\"FB_CI_METRICS_PUBLIC\".runs r\n    JOIN recurseruns rt ON rt.run_id = r.upstream_run_id\n  )\nSELECT\n  rr.pr_create as \"time\",\n  SUM(CASE WHEN (rr.job_name LIKE 's3_%') THEN TIMEDIFF('seconds', rr.started, rr.completed) END) as s3_machine_time\nFROM recurseruns rr\ngroup by (rr.pr, \"time\")\norder by \"time\";",
          "queryType": "table",
          "timeColumns": [
            "time"
          ],
          "datasourceId": 78,
          "intervalMs": 1000,
          "maxDataPoints": 3778
        }
      ],
      "range": {
        "from": "2022-10-11T18:27:59.275Z",
        "to": "2022-10-11T19:27:59.275Z",
        "raw": {
          "from": "now-1h",
          "to": "now"
        }
      },
      "from": "1665512879275",
      "to": "1665516479275"
    },
    "hideFromInspector": false
  },
  "response": {
    "message": "Metric request error"
  }
}

SSH key-pair authentication

Does the plugin support only user and password authentication at this time?
Does it support SSH key-pair authentication too?

Private key as base64 URL-encoded pkcs8

Thanks for this incredible plugin for visualizing Snowflake data in Grafana.
I've successfully installed it and tested my Snowflake account with multi-factor authentication (MFA).
But MFA requires approval each time I test SQL or refresh a visualization, which is inconvenient for me.
Key-pair authentication would be quieter, and keys obtained the way described below work well with SnowSQL:
https://docs.snowflake.com/en/user-guide/key-pair-auth.html
However, neither manually converting "+" to "-" and "/" to "_" and removing "\n" as instructed,
nor the methods given in the README, work; both give the same error shown below.
The private key was generated as RSA 2048. Does that mean it is impossible to convert an RSA 2048 key to URL-safe base64?
The private key was generated as follows:
$ openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8
Thank you in advance!

egrep -v '^(-----BEGIN PRIVATE KEY|-----END PRIVATE KEY)' rsa_key.p8 | tr -d '\n' | sed 's/+/-/g; s/\//_/g' > rsa_key_urlbase64.p8
image
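For reference, the shell pipeline above can be replaced by a small Go program that does the same conversion: strip the PEM armor and re-encode the DER body with the URL-safe base64 alphabet ('+' to '-', '/' to '_'). One hedged observation: the openssl command shown produces an encrypted PKCS#8 key unless -nocrypt is added, which may itself interfere if the plugin expects an unencrypted key.

package main

import (
	"encoding/base64"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	raw, err := os.ReadFile("rsa_key.p8") // PKCS#8 PEM file from openssl
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	// URL-safe base64 of the DER body: equivalent to dropping the PEM
	// header/footer and newlines, then mapping '+'->'-' and '/'->'_'.
	fmt.Println(base64.URLEncoding.EncodeToString(block.Bytes))
}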

Time formatted Columns don't seem to work

When I add a field containing a timestamp to the Time Formatted Columns, I get a 500 error.
Also, when a query has two dimensions and Stack series is set to Normal, I expect a stacked bar chart, but the chart stays unstacked. This works fine with the Postgres connector, and with the premium Snowflake connector that comes with Grafana premium.
Without these two issues fixed, we will have to resort to using Grafana Cloud, which I'm not keen to do.
Are these known issues?

Release a darwin_arm64 binary for M1 Macs

I cannot run this plugin against a local Grafana instance on an M1 Mac. After following the README.md install instructions via Homebrew, I am able to configure the data source on the local Grafana instance, but I get a 500 internal server error. The error occurs because there is no gpx_snowflake-datasource_darwin_arm64 binary in /opt/homebrew/var/lib/grafana/plugins/michelin-snowflake-datasource/.

Screenshot 2023-01-16 at 11 52 07 AM

timeseries grafana maxdatapoints limit problem

Grafana version: 9.4.7
Plugin Version: 1.4.1

Currently the plugin limits the query size if the query is a time series and maxDataPoints is set:

https://github.com/michelin/snowflake-grafana-datasource/blob/master/pkg/query.go#LL218C5-L218C34

Below is the problem, illustrated with example data:

Given a table with three columns containing time series data, the following behavior has been observed with Grafana 9.4.7 and michelin-snowflake-connector 1.4.1:
+---------------------+--------+-------------+
| Timestamp           | Node   | Measurement |
+---------------------+--------+-------------+
| 2023-05-17 00:00:00 | node A | 234         |
| 2023-05-17 00:00:00 | node B | 263         |
| 2023-05-17 00:00:00 | node C | 213         |
| 2023-05-17 00:05:00 | node A | 212         |
| 2023-05-17 00:05:00 | node B | 297         |
| 2023-05-17 00:05:00 | node C | 243         |
+---------------------+--------+-------------+

This results in 2 rows after the Grafana time series transform:
+---------------------+--------+--------+--------+
| Timestamp           | node A | node B | node C |
+---------------------+--------+--------+--------+
| 2023-05-17 00:00:00 | 234    | 263    | 213    |
| 2023-05-17 00:05:00 | 212    | 297    | 243    |
+---------------------+--------+--------+--------+

We are expecting 6 rows. But if MaxDataPoints were set to 4, whether explicitly or implicitly (by panel width and time range), the plugin would add "LIMIT 4" to the query and fetch only 4 rows, which results in missing samples:

+---------------------+--------+--------+--------+
| Timestamp           | node A | node B | node C |
+---------------------+--------+--------+--------+
| 2023-05-17 00:00:00 | 234    | 263    | 213    |
| 2023-05-17 00:05:00 | 212    |        |        |
+---------------------+--------+--------+--------+

Feature request for Oauth connection in this plugin

Hi,

Great work on the open source Grafana-to-Snowflake plugin! Unfortunately, I'm unable to use it, as I can only access our Snowflake via an OAuth connection with an access token, while the plugin only offers password and key-pair authentication. We have successfully accessed our Snowflake from a Python Jupyter notebook. Is there a quick workaround to use OAuth with this plugin? If not, I hope you can add OAuth authentication with access tokens.

Thanks !

Plugin crashes on query V 1.4.0

Hello,
After installing the latest version of the plugin, it started to crash on query.
Plugin version: 1.4
Grafana version: 9.3.1

2023-03-04 01:34:54
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:54.488488322Z level=debug msg="Plugin restarted"
2023-03-04 01:34:54
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:54.487676224Z level=debug msg="plugin address" address=/tmp/plugin4110788322 network=unix
2023-03-04 01:34:54
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:54.487655902Z level=debug msg="using plugin" version=2
2023-03-04 01:34:54
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:54.487187228Z level=debug msg="Serving plugin" plugins="[data diagnostics]"
2023-03-04 01:34:54
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:54.469196051Z level=debug msg="waiting for RPC address" path=/var/lib/grafana/plugins/snowflake-grafana-datasource/gpx_snowflake-datasource_linux_amd64
2023-03-04 01:34:54
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:54.469139522Z level=debug msg="plugin started" path=/var/lib/grafana/plugins/snowflake-grafana-datasource/gpx_snowflake-datasource_linux_amd64 pid=177
2023-03-04 01:34:54
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:54.468196332Z level=debug msg="starting plugin" path=/var/lib/grafana/plugins/snowflake-grafana-datasource/gpx_snowflake-datasource_linux_amd64 args=[/var/lib/grafana/plugins/snowflake-grafana-datasource/gpx_snowflake-datasource_linux_amd64]
2023-03-04 01:34:54
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:54.467940261Z level=debug msg="Restarting plugin"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.825025105Z level=debug msg="plugin process exited" path=/var/lib/grafana/plugins/snowflake-grafana-datasource/gpx_snowflake-datasource_linux_amd64 pid=164 error="exit status 2"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.822173444Z level=debug msg="\t/home/runner/go/pkg/mod/google.golang.org/[email protected]/server.go:921 +0x28a"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.822155561Z level=debug msg="created by google.golang.org/grpc.(*Server).serveStreams.func1"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.822089889Z level=debug msg="\t/home/runner/go/pkg/mod/google.golang.org/[email protected]/server.go:923 +0x98"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821987396Z level=debug msg=google.golang.org/grpc.(*Server).serveStreams.func1.2()
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821979717Z level=debug msg="\t/home/runner/go/pkg/mod/google.golang.org/[email protected]/server.go:1608 +0xa1b"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821974158Z level=debug msg="google.golang.org/grpc.(*Server).handleStream(0xc00025d180, {0x1297e00, 0xc000404180}, 0xc00037e000, 0x0)"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.82196847Z level=debug msg="\t/home/runner/go/pkg/mod/google.golang.org/[email protected]/server.go:1279 +0xccf"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821957994Z level=debug msg="google.golang.org/grpc.(*Server).processUnaryRPC(0xc00025d180, {0x1297e00, 0xc000404180}, 0xc00037e000, 0xc000550990, 0x193e8f0, 0x0)"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821951174Z level=debug msg="\t/home/runner/go/pkg/mod/github.com/grafana/[email protected]/genproto/pluginv2/backend_grpc.pb.go:197 +0x138"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821945477Z level=debug msg="github.com/grafana/grafana-plugin-sdk-go/genproto/pluginv2._Data_QueryData_Handler({0xf411e0?, 0xc0003061a0}, {0x12934c0, 0xc0000cd440}, 0xc00051c7e0, 0xc000550210)"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821939748Z level=debug msg="\t/home/runner/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:34 +0xbf"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821933995Z level=debug msg="github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1({0x12934c0, 0xc0000cd440}, {0x101a2c0, 0xc0003146e0}, 0xc000242ae0?, 0xf65d60?)"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821925784Z level=debug msg="\t/home/runner/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25 +0x3a"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821914816Z level=debug msg="github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1({0x12934c0?, 0xc0000cd440?}, {0x101a2c0?, 0xc0003146e0?})"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821907913Z level=debug msg="\t/home/runner/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/server_metrics.go:107 +0x87"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.82190071Z level=debug msg="github.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).UnaryServerInterceptor.func1({0x12934c0, 0xc0000cd440}, {0x101a2c0, 0xc0003146e0}, 0x7f3dd2a44118?, 0xc00027e690)"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.82189452Z level=debug msg="\t/home/runner/go/pkg/mod/github.com/grafana/[email protected]/genproto/pluginv2/backend_grpc.pb.go:195 +0x78"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821886982Z level=debug msg="github.com/grafana/grafana-plugin-sdk-go/genproto/pluginv2._Data_QueryData_Handler.func1({0x12934c0, 0xc0000cd440}, {0x101a2c0?, 0xc0003146e0})"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821880026Z level=debug msg="\t/home/runner/go/pkg/mod/github.com/grafana/[email protected]/backend/grpcplugin/grpc_data.go:47 +0x2b"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.82185393Z level=debug msg="github.com/grafana/grafana-plugin-sdk-go/backend/grpcplugin.(*dataGRPCServer).QueryData(0xc000261990?, {0x12934c0?, 0xc0000cd440?}, 0xc0000cd590?)"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821819446Z level=debug msg="\t/home/runner/go/pkg/mod/github.com/grafana/[email protected]/backend/data_adapter.go:21 +0x4d"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821769016Z level=debug msg="github.com/grafana/grafana-plugin-sdk-go/backend.(*dataSDKAdapter).QueryData(0xc000511df0, {0x12934c0, 0xc0000cd440}, 0x0?)"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821720975Z level=debug msg="\t/home/runner/work/snowflake-grafana-datasource/snowflake-grafana-datasource/pkg/snowflake.go:60 +0x385"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821694341Z level=debug msg="main.(*SnowflakeDatasource).QueryData(0xc0003146e0?, {0x1032c00?, 0x0?}, 0xc000314780)"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821654374Z level=debug msg="\t/home/runner/work/snowflake-grafana-datasource/snowflake-grafana-datasource/pkg/query.go:223 +0x4bd"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821583274Z level=debug msg="main.(*SnowflakeDatasource).query(0x0?, {{0x19139a8, 0x1}, {0xc0003081e8, 0x5}, 0x9a7, 0x6fc23ac00, {{0x2634e240, 0xedb9329b9, 0x1958760}, ...}, ...}, ...)"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821571414Z level=debug msg="\t/home/runner/work/snowflake-grafana-datasource/snowflake-grafana-datasource/pkg/query.go:101 +0x705"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821538776Z level=debug msg="main.(*queryConfigStruct).fetchData(0xc0005cf478, 0x10b3d29?, {0xc00054c580?, 0xc0000b9980?}, {0x0?, 0x16?})"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821530699Z level=debug msg="\t/home/runner/work/snowflake-grafana-datasource/snowflake-grafana-datasource/pkg/query.go:161 +0xed5"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821524083Z level=debug msg="main.(*queryConfigStruct).transformQueryResult(0xc0005cf478, {0xc0004f2c60, 0xb, 0xc00033a300?}, 0xb8?)"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821510679Z level=debug msg="goroutine 84 [running]:"
2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.82150152Z level=debug msg=

2023-03-04 01:34:53
logger=plugin.michelin-snowflake-datasource t=2023-03-03T23:34:53.821453898Z level=debug msg="panic: interface conversion: interface {} is float64, not string"
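The panic on the last line ("interface conversion: interface {} is float64, not string") points at an unchecked type assertion somewhere in transformQueryResult. The conventional fix is a type switch or comma-ok assertion; here is a sketch with hypothetical helper names, not the plugin's actual code:

package main

import "fmt"

// cellToString converts a scanned database value to a string without
// panicking when the driver returns a float64 (or anything else) instead.
func cellToString(v interface{}) string {
	switch t := v.(type) {
	case string:
		return t
	case float64:
		return fmt.Sprintf("%g", t)
	default:
		return fmt.Sprintf("%v", t)
	}
}

func main() {
	fmt.Println(cellToString("abc"), cellToString(42.5))
}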

Error when adding "LIMIT" keyword to time-series

When I try to add the "LIMIT" keyword to the query of a time-series panel, it always reports the error "001003 (42000): SQL compilation error: syntax error line 7 at position 9 unexpected 'LIMIT'."

Query:

SELECT TIME_SLICE(TO_TIMESTAMP(CONVERT_TIMEZONE('UTC', flowEndSeconds)), 60, 'SECOND', 'START') as time, 
count(*) as count,
sourceNodeName
FROM flows
WHERE $__timeFilter(time)
GROUP BY time, sourceNodeName
ORDER BY time
LIMIT 10

Error msg:
image

Query config:
image

From the error message we can tell that it applies both the default max data points (1526) and the LIMIT 10 defined in the query, which leads to the error.

Looking at the code, I have trouble understanding these lines:

if queryConfig.MaxDataPoints > 0 && queryConfig.isTimeSeriesType() && strings.Contains(queryConfig.FinalQuery, "LIMIT ") {
	queryConfig.FinalQuery = fmt.Sprintf("%s LIMIT %d", queryConfig.FinalQuery, queryConfig.MaxDataPoints)
}

This automatically appends "LIMIT maxDataPoints" to the end of the query, which always causes an error.

My purpose in adding "LIMIT 10" to the query is to limit the number of series returned, while Grafana defines maxDataPoints as "sets the maximum number of data points for EACH series returned." I don't think the code supports maxDataPoints correctly.
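A sketch of the behaviour the reporter seems to expect (a hypothetical helper, not the plugin's code): only append a cap when the query carries no LIMIT of its own, and keep in mind that for long/narrow time series a plain row LIMIT truncates rows, not points per series:

package main

import (
	"fmt"
	"strings"
)

// applyLimit appends a LIMIT clause only when the user query has none.
// Caveat: with several series per timestamp (long format), maxDataPoints
// counts points per series, so even this row cap can drop samples.
func applyLimit(query string, maxDataPoints int64) string {
	if maxDataPoints <= 0 || strings.Contains(strings.ToUpper(query), "LIMIT ") {
		return query
	}
	return fmt.Sprintf("%s LIMIT %d", query, maxDataPoints)
}

func main() {
	fmt.Println(applyLimit("SELECT 1", 1526))          // LIMIT appended
	fmt.Println(applyLimit("SELECT 1 LIMIT 10", 1526)) // left untouched
}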

Initializing variables is not working

Hi, I'm trying to create a variable based on a query and I'm receiving the following error:

Validation
(0 , V.firstValueFrom) is not a function

Templating
Template variables could not be initialized: (0 , V.firstValueFrom) is not a function

I also tried with a very simple query, just to make sure the query itself is not the problem.
The query I used is: SELECT $1 as x FROM VALUES ('val1'), ('val2');
But I'm still facing the same problem.

Does anyone have the same problem?

EDIT:
I figured out that it is related to the Grafana version. I'm using Grafana 7.0.6, but it works in the latest version. Is there any possibility of making this feature backward compatible?

timezone conversion is inconsistent and prevents partition pruning

Observed behavior

A simple query like this one...

SELECT
    $__timeGroup(timestamp_ms, $__interval, previous) as time
    , count(*) as nb 
FROM my_table
WHERE true
    and $__timeFilter(timestamp_ms) 
GROUP BY
    1

...gets translated to...

SELECT
    TIME_SLICE(TO_TIMESTAMP(timestamp_ms), 60, 'SECOND', 'START') as time
    , count(*) as nb 
FROM my_table
WHERE true
    and CONVERT_TIMEZONE('UTC', 'UTC', timestamp_ms) > '2023-12-06T08:00:26.372Z' AND CONVERT_TIMEZONE('UTC', 'UTC', timestamp_ms) < '2023-12-08T08:00:26.372Z' 
GROUP BY
    1

I see two issues with this behavior:

  1. The timezone conversion is only applied to the $__timeFilter() and not to the $__timeGroup(). This is inconsistent, imho, and may lead to misinterpretation of the visualized time scale. Or am I wrong here?
  2. The timezone conversion performed by the $__timeFilter() is too complex for Snowflake to apply partition pruning. (At least if the timestamp column belongs to a view which is projecting a VARIANT column to a timestamp.)

Expected behavior

  1. $__timeFilter() and $__timeGroup() apply the same timezone conversions.
  2. The timezone conversion of the $__timeFilter() is...
    a. ...entirely skipped if the from and to timezone of the conversion are identical (UTC)
    b. ...performed in a way which does not hamper partition pruning. I.e., we could probably convert the timezone of the filter values (the timestamp literals provided by Grafana) instead of converting the timezone of the timestamp column.
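A sketch of expected behaviour 2b as Go string templating (hypothetical helper names; the plugin's real macro expansion is not shown here): keep the raw column on the left-hand side and put plain UTC literals on the right, so the predicate stays simple enough for Snowflake to prune partitions:

package main

import (
	"fmt"
	"time"
)

// timeFilter renders a pruning-friendly $__timeFilter expansion. The
// Grafana-supplied range is converted to UTC literals up front instead of
// wrapping the column in CONVERT_TIMEZONE.
func timeFilter(column string, from, to time.Time) string {
	const layout = "2006-01-02T15:04:05.000Z"
	return fmt.Sprintf("%s > '%s' AND %s < '%s'",
		column, from.UTC().Format(layout),
		column, to.UTC().Format(layout))
}

func main() {
	from := time.Date(2023, 12, 6, 8, 0, 26, 372000000, time.UTC)
	to := from.Add(48 * time.Hour)
	fmt.Println(timeFilter("timestamp_ms", from, to))
	// timestamp_ms > '2023-12-06T08:00:26.372Z' AND timestamp_ms < '2023-12-08T08:00:26.372Z'
}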

Happy to discuss this issue. Is there any Slack channel where we could sync on it?

Query Editor not saving queries after editing

Hello,

Great project! Hope to help with some feedback.

The release v1.6.0 came with a bug in the query editor. When we update the query in the editor, the preview runs the "demo" query instead of my custom query, and when we apply the changes to the panel, it keeps running the "demo" query instead of the customized one.

When I open the panel to edit it, my custom query is not in the editor; the "demo" query is there instead, even after I've applied the changes.

In my specific case, I came from v1.5.0, so I already had some queries working. Those kept working after upgrading, but when I edited them, they kept referencing the old version of the query.

Looking into the dashboard JSON, I could see the "demo" query, or the old version of the query, saved in the JSON instead of the new queries.

I'm not sure which change is causing the issue: "Use Grafana's Code editor for query editing." or "Add query context cancellation."

But I got this error on my logs:

│ logger=plugin.michelin-snowflake-datasource t=2024-04-22T18:00:25.140415658Z level=error msg="Could not execute query" err="context canceled" query="myquery"

I rolled back to v1.5.0 and queries are saving again.

Thank you! Hope this to be of some help.

Time-series unable to add "Labels to fields" transformation

I'm trying to add a "Labels to fields" transformation to the time-series panel, so that the legend shows the "group by" column instead of the aggregated metric column. But it tells me there are no available labels.

Executed query:

SELECT timeField as time,
CONCAT(source, '->', destination) as pair,
AVG(throughput)
FROM table
WHERE $__timeFilter(time)
GROUP BY time, pair
HAVING AVG(throughput) > 0
ORDER BY time

Expected (snowflake plugin by Grafana Labs):
image

Actual graph-view:
image

Actual table-view:
image

Grafana: v8.3.3

GROUP BY 2 columns results in column names with field name + label

Grafana Version: 9.1.1
Plugin version: v1.2.0

Executed Query:

SELECT $__timeGroup(DATETIME,$__interval) as Time, HOST as metric, (max(USED/1024*1024)/max(TOTAL/1024*1024)*100) AS used_percent FROM SERVER_MEMORY WHERE $__timeFilter(DATETIMEUTC,$__interval) GROUP BY 1,2 ORDER BY 1

Everything looks good except data comes like below

image

image

USED_PERCENT is appended with HOST (separated by a space). The flowcharting plugin, which grabs the field name, therefore always displays "USED_PERCENT" for all hosts.

image

Expected Behavior:

We should see labels as field names, i.e., each HOST should be displayed as the column name; instead of "USED_PERCENT {HOST}" it should be just "HOST".
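A possible workaround, in line with the display name formatter used for the pivoting issue earlier on this page: setting the panel's Display name field option to ${__field.labels.HOST} should make each series render under its host label alone, without the USED_PERCENT prefix (untested here against this plugin version).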

Validation query error : 390100 (08004): Incorrect username or password was specified.

Even after specifying the correct ID and password, the plugin fails to connect to Snowflake.
Things that I can share:

  1. Our Snowflake admins were able to look at their logs and noted a rejection for a wrong password on their end.
     5-16-2024 3-47-45 PM
  2. The Snowflake UI allows me to log in with the same account.
  3. We tried with two accounts, one with some special characters in the password and the other with a very simple alphanumeric password.
