
grafana-plugin-sdk-go's Introduction

Grafana Plugin SDK for Go

This SDK enables building Grafana backend plugins using Go.


Current state

This SDK is still in development. The protocol between the Grafana server and the plugin SDK is considered stable, but we might introduce breaking changes in the SDK. This means that plugins using the older SDK should work with Grafana, but might lose out on new features and capabilities that we introduce in the SDK.

Navigating the SDK

The SDK documentation can be navigated in the form of Go docs. In particular, you can find the following packages:

  • backend: Package backend provides SDK handler interfaces and contracts for implementing and serving backend plugins. It includes multiple sub-packages.
  • build: Package build includes standard mage targets useful when building plugins.
  • data: Package data provides data structures that Grafana recognizes. It includes multiple subpackages like converters, framestruct and sqlutil.
  • experimental: Package experimental provides multiple experimental features. It includes multiple sub-packages.
  • live: Package live provides types for the Grafana Live server.

See the Go docs for the full list of packages.

Contributing

If you're interested in contributing to this project, see contribute/developer-guide.md in the repository.

License

Apache 2.0 License

grafana-plugin-sdk-go's People

Contributors

academo, aknuds1, andresmgot, aocenas, atifali, bergquist, briangann, dependabot[bot], fzambia, gabor, idafurjes, itsmylife, kminehart, kylebrandt, marcusolsson, marefr, masslessparticle, njvrzm, oshirohugo, papagian, ryantxu, scottlepp, stephaniehingtgen, tanner-bruce, toddtreece, wbrowne, xnyo, yesoreyeram, ying-jeanne, yuri-tceretian


grafana-plugin-sdk-go's Issues

Consider renaming "resource" to "route"

I have described the new feature to many people -- all very excited... but they say things like "it's great we can now define routes"

Should we change CallResource to just Route? so we have Route.Request|Response?

I like route better for: api.go

apiRoute.Any("/datasources/:id/route/*", Wrap(hs.CallDatasourceRoute))

Build/CI: Integrate gorelease tool to identify/verify incompatible changes to highlight any breaking changes

We're already using the gorelease tool when tagging/releasing a new version of the SDK, see https://github.com/grafana/grafana-plugin-sdk-go/blob/main/contribute/developer-guide.md#releasing. It would be nice if we could integrate the gorelease tool into PRs as well, to identify/verify incompatible changes, highlight any breaking changes, and make people aware. Similar to how we use levitate in the grafana/grafana repository.

Unknowns:

  • Can you run the gorelease tool to compare changes in a branch against the latest SDK version/tag?
  • Is it feasible to create a GitHub action using the gorelease tool that can comment/report breaking changes in a PR?

data: Remove/Relocate Frame.AppendRowSafe

Adding a slice of []interface{} isn't that useful as an argument in this context, for reasons explained in #112 (comment).

More generally though, if we want to add more "Safe" methods, I think it would be better to have another type of frame, maybe SafeFrame, and give it similar methods that return errors in cases where a regular Frame would panic.

The general reason things like At() and Set() can panic is that in the many cases where we are working with slice/vector/array logical types, you can check these things either by guarding against input types or by checking lengths etc. once in the functions that enclose them. This eliminates redundant checking in that context (one check per Field, instead of per element per Field). Also, these areas that can panic are generally idiomatic places to check for things to prevent panics in Go anyway.

However, if you have a structure that is row oriented or offers less guarantees, then with the SafeFrame type we can do checking for you and reduce the code footprint from the consumers side.

(or we can never add it, but if we do want methods with more checking, a separate type should keep things cleaner - and now is the time to make a breaking change like that).

Will make PR to illustrate.
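Ahead of that PR, here's a minimal, self-contained sketch of the idea; the Frame/Field types below are toy stand-ins, and all names (SafeFrame, AppendRow) are assumptions, not the SDK's actual API:

```go
package main

import "fmt"

// Toy stand-ins for the SDK's data types, for illustration only.
type Field struct {
	Name   string
	values []interface{}
}

type Frame struct {
	Fields []*Field
}

// SafeFrame wraps a Frame and returns errors where a regular Frame
// method would panic.
type SafeFrame struct {
	*Frame
}

// AppendRow validates the row against the frame's fields before appending.
func (sf *SafeFrame) AppendRow(vals ...interface{}) error {
	if len(vals) != len(sf.Fields) {
		return fmt.Errorf("row has %d values, frame has %d fields", len(vals), len(sf.Fields))
	}
	for i, v := range vals {
		sf.Fields[i].values = append(sf.Fields[i].values, v)
	}
	return nil
}

func main() {
	f := &Frame{Fields: []*Field{{Name: "time"}, {Name: "value"}}}
	sf := &SafeFrame{f}
	fmt.Println(sf.AppendRow(1, 2.0)) // <nil>
	fmt.Println(sf.AppendRow(1))      // arity error instead of a panic
}
```

The point is that a consumer who can't guarantee row shapes pays for checking per call, while Frame users who validate once up front keep the cheap panic-on-misuse path.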

Codegen for dataframe package

The dataframe package has a lot of repeated code, in particular around the Vector type.

Since Go lacks generics, and reflection can be slow and lead to some nasty traces etc, I think we should use codegen for this.

Maybe something like https://github.com/cheekybits/genny

This should allow us to fill out the vector types, allowing easy mapping of most of the primitive types that would occur in datasources (for example, SQL table types).

Potential areas for codegen line up with this issue: #20

Workflow for tagging new releases

What would you like to be added:
Would be nice to have a workflow in CircleCI that creates a new git tag and a GitHub release with a changelog.

Why is this needed:
Doing it manually invites human error, and "anyone" should be able to tag a new release.

Consider Removing Sloth Prefix (🦥: idx)

In #83 we added support for duplicate Field names by adding a sloth emoji and fieldIdx prefix when encoding to arrow, and removing it, if present, when re-encoding.

However, the go library seems to support duplicate names as of apache/arrow#6580 which is tied to Apache Jira issue https://issues.apache.org/jira/browse/ARROW-8028

I'm not entirely sure of the direction for the Arrow standard as a whole in regard to this. In particular, we would at least need the JS arrow lib to load a table that has this. However, it seems this is going to be supported:

In my experience duplicate field names do arise in practice and it's a slippery slope if Arrow implementations start making arbitrary (or otherwise opinionated) decisions about what to do with such data (whether disallowing them or otherwise disambiguating them by modifying the field names).

-- @wesm

Which is what we went and did (modifying field names). Example justification in the jira issue:

SELECT 1 AS one, 1 AS one;

is basically the same as our justification.

Also, it looks like we merged this in the same day arrow merged the change :-)

Implement support for resources

Should be possible for a backend plugin to register and implement resource handlers (HTTP style API) that's exposed through Grafana's HTTP API.

One idea is to introduce a GetSchema rpc method or similar where the plugin can tell what kind of resources/routes it wants to expose through Grafana's HTTP API:

service Backend {
  // Information about what a backend plugin supports/expects
  rpc GetSchema(GetSchema.Request) returns (GetSchema.Response);
}

message GetSchema {
  message Request {
  }
  message Response {
    Schema pluginSchema = 1;       // global configuration schema, e.g. from Grafana config or env variables
    Schema pluginConfigSchema = 2; // plugin configuration schema, e.g. datasource/app config stored in Grafana database
    map<string,Resource> resources = 3; // resource configuration
  }
}

message Resource {
  enum Permission {
    VIEWER = 0;
    EDITOR = 1;
    ADMIN = 2;
  }
  message Route {
    enum Method {
      ANY = 0;
      GET = 1;
      PUT = 2;
      POST = 3;
      DELETE = 4;
      PATCH = 5;
      HEAD = 6;
    }
    string path = 1;
    Method method = 2;
    Permission permission = 3;
  }
  Permission permission = 1;
  map<string,Resource> childResources = 2;
  repeated Route routes = 3;
}

Given this, the SDK can add some abstractions and an HTTP handler interface to help the plugin developer implement its routes. Experimental idea of what this could look like for a plugin developer configuring their plugin:

func BackendProvider(configure ConfigurePlugin) {
	configure.
		Metrics(func(m ConfigureMetricsCollector) {
			m.Register(nil)
		}).
		HealthCheck(nil).
		Resource("test", func(r ConfigureResource) {
			r.Get(":id", nil)
			r.Update(":id", nil)
			r.Post(":id", nil)
			r.Delete(":id", nil)
		}).
		Resource("test2", func(r ConfigureResource) {
			r.Get(":id", nil)
			r.Update(":id", nil)
			r.Post(":id", nil)
			r.Delete(":id", nil)
		})
}

Add support for validating plugin config

Currently, the required config properties for app and datasource plugins live in the client side of the plugin implementation. Some use the HTTP settings component provided by Grafana, having no control over the validation of that input. Would be nice if Grafana could call the backend plugin to validate config before saving app/datasource settings to the database. Could be used when saving from the Grafana UI and from provisioning/HTTP API. In addition, the plugin could potentially return default values if not provided.

Nice to have or later: the plugin could tell which app/datasource plugin config properties it expects and whether those are required, their default values, etc. This could potentially be used in the future to build the config form in the UI based on field configuration. Also, the datasource provisioning schema could be exposed in the UI, removing the hurdle of keeping Grafana documentation up to date regarding supported provisioning configuration.
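A rough sketch of what such a backend validation hook could check, assuming a hypothetical ValidateConfig helper (not the SDK's API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ValidationResult is an illustrative shape for what the backend could
// report back to Grafana before settings are saved.
type ValidationResult struct {
	OK      bool
	Missing []string
}

// ValidateConfig checks that required properties exist in the raw jsonData
// an app/datasource instance would be saved with.
func ValidateConfig(jsonData []byte, required []string) (ValidationResult, error) {
	var m map[string]interface{}
	if err := json.Unmarshal(jsonData, &m); err != nil {
		return ValidationResult{}, fmt.Errorf("invalid JSON: %w", err)
	}
	res := ValidationResult{OK: true}
	for _, key := range required {
		if _, ok := m[key]; !ok {
			res.OK = false
			res.Missing = append(res.Missing, key)
		}
	}
	return res, nil
}

func main() {
	res, _ := ValidateConfig([]byte(`{"url":"http://localhost"}`), []string{"url", "apiKey"})
	fmt.Println(res.OK, res.Missing) // false [apiKey]
}
```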

Add duration vector type to dataframe

Duration type will be useful eventually, good task to get a little familiar with arrow.

All in dataframe package:

  • Make []time.Duration and []*time.Duration Vector types and update newVector()
  • Update NewField to support the type
  • Update golden arrow test file with duration and nullable duration vectors via var update and goldenDF() in arrow_test.go. See dataframe package's README for more info.
  • Make two arrow column builders for these, update buildArrowColumns() type switch to use them, and update fieldToArrow(). See NewDurationBuilder
  • Update the two type switches in UnMarshalArrow to support this field. (May be different if someone has completed #19.) See NewDurationData

Move Labels from Frame to Field

This has already happened on the frontend: grafana/grafana#19926 .

Since GEL currently uses labels on the Dataframe for Union/Join operations, this will be a much more significant change to the gel plugin than the SDK. So that will get its own issue (TODO edit issue here once created). So this change will probably live in a branch until GEL has been updated with a corresponding branch.

WIP Issue: Stubbing Plugin With Backend Component

Yes, a WIP issue ;-) ... likely an Epic?

Building out the checklist as I work on a test datasource:

Datasource with Backend Stubbing:

  • Makefile(?) for building backend
  • Generate Go sub files with imports
  • Whatever healthcheck, logger, debug, pprof etc?
  • Eliminate/Reduce datasource name/id being repeated in json, Go files, Makefile
  • Go Stub for Query method testing?
  • Add needed fields to plugin.json (capabilities, e.g. annotations), executable
  • .gitignore updates as needed.
  • Add stub for testDatasource in backend
  • query and testDatasource in DataSource.ts point to backend API endpoint [Frontend]
  • Frontend files have lots of jsx and esModuleInterop flag problems in my vscode; maybe add instructions for vscode config when working with this stuff [Frontend]
  • Resolve conflict in dist being overwritten with backend executable in different build step (or have gtk run it?) [Frontend]

Can we unify stubbing out a plugin with a backend component? Same with other items like the Makefile: make building the frontend and backend generally the same (with sub-commands to develop on just one if desired).

Package (re)naming?

Currently the main SDK package is named backend and has the import path github.com/grafana/grafana-plugin-sdk-go/backend.

Are we satisfied with this name or would it make sense to rename this package to sdk or something else and if so what?

Something like this could be used to update import paths for existing plugins already using the SDK:

gofmt -w -r '"github.com/grafana/grafana-plugin-sdk-go/backend" -> "github.com/grafana/grafana-plugin-sdk-go/sdk"' **/*.go

HealthCheck: support plugin and datasource healthchecks

The healthcheck endpoint needs to be wired up so it can be called from

		apiRoute.Get("/plugins/:pluginId/health", Wrap(hs.CheckHealth))

and

		apiRoute.Any("/datasources/:id/health", ???)

Currently this is implemented by all plugins using /resource/test -- and that should be made required

Allow plugins to expose metrics

As a plugin developer I should be able to instrument my plugin code so that Grafana users can monitor the health of my plugin.

Suggested to use the Prometheus client library since Grafana uses it to instrument its own code and expose metrics. Plugin developers should be able to register their collectors/metrics using the SDK and instrument their code using the Prometheus client library in a standard way.

The SDK should hide the underlying details of gathering and returning the metrics when Grafana requests them.

Should be able to use expfmt.MetricFamilyToText to convert gathered metrics into a stream of text and return it over gRPC to Grafana.

Related to grafana/grafana#20980.

Work out dataframe unique name restriction due to arrow

When encoded to Arrow, Field Names must be unique within a table.

We need to either accept this restriction and indicate it as part of the model, or change how we map the Names of fields on our dataframe to Arrow Field names.

backend.proto JSON is incohesive

Looking at these messages in backend.proto there are three distinct namings/typings for representing JSON data:

message PluginConfig {
  ...
  bytes jsonData = 4;
  ...
}
message DataQuery {
  ...
  bytes json = 5;
}
message StreamingMessage {
  ...
  string message = 3;
}

I think we should make sure that JSON payloads sent together with gRPC messages are not too different from each other so that our API is internally consistent.

(cc @bergquist)

Make "instance" a core SDK idea

On the frontend, a plugin instance is managed independently -- on the backend, each request needs to sort out which instance it is, and there is no clear place to initialize/destroy config changes.

Perhaps something like:

type InstanceRecycler interface {
	Recycle() error
}

type PluginInstance struct {
	CheckHealthHandler  CheckHealthHandler
	CallResourceHandler CallResourceHandler
	QueryDataHandler    QueryDataHandler
	Recycler            InstanceRecycler
}

type PluginHandler interface {
	CreateAppPluginInstance(ctx context.Context, config PluginConfig) (*PluginInstance, error)
	CreateDataSourceInstance(ctx context.Context, config PluginConfig) (*PluginInstance, error)
}

The SDK would then manage calls so that:

  • Create{DataSource|AppPlugin}Instance was called when appropriate
  • the correct instance is called for each health|query|resource|... request
  • when the configs change (based on the time parameter) the Recycle function is called

Thoughts? This is related to #99 -- and hopefully explains why I think this solves @bergquist's issue with the healthcheck clarity

Change type of Frame.QueryResultMeta's Custom property

Instead of map[string]interface{} I think we should just make it an interface.

This will make for a better experience, as you can create your own custom well-typed struct in a plugin and use it with an assertion:

package main

import (
	"fmt"
)

type Meta struct {
	Whatever string
	Custom   interface{}
}

type MyThing struct {
	Foo string
}

func main() {
	mt := MyThing{Foo: "Hi"}
	meta := Meta{}
	meta.Custom = mt
	fmt.Println(meta)

	aMT, ok := meta.Custom.(MyThing)
	if !ok {
		fmt.Println(ok)
	}
	fmt.Println(aMT)
}

Add field to data queries to ease "routing"/looking up query type

Core data datasources in Grafana are supporting/handling multiple query types and/or data source types.
Azure monitor example:
https://github.com/grafana/grafana/blob/c6cc840ceb6ff64dc45200888477b14c89bc0785/pkg/tsdb/azuremonitor/azuremonitor.go#L49-L65
CloudWatch example:
https://github.com/grafana/grafana/blob/c6cc840ceb6ff64dc45200888477b14c89bc0785/pkg/tsdb/cloudwatch/cloudwatch.go#L47-L63

We would like to enhance the data query request to include a new field that can be used to identify "query type". SDK can then provide some basic routing/map of "query type" to query handler and by that supporting several query handlers without having to parse the model json first to understand the query type.

With that, a plugin developer should be able to register a handler for a "query type" and marshal the model JSON to a struct for type-safe handling.

Can we use a tagged version of arrow?

Adding this as an issue so I don't forget about it.

Looking at arrow's go.mod file, it names the module github.com/apache/arrow/go/arrow, but they create tags named apache-arrow-<version>, for example apache-arrow-0.17.0.

So I'm not sure it's possible to reference a go module version other than master yet, but I think @kylebrandt mentioned they had just added this a while ago?

Refactor dataframe UnMarshalArrow

Rename when doing this: s/ df.UnMarshalArrow / df.UnmarshalArrow

This is a long sprawling function. Would probably be nice to break up like MarshalArrow with its decoders.

Good way to get some familiarity with the arrow code.

module does not contain package github.com/grafana/grafana-plugin-sdk-go

I'm trying to build a simple example against this library:

import (
	gf "github.com/grafana/grafana-plugin-sdk-go"
)

and am getting this:

build gis.rit.edu/grafana-alc: cannot load github.com/grafana/grafana-plugin-sdk-go:
module github.com/grafana/grafana-plugin-sdk-go@latest found (v0.14.0),
but does not contain package github.com/grafana/grafana-plugin-sdk-go

go version 1.13.7 linux/amd64.

Just running 'go install' yields the same results.

Add logger abstraction

The default logging library included with the SDK is https://github.com/hashicorp/go-hclog. Based on my research we want to enable the JSON format of hclog in plugins to get proper output of logs in Grafana. Suggested to add a basic logger interface to the SDK and expose it through the SDK. This is good since plugins then don't need to depend on hclog any longer, just the SDK.
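A minimal sketch of what such an interface could look like, with a stdlib JSON implementation loosely approximating hclog's JSON output; the interface shape and field names are assumptions, not the SDK's API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Logger is a minimal interface the SDK could expose so plugins don't
// depend on hclog directly.
type Logger interface {
	Debug(msg string, args ...interface{})
	Info(msg string, args ...interface{})
	Warn(msg string, args ...interface{})
	Error(msg string, args ...interface{})
}

// buildEntry turns a message plus hclog-style alternating key/value args
// into a flat map ready for JSON encoding.
func buildEntry(level, msg string, args []interface{}) map[string]interface{} {
	entry := map[string]interface{}{
		"@level":   level,
		"@message": msg,
	}
	for i := 0; i+1 < len(args); i += 2 {
		entry[fmt.Sprint(args[i])] = args[i+1]
	}
	return entry
}

type jsonLogger struct{}

func (l jsonLogger) log(level, msg string, args []interface{}) {
	_ = json.NewEncoder(os.Stdout).Encode(buildEntry(level, msg, args))
}

func (l jsonLogger) Debug(msg string, args ...interface{}) { l.log("debug", msg, args) }
func (l jsonLogger) Info(msg string, args ...interface{})  { l.log("info", msg, args) }
func (l jsonLogger) Warn(msg string, args ...interface{})  { l.log("warn", msg, args) }
func (l jsonLogger) Error(msg string, args ...interface{}) { l.log("error", msg, args) }

func main() {
	var log Logger = jsonLogger{}
	log.Info("query executed", "rows", 42)
}
```

The SDK's real implementation would wrap hclog behind this interface, so swapping the backing library later is invisible to plugins.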

Add support for receiving global configuration properties from Grafana

Currently each plugin has to use environment variables as a way to receive global configuration properties for the plugin process. Would be nice if Grafana could provide configuration properties for the plugin process based on plugin configuration in Grafana's configuration file, to start with.

The idea is to add some sort of configuration rpc method to the backend service that Grafana can call after a plugin has been started. The payload can include configuration properties and information about the host Grafana instance: its version, edition, and whether it has a valid or expired license:

rpc Configure(Configure.Request) returns (Configure.Response);

message HostInfo {
  string version = 1; // v6.5.2
  string edition = 2; // oss or enterprise
  bool validLicense = 3;
  bool expiredLicense = 4;
}
message Configure {
  message Request {
    HostInfo grafanaHostInfo = 1; // information about the Grafana host instance
    bytes config = 2; // extracted JSON config from Grafana config or env variables based on GetSchema
  }
  message Response {
  }
}
message ValidatePluginConfig {
  message Request {
    PluginConfig config = 1;
  }
  message Response {
  }
}

Nice to have: If plugin could tell which configuration properties (name and type) it expects and whether those are required, default value etc.

Go Implementation of Long/Wide Time Series

In tandem with grafana/grafana#22219 which explains the issue and logic.

Will be used for:

  • Converting to Grafana time series for alerting - grafana/grafana#22511
  • GEL (backend transforms)
  • Needs to match front end in that these series can be Graphed etc.

The SDK will have logic to detect the type from the schema, as well as some sort of getters that make converting to time series easy without having to actually import the Grafana go module (which contains the tsdb package).

data: Make FieldConfig documentation useful

Currently, looking at the documentation for FieldConfig's Properties and associated objects that are in the SDK, I honestly can't tell what they are really for and what it means to set them.

Documentation needs to be extended to explain what these values do and why you might set them. I can't do this since I don't know myself.

Add support for checking plugin config health

Currently, before saving a Grafana app/datasource plugin's settings, Grafana calls a test function provided by the client-side app/datasource. It would be nice if you could call the backend plugin to verify the health of a configured app/datasource instance.

Later we could possibly show an indication in the UI of whether a certain app/datasource is online/working or not.

Possible Design Issue: Many-to-one structure with Query/Response

The relationship between queries and their responses is confusing when developing plugins, since we don't directly map queries to query responses in the structure.

A data query request has multiple queries:

// QueryDataRequest
message QueryDataRequest {
  // Plugin Configuration
  PluginConfig config = 1;

  //Info about the user who calls the plugin.
  User user = 2;

  // Environment info
  map<string,string> headers = 3;

  // List of data queries
  repeated DataQuery queries = 4;
}

However, a response groups all the queries in a slice of frames (where the bytes of frames is really a repeated frame, but encoded at a different layer):

message QueryDataResponse {
  // Arrow encoded DataFrames
  // Each frame encodes its own: Errors, meta, and refId
  repeated bytes frames = 1;

  // Additional response metadata
  map<string,string> metadata = 2;
}

While the frames can be tied to the response by their RefId property, this structure seems to lead to some awkward situations:

  1. The QueryData endpoint in the normal case handles multiple queries. If some of those queries fail, one still wants to return the queries that did work. This can be worked around by adding an empty frame with only RefId, Meta, and Error/Warning info, but this is not intuitive. This case stands out as being strange in particular when you consider a query error that has been detected before it is even sent to whatever service the plugin talks to.
  2. One of the most common results of a single Query is to have multiple frames as the result of that query. For example, a query might return a bunch of time series that don't share the same time index, and the way to represent this is with []Frame (So a response ends up like []{ Frame.Refid = A, Frame.Refid = A, Frame.RefId = B, Frame.RefId = B, Frame.RefId = B }). However, within a Frame we have QueryResultMeta. When you have multiple frames, this gets confusing - what frame(s) should you attach the QueryResultMeta to? One of them, all of them, ( some of them ;-) ) or add an extra frame?

I also noticed when writing a plugin that removing the query encapsulation that came in on the request is confusing, because the query->queryResponse mapping doesn't exist in the response structure, only in the request. So I believe this relationship change from the request to the response within the QueryData method is too confusing.

I think it might be less confusing if a QueryDataResponse were a collection of QueryResponses and each QueryResponse had []Frame. Frames can still have metadata particular to the frame, and the QueryResponses, which contain []Frame, have metadata for information related to the query.

So in summary:

A request contains multiple query objects. I think a response should also contain multiple queryResponse objects, so there is a one-to-one mapping (symmetry) from query to response within the call.

Also of note is that this basically maps to the way plugin query/responses were before (a map of refId -> response), and I think it was better - so this might be taken as a regression.
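The symmetric shape being proposed can be sketched with illustrative types (not the SDK's actual definitions): a response holds one QueryResponse per refId, and each QueryResponse owns its frames, error, and metadata.

```go
package main

import "fmt"

// Toy stand-in for the SDK's Frame.
type Frame struct {
	RefID string
	Name  string
}

// QueryResponse holds everything produced by a single query.
type QueryResponse struct {
	Frames []Frame
	Error  error
	Meta   map[string]interface{}
}

// QueryDataResponse maps refId -> response, restoring the one-to-one
// query/response mapping from the request side.
type QueryDataResponse struct {
	Responses map[string]QueryResponse
}

func main() {
	resp := QueryDataResponse{Responses: map[string]QueryResponse{}}
	// Query A succeeded with two frames; its metadata lives on the
	// QueryResponse, not ambiguously on one of the frames.
	resp.Responses["A"] = QueryResponse{
		Frames: []Frame{{RefID: "A", Name: "series-1"}, {RefID: "A", Name: "series-2"}},
		Meta:   map[string]interface{}{"note": "partial rollup"},
	}
	// Query B failed; no placeholder frame needed.
	resp.Responses["B"] = QueryResponse{Error: fmt.Errorf("timeout")}
	fmt.Println(len(resp.Responses["A"].Frames), resp.Responses["B"].Error) // 2 timeout
}
```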

Add support for collecting plugin config metrics

Similar to #38, include plugin config in collect metrics requests to allow plugin developer to return custom metric response depending on what plugin config contains (/api/plugins/<plugin id>/metrics or /api/datasources/<data source id>/metrics).

Support Non-Nullable Types

We should support non-nullable types for vectors. When creating a datasource, if you don't have NULL values as a possibility in your datasource, then working with pointers can be annoying.

We will have to figure out the FieldTypes for this which will need a design decision to line up with Frontend's understanding of FieldTypes.
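For a feel of the pointer annoyance, compare building nullable versus non-nullable values (the fp helper below is just for this example):

```go
package main

import "fmt"

// fp is a helper every plugin ends up writing today: you can't take the
// address of a literal, so nullable fields need this boilerplate.
func fp(v float64) *float64 { return &v }

func main() {
	// Nullable field values today: a slice of pointers.
	nullable := []*float64{fp(1.5), nil, fp(2.5)}
	// Non-nullable values, if supported: plain values, no helpers needed.
	plain := []float64{1.5, 2.5}
	fmt.Println(*nullable[0], nullable[1] == nil, plain[1]) // 1.5 true 2.5
}
```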

Protocol/protobuf: Naming of things before too late for breaking changes

Now is the time to do breaking changes in the protocol/protobuf definition and one of the things is naming.

Services:
Should services be suffixed and if so with what? Plugin or Srv or Service or something else ?

  • Core: Rename to something else, Backend?
  • TransformCallback: Rename to TransformDataCallback?
  • GrafanaPlatform: Remove for now since it's not used and the history is in git.
  • Streaming: Remove for now since it's not used and the history is in git.

RPC methods:
We currently have a mix of naming for RPC methods: some use <verb + something> and others don't use a verb. Examples:

service Core {
  // HTTP Style request
  rpc CallResource(CallResource.Request) returns (CallResource.Response);
  // Well typed query interface
  rpc DataQuery(DataQueryRequest) returns (DataQueryResponse);
}

service Diagnostics {
  rpc CollectMetrics(CollectMetrics.Request) returns (CollectMetrics.Response);
  rpc CheckHealth(CheckHealth.Request) returns (CheckHealth.Response);
}

service Transform {
  rpc DataQuery(DataQueryRequest) returns (DataQueryResponse);
}
service TransformCallBack {
  rpc DataQuery(DataQueryRequest) returns (DataQueryResponse);
}

Suggested changes:

  • Core: rpc DataQuery(...) => rpc QueryData(...)
  • Transform: rpc DataQuery(...) => rpc TransformData(...)
  • TransformCallBack: rpc DataQuery(...) => rpc QueryData(...)

Message contracts:
We currently have a mix of request/response messages on root level and request/response messages in "message level". Examples:

message DataQueryRequest {
  // Plugin Configuration
  PluginConfig config = 1;
  // Environment info
  map<string,string> headers = 2;
  // List of queries
  repeated DataQuery queries = 3;
  // Info about the user who calls the plugin.
  User user = 4;
}
message DataQueryResponse {
  // Arrow encoded DataFrames
  // Each frame encodes its own: Errors, meta, and refId
  repeated bytes frames = 1;
  // Additional response metadata
  map<string,string> metadata = 2;
}

message CallResource {
  message StringList {
    repeated string values = 1;
  }
  message Request {
    PluginConfig config = 1;
    string path = 2;
    string method = 3;
    string url = 4;
    map<string,StringList> headers = 5;
    bytes body = 6;
    User user = 7;
  }
  message Response {
    int32 code = 1;
    map<string,StringList> headers = 2;
    bytes body = 3;
  }
}

Suggested changes:
My suggestion is to use request/response at the "message level", since that seems to be an unofficial convention used in other places (terraform).

  • message DataQueryRequest / message DataQueryResponse => message QueryData { message Request {} message Response {} }
  • Transform: DataQuery => TransformData
    • I would prefer specific request/response messages instead of reusing the data query request/response, since then we can add fields to either without affecting the other: message TransformData { message Request {} message Response {} }
  • TransformCallback: message DataQueryRequest / message DataQueryResponse => message TransformDataCallback { message Request {} message Response {} }
    • The Request doesn't need plugin config, only org id and datasource id.

Various:

Sync frontend/backend metadata response and error/warning handling

To support a rollup indicator, the frontend has added a "notice" section to the response metadata:
https://github.com/grafana/grafana/blob/master/packages/grafana-data/src/types/data.ts#L18

This includes a list that can be info|warning|error:
https://github.com/grafana/grafana/blob/master/packages/grafana-data/src/types/data.ts#L48

We previously added a "warnings" section to the metadata response... this should be removed and replaced with the matching structure (that now has frontend support).

Make generated protobuf code internal and copy protobuf definition to Grafana repository?

Suggest copying the protobuf definition to the grafana/grafana repository and letting Grafana compile the protobuf definition to an internal package.
Suggest putting the compiled protobuf definition in an internal Go package in this repository.

Why?
This way the protobuf definition (protocol) will follow Grafana's semantic versioning. Since Grafana owns/defines/hosts the plugins, it makes sense that changes to the .proto are made first in the Grafana repository. After this, this SDK and possibly others implementing the protocol can decide when to update their .proto file and integrate the changes.
This will also allow Grafana to define extended protocol to support for example renderer, which may not be something we want to expose in SDK, at least not for now.

Would be good to do this before v1.0.
