
dosco / graphjin

2.8K 43.0 167.0 147.51 MB

GraphJin - Build NodeJS / GO APIs in 5 minutes not weeks

Home Page: https://graphjin.com

License: Apache License 2.0

Dockerfile 0.09% JavaScript 4.90% CSS 0.19% HTML 0.14% Shell 0.15% Go 92.92% Roff 0.34% Makefile 0.43% GAP 0.10% PLpgSQL 0.76%
graphql graphql-server postgresql golang automatic-api sql cloud-native database graph docker

graphjin's People

Contributors

andybar2, brandstettermichael, chirino, diegosz, dosco, foursigma, frederikhors, gonzaloserrano, goreleaserbot, kardasis, kpbird, lennyridel, lsnow99, nicocesar, ohkinozomu, pestdoktor, petmal, podhy, r3eg, rhnsharma, rkrishnasanka, rkumar0322, simonw, slava-vishnyakov, stnrd, storrence88, sudhakar, therealrara, wolfulus, wttw


graphjin's Issues

Error reading metadata

Hello,

Still trying to make embedded mode work. SuperGraph instance initialization fails when loading metadata from the server; inside GetFunctions(), this line is called:

err = rows.Scan(&fn, &fid, &fp.Type, &fp.Name, &fp.ID)

and the following error is returned:

sql: Scan error on column index 3, name "parameter_name": converting NULL to string is unsupported

The internal query, which reads from information_schema.routines, returns successfully. The problem is scanning the data from the "parameter_name" column.

I'm guessing this could be due to the internal switch from the pgx driver to the stdlib one, which does not support scanning NULL into string. Their recommendation in that case would be to use sql.NullString, sql.NullInt32, etc., but that would require some major refactoring of all those metadata structures in the source code.

I could be doing something silly here, otherwise it seems to me that others would be having the same error?

Blocklist only blocks queries with the plural name, not the singular (and vice versa)

What version of Super Graph are you using?

Latest master version using go.mod, SuperGraph as a library.

Steps to reproduce the issue (config used to run Super Graph).

Using this code:

Blocklist: []string{
  "players",
},

Expected behaviour and actual result.

Querying with:

query players {
  players {
    id
    name
  }
}

is blocked as expected: I get HTTP 200 with an empty body.

But if I query with:

query player {
  player {
    id
    name
  }
}

I get the first one! (ordered by ID ASC).

I think it should also block the query with the singular name, right?
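One way to fix this would be to normalize both the blocklist entry and the query selector to their singular and plural forms before comparing. A minimal sketch; the naive pluralizer and the `blocked`/`forms` helpers are purely illustrative, not Super Graph's actual API, which would presumably reuse the same inflection rules it uses for table-name mapping:

```go
package main

import (
	"fmt"
	"strings"
)

// forms returns the lowercased name in both naive singular and plural form.
// Real inflection rules (e.g. "people"/"person") would be more involved.
func forms(name string) []string {
	n := strings.ToLower(name)
	if strings.HasSuffix(n, "s") {
		return []string{n, strings.TrimSuffix(n, "s")}
	}
	return []string{n, n + "s"}
}

// blocked reports whether a query selector should be rejected,
// matching both spellings of every blocklist entry.
func blocked(blocklist []string, selector string) bool {
	for _, b := range blocklist {
		for _, bf := range forms(b) {
			for _, sf := range forms(selector) {
				if bf == sf {
					return true
				}
			}
		}
	}
	return false
}

func main() {
	bl := []string{"players"}
	fmt.Println(blocked(bl, "players")) // true
	fmt.Println(blocked(bl, "player"))  // true
	fmt.Println(blocked(bl, "teams"))   // false
}
```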

ReadInConfig does not honor the folder in the path

What version of Super Graph are you using? super-graph version

v0.14.10

Steps to reproduce the issue (config used to run Super Graph).

package main

import (
	"context"
	"database/sql"
	"fmt"
	"github.com/dosco/super-graph/core"
	_ "github.com/jackc/pgx/v4/stdlib"
	"log"
)

func main() {
	db, err := sql.Open("pgx", "postgres://postgres:postgres@localhost:5432/api_development")
	if err != nil {
		log.Fatal(err)
	}

	conf, err := core.ReadInConfig("./config/dev.yml")
	if err != nil {
		log.Fatalln(err)
	}

	sg, err := core.NewSuperGraph(conf, db)
	if err != nil {
		log.Fatalln(err)
	}

	query := `
		query {
			users {
				id
				name
			}
		}`
	ctx := context.Background()
	ctx = context.WithValue(ctx, core.UserIDKey, 1)
	res, err := sg.GraphQL(ctx, query, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(res.Data))
}

Expected behaviour and actual result.

core.ReadInConfig("./config/dev.yml")

It actually reads dev.yml, not ./config/dev.yml; the folder is not included in the path.

2020/06/07 12:05:00 open dev.yml: The system cannot find the file specified.

Add support for introspection queries

What would you like to be added:

Super Graph currently does not support introspection queries. Introspection queries are used by GraphQL editors to help with auto-complete and other useful features. They help someone building a query know what tables, columns, functions, etc are available to query. We currently use a mocked response to satisfy the GraphQL Playground UI.

The changes for this would be limited to serv/introsp.go. We would need to iterate through the database schema (the global variable schema) and build out the introspection response object to be encoded and returned as JSON. In production mode this feature would need to be disabled.

To learn more
https://graphqlmastery.com/blog/graphql-introspection-and-introspection-queries
https://graphql.org/learn/introspection/

This is what an introspection query response looks like

{
  "data": {
    "__schema": {
      "queryType": {
        "name": "Query"
      },
      "mutationType": null,
      "subscriptionType": null
    }
  }
}

Why is this needed:

Will improve our support for GraphQL IDEs. This includes the GraphQL Playground UI that is built in to Super Graph.
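Building the minimal response shown above from typed structs could look like this; a full implementation in serv/introsp.go would walk the schema and emit all types and fields, but the outer shape is fixed by the GraphQL spec. The struct names here are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type namedType struct {
	Name string `json:"name"`
}

// introspection mirrors the minimal "__schema" envelope; a real response
// would also carry "types", "directives", etc.
type introspection struct {
	Data struct {
		Schema struct {
			QueryType        *namedType `json:"queryType"`
			MutationType     *namedType `json:"mutationType"`
			SubscriptionType *namedType `json:"subscriptionType"`
		} `json:"__schema"`
	} `json:"data"`
}

func buildIntrospection() ([]byte, error) {
	var r introspection
	r.Data.Schema.QueryType = &namedType{Name: "Query"}
	// MutationType / SubscriptionType stay nil and marshal as null.
	return json.Marshal(r)
}

func main() {
	b, err := buildIntrospection()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```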

Nested Inserts

What would you like to be added:

Currently single or bulk inserts do not support nested objects. For example, you cannot create a post and an author at the same time. It would be useful to be able to create a new row and insert or update related rows along with it. The majority of the changes would be in the psql/mutate.go file.

Why is this needed:

This is a common pattern seen in web apps and supporting it will eliminate multiple round trips currently used to achieve this.
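A nested insert might look something like this; the syntax is purely illustrative of the feature being requested, not an existing API:

```graphql
mutation {
  post(insert: {
    title: "Hello"
    body: "First post"
    # hypothetical: create the related author row in the same mutation
    author: { create: { name: "Ada" } }
  }) {
    id
    author { id name }
  }
}
```

The compiler would turn this into a single SQL statement (e.g. a CTE inserting the author first and referencing its id in the post insert), which is what eliminates the extra round trips.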

super-graph new generates invalid yml format

What version of Super Graph are you using? super-graph version

It says unknown, though I just installed it with go get.

Have you tried reproducing the issue with the latest release?

This is occurring directly after running go get, so I believe so

What is the hardware spec (RAM, OS)?

16GB Ram, Manjaro Linux

Steps to reproduce the issue (config used to run Super Graph).

Install supergraph with go get
Run super-graph new myapp

Expected behaviour and actual result.

Expected behavior:
The dev and prod ymls should look like:

database:
  type: postgres
  host: db
  port: 5432
  dbname: myapp
  user: postgres
  password: postgres

Actual result:
(note the dbname line)

database:
  type: postgres
  host: db
  port: 5432
  dbname:myapp
  user: postgres
  password: postgres

Super Graph cannot start with this small bug until you insert the space after `dbname:`:

ERR failed to read config: While parsing config: yaml: line 138: could not find expected ':'

I tried taking a look at the template file for the config ymls, but I did not see anything that would cause this at first glance.

Add support for request tracing.

What would you like to be added:

Integrate with a popular request tracing framework like Jaeger. There may be other options I'm not as familiar with. If possible, request tracing should go all the way down to the DB and to the remote endpoints called for Remote Joins.

Why is this needed:

Observability is a big part of good infrastructure. This will allow people using Super Graph to automatically get insight into requests flowing through our software.

Move globals used in the `serv` package to their own struct.

What would you like to be added:

The serv package maintains a set of global variables for things like database connection pools, loggers, compilers, config, etc. I would like these to be wrapped in a struct, and for that struct to be passed around everywhere it's needed instead of using globals.

var (
	logger   zerolog.Logger  // logger for everything but errors
	errlog   zerolog.Logger  // logger for errors includes line numbers
	conf     *config         // parsed config
	confPath string          // path to the config file
	db       *pgxpool.Pool   // database connection pool
	schema   *psql.DBSchema  // database tables, columns and relationships
	qcompile *qcode.Compiler // qcode compiler
	pcompile *psql.Compiler  // postgres sql compiler
)

Why is this needed:

Firstly, this will make it easier to write tests. It can also help with things like supporting multiple databases or instances. And it's much cleaner than using globals.
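A minimal sketch of the wrapping, using stdlib stand-ins (log.Logger, *sql.DB) for zerolog, pgxpool and the compilers so it stays self-contained; the real struct would carry the field types shown in the snippet above:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"
)

type config struct {
	ConfigPath string
}

// service bundles what are currently package-level globals in `serv`.
type service struct {
	logger *log.Logger // logger for everything but errors
	errlog *log.Logger // logger for errors
	conf   *config     // parsed config
	db     *sql.DB     // database connection pool
	// qcompile, pcompile, schema, ... would live here too
}

func newService(conf *config, db *sql.DB) *service {
	return &service{
		logger: log.New(os.Stdout, "", 0),
		errlog: log.New(os.Stderr, "", 0),
		conf:   conf,
		db:     db,
	}
}

func main() {
	s := newService(&config{ConfigPath: "./config"}, nil)
	fmt.Println(s.conf.ConfigPath)
}
```

Handlers then hang off *service as methods, which is what makes multiple instances and tests straightforward.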

Compare to hasura and other alternatives

Hi Team. Stumbled across this project while looking for good "automatic-api" projects.
Surely you know about the similar offering from Hasura (https://github.com/hasura/graphql-engine), which is at a much later stage in its lifecycle compared to super-graph.

Curious as to how you think this project is currently different and how you intend to evolve it differently? It would be helpful to document that.

Summary of ideas for SuperGraph as a library from a neophyte's point of view

I think I can summarize the problems that a neophyte (GraphQL and Go) like me can have in these three points:

  1. How to integrate SuperGraph (especially for CRUD) into an existing Go project that does not already use GraphQL
  2. How to integrate SuperGraph (especially for CRUD) into an existing Go project that already uses GraphQL
  3. How to perform other actions (call other code/packages) before, during or after SuperGraph operations (the so-called "actions" of Hasura and similar projects)

#1

For point number 1 I think I have found a good way with this code:

package main

import (
	"context"
	"database/sql"
	"encoding/json"
	"net/http"

	"github.com/go-chi/render"
	"github.com/dosco/super-graph/core"
	"github.com/go-chi/chi"
	_ "github.com/jackc/pgx/v4/stdlib"
)

type reqBody struct {
	Query string `json:"query"`
}

func sgHandler(sg *core.SuperGraph) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Check to ensure query was provided in the request body
		if r.Body == nil {
			http.Error(w, "Must provide graphQL query in request body", 400)
			return
		}

		var rBody reqBody
		// Decode the request body into rBody
		err := json.NewDecoder(r.Body).Decode(&rBody)
		if err != nil {
			http.Error(w, "Error parsing JSON request body", 400)
			return
		}

		// Execute graphQL query
		ctx := context.WithValue(r.Context(), core.UserIDKey, 3) // whatever
		res, err := sg.GraphQL(ctx, rBody.Query, nil)
		// check err

		// render.JSON comes from the chi/render package and handles
		// marshalling to json, automatically escaping HTML and setting
		// the Content-Type as application/json.
		render.JSON(w, r, res.Data)
	}
}

func main() {
	dbConn, err := sql.Open("pgx", "DB_URL")
	// check err

	sg, err := core.NewSuperGraph(nil, dbConn)
	// check err

	router := chi.NewRouter()

	router.Group(func(r chi.Router) {
		r.Post("/graphql", sgHandler(sg))
	})

	// check err
	http.ListenAndServe(":8080", router)
}

// Some code from https://medium.com/@bradford_hamilton/building-an-api-with-graphql-and-go-9350df5c9356

it works (although now I have to fully understand how SuperGraph works).

  • Do you have any advice to give me?

  • Anything more solid for future scaling?

#2

For point number 2 (how to integrate SuperGraph, especially for CRUD, into an existing Go project that already uses GraphQL) I don't really know what to do. I have an idea that could work: a chain of middlewares, but I still have to understand it well. I'll try to explain with an example:

func main() {
  // initialization... see #1
	router.Group(func(r chi.Router) {
		router.Post("/graphql", superGraphOrSecondHandler())
	})
}

func superGraphOrSecondHandler() {
	// if the SuperGraph handler returns
	//   err != nil && err == supergraph.ErrQueryNotFound // I'm just imagining this error
	// then I can call the second GraphQL handler with
	//   next()
}
  • Is this a good way of doing it in your opinion?

  • Is there a type of error that I can already use for this case (when I can't find the query in the allow list)? Or should I simply check the error string?

#3

For point number 3 (how to perform other actions, i.e. call other code/packages, before, during or after SuperGraph operations: the so-called "actions" of Hasura and similar projects) I don't really have good ideas. And this is the point that scares me most of all.

I read #69. I think @howesteve had a good idea, and your example clarified my doubts.

I thought of something like this:

func httpHandler(w, r) {
  ...read in json
  ...validate json
  ...call core.GraphQL(context, query, validated_json)
  // my idea here: check if this just finished query is followed by an "action"/code to call
  query_name := core.Name() // https://pkg.go.dev/github.com/dosco/super-graph/core?tab=doc#Name
  query_operation := core.Operation() // https://pkg.go.dev/github.com/dosco/super-graph/core?tab=doc#Operation
  ...checkForActionsAndDo(query_name, query_operation)
  ...return output to user
}
  • Is this a good way of doing it in your opinion? What am I doing wrong?

Extra point

I think it would be very useful to have an example project (like this one for AuthBoss) with many (if not all) of the SuperGraph features implemented.

This would also greatly help the spread of SuperGraph in the world in my opinion.

As soon as I gather some of your feedback I'll start working on this sample project.


I hope these ideas can help someone.

I look forward to your ideas, @dosco.


Related issues:

failed to connect to database

Using steps here: https://supergraph.dev/guide.html#get-started

specifically with this:

docker-compose run blog_api ./super-graph db:setup

I have this error:

ERR connect failed err="failed to connect to `host=db user=postgres database=`: dial error (dial tcp: lookup db on 127.0.0.11:53: no such host)"
FTL app/serv/cmd_migrate.go:62 > failed to connect to database error="failed to connect to `host=db user=postgres database=`: dial error (dial tcp: lookup db on 127.0.0.11:53: no such host)"

Order_by ID

I think there is a problem with order_by ID.

I'm trying this query:

{
  players(first: 5, after: $cursor, order_by: {id: desc}) {
    id
    score
  }
}

decrypted cursor is: 4695 (the ID)

and the query it generates is:

SELECT json_build_object('players', "__sel_0"."json", 'players_cursor', "__sel_0"."cursor") as "__root" FROM (SELECT coalesce(json_agg("__sel_0"."json"), '[]') as "json", CONCAT_WS(',', max("__cur_0")) as "cursor" FROM (SELECT json_build_object('id', "players_0"."id", 'score', "players_0"."score") AS "json", LAST_VALUE("players_0"."id") OVER() AS "__cur_0" FROM (WITH "__cur" AS (SELECT a[1] as "id" FROM string_to_array('4695', ',') as a) SELECT "players"."id", "players"."score" FROM "players", "__cur" ORDER BY "players"."id" DESC LIMIT ('5') :: integer) AS "players_0") AS "__sel_0") AS "__sel_0"

As you can see, there is no WHERE clause for the cursor.

If I use this query:

{
  players(first: 5, after: $cursor, order_by: {created_at: desc}) {
    id
    score
  }
}

decrypted cursor is: 2020-02-27 12:21:23.282411+00,4695

and the query it generates is:

SELECT json_build_object('players', "__sel_0"."json", 'players_cursor', "__sel_0"."cursor") as "__root" FROM (SELECT coalesce(json_agg("__sel_0"."json"), '[]') as "json", CONCAT_WS(',', max("__cur_0"), max("__cur_1")) as "cursor" FROM (SELECT json_build_object('id', "players_0"."id", 'score', "players_0"."score") AS "json", LAST_VALUE("players_0"."created_at") OVER() AS "__cur_0", LAST_VALUE("players_0"."id") OVER() AS "__cur_1" FROM (WITH "__cur" AS (SELECT a[1] as "created_at", a[2] as "id" FROM string_to_array('2020-02-27 12:21:23.282411+00,4695', ',') as a) SELECT "players"."id", "players"."score", "players"."created_at" FROM "players", "__cur" WHERE (((("players"."created_at") < "__cur"."created_at" :: timestamp with time zone) OR ((("players"."created_at") = "__cur"."created_at" :: timestamp with time zone) AND (("players"."id") > "__cur"."id" :: bigint)))) ORDER BY "players"."created_at" DESC, "players"."id" ASC LIMIT ('5') :: integer) AS "players_0") AS "__sel_0") AS "__sel_0"

It is working correctly.

So... something strange, right?

Is order_by ID working?

Super Graph returns a row instance when the table name is singular

query {
  actor {
    actor_id
    first_name
    last_name
  }
}

Expected Behaviour

Should return an array of actors.

{
  "data": {
    "actor": [{
      "actor_id": 1,
      "first_name": "Penelope",
      "last_name": "Guiness"
    },
    {...},
    {...}]
  }
}

Actual Behavior

Returns one instance of an actor.

{
  "data": {
    "actor": {
      "actor_id": 1,
      "first_name": "Penelope",
      "last_name": "Guiness"
    }
  }
}

Steps to Reproduce the Problem

  1. The behaviour happens when the table name is singular (actor vs actors).
  2. ALTER TABLE actor RENAME TO actors fixes the issue

Misc

I built Super Graph from the source code, ran go build, and modified the dev.yaml file.

Getting an error while trying to log in to the Rails app.

Hi,

I have deployed Super Graph, Postgres and the Rails app using https://github.com/dosco/super-graph/blob/master/docker-compose.yml, but when I try to sign up in the Rails app I get an error. Can you please let me know the default login for this Rails app?

ActiveRecord::NotNullViolation in Devise::RegistrationsController#create
PG::NotNullViolation: ERROR: null value in column "full_name" violates not-null constraint DETAIL: Failing row contains (37, null, null, null, [email protected], $2a$11$z6h2KUZYGLJz3Otr3BBtoOm8ID.LNPM6bx9qn.zZ12dyOx/EbspMC, null, null, null, 2020-06-15 14:55:10.488498, 2020-06-15 14:55:10.488498).

Error when trying to start Super Graph via the docker-compose setup generated with go get.

Hi,

I was able to generate the files using the commands below:

go get github.com/dosco/super-graph
super-graph new testapp

After that, I tried to start the services using the docker-compose file:

ashar@testing:~/testapp$ docker logs -f testapp_testapp_api_1
restarting "./super-graph" when it changes (additional dirs: ./config)
INF roles_query not defined: attribute based access control disabled
ERR failed to initialize Super Graph: error fetching version: failed to connect to host=db user=postgres database=testapp_development: server error (FATAL: database "testapp_development" does not exist (SQLSTATE 3D000))


Searching around, I found that the command below fixes that, but I'm also getting some query errors:

ashar@testing:~/testapp$ docker-compose run testapp_api ./super-graph db:setup
Creating network "testapp_default" with the default driver
Creating testapp_db_1 ... done
INF created database 'testapp_development'
INF 2020-06-16 10:15:01 executing 0_init.sql up
-- Write your migrate up statements here

CREATE TABLE public.users (
id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
full_name text,
email text UNIQUE NOT NULL CHECK (length(email) < 255),
created_at timestamptz NOT NULL NOT NULL DEFAULT NOW(),
updated_at timestamptz NOT NULL NOT NULL DEFAULT NOW()
);

ERR ERROR: syntax error at or near ";" (SQLSTATE 42601)

Please let me know what is wrong.

Super Graph as a library (aka. embedded mode)

What would you like to be added:

A clean API to integrate Super Graph into other Go apps. This could be done as an http handler that can be plugged into an existing router, or at a much lower level where you provide a config object and create a new Super Graph instance to use in your code.

Also, hooks could be added for various events like onQuery, onMutation, onMutationComplete, etc. This would let code using Super Graph as a library provide its own behaviour to execute during request handling.

The Super Graph GraphQL compilers (QCode and SQL) are already available as a library; this work would focus on moving more pieces of the serv package into a clean API.

Why is this needed:

Currently I run two services: one for custom APIs like authentication, and the other Super Graph. Going forward, other custom endpoints like file upload would probably also be added to the first service. It would be great if I could instead bundle it all into a single app and also be able to augment Super Graph with my own app code.
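The hooks part of the request could be sketched like this; the Hooks struct and the run dispatcher are hypothetical names, not an existing API:

```go
package main

import "fmt"

// Hooks sketches the proposed onQuery / onMutation / onMutationComplete
// callbacks. Signatures are illustrative only.
type Hooks struct {
	OnQuery            func(name string)
	OnMutation         func(name string)
	OnMutationComplete func(name string, err error)
}

// run invokes the matching hooks around an operation.
func run(h Hooks, op, name string, exec func() error) error {
	switch op {
	case "query":
		if h.OnQuery != nil {
			h.OnQuery(name)
		}
	case "mutation":
		if h.OnMutation != nil {
			h.OnMutation(name)
		}
	}
	err := exec()
	if op == "mutation" && h.OnMutationComplete != nil {
		h.OnMutationComplete(name, err)
	}
	return err
}

func main() {
	h := Hooks{
		OnMutation:         func(n string) { fmt.Println("before", n) },
		OnMutationComplete: func(n string, err error) { fmt.Println("after", n, err) },
	}
	run(h, "mutation", "createPost", func() error { return nil })
}
```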

RLS security support via session variables

Hello,

I'd like to suggest a feature: setting the current role from the session. That would allow RLS policies to run using the session's user name, without needing to set up any security model in the config file.

My implementation suggestion would be to hook into SQL generation and prepend the session user name like this:

conn.Query(ctx, `SET session authorization howe; SELECT * FROM...`)

This would allow reusing connections from the pool (as they already are) without more worries.

While we're at it, why not support session variables as well? They could be used in RLS policies, functions, triggers, etc. That would be just as easy to implement:
conn.Query(ctx, `SET supergraph.company_id=1234; SELECT * FROM...`)

Of course, the '1234' parameter above should be sent as a parameter in the prepared query; that was just an easier-to-read example.

Let's not forget to reset all those variables before running new queries; the final command should look something like this:
conn.Query(ctx, `RESET ALL; SET supergraph.myvar1='xxx'; SET supergraph.myvar2='yyy'; SELECT * FROM...`)

I think there should internally be a hook for before queries are sent to the server. The standalone server should have a default implementation getting vars from the JWT etc., but when running in embedded mode the caller should be able to customize it. For JWT, the "sub" claim should probably be the role name, and the payload could set the session variables.

I didn't take a deep look at the source, but this should be easy enough to implement. I could look into it if you're not planning to implement it yourself; I think it's a good idea.
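The prefixing idea above can be sketched as a small query builder. Using Postgres's set_config() instead of raw SET lets the values travel as bind parameters; note that with most drivers each statement would in practice be executed separately (or via a batch), since bind parameters generally can't span multi-statement strings. `buildSessionPrefix` is a hypothetical helper:

```go
package main

import "fmt"

// buildSessionPrefix returns statements to run before a query so RLS
// policies can read the session variables via current_setting().
// Keys are spliced into the SQL and so must be validated against an
// allow-list in real code; values go through bind parameters.
func buildSessionPrefix(vars map[string]string) (sql string, args []interface{}) {
	sql = "RESET ALL;"
	i := 1
	for k, v := range vars {
		sql += fmt.Sprintf(" SELECT set_config('supergraph.%s', $%d, false);", k, i)
		args = append(args, v)
		i++
	}
	return sql, args
}

func main() {
	q, args := buildSessionPrefix(map[string]string{"company_id": "1234"})
	fmt.Println(q)
	fmt.Println(args)
}
```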

Any comments?

Thanks,
Howe

Support TLS connections

What would you like to be added:

Using a TLS connection to the DB.

Why is this needed:

For improved security in this day and age of cloud infrastructure.
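For pgx/libpq the TLS settings travel in the connection string; sslmode and sslrootcert are standard libpq parameters, and the certificate path below is just an example. A small builder:

```go
package main

import (
	"fmt"
	"net/url"
)

// tlsDSN builds a pgx/libpq-style connection URI that requires TLS and
// verifies the server certificate against a CA bundle.
func tlsDSN(user, pass, host, db string) string {
	u := url.URL{
		Scheme: "postgres",
		User:   url.UserPassword(user, pass),
		Host:   host,
		Path:   "/" + db,
	}
	q := url.Values{}
	q.Set("sslmode", "verify-full")
	q.Set("sslrootcert", "/etc/certs/ca.pem") // example path
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(tlsDSN("postgres", "postgres", "db.example.com:5432", "app_production"))
}
```

Config-wise this would likely surface as new database options (ssl mode, cert paths) that get appended to the DSN Super Graph builds internally.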

"super-graph new" HTML escapes generated configuration files

What version of Super Graph are you using? super-graph version

"unknown-version"

git describe says v0.11-210-gcd7f26b

Steps to reproduce the issue (config used to run Super Graph).

super-graph new app

Expected behaviour and actual result.

config/seed.js and config/migrations/0_init.sql are generated HTML-escaped

e.g.

expected:

for (i = 0; i < 10; i++) {

actual:

for (i = 0; i &lt; 10; i++) {

It looks like internal/serv/cmd_new.go should import text/template instead of html/template.

https://github.com/dosco/super-graph/blob/master/internal/serv/cmd_new.go#L5
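The difference is easy to demonstrate: html/template contextually escapes any data it interpolates, while text/template emits it verbatim, which is what you want for non-HTML output like seed.js and SQL migrations. A minimal repro of the `<` to `&lt;` corruption:

```go
package main

import (
	"bytes"
	"fmt"
	ht "html/template"
	tt "text/template"
)

// render runs the same template body through both packages.
func render(data string) (textOut, htmlOut string) {
	var b bytes.Buffer
	tt.Must(tt.New("t").Parse("for ({{.}}; i++) {")).Execute(&b, data)
	textOut = b.String()
	b.Reset()
	ht.Must(ht.New("h").Parse("for ({{.}}; i++) {")).Execute(&b, data)
	htmlOut = b.String()
	return
}

func main() {
	textOut, htmlOut := render("i = 0; i < 10")
	fmt.Println(textOut) // for (i = 0; i < 10; i++) {
	fmt.Println(htmlOut) // for (i = 0; i &lt; 10; i++) {
}
```

So swapping the import in cmd_new.go (the two packages share the same API) is the whole fix.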

CRUD roles inheriting

Let's take the example config here:

roles:
  - name: user
    tables:
      - name: products
        query:
          filters: ["{ user_id: { eq: $user_id } }"]
        insert:
          filters: ["{ user_id: { eq: $user_id } }"]
        update:
          filters: ["{ user_id: { eq: $user_id } }"]

As you can see, we're repeating the string multiple times:

["{ user_id: { eq: $user_id } }"]

It would be amazing to have something like:

roles:
  - name: user
    tables:
      - name: products
        crud:
          filters: ["{ user_id: { eq: $user_id } }"]
        query:
          # here it is inherited crud's "filters"
        insert:
          # here it is inherited crud's "filters"
        update:
          # here it is inherited crud's "filters"

And not just for filters: any!

(And the same in SuperGraph as a library Golang config, of course.)

Don't you think, @dosco?

Add API request rate limiting

What would you like to be added:

Add support for configuring either local or centralized rate limiting for the query endpoint. Rate limits should apply per IP and per User ID, and maybe at a higher global level. This change will require new config options and will mostly be limited to the serv package.

Possible options
https://godoc.org/golang.org/x/time/rate
https://github.com/uber-go/ratelimit

Why is this needed:

APIs need rate limiting to prevent a single client from overloading the DB and to help protect against any sort of distributed resource-consumption attack.

How to perform authorization?

It's a pretty cool repo 👍

You have a built-in solution for authentication via JWT tokens, but what about authorization?

docker-compose environment documentation needed.

Hi,

I can see the Postgres-related environment variables below for the supergraph docker container, but I am unable to find the environment variable for the database name.

Postgres related environment Variables
SG_DATABASE_HOST
SG_DATABASE_PORT
SG_DATABASE_USER
SG_DATABASE_PASSWORD

Can you please let me know the environment variable for the database?

Add ability to fire web-hooks on mutation queries

What would you like to be added:

Add the ability to call a web-hook on successful completion of inserts, updates, deletes or upserts.

Why is this needed:

This will allow app developers to build complex flows, like sending an email once a new item is added to the database, or triggering some other kind of workflow after a mutation succeeds.

Add a health check endpoint

What would you like to be added:

Add a health check. Currently in production the / route does not exist; I would like it to serve as a health check endpoint in production mode. It would have to confirm the database connection is working and return a 200, or a 500 if a failure is detected. This should be a fairly straightforward change, limited to the serv package.

Why is this needed:

Many cloud environments like Kubernetes, Google Cloud Run, App Engine and several load balancers perform health checks before directing traffic to an instance.

Incorrect SQL query using variables for filters

I think we have a problem with filters.

Inspired by this: #1 (comment)

I used this in my dev.yml:

variables:
  account_id: "select account_id from users where id = $user_id"

roles:
  - name: user
    tables:
      - name: players
        query:
          filters: ["{ account_id: { _eq: $account_id } }"]

And I get this error:

ERR C:/super-graph/serv/http.go:104 > failed to handle request error="ERROR: invalid input syntax for type bigint: \"select account_id from users where id = 2\" (SQLSTATE 22P02)"
SELECT "_sg_auth_info"."role", (CASE "_sg_auth_info"."role" WHEN 'user' THEN (SELECT json_build_object('player', "__sel_0"."json") as "__root" FROM (SELECT json_build_object('id', "players_0"."id", 'created_at', "players_0"."created_at", 'account_id', "players_0"."account_id", 'amount', "players_0"."amount", 'note', "players_0"."note") AS "json" FROM (SELECT "players"."id", "players"."created_at", "players"."account_id", "players"."amount", "players"."note" FROM "players" WHERE (((("players"."account_id") = 'select account_id from users where id = 2' :: bigint) AND (("players"."id") =  '2' :: bigint))) LIMIT ('1') :: integer) AS "players_0") AS "__sel_0") WHEN 'admin' THEN (SELECT json_build_object('player', "__sel_0"."json") as "__root" FROM (SELECT json_build_object('id', "players_0"."id", 'created_at', "players_0"."created_at", 'account_id', "players_0"."account_id", 'amount', "players_0"."amount", 'note', "players_0"."note") AS "json" FROM (SELECT "players"."id", "players"."created_at", "players"."account_id", "players"."amount", "players"."note" FROM "players" WHERE ((("players"."id") =  '2' :: bigint)) LIMIT ('1') :: integer) AS "players_0") AS "__sel_0") END) FROM (SELECT (CASE WHEN EXISTS (SELECT * FROM users WHERE id = 2) THEN (SELECT (CASE WHEN id = 1000 THEN 'admin' ELSE 'user' END) FROM (SELECT * FROM users WHERE id = 2) AS "_sg_auth_roles_query" LIMIT 1) ELSE 'anon' END) FROM (VALUES (1)) AS "_sg_auth_filler") AS "_sg_auth_info"(role) LIMIT 1

The query is visibly incorrect. Is this a bug, or my mistake?

panic: file does not exist

What version of Super Graph are you using? super-graph version

Super Graph v0.12.8-1-g76340ab
For documentation, visit https://supergraph.dev

Commit SHA-1 : 76340ab
Commit timestamp : 2020-01-14 01:08:04 -0500
Branch : master
Go version : go1.13.6

Licensed under the Apache Public License 2.0
Copyright 2015-2019 Vikram Rangnekar.

Have you tried reproducing the issue with the latest release?

Cloned the latest release

What is the hardware spec (RAM, OS)?

72GB, RHEL 7 OS

Steps to reproduce the issue (config used to run Super Graph).

$ super-graph new gqlapp
4:46PM INF created 'gqlapp'
4:46PM INF created 'gqlapp/Dockerfile'
4:46PM INF created 'gqlapp/docker-compose.yml'
4:46PM INF created 'gqlapp/config'
4:46PM INF created 'gqlapp/config/dev.yml'
4:46PM INF created 'gqlapp/config/prod.yml'
4:46PM INF created 'gqlapp/config/seed.js'
4:46PM INF created 'gqlapp/config/migrations'
panic: file does not exist

goroutine 1 [running]:
github.com/GeertJohan/go%2erice.(*Box).MustString(...)
/home/appworld/go/pkg/mod/github.com/!geert!johan/[email protected]/box.go:329
github.com/dosco/super-graph/serv.(*Templ).get(0xc00006be50, 0xd112dd, 0xc, 0x1886480, 0xc00006a130, 0xc00006a130, 0x1da8701, 0x2)
/home/appworld/super-graph/serv/cmd_new.go:114 +0x205
github.com/dosco/super-graph/serv.cmdNew.func9(0xc0000345a0, 0x25, 0x1886480, 0xc00006a130)
/home/appworld/super-graph/serv/cmd_new.go:94 +0x43
github.com/dosco/super-graph/serv.ifNotExists(0xc0000345a0, 0x25, 0xc000103c50)
/home/appworld/super-graph/serv/cmd_new.go:144 +0xb3
github.com/dosco/super-graph/serv.cmdNew(0xc000249180, 0xc00006be30, 0x1, 0x1)
/home/appworld/super-graph/serv/cmd_new.go:93 +0x7b5
github.com/spf13/cobra.(*Command).execute(0xc000249180, 0xc00006be00, 0x1, 0x1, 0xc000249180, 0xc00006be00)
/home/appworld/go/pkg/mod/github.com/spf13/[email protected]/command.go:830 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0xc000139680, 0x1f4d620, 0xd04a55, 0x4)
/home/appworld/go/pkg/mod/github.com/spf13/[email protected]/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
/home/appworld/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
github.com/dosco/super-graph/serv.Init()
/home/appworld/super-graph/serv/cmd.go:154 +0xbd0
main.main()
/home/appworld/super-graph/main.go:8 +0x20

Expected behaviour and actual result.

Generate app as per documentation

Bug reading config file?

Hello,

On the most recent version, there seems to be a bug finding the config file (function newViper()) if a file extension is provided.

conf, err := core.ReadInConfig("supergraph")      // => finds the config file
conf, err := core.ReadInConfig("supergraph.yaml") // => returns an error: config file not found

The following seems to fix it.

func newViper(configPath, configFile string) *viper.Viper {
	vi := viper.New()

	vi.SetEnvPrefix("SG")
	vi.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
	vi.AutomaticEnv()

	if filepath.Ext(configFile) != "" {
		vi.SetConfigFile(configFile)
	} else {
		vi.SetConfigName(configFile)
		vi.AddConfigPath(configPath)
		vi.AddConfigPath("./config")
	}

	return vi
}

Does not resolve queries through a linking table (Many-To-Many)

I have three tables: film, actor, and film_actor (details below). The following GraphQL query fails:

query {
  actor {
    actor_id
    first_name
    last_name
    film {
      film_id
    }
  }
}

# Output 

{
  "error": {
    "error": "something wrong no remote ids found in db response",
    "data": null
  }
}

                                              Table "public.film"
      Column      |            Type             | Collation | Nullable |                Default                
------------------+-----------------------------+-----------+----------+---------------------------------------
 film_id          | integer                     |           | not null | nextval('film_film_id_seq'::regclass)
 title            | character varying(255)      |           | not null | 
 description      | text                        |           |          | 
 release_year     | year                        |           |          | 
 language_id      | smallint                    |           | not null | 
 rental_duration  | smallint                    |           | not null | 3
 rental_rate      | numeric(4,2)                |           | not null | 4.99
 length           | smallint                    |           |          | 
 replacement_cost | numeric(5,2)                |           | not null | 19.99
 rating           | mpaa_rating                 |           |          | 'G'::mpaa_rating
 last_update      | timestamp without time zone |           | not null | now()
 special_features | text[]                      |           |          | 
 fulltext         | tsvector                    |           | not null | 
Indexes:
    "film_pkey" PRIMARY KEY, btree (film_id)
    "film_fulltext_idx" gist (fulltext)
    "idx_fk_language_id" btree (language_id)
    "idx_title" btree (title)
Foreign-key constraints:
    "film_language_id_fkey" FOREIGN KEY (language_id) REFERENCES language(language_id) ON UPDATE CASCADE ON DELETE RESTRICT
Referenced by:
    TABLE "film_actor" CONSTRAINT "film_actor_film_id_fkey" FOREIGN KEY (film_id) REFERENCES film(film_id) ON UPDATE CASCADE ON DELETE RESTRICT
    TABLE "film_category" CONSTRAINT "film_category_film_id_fkey" FOREIGN KEY (film_id) REFERENCES film(film_id) ON UPDATE CASCADE ON DELETE RESTRICT
    TABLE "inventory" CONSTRAINT "inventory_film_id_fkey" FOREIGN KEY (film_id) REFERENCES film(film_id) ON UPDATE CASCADE ON DELETE RESTRICT
Triggers:
    film_fulltext_trigger BEFORE INSERT OR UPDATE ON film FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger('fulltext', 'pg_catalog.english', 'title', 'description')
    last_updated BEFORE UPDATE ON film FOR EACH ROW EXECUTE PROCEDURE last_updated()
                                            Table "public.actor"
   Column    |            Type             | Collation | Nullable |                 Default                 
-------------+-----------------------------+-----------+----------+-----------------------------------------
 actor_id    | integer                     |           | not null | nextval('actor_actor_id_seq'::regclass)
 first_name  | character varying(45)       |           | not null | 
 last_name   | character varying(45)       |           | not null | 
 last_update | timestamp without time zone |           | not null | now()
Indexes:
    "actor_pkey" PRIMARY KEY, btree (actor_id)
    "idx_actor_last_name" btree (last_name)
Referenced by:
    TABLE "film_actor" CONSTRAINT "film_actor_actor_id_fkey" FOREIGN KEY (actor_id) REFERENCES actor(actor_id) ON UPDATE CASCADE ON DELETE RESTRICT
Triggers:
    last_updated BEFORE UPDATE ON actor FOR EACH ROW EXECUTE PROCEDURE last_updated()
                         Table "public.film_actor"
   Column    |            Type             | Collation | Nullable | Default 
-------------+-----------------------------+-----------+----------+---------
 actor_id    | smallint                    |           | not null | 
 film_id     | smallint                    |           | not null | 
 last_update | timestamp without time zone |           | not null | now()
Indexes:
    "film_actor_pkey" PRIMARY KEY, btree (actor_id, film_id)
    "idx_fk_film_id" btree (film_id)
Foreign-key constraints:
    "film_actor_actor_id_fkey" FOREIGN KEY (actor_id) REFERENCES actor(actor_id) ON UPDATE CASCADE ON DELETE RESTRICT
    "film_actor_film_id_fkey" FOREIGN KEY (film_id) REFERENCES film(film_id) ON UPDATE CASCADE ON DELETE RESTRICT
Triggers:
    last_updated BEFORE UPDATE ON film_actor FOR EACH ROW EXECUTE PROCEDURE last_updated()

Thoughts on generic adapters for different SQL servers?

What would you like to be added:

I'd love to use this with sqlite3. Right now a lot of the code is PostgreSQL specific. By using an adapter design, we could build versions that support sqlite3, mysql, mssql, etc.

Why is this needed:

Would make it easier to port to different SQL backends.

__typename form

The GraphQL servers I've always used respond to __typename with the capitalized singular form of the table name.

With Super Graph, however, this is not the case: the answer is the table name itself, which creates havoc with libraries like URQL and nothing works anymore:

urql-exchange-graphcache.mjs:58:
Heuristic Fragment Matching: A fragment is trying to match against the `players` type, but the type condition is `Player`.
Since GraphQL allows for interfaces `Player` may be an interface.

What do you think?

Can we do it automatically or do we need to add a field in the config of the single entity?

"data" field in JSON response using SuperGraph as library

Using Super Graph as a library, I found that JSON responses do not start with "data".

Example:

{
  "players": [...]
}

instead of:

{
  "data": {
    "players": [...]
  }
}

I'm using this code:

func main() {
	sg, err := core.NewSuperGraph(nil, sb)
	if err != nil {
		log.Fatal(err)
	}

	r := chi.NewRouter()

	r.Group(func(r chi.Router) {
		r.Post(config.ApiEndpoint, sgHandler(sg))
	})

	StartServer(r)
}

type reqBody struct {
	Query string `json:"query"`
}

func sgHandler(sg *core.SuperGraph) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var rBody reqBody
		bodyBytes, _ := ioutil.ReadAll(r.Body)
		_ = r.Body.Close()
		r.Body = ioutil.NopCloser(bytes.NewBuffer(bodyBytes))
		if err := json.NewDecoder(bytes.NewBuffer(bodyBytes)).Decode(&rBody); err != nil {
			http.Error(w, "Error parsing JSON request body", 400)
			return
		}

		res, err := sg.GraphQL(r.Context(), rBody.Query, nil)
		if err == nil {
			result, _ := json.Marshal(res.Data) // IS THIS WRONG?
			_, err = w.Write(result)
		}
	}
}

Is it my fault?

Is this wrong? result, _ := json.Marshal(res.Data)?

Maybe I need to Marshal all res? Isn't it heavier?

Test Super Graph with Yugabyte DB

What would you like to be added:

Document what it would take to get Super Graph working with Yugabyte DB, a distributed database designed to be compatible with Postgres. Super Graph uses a bunch of Postgres-specific features, like querying for database tables and columns, lateral joins, JSON functions, etc. I have never tried using Super Graph with Yugabyte; it would help if someone could take the time to do this and document what worked and what did not, and maybe what it would take to get it working.

An article on their blog talks about their Postgres compatibility
https://blog.yugabyte.com/postgresql-compatibility-in-yugabyte-db-2-0/

Why is this needed:

Since Yugabyte DB is Postgres compatible, this would help Super Graph work with a massively scalable distributed database, and possibly help the Yugabyte DB team see what gaps can be filled in their compatibility layer.

Escape characters in debug console logs make it harder to read

It is hard to read the SQL in the debug logs when escape characters are present (\n, \t, and \").

Query args=["public","sales_by_store"] module=pgx pid=23073 rowCount=0 sql="\nSELECT \n\tf.attnum AS id, \n\tf.attname AS name, \n\tf.attnotnull AS notnull, \n\tpg_catalog.format_type(f.atttypid,f.atttypmod) AS type, \n\tCASE \n\t\tWHEN p.contype = ('p'::char) THEN true \n\t\tELSE false \n\tEND AS primarykey, \n\tCASE \n\t\tWHEN p.contype = ('u'::char) THEN true \n\t\tELSE false\n\tEND AS uniquekey,\n\tCASE\n\t\tWHEN p.contype = ('f'::char) THEN g.relname \n\t\tELSE ''::text\n\tEND AS foreignkey,\n\tCASE\n\t\tWHEN p.contype = ('f'::char) THEN p.confkey\n\t\tELSE ARRAY[]::int2[]\n\tEND AS foreignkey_fieldnum\nFROM pg_attribute f\n\tJOIN pg_class c ON c.oid = f.attrelid \n\tLEFT JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = f.attnum \n\tLEFT JOIN pg_namespace n ON n.oid = c.relnamespace \n\tLEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey) \n\tLEFT JOIN pg_class AS g ON p.confrelid = g.oid \nWHERE c.relkind = ('r'::char)\n\tAND n.nspname = $1 -- Replace with Schema name \n\tAND c.relname = $2 -- Replace with table name \n\tAND f.attnum > 0\n\tAND f.attisdropped = false\nORDER BY id;"
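As a workaround until the logger changes, the escaped value can be turned back into readable SQL; strconv.Unquote interprets the \n, \t, and \" sequences shown above exactly as Go's quoted-string syntax does. A small sketch:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeLoggedSQL turns an escaped sql="..." value from the debug log
// back into readable, multi-line SQL by re-wrapping it in quotes and
// letting strconv.Unquote decode the escape sequences.
func unescapeLoggedSQL(escaped string) string {
	s, err := strconv.Unquote(`"` + escaped + `"`)
	if err != nil {
		return escaped // fall back to the raw value if it isn't valid
	}
	return s
}

func main() {
	logged := `SELECT \n\tf.attnum AS id, \n\tf.attname AS name`
	fmt.Println(unescapeLoggedSQL(logged))
	fmt.Println(strings.Count(unescapeLoggedSQL(logged), "\n")) // 2
}
```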

Implement query hooks (middleware)

What would you like to be added:
Custom logic hooks before a query is run (i.e. middleware)

Why is this needed:
By implementing this, a number of features could be supported, especially in embedded mode.

  1. Middleware of all sorts, which would allow a richer ecosystem. With a richer ecosystem, the core server wouldn't have to implement so many features (telemetry, logging, Redis support, rate limiting, etc.), the project's popularity would rise, and future features would be easier to implement.
  2. Data validation in the application server, which is hard to implement currently. Hasura and Postgraphile recommend doing it in the database, but that's very slow, inconvenient, and uses a terrible language (PL/pgSQL); validation is confined to the database language and environment.
  3. Other forms of authentication (server id, LDAP, IP based, etc.), authorization, logging, metrics, and so on.
  4. Query inspection and rewriting.
  5. Loadable Go plugins implementing middleware for the standalone server.
  6. What Hasura calls "Actions" (custom business logic via webhooks), which is slow, inconvenient, and adds considerable network traffic. In Super Graph, this could all be implemented inside the server, while still supporting webhooks if wanted.

I think this could be another killer feature for Super Graph, currently unmatched in other servers, opening up a wide range of features and collaboration.

How could this be implemented?
I suggest the following API, which is extensible and wouldn't break existing programs:

type QueryOptions struct {
	UserId         string
	UserIdProvider string
	// ... (other query/session variables)
	BeforeQuery func(*Ast) error
	// BeforeRunSql(ast *Ast, sql string)
	// OnMutation()
	// Future, when subscriptions are ready:
	//   OnResultsetChange() (*supergraph.Result, error)
	// ... etc.
}

func supergraph.GraphQL(c context.Context, query string, vars json.RawMessage, options ...QueryOptions) (*supergraph.Result, error)
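To make the middleware idea concrete, here is a runnable toy sketch (all names hypothetical, not the actual Super Graph API): hooks run in order before the query is compiled, and any hook can inspect or reject it.

```go
package main

import (
	"errors"
	"fmt"
)

// Query is a stand-in for a parsed GraphQL query (the *Ast above);
// Hook mirrors the proposed BeforeQuery signature.
type Query struct{ Root string }
type Hook func(*Query) error

// denyAdmin is an example hook that rejects queries against a
// hypothetical admin-only table.
func denyAdmin(q *Query) error {
	if q.Root == "admin_secrets" {
		return errors.New("forbidden")
	}
	return nil
}

// runWithHooks shows the middleware idea: every hook sees the query
// before the (simulated) SQL compilation, and any error aborts it.
func runWithHooks(q *Query, hooks ...Hook) (string, error) {
	for _, h := range hooks {
		if err := h(q); err != nil {
			return "", err
		}
	}
	return "SELECT ... /* compiled from " + q.Root + " */", nil
}

func main() {
	sql, err := runWithHooks(&Query{Root: "players"}, denyAdmin)
	fmt.Println(sql, err)

	_, err = runWithHooks(&Query{Root: "admin_secrets"}, denyAdmin)
	fmt.Println(err) // forbidden
}
```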

Currently, Hasura implements actions by calling webhooks, sending a struct {type, definition, handler, kind}.

References:
#17
#26
https://discord.com/channels/628796009539043348/628796009539043350/716096650443096164

Some ideas from hasura project

What would you like to be added:

  • custom http handler
  • custom graphql handler
  • postgres trigger handler

Why is this needed:

This is what my project is used with hasura.


The goal is to quickly build a single file api service, with easy deploying.

  1. Why a custom HTTP handler: some third-party API services will call it, such as payment notification callbacks. Currently I have to use nginx to dispatch GraphQL requests to Hasura and HTTP requests to my handler.

  2. Why a custom GraphQL handler: for login, registration, and generating payment URLs for users. It's tedious to write Go HTTP code and then redefine it as a Hasura action.

  3. Why a Postgres trigger handler: for example, when a new user signs up, a users-table insert event triggers sending an email to the user.

I think Hasura saves a lot of time for CRUD, but many tools make CRUD easy.

The real trouble, I think, is custom business logic and access control. Super Graph does a great job with access control, but is still missing a way to implement custom business logic.

It's inconvenient to deploy and develop with Hasura now, because I have to make nginx, Hasura, and Go work together.

If Super Graph can cover all these needs in one Go project that builds to a single-binary service, it would really help quick development and easy deployment.

SG as library: Vars config, filter panic: interface conversion: interface {} is int, not string

Remember the problem solved here: #42, @dosco?

Now I'm using SuperGraph as library with this code:

sgConfig := core.Config{
  DefaultBlock: true,
  Vars: map[string]string{
    "account_id": "sql:select account_id from users where id = $user_id",
  },
  Roles: []core.Role{
    {
      Name: "user",
      Tables: []core.RoleTable{
        {
          Name: "players",
          Query: &core.Query{
            Filters: []string{
              "{ account_id: { _eq: $account_id } }",
            },
          },
        },
      },
    },
  },
}
sg, err := core.NewSuperGraph(&sgConfig, db)

it doesn't work!

This is the error:

panic: interface conversion: interface {} is int, not string

-> github.com/dosco/super-graph/core.(*scontext).argList
->   C:/fred/go/pkg/mod/github.com/dosco/[email protected]/core/args.go:33

github.com/dosco/super-graph/core.(*scontext).resolveSQL
  C:/fred/go/pkg/mod/github.com/dosco/[email protected]/core/core.go:272
github.com/dosco/super-graph/core.(*scontext).execQuery
  C:/fred/go/pkg/mod/github.com/dosco/[email protected]/core/core.go:121
github.com/dosco/super-graph/core.(*SuperGraph).GraphQL
  C:/fred/go/pkg/mod/github.com/dosco/[email protected]/core/api.go:202
main.sgHandler.func1

What is wrong?
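The panic text points at a bare type assertion: some variable the library receives (most likely the user id set on the request context, given the $user_id reference in the Vars SQL) is an int where a string is asserted. Passing the user id as a string should avoid it. As a hedged, generic illustration (not the super-graph internals), a bare assertion panics on an int while a type switch does not:

```go
package main

import "fmt"

// asString converts an interface{} value to a string without the
// panic that a bare v.(string) assertion causes when v holds an int.
func asString(v interface{}) string {
	switch x := v.(type) {
	case string:
		return x
	case int:
		return fmt.Sprintf("%d", x)
	default:
		return fmt.Sprintf("%v", x)
	}
}

func main() {
	var userID interface{} = 42 // an int user id, as in the failing setup
	// _ = userID.(string)      // would panic: interface {} is int, not string
	fmt.Println(asString(userID)) // 42
}
```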

Add more integration tests

What would you like to be added:

We currently do not have any integration tests that exercise queries all the way from the HTTP endpoint to the DB. I'm unsure how to do this at this time. Questions like how we would mock the DB, or whether that is even needed, will have to be answered.

Why is this needed:

Integration tests will help improve our coverage of important parts of the codebase that are currently not covered by other tests.

Remote Join with GraphQL endpoints (aka schema stitching)

What would you like to be added:

Support remote joins with GraphQL endpoints in addition to REST endpoints (currently supported)

Why is this needed:

This would require adding some config options to define a remote endpoint as a GraphQL one. Any query that includes this remote join would then trigger a request to the remote GraphQL endpoint, with the nested part of the original GraphQL query sent over. At this time I'm unsure how variables should be handled.

Majority of the changes would be in the serv/core_remote.go file as it handles most of the current remote join (REST) related code.

Test Super Graph with Cockroach DB

What would you like to be added:

Document what it would take to get Super Graph working with Cockroach DB, a distributed database designed to be compatible with Postgres. Super Graph uses a bunch of Postgres-specific features, like querying for database tables and columns, lateral joins, JSON functions, etc. I have never tried using Super Graph with Cockroach; it would help if someone could take the time to do this and document what worked and what did not, and maybe what it would take to get it working.

Why is this needed:

Since Cockroach DB is Postgres compatible this would help Super Graph work with a massively scalable distributed database and possibly help the Cockroach DB team see what gaps can be filled in their compatibility layer.

What if it's not a rails app ?

It seems GraphQL is acting as a gateway to REST-based systems here, right?

So if the subsystem has an OpenAPI REST interface, could this be used to consume any system written in any language?

Plurals / singulars -> really needed? BC break

Hi,
first, I would like to say this project seems very promising. I'm already in the testing phase.

One thing in this project that I find annoying is the singular and plural "table" names required when calling tables from a GraphQL query.

Because I'm not a native English speaker, and some old projects use mixed English and national-language table names, it looks really weird when I need to suffix these "tables" with an "s".

Wouldn't it be better to return an array of results, or only the base object when the full primary key is used?

I know that this will create BC break but I'm only opening discussion about this.

(sorry for my english)

Unable to compile as per instructions

What version of Super Graph are you using? super-graph version

Latest source code

Have you tried reproducing the issue with the latest release?

Yes

What is the hardware spec (RAM, OS)?

Ubuntu Bionic

Steps to reproduce the issue (config used to run Super Graph).

$ go version
go version go1.13.6 linux/amd64

The GOPATH and GOROOT environment variables are not set:

$ echo $GOPATH
$ echo $GOROOT
$ make install
package github.com/GeertJohan/go.rice/rice: cannot download, $GOPATH must not be set to $GOROOT. For more details see: 'go help gopath'
Makefile:33: recipe for target '/home/joev/go/bin/github.com/GeertJohan/go.rice' failed
make: *** [/home/joev/go/bin/github.com/GeertJohan/go.rice] Error 1

Expected behaviour and actual result.

Successful compilation

Add unit tests for Remote Joins

What would you like to be added:

The functions that handle the Remote Join code are not currently covered by unit tests. It would be very helpful to have more code coverage in this area.

Why is this needed:

It will really help going forward and adding more optimizations and innovation around this feature. The tests can help with adding the ability to call remote GraphQL endpoints which is not currently supported.
