supabase / pg_graphql

GraphQL support for PostgreSQL

Home Page: https://supabase.github.io/pg_graphql

License: Apache License 2.0

Dockerfile 0.13% PLpgSQL 40.17% HTML 0.13% Shell 0.19% Rust 59.38%
graphql graphql-server postgresql postgres sql api

pg_graphql's Introduction

pg_graphql



Documentation: https://supabase.github.io/pg_graphql

Source Code: https://github.com/supabase/pg_graphql


pg_graphql adds GraphQL support to your PostgreSQL database.

  • Performant
  • Consistent
  • Serverless
  • Open Source

Overview

pg_graphql reflects a GraphQL schema from the existing SQL schema.

The extension keeps schema translation and query resolution neatly contained on your database server. This enables any programming language that can connect to PostgreSQL to query the database via GraphQL with no additional servers, processes, or libraries.
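For example, a GraphQL request is just a SQL function call. As a sketch only: the resolver's schema and name vary by version (`gql.resolve` appears in the issues below; newer releases expose `graphql.resolve`), and the `allAccounts` field name follows the naming seen later on this page:

```sql
-- hedged sketch: resolver name/schema differs between pg_graphql versions
select gql.resolve($$
    query {
      allAccounts(first: 2) {
        edges {
          node { id email }
        }
      }
    }
$$);
```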

TL;DR

The SQL schema

create table account(
    id serial primary key,
    email varchar(255) not null,
    created_at timestamp not null,
    updated_at timestamp not null
);

create table blog(
    id serial primary key,
    owner_id integer not null references account(id),
    name varchar(255) not null,
    description varchar(255),
    created_at timestamp not null,
    updated_at timestamp not null
);

create type blog_post_status as enum ('PENDING', 'RELEASED');

create table blog_post(
    id uuid not null default uuid_generate_v4() primary key,
    blog_id integer not null references blog(id),
    title varchar(255) not null,
    body varchar(10000),
    status blog_post_status not null,
    created_at timestamp not null,
    updated_at timestamp not null
);
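Note that `uuid_generate_v4()` is provided by the `uuid-ossp` extension, so the schema above assumes it is installed (on PostgreSQL 13+, `gen_random_uuid()` is a built-in alternative):

```sql
create extension if not exists "uuid-ossp";
```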

Translates into a GraphQL schema displayed below.

Each table receives an entrypoint in the top level Query type that is a pageable collection with relationships defined by its foreign keys. Tables similarly receive entrypoints in the Mutation type that enable bulk insert, update, and delete operations.
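As a sketch of what that reflection yields (field names here are assumptions following the `allAccounts`-style naming used in the issues below, and the nested `blogs` relationship field is hypothetical), the foreign key from `blog` to `account` becomes a traversable connection:

```graphql
query {
  allAccounts(first: 2) {
    edges {
      node {
        id
        email
        blogs {
          edges {
            node { name }
          }
        }
      }
    }
  }
}
```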

GraphiQL

pg_graphql's People

Contributors

alaister, andrew-w-ross, bayandin, ben-xd, bryanmylee, chatch, dazzag24, dependabot[bot], dthyresson, hallidayo, imor, isaiah-hamilton, jirutka, jkpace, johnta0, kav, keanugrieves, kilianc, mlafeldt, olirice, pfcodes, prikeshsavla, robertn702, steve-chavez, vadim2404, w3b6x9, workingjubilee, yrashk


pg_graphql's Issues

Connection with "last" specified but no "after" ignores ordering

On the demo data set

Correct

{
  allAccounts(first: 1) {
    edges {
      node{
        id
      }
    }
  }
}

returns

{
  "data": {
    "allAccounts": {
      "edges": [
        {
          "node": {
            "id": 1
          }
        }
      ]
    }
  },
  "errors": []
}

Incorrect (possibly... check the spec)

{
  allAccounts(last: 1) {
    edges {
      node{
        id
      }
    }
  }
}

returns

{
  "data": {
    "allAccounts": {
      "edges": [
        {
          "node": {
            "id": 1
          }
        }
      ]
    }
  },
  "errors": []
}
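For reference, the Relay Cursor Connections spec applies `last` to the tail of the ordered result set, so `last: 1` and `first: 1` should only return the same edge when the connection has a single row. A minimal Python sketch of the spec's slicing step (pagination arguments only; cursor handling via `after`/`before` omitted):

```python
def edges_to_return(edges, first=None, last=None):
    """Sketch of the Relay spec's EdgesToReturn algorithm, without cursors.

    `first` trims from the front of the ordered edges, then `last`
    keeps only the tail of whatever remains.
    """
    if first is not None:
        if first < 0:
            raise ValueError("first must be non-negative")
        edges = edges[:first]
    if last is not None:
        if last < 0:
            raise ValueError("last must be non-negative")
        edges = edges[-last:] if last else []
    return edges

accounts = [{"id": 1}, {"id": 2}, {"id": 3}]
print(edges_to_return(accounts, first=1))  # [{'id': 1}]
print(edges_to_return(accounts, last=1))   # [{'id': 3}]
```

Under that reading, `allAccounts(last: 1)` returning `id: 1` (the same edge as `first: 1`) suggests the ordering is not being reversed before the tail is taken.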

Feature Request: column permissions via SQL comment directives

Feature request

Is your feature request related to a problem? Please describe.

Role based exclusions are nice, but they aren't granular enough for large databases with multiple schemas that contain entities that we still want a particular role to access. Unfortunately I can't go in-depth without revealing sensitive info, but I can point to Postgraphile for an example of the granular control we're in need of.

Describe the solution you'd like

While I'm not a huge fan of "smart tags" (e.g. comments on entities) they do offer some nice granular control over how Postgraphile sees entities. Here's their documentation for their solution: https://www.graphile.org/postgraphile/smart-tags

Describe alternatives you've considered

There aren't alternatives presently

Additional context

For example, today we're performing some database migrations. We're moving tables around to better organize them now that some schemas have grown beyond their original intent. Here's an example migration:

-- +goose Up
ALTER TABLE public.tutorials SET SCHEMA hidden;
CREATE VIEW public.tutorials AS SELECT * FROM hidden.tutorials;
COMMENT ON VIEW public.tutorials is E'@omit all';

-- +goose Down
DROP VIEW IF EXISTS public.tutorials;
ALTER TABLE hidden.tutorials SET SCHEMA public;

We're moving the table, where inserts and updates happen, and leaving a view in place for readonly things (the reasons why are complicated and involve multi-stage migrations and lots of legacy code that we're trying to remove, slowly and in stages). What that comment instructs Postgraphile to do is ignore the view we've left behind, and only care about the table in its new home. Without that, Postgraphile would try to create GraphQL schemas for both, resulting in a collision. Now, we still want our API's role to be able to access both, because we have some ad-hoc legacy code that's still trying to read from the old location. We can't break everything at once, so this kind of control is necessary.

Postgraphile Parity Wishlist

Feature request

Is your feature request related to a problem? Please describe.

First and foremost, I'm excited about this project, and I'd really love to see if it can support our db and API a few months from now. We use Postgraphile heavily, it drives the entirety of our API. I've got a very small wishlist for feature parity with Postgraphile that would allow us to make that move. Without these features, we'll be unable to move because of our reliance on them:

custom directives that run custom code

We use directives prolifically. Directives allow us to move beyond row-level or role-based auth and into some pretty awesome authentication and validation controls before requests ever hit resolvers. Huge time-saver. Because postgraphile's engine can be manipulated at runtime via code, this is possible. Not sure how this would be accomplished with pg_graphql, but it's something we're in need of.

custom routes on the server that do custom things

Or perhaps proxying to another app to handle alternate/custom routes. We piggyback on the express server that Postgraphile runs in order to expose endpoints for things like sitemaps, etc. I'm sure we could figure out some fancy proxying on our end to expose the API on supabase to make it appear seamless with another server exposing other endpoints, but it would be very cool to be able to configure that automagically and not have to tend to that ourselves.

schema injection, custom resolver logic

It looks like this is already being somewhat tracked on #8. But I'd like to add that we're in need of custom resolver logic/code to go with extending a schema. Some of that could be accomplished with federation. If the main supabase API were able to consume federated APIs, we could simply give the supabase API the address of our API and away we go. However, that doesn't allow us to change the resolvers and such for the schema that the API generates automatically for things in the db. I know it might seem bananas that someone would want to change how the resolvers work given that you all have solved the N+1 issue, but we are bananas and the situation has arisen.

Describe the solution you'd like

See above, I kind of lumped that in with the feature descriptions.

Describe alternatives you've considered

Postgraphile :)

Additional context

n/a

Support orderBy aggregates based on child tables

Describe the bug
Support orderBy aggregates based on child tables

To Reproduce
Steps to reproduce the behavior:

  1. Add order by to your query; there is no option to order by related tables (child tables)

Expected behavior
Based on Hasura's behavior (screenshot of Hasura's order-by options omitted)

RFC: add mutations

Summary

Mutations are on the roadmap, but are not currently implemented.

Despite being on the roadmap, I would like an issue, because that way I can subscribe to it if and when the feature gets implemented. Feel free to close, as one could argue it may be an abuse of the Issue tracker (given roadmap presence).

Rationale

Mutations are critical for adding and updating data, an essential component of GQL

Design

N/A

Memory leak

fix

diff --git a/src/lib.c b/src/lib.c
index ad1e01b..9fca554 100644
--- a/src/lib.c
+++ b/src/lib.c
@@ -4,6 +4,7 @@
 // clang-format on
 #include "graphqlparser/c/GraphQLParser.h"
 #include "graphqlparser/c/GraphQLAstToJSON.h"
+#include "graphqlparser/c/GraphQLAstNode.h"
 #include "tcop/utility.h"
 #include "miscadmin.h"
 #include "utils/varlena.h"
@@ -56,12 +57,12 @@ parse(PG_FUNCTION_ARGS) {
 		values[0] = (char *) graphql_ast_to_json(node);
 	}
 	values[1] = (char *) error;
-	//graphql_error_free(error);
-	//graphql_node_free(node);
-
 	// convert values into a heap allocated tuple with the description we defined
 	rettuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupdesc), values);
 
+	if (error) graphql_error_free(error);
+	if (node) graphql_node_free(node);
+
 	// return the heap tuple as datum
     PG_RETURN_DATUM( HeapTupleGetDatum( rettuple ) );
 }

Support for Field Alias (part of GraphQL Specification)

Describe the bug
As discovered while trying to use gqty with pg_graphql, it doesn't seem to respect field alias, which is of course a very important part of the graphql specification https://spec.graphql.org/June2018/#sec-Field-Alias

gqty-dev/gqty#520 (comment)

To Reproduce
Steps to reproduce the behavior:

You can re-use the reproduction repo made originally for an issue in gqty: gqty-dev/gqty#520

Expected behavior

To support field aliases

Screenshots

(screenshots omitted)

The GraphQL request clearly requires that `allMfes` be served under the alias `allMfes0`, which is not what it gets.

Bug: Type Name Collision

Bug report

I tried a query on an existing table (named "attribute") in schema "public" (and there is another table named the same in another schema).

I suspect it collides with pg_graphql's schema representation.

Describe the bug

gql.field
    join gql.type type_
        on field.type_ = type_.name
where
    field.parent_type = 'Query'
    and field.name = gql.name(ast_operation)
    and gql.is_visible("field")" returned more than one row

Once the conflicting table was removed, it started complaining about other tables unrelated to the "book" example:

select gql.resolve($$                                                                                              [1372/2149]
query {
  allBooks {
    edges {
      node {
        id
      }
    }
  }
}
$$);
                                       resolve
──────────────────────────────────────────────────────────────────────────────────────
 {"data": null, "errors": ["column \"name\" of relation \"product\" does not exist"]}

I had to remove all the "product" tables and other stuff I had in my schemas.

I'll try to debug it locally and see if I can find the cause.
Thanks for this formidable project :)

Support for `aggregate` operations under child tables e.g. `count`

Describe the bug
Support for aggregate operations under child tables

Expected behavior

query MyQuery {
  GroupContent_connection(order_by: {GroupContentReactions_aggregate: {count: desc}}) {
    edges {
      node {
        id
        GroupContentReactions_aggregate {
          aggregate {
            count
          }
        }
      }
    }
  }
}

Output

{
  "data": {
    "GroupContent_connection": {
      "edges": [
        {
          "node": {
            "id": "WzEsICJwdWJsaWMiLCAiR3JvdXBDb250ZW50IiwgImNrdHpld24zaTAxNjUwMTNkeHBuZjUzcTQiLCAiY2t1MDExZ2h5NzE4MDAxOHl0cHFhZmF6ZCJd",
            "GroupContentReactions_aggregate": {
              "aggregate": {
                "count": 8
              }
            }
          }
        },
      ]
    }
  }
}    

values for array types in inserts/updates are ignored

mutation {
  createXyz(object: {
    id: "35d824a5-e370-4796-ba6e-d453ac961424"
    tags: ["d", "e", "f"]
    numbers: [4, 5, 6]
  }) {
    id
    tags
    numbers
  }
}

results in

insert into xyz as cbe71439044 (id, tags, numbers) values ('35d824a5-e370-4796-ba6e-d453ac961424', NULL, NULL) returning jsonb_build_object( 'id', cbe71439044.id,'tags', cbe71439044.tags,'numbers', cbe71439044.numbers );

Segmentation fault

Segmentation fault at executing query

select graphql('{"query": "{ account(nodeId: $nodeId) { id }}", "variables": {"nodeId": "WyJhY2NvdW50IiwgMV0="}}')

backtrace

#0  0x00005597ca4c10e0 in pg_detoast_datum_packed ()
#1  0x00007fba506b5280 in parse () from /usr/lib/postgresql14/pg_graphql.so
#2  0x00005597ca21fd8c in ?? ()
#3  0x00007fba506ceb0d in ?? () from /usr/lib/postgresql14/plpgsql.so
#4  0x00007fba506d06f6 in ?? () from /usr/lib/postgresql14/plpgsql.so
#5  0x00007fba506d41fb in ?? () from /usr/lib/postgresql14/plpgsql.so
#6  0x00007fba506d47cd in ?? () from /usr/lib/postgresql14/plpgsql.so
#7  0x00007fba506d5072 in plpgsql_exec_function () from /usr/lib/postgresql14/plpgsql.so
#8  0x00007fba506ded4b in plpgsql_call_handler () from /usr/lib/postgresql14/plpgsql.so
#9  0x00005597ca21fd8c in ?? ()
#10 0x00005597ca24fafd in ?? ()
#11 0x00005597ca229bf9 in ?? ()
#12 0x00005597ca223a42 in standard_ExecutorRun ()
#13 0x00007fba620dd7b5 in ?? () from /usr/lib/postgresql14/auto_explain.so
#14 0x00007fba620d47f0 in ?? () from /usr/lib/postgresql14/pg_stat_statements.so
#15 0x00007fba620c0bf5 in pgsk_ExecutorRun () from /usr/lib/postgresql14/pg_stat_kcache.so
#16 0x00007fba620b5fa5 in pgqs_ExecutorRun () from /usr/lib/postgresql14/pg_qualstats.so
#17 0x00005597ca39637c in ?? ()
#18 0x00005597ca397755 in PortalRun ()
#19 0x00005597ca393a43 in ?? ()
#20 0x00005597ca395970 in PostgresMain ()
#21 0x00005597ca31386c in ?? ()
#22 0x00005597ca314691 in PostmasterMain ()
#23 0x00005597ca097ebe in main ()
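Aside: the `nodeId` in the failing query is an opaque base64-encoded JSON array. That is an observation from the values visible on this page rather than documented API, but it is handy when reproducing:

```python
import base64
import json

def decode_node_id(node_id: str):
    """Decode an opaque nodeId (base64-encoded JSON array) for inspection."""
    return json.loads(base64.b64decode(node_id))

print(decode_node_id("WyJhY2NvdW50IiwgMV0="))  # ['account', 1]
```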

Minimal `Dockerfile` for production use (and my understanding)

Summary

I would like to implement a multi-stage build that only contains the necessary files to run pg_graphql.

Rationale

  • Smaller images
  • Clear runtime dependencies

Examples

FROM postgres:13 as build

RUN apt-get update
RUN apt-get install build-essential git cmake curl -y
RUN apt-get install postgresql-server-dev-13 -y
RUN apt-get install python2 -y

RUN git clone https://github.com/graphql/libgraphqlparser.git \
    && cd libgraphqlparser \
    && cmake . \
    && make install

RUN \
  git clone https://github.com/supabase/pg_graphql.git && \
  cd pg_graphql && \
  make && make install

# Required by libgraphql
RUN curl https://bootstrap.pypa.io/pip/2.7/get-pip.py --output get-pip.py
RUN python2 get-pip.py
RUN pip install ctypesgen

FROM postgres:13 as main

ENV LD_LIBRARY_PATH="/usr/local/lib:${LD_LIBRARY_PATH}"

COPY --from=build ["/usr/local/include/graphqlparser", "/usr/local/include/graphqlparser"]
COPY --from=build ["/usr/local/lib/libgraphqlparser.so", "/usr/local/lib/libgraphqlparser.so"]
COPY --from=build ["/usr/lib/postgresql", "/usr/lib/postgresql"]
COPY --from=build ["/usr/share/postgresql", "/usr/share/postgresql"]
COPY --from=build ["/pg_graphql/pg_graphql.control", "/usr/share/postgresql/13/extension/pg_graphql.control"]

Drawbacks

During development, layers are harder to find and debug with popular tools.

Alternatives

n/a

Unresolved Questions

Docs mention shared_preload_libraries = 'pg_graphql' but I can't find the reference in the Dockerfile on main and my local build works without it. I see something similar here

I left the build stage untouched but a couple of things looked off:

ENV LD_LIBRARY_PATH="/usr/local/lib:${PATH}"

Should this be:

ENV LD_LIBRARY_PATH="/usr/local/lib:${LD_LIBRARY_PATH}"

Is this a runtime setting?

Commit 71557b03d2ff59074f0823607dfcd8d6bde8773b broke docker-compose REST services

Describe the bug

I did a git bisect, purging all docker state (containers, images), rebuilding images, and launching docker-compose up. The first bad commit is 71557b0.

At this SHA, the following query fails in GraphiQL

{
  allAccounts {
    edges {
      node {
        id
      }
    }
  } 
}

To Reproduce
Steps to reproduce the behavior:

  1. checkout the commit
  2. docker-compose up (or build)
  3. load graphiql
  4. try the query, observe failure
  5. checkout f1da5ec, the immediate commit before, repeat, observe success

Expected behavior

successful query

Screenshots

(screenshot omitted)

Versions:
see above

Additional context
n/a

RFC: Naming conventions

Summary

When converting SQL table, columns, and relationships names into a GraphQL schema, transform names with prefixes and suffixes rather than inflection and pluralization.

Motivation

  • Pluralizing rules are not consistent
  • Inflection rules are not consistent with e.g. acronyms, abbreviations, etc
  • Resolving bugs found in pluralization and inflection rules is a SemVer major breaking change

Design

Use prefixes and suffixes to differentiate GraphQL types, rather than casing and inflection

Table

For a given SQL table named public.account_holder

GQL Type     Name
Node         account_holder
Connection   account_holder_connection
Edge         account_holder_edge
Filters      account_holder_bool_exp
Order By     account_holder_order_by
etc.

Schema Handling

Schema names are prefixed to table names with an underscore separator. Entities on the search_path do not include the prefix.

When search_path='public'

SQL Table Name   GQL Type Name
api.account      api_account
public.account   account
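The proposed convention is mechanical enough to sketch as a pure function (illustrative only; the type names come from the tables above, and the function itself is hypothetical, not part of pg_graphql):

```python
def graphql_type_names(schema, table, search_path=("public",)):
    """Sketch of the proposed prefix/suffix naming (no inflection).

    Tables on the search_path keep their bare name; others are
    prefixed with their schema and an underscore separator.
    """
    base = table if schema in search_path else f"{schema}_{table}"
    return {
        "node": base,
        "connection": f"{base}_connection",
        "edge": f"{base}_edge",
        "filters": f"{base}_bool_exp",
        "order_by": f"{base}_order_by",
    }

print(graphql_type_names("public", "account_holder")["connection"])
# account_holder_connection
print(graphql_type_names("api", "account")["node"])
# api_account
```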

Drawbacks

  • Many clients of pg_graphql are likely to be javascript, which has differing naming norms (pascal case and camel case) from PostgreSQL (snake case). Those differences could cause friction on the client without middleware to perform the inflection.

  • Controlling schema prefixing via search_path means API compatibility could break with updates to search_path

  • The relay connection spec is prescriptive for some type names. For example, page_info is not a technically compliant replacement for the PageInfo type

  • GraphQL builtin types are pascal case e.g. Int and String

Alternatives

  • Attempt automatic inflection of the schema consistent with javascript norms, with a naming override system (maybe using comments?) for errors. However, this has the inverse problem of not being ergonomic as a SQL replacement (i.e. for use in an ORM).

Unresolved Questions

  • How challenging would it be to provide inflection middleware

Feature Request: Schema extensibility

I couldn't find any references to schema extension in the docs, or an acknowledgment of their absence. I think the possibility to employ schema extension in addition to straight reflection is an important consideration for a developer evaluating this library.

Coming from the PostGraphile universe, I'm curious if there are equivalents to (or endorsed recipes to achieve) the following features:

Schema extension
https://www.graphile.org/postgraphile/make-extend-schema-plugin/

Computed columns
https://www.graphile.org/postgraphile/computed-columns/

Custom queries
https://www.graphile.org/postgraphile/custom-queries/

Custom mutations
https://www.graphile.org/postgraphile/custom-mutations/

Can't create extension

Describe the bug
Can't create extension: 'create extension pg_graphql cascade'

To Reproduce
Steps to reproduce the behavior:

  1. Installed latest release of libgraphqlparser 0.7.0
  2. 'make install' on latest commit of pg_graphql
  3. su - postgres
  4. psql
  5. create extension pg_graphql cascade

Expected behavior
Installs the extension successfully

Screenshots
Output of create extension:

NOTICE:  installing required extension "pgcrypto"
ERROR:  could not load library "/usr/lib/postgresql/13/lib/pg_graphql.so": libgraphqlparser.so: cannot open shared object file: No such file or directory

Versions:

  • PostgreSQL: 13.5 (Debian 13.5-0+deb11u1)
  • pg_graphql commit ref: 6dcee84
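The error means the dynamic linker cannot find `libgraphqlparser.so`, which `make install` places in `/usr/local/lib` by default. A common fix, assuming that install location, is to register the directory with the loader:

```shell
sudo ldconfig /usr/local/lib
# or persistently:
echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/graphqlparser.conf
sudo ldconfig
```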

Docker improvement for proxied network

Chore

Describe the chore

The Dockerfile in its current state fails at multiple stages when building behind a VPN/proxy:

  • curl calls needed -k
  • pip needed CERT/trusted-host changes
  • git needed SSL verification skipped

Additional context

So, I have modified the Dockerfile; if it makes sense, I can make a PR.

FROM postgres:13

RUN apt-get update
RUN apt-get install build-essential git cmake curl -y
RUN apt-get install postgresql-server-dev-13 -y
# Required by libgraphql
RUN apt-get install python2 -y
RUN apt-get install python3-pip -y # using this to install pip
RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org ctypesgen # trusted hosts

RUN git config --global http.sslVerify false # setting SSL verify to false


RUN git clone https://github.com/graphql/libgraphqlparser.git \
    && cd libgraphqlparser \
    && cmake . \
    && make install

ENV LD_LIBRARY_PATH="/usr/local/lib:${PATH}"

COPY . pg_graphql
WORKDIR pg_graphql
RUN make install

Could not find the public.graphql(query, variables) function

Hi,

We have tried the docker setup and successfully queried in GraphiQL. Then we tried using our existing database and installed the pg_graphql extension there, but when we try GraphiQL it returns an error like the one below:

{
  "hint": "If a new function was created in the database with this name and parameters, try reloading the schema cache.",
  "message": "Could not find the public.graphql(query, variables) function or the public.graphql function with a single unnamed json or jsonb parameter in the schema cache"
}

Thanks.
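That message comes from PostgREST's schema cache. Two things worth checking (a sketch; it assumes the function may have landed in a schema PostgREST does not expose, and that PostgREST's `pgrst` notification channel is enabled):

```sql
-- where did the graphql entrypoint actually land?
select n.nspname as schema, p.proname as function
from pg_proc p
join pg_namespace n on n.oid = p.pronamespace
where p.proname = 'graphql';

-- ask PostgREST to reload its schema cache
notify pgrst, 'reload schema';
```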

Feature Request: Subscriptions / Live Queries

Feature request

Is your feature request related to a problem? Please describe.

real-time all the things!

Describe the solution you'd like

Describe alternatives you've considered

Additional context

__typename missing from generated edge nodes

Describe the bug

Access to __typename fails

To Reproduce
Steps to reproduce the behavior:

  1. docker-compose up
  2. open GraphiQL
{
  allAccounts {
    edges {
       __typename # induces failure
      node {
        __typename
        id
      }
    }
  }
}

Expected behavior

__typename returned

Screenshots
n/a

Versions:

Additional context
My GQL client auto adds these fields, 🙄, which resulted in the biff

pg_config executable not found

Following instructions for the fastAPI example

> python -m venv venv
> source venv/bin/activate
> pip install -e .
Obtaining file:///home/xxx/code/supabase/pg_graphql
Collecting pytest
  Downloading pytest-6.2.5-py3-none-any.whl (280 kB)
     |████████████████████████████████| 280 kB 2.4 MB/s
Collecting pytest-benchmark
  Downloading pytest_benchmark-3.4.1-py2.py3-none-any.whl (50 kB)
     |████████████████████████████████| 50 kB 5.5 MB/s
Collecting pre-commit
  Downloading pre_commit-2.16.0-py2.py3-none-any.whl (191 kB)
     |████████████████████████████████| 191 kB 20.3 MB/s
Collecting pylint
  Downloading pylint-2.12.1-py3-none-any.whl (413 kB)
     |████████████████████████████████| 413 kB 19.3 MB/s
Collecting black
  Downloading black-21.11b1-py3-none-any.whl (155 kB)
     |████████████████████████████████| 155 kB 88.3 MB/s
Collecting psycopg2
  Downloading psycopg2-2.9.2.tar.gz (380 kB)
     |████████████████████████████████| 380 kB 25.1 MB/s
    ERROR: Command errored out with exit status 1:
     command: /home/xxx/code/supabase/pg_graphql/venv/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-y0q72j75/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-y0q72j75/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-hazdn1t4
         cwd: /tmp/pip-install-y0q72j75/psycopg2/
    Complete output (23 lines):
    running egg_info
    creating /tmp/pip-pip-egg-info-hazdn1t4/psycopg2.egg-info
    writing /tmp/pip-pip-egg-info-hazdn1t4/psycopg2.egg-info/PKG-INFO
    writing dependency_links to /tmp/pip-pip-egg-info-hazdn1t4/psycopg2.egg-info/dependency_links.txt
    writing top-level names to /tmp/pip-pip-egg-info-hazdn1t4/psycopg2.egg-info/top_level.txt
    writing manifest file '/tmp/pip-pip-egg-info-hazdn1t4/psycopg2.egg-info/SOURCES.txt'

    Error: pg_config executable not found.

    pg_config is required to build psycopg2 from source.  Please add the directory
    containing pg_config to the $PATH or specify the full executable path with the
    option:

        python setup.py build_ext --pg-config /path/to/pg_config build ...

    or with the pg_config option in 'setup.cfg'.

    If you prefer to avoid building psycopg2 from source, please install the PyPI
    'psycopg2-binary' package instead.

    For further information please check the 'doc/src/install.rst' file (also at
    <https://www.psycopg.org/docs/install.html>).

    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
WARNING: You are using pip version 20.2.3; however, version 21.3.1 is available.
You should consider upgrading via the '/home/xxx/code/supabase/pg_graphql/venv/bin/python -m pip install --upgrade pip' command.

To workaround I did:

pip install psycopg2-binary

and then removed the psycopg line from setup.py

diff --git a/setup.py b/setup.py
index 1cfc574..f9fd8e9 100644
--- a/setup.py
+++ b/setup.py
@@ -25,7 +25,6 @@ DEV_REQUIRES = [
     "pre-commit",
     "pylint",
     "black",
-    "psycopg2",
     "sqlalchemy",
     "pre-commit",
     "flupy",

Thanks

"where" instead of "filterBy"?

Summary

pg_graphql uses the "filterBy" keyword, while PostgreSQL uses the "where" keyword for the same concept.

Rationale

Just like "order by", we should align keywords closely with PostgreSQL, renaming "filterBy" to "where", to map names as closely as possible to reserved operations.

Also, other tools like Hasura and Prisma use "where" instead of "filterBy"; it is more or less the standard name for it.

Speed up schema building

The event trigger responsible for building (caching) the graphql schema's internal representation is executed after each DDL statement

For large schemas, each rebuild may take a few seconds (less than 5)

If several DDL statements are executed in e.g. a migration, that performance hit may be noticeable and annoying
