graphql-python / graphql-core
A Python 3.6+ port of the GraphQL.js reference implementation of GraphQL.
License: MIT License
This may be controversial, but I'd like to propose changing the name of this package from graphql to something like graphql_next.
I'm working on a project that both exposes a GraphQL API and does complex GraphQL introspection. Exposing an API is most easily done with Graphene, particularly given packages like graphene-django, but that depends on graphql-core, which as we know is fairly limited. I think it's reasonable to allow for the packages graphql-core and this one to be installed alongside each other. graphql_python.next, or graphql.next, may be a good approach.
- graphql-core is no longer being actively developed. While this may change in the future, it means the maintainers are unlikely to change the name of that package.
- graphql-core came first, and it is recommended by PEP 423 that existing package names are not chosen by new packages.
- graphql-core-next mostly refers to itself by names along those lines, not by the name "graphql". This could be a chance to add consistency, and at the very least it would be less confusing for newcomers than finding the package is in fact called "graphql".
- PEP 423 suggests a reasonable series of steps that don't look like they would require a huge amount of work.
https://www.python.org/dev/peps/pep-0423/#how-to-rename-a-project
I've referenced PEP 423 here a lot. It should be noted that this PEP has not been accepted, although it has received discussion on the Python-Dev mailing list, doesn't appear to have many people arguing against it (other than for broken links and typos), and has also been featured in conference talks. Above all though, the document reads to me as a summary of how the Python ecosystem already works for the most part, and a distillation of already accepted practices and the reasons for them, rather than a proposed change. For this reason I believe it's a good starting point for discussion.
I have also suggested several alternative names for the package here. I do not feel strongly about any of them. I think graphql_core_next, graphql_next, graphql.core, or graphql2 would all be acceptable names, and I am very happy to leave the decision between these or other new options entirely to the maintainers.
When I attempt to run mypy, I get:
ghstack/github_fake.py:1: error: Cannot find module named 'graphql'
ghstack/github_fake.py:1: note: See https://mypy.readthedocs.io/en/latest/running_mypy.html#missing-imports
Now, I know that you have types... but the mypy docs seem to suggest that you need to do something extra to get mypy to treat a third-party package as having types?
As a temporary workaround, I've symlinked graphql in my local directory to graphql-core-next/graphql, and that convinces mypy to check the types.
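For reference, the mypy documentation describes two ways out of this: the package itself can ship a PEP 561 py.typed marker so mypy trusts its inline annotations, or the consumer can silence the error per-module. A hypothetical mypy.ini fragment for the latter (the section name is just the import prefix being ignored):

```ini
# mypy.ini (sketch): treat graphql's missing type metadata as non-fatal
[mypy-graphql.*]
ignore_missing_imports = True
```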
Instead of pipenv, we may want to use a different packaging tool like flit or poetry, and replace setup.py and the Pipfile with a pyproject.toml file. The main reason is that while pipenv is a nice tool, it is actually intended for packaging apps, not libraries like graphql-core-next.
I would like to hear the thoughts and recommendations of others regarding this issue.
Currently we use mypy 0.720 because it is the last version that supports the "old" semantic analyzer.
The new semantic analyzer creates a few errors that need to be resolved. Some of these errors are actually mypy issues, like python/mypy#7203. Maybe we should wait until these mypy issues are resolved.
After upgrading to a newer mypy version, the setting "new_semantic_analyzer = False" in mypy.ini should be removed.
The installation instructions in the README for pipenv fail with the error Could not find a version that matches graphql-core>=3, because pipenv ignores pre-release versions.
ERROR: Could not find a version that matches graphql-core>=3
Tried: 0.4.9, 0.4.11, 0.4.12, 0.4.12.1, 0.4.13, 0.4.14, 0.4.15, 0.4.16, 0.4.17, 0.4.18, 0.5, 0.5.1, 0.5.2, 0.5.3, 1.0, 1.0.1, 1.1, 2.0, 2.0, 2.1, 2.1, 2.2, 2.2, 2.2.1, 2.2.1
Skipped pre-versions: 0.1a0, 0.1a1, 0.1a2, 0.1a3, 0.1a4, 0.4.7b0, 0.4.7b1, 0.4.7b2, 0.5b1, 0.5b2, 0.5b3, 1.0.dev20160814231515, 1.0.dev20160822075425, 1.0.dev20160823054952, 1.0.dev20160909030348, 1.0.dev20160909040033, 1.0.dev20160920065529, 1.2.dev20170724044604, 2.0.dev20170801041408, 2.0.dev20170801041408, 2.0.dev20170801051721, 2.0.dev20170801051721, 2.0.dev20171009101843, 2.0.dev20171009101843, 2.1rc0, 2.1rc0, 2.1rc1, 2.1rc1, 2.1rc2, 2.1rc2, 2.1rc3, 2.1rc3, 3.0.0a0, 3.0.0a0, 3.0.0a1, 3.0.0a1, 3.0.0a2, 3.0.0a2
This could be fixed by adding an upper bound to the specifier on the pipenv installation command, like so:
pipenv install "graphql-core>=3,<4"
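Alternatively, pipenv has an explicit opt-in for pre-release resolution. A hypothetical Pipfile fragment (assuming the allow_prereleases setting):

```toml
# Pipfile (sketch): allow pre-releases so graphql-core>=3 can resolve to 3.0.0a2
[pipenv]
allow_prereleases = true

[packages]
graphql-core = ">=3,<4"
```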
I was following the "create schema from SDL" approach in https://cito.github.io/blog/shakespeare-with-graphql/ and wanted to use custom scalar types. It looks like in GraphQL.js a custom scalar type implementation can be set by simply assigning it in the type map (https://stackoverflow.com/questions/47824603/graphql-custom-scalar-definition-without-graphql-tools), so the equivalent here would be something like this:

schema_src = """
scalar DateTime
...
"""
schema = build_schema(schema_src)
schema.type_map["DateTime"] = myscalars.DateTime

This doesn't work, though, as schema.type_map is not consulted when serializing output types or parsing arguments. The following workaround using extend_schema seems to produce the desired effect of assigning a custom scalar type implementation to a scalar name declared in the schema and having it used when serializing and parsing:
import typing

import graphql.language
from graphql import GraphQLScalarType
from graphql import GraphQLSchema
from graphql.utilities import extend_schema


def register_scalar_types(schema: GraphQLSchema, types: typing.List[GraphQLScalarType]):
    for scalar_type in types:
        schema.type_map[scalar_type.name] = scalar_type
    # using a name that already exists in the schema is an error,
    # so we need to make something up
    dummy_scalar_name = "_extension_dummy"
    extended_schema = extend_schema(
        schema, graphql.language.parse("scalar %s" % dummy_scalar_name)
    )
    # the scalar we extended the schema with is not actually used
    del extended_schema.type_map[dummy_scalar_name]
    return extended_schema
and then use it like this:
schema = register_scalar_types(schema, [myscalars.DateTime])
Surely this is a hack and there has to be a better way?
Hi, I'm new to GraphQL and have a quick question. I want to add a custom directive to the default directives, which include skip and include.

schema = GraphQLSchema(query, directives=specified_directives + [custom_directives])

However, the following TypeError occurs:
can only concatenate tuple (not "list") to tuple
I noticed that graphql-core returns a list of these predefined directives, but graphql-core-next returns a tuple. Is there a reason for this?
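The error can be reproduced and worked around with plain Python values; the strings below are just stand-ins for the real GraphQLDirective objects:

```python
specified_directives = ("skip", "include")   # graphql-core-next: a tuple
custom_directives = ["myDirective"]          # user-supplied: a list

# tuple + list raises exactly the TypeError from the report
try:
    specified_directives + custom_directives
except TypeError as exc:
    assert "concatenate" in str(exc)

# converting the tuple first makes the concatenation work
combined = list(specified_directives) + custom_directives
assert combined == ["skip", "include", "myDirective"]
```

Passing `list(specified_directives) + [custom_directive]` to GraphQLSchema should therefore sidestep the error.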
Hello!
When testing subscriptions, we've noticed that exceptions raised in subscribe() are not handled by the query executor; instead they come back up outside of graphql, when we are consuming the results to create an HTTP response, with "Task exception was never retrieved" being logged by Python to the console.
The quick fix we did for this was wrapping the whole loop in try/except and handling those exceptions there, but this has the obvious downside of not having a GraphQLError with path and location.
Fantastic project BTW!
I'm not sure this is a bug, but it caught me out. Maybe it should just be documented.
The following middleware:
class MyMiddleware:
    async def resolve(self, next, root, info, *args, **kwargs):
        return await next(root, info, *args, **kwargs)

will break the introspection query, as the 'next' handler does not always return an awaitable.
This fixes it:
from inspect import isawaitable

class MyMiddleware:
    async def resolve(self, next, root, info, *args, **kwargs):
        response = next(root, info, *args, **kwargs)
        return await response if isawaitable(response) else response
Also, would it be possible to release 1.0.1 on PyPI, as it has all the cool middleware and context stuff :)
GraphQL-Core supports a container_type callable for input object types, which is used for implementing the container feature of Graphene input object types.
GraphQL-Core-Next should support a similar functionality that can be used by Graphene.
GraphQL spec section 7.1 describes a third response field called extensions that is intended to be used for custom data in addition to payload data and error responses. This is often used for metadata related to the query response, such as performance tracing. Apollo GraphQL implements this on their server via an extensions middleware. We should probably follow a similar pattern here, but we will need to support passing the extensions data to the client in the core in order to support middleware like that.
https://graphql.github.io/graphql-spec/June2018/#sec-Response-Format
The response map may also contain an entry with key extensions. This entry, if set, must have a map as its value. This entry is reserved for implementors to extend the protocol however they see fit, and hence there are no additional restrictions on its contents.
@Cito You're doing a great job keeping this lib in sync with graphql-js.
Also, big thanks for taking the time to submit PRs back to graphql-js 👍
I was browsing the source code and noticed that you have already implemented middleware:
https://github.com/graphql-python/graphql-core-next/blob/master/graphql/execution/middleware.py
It would be great to have a list of such changes, so we can figure out whether something is Python-specific or should be pushed into graphql-js.
I'm also interested in whether you have added any new tests that we can mirror in graphql-js.
Thank you for the hard work porting the reference GraphQL implementation to python3. We are hoping to start using this library once the stable 3.0.0 version is released. Do you have a rough idea of when this will be?
If a directive is declared with no arguments and an arbitrary argument is passed, the rule check is not applied. This is because known_args is an empty list, which makes the condition if directive_node.arguments and known_args: false at line 63 of /src/graphql/validation/rules/known_argument_names.py.
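A minimal stand-in (names hypothetical, not the library's actual code) shows why the guard never fires when the directive declares no arguments:

```python
def check_known_arguments(directive_arguments, known_args):
    """Sketch of the guard described above: both lists must be
    non-empty for the unknown-argument check to run at all."""
    unknown = []
    if directive_arguments and known_args:
        unknown = [arg for arg in directive_arguments if arg not in known_args]
    return unknown

# directive declared with one argument: the bogus argument is reported
assert check_known_arguments(["bogus"], ["arg"]) == ["bogus"]

# directive declared with NO arguments: known_args is empty and falsy,
# so the check is skipped and the bogus argument passes validation
assert check_known_arguments(["bogus"], []) == []
```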
It seems that the logging library in Python assumes that exceptions are hashable in order to be logged. It would be great if we could treat GraphQL errors the same way as other built-in exceptions.
See also a similar issue in the schematics project:
execute returns a MaybeAwaitable, but subscribe doesn't await it when it is awaitable.
I wanted to add some custom validation rules in addition to the default graphql.specified_rules. Unless I'm mistaken, the only way to do so is to manually call validate with the new rules, which means that I basically have to duplicate all the code inside async def graphql and change the validation step in order to pass in my custom rules. Would it be possible instead to just allow clients to pass the rules they want to use into the top-level graphql call?
In my server, I have the following lines: https://github.com/ezyang/ghstack/blob/973ef7b25a71afa8f813cd8107f227938b3413f1/ghstack/github_fake.py#L288
GITHUB_SCHEMA.get_type('Repository').is_type_of = lambda obj, info: isinstance(obj, Repository) # type: ignore
GITHUB_SCHEMA.get_type('PullRequest').is_type_of = lambda obj, info: isinstance(obj, PullRequest) # type: ignore
I don't know how to put these on the Repository/PullRequest classes, in the same way resolvers can be put on the appropriate object class and then automatically called. The obvious things (e.g., adding an is_type_of method) do not work.
get_middleware_resolvers is implemented as a generator.
The generator object is then assigned to self._middleware_resolvers.
As the generator is not consumed and unrolled into a list at this point, it is exhausted during the reduce() call of the first field construction.
As the generator is now exhausted, all other calls to reduce() immediately receive StopIteration, so no other field is wrapped.
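The failure mode can be demonstrated with a minimal stand-in for the resolver generator (all names here are hypothetical, not the library's actual code):

```python
from functools import reduce

def get_middleware_resolvers():
    # stand-in for the real generator: two middleware wrappers
    yield lambda result: result + 1
    yield lambda result: result * 2

middleware_resolvers = get_middleware_resolvers()  # a one-shot generator

def wrap_field(base):
    # each field construction reduces over the resolvers
    return reduce(lambda acc, fn: fn(acc), middleware_resolvers, base)

first = wrap_field(1)   # consumes the generator: (1 + 1) * 2 == 4
second = wrap_field(1)  # generator exhausted: nothing is applied
assert (first, second) == (4, 1)

# fix: unroll into a list once, so every field sees all resolvers
middleware_resolvers = list(get_middleware_resolvers())

def wrap_field_fixed(base):
    return reduce(lambda acc, fn: fn(acc), middleware_resolvers, base)

assert wrap_field_fixed(1) == 4
assert wrap_field_fixed(1) == 4  # a list can be iterated repeatedly
```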
Hi,
I've been thinking a bit about how we could implement the Dataloader pattern in v3 while still running in multi-threaded mode. Since v3 does not support Syrus's Promise library, we need to come up with a story for batching in async mode, as well as in multi-threaded environments. There are many libraries that do not support asyncio, and there are many cases where it does not make sense to go fully async.
As far as I understand, the only way to batch resolver calls from a single frame of execution would be to use loop.call_soon. But since asyncio is not threadsafe, that means we would need to run a separate event loop in each worker thread. We would need to wrap the graphql call with something like this:
def run_batched_query(...):
    loop = asyncio.new_event_loop()
    execution_future = asyncio.ensure_future(graphql(...), loop=loop)
    loop.run_until_complete(execution_future)
    return execution_future.result()
Is that completely crazy? If yes, do you see a less hacky way? I'm not very familiar with asyncio, so I would love to get feedback.
Cheers
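The one-loop-per-thread pattern described above can be sketched as a self-contained example; fake_graphql is a hypothetical stand-in for the real graphql() coroutine:

```python
import asyncio
import threading

async def fake_graphql(query):
    # hypothetical stand-in for the real graphql() coroutine
    await asyncio.sleep(0)
    return f"result for {query}"

def run_batched_query(query):
    # one private event loop per worker thread, since asyncio state
    # must not be shared across threads
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(fake_graphql(query))
    finally:
        loop.close()

results = {}

def worker(query):
    results[query] = run_batched_query(query)

threads = [threading.Thread(target=worker, args=(q,)) for q in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert results == {"a": "result for a", "b": "result for b"}
```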
The asyncio example in README.md raises
TypeError: __init__() got an unexpected keyword argument 'resolve'
Here is the example:
import asyncio
from graphql import (
    graphql, GraphQLSchema, GraphQLObjectType, GraphQLField, GraphQLString)


async def resolve_hello(obj, info):
    await asyncio.sleep(3)
    return 'world'


schema = GraphQLSchema(
    query=GraphQLObjectType(
        name='RootQueryType',
        fields={
            'hello': GraphQLField(
                GraphQLString,
                resolve=resolve_hello)
        }))


async def main():
    query = '{ hello }'
    print('Fetching the result...')
    result = await graphql(schema, query)
    print(result)


loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()
When running pytest with warnings enabled, the following deprecation warnings appear:
~~~/lib/python3.7/site-packages/promise/promise_list.py:2
~~~/lib/python3.7/site-packages/promise/promise_list.py:2: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import Iterable
~~~/lib/python3.7/site-packages/graphql/type/directives.py:55
~~~/lib/python3.7/site-packages/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
args, collections.Mapping
~~~/lib/python3.7/site-packages/graphql/type/typemap.py:1
~~~/lib/python3.7/site-packages/graphql/type/typemap.py:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import OrderedDict, Sequence, defaultdict
~~~/lib/python3.7/site-packages/graphql_server/__init__.py:2
~~~/lib/python3.7/site-packages/graphql_server/__init__.py:2: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import namedtuple, MutableMapping
-- Docs: https://docs.pytest.org/en/latest/warnings.html
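All of the warnings above have the same fix: import the abstract base classes from collections.abc instead of collections, which is where Python 3.3+ keeps them. A minimal sketch of the corrected imports:

```python
# the deprecated form (stops working in Python 3.9/3.10):
#   from collections import Iterable, Mapping, Sequence
# the fixed form:
from collections.abc import Iterable, Mapping, MutableMapping, Sequence

# the ABCs behave identically from the new location
assert issubclass(dict, Mapping)
assert issubclass(dict, MutableMapping)
assert issubclass(list, Sequence)
assert issubclass(str, Iterable)
```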
In GraphQL-core 3.0.0, introspection_query has already been removed, although 3.0.0 claims to be compatible with GraphQL.js 14. It should only be removed when we release a version compatible with GraphQL.js 15.
Given the following SDL example:
directive @example(
arg: String
) on FIELD_DEFINITION
type Query {
field: String @example(arg: "some value")
}
At execution time (I'm assuming from within a resolver or middleware), when resolving field, what would be the proper way to know that the example directive is applied to it, and to get "some value"?
I've checked the docs and found graphql.get_directive_values(), but can't figure out how to use it. GraphQLResolveInfo.field_nodes is a list with only one field node, which has an empty directives property.
I know providing a framework to define directives is not a goal of this lib, but I just need a bare-bones way to access this data.
Edit:
After checking the internals, I understand it better: GraphQLResolveInfo.field_nodes gives access to the selection nodes, and these only carry client query directives.
To access the field definition within the server schema, I have to do the following:

field_def = info.parent_type.fields[info.field_name]
directive = info.schema.get_directive("example")
graphql.get_directive_values(directive, field_def)  # {"arg": "some value"}

Let me know if I'm doing something wrong! (And feel free to close this issue. I feel like having some docs about it might make sense.)
Creating executable schemas from SDL with GraphQL.js/graphql-core-next is possible, but grafting resolvers, custom scalars and enums manually into the schema after creating it with build_schema is cumbersome (see e.g. #20) and has a smell of monkey-patching.
The makeExecutableSchema function of graphql-tools provides a better solution.
We should consider porting this functionality to Python as well, either as a separate package or as a subpackage of graphql-core-next.
Is there a standard way to get the schema information (type, directives) for a query node/leaf?
Concretely, I'm looking at GitHub's GraphQL API, and there are two things I would like to be able to do:
- Queries using @preview() come with data that needs to be conveyed in the HTTP headers. I would like to be able to examine a query, get the directives, and pull out the appropriate information.
- I think the thing I would like is a utility to walk a query (field-by-field) and get the type information for that field.
The bot created this issue to inform you that pyup.io has been set up on this repo.
Once you have closed it, the bot will open pull requests for updates as soon as they are available.
Today, when experimenting with a 3rd-party library for data validation (https://github.com/samuelcolvin/pydantic), I noticed that when I take a map of validation errors from it (which is a dict of lists of ValueError and TypeError subclass instances), I need to implement an extra step to convert those errors to something else, because the query executor includes a check for isinstance(result, Exception). This check makes it raise the returned exception instance, effectively short-circuiting further resolution.
The fix for this issue was fairly simple to come up with: just write a util that converts those errors to a dict as they are included in my result's validation_errors key. But I feel such boilerplate should be unnecessary:
try:
    ...  # run pydantic validation here
except (PydanticTypeError, PydanticValueError) as error:
    return {"validation_errors": flatten_validation_error(error)}
Is this implementation a result of something in the spec, or a mechanism used to keep other features' code (e.g. error propagation) simple? I think we should consider supporting this use case. A considerable number of libraries use exceptions for messaging (e.g. Django with its ValidationError and a bunch of core.exceptions.*).
I'm using GraphiQL for testing my API, where it is fairly common practice to comment out a query and write a new one.
{info}
Will run fine.
{info}
#
will fail with "string index out of range" on line 268 of lexer.py (char = body[position]).
I could be wrong, but shouldn't that be a >= rather than a >?
The nodes in "language.ast" should probably not be hashable, because they are mutable and currently equal values have different hashes (see also graphql-python/graphql-core/issues/252).
graphene-django 2 passes a lazy translated string as the description if that's what's used on the model. This way the descriptions can be translated into the language current at introspection time.
But graphql-core 3 asserts that the description is an str, which forces choosing the translation at definition time.
I couldn't find such a type check in GraphQL.js - they just declare descriptions as strings, but don't check that (at least not in definition.js).
Would it make sense to loosen that restriction a bit? Or even drop the type check, and add an str() around descriptions in introspection?
I see a similar issue would apply to deprecation_reason - they are both human-readable text.
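The tension can be illustrated with a tiny stand-in for Django's lazy translated strings (LazyString here is hypothetical, not Django's actual class): it fails an isinstance check against str, but stringifies fine at introspection time.

```python
class LazyString:
    """Hypothetical stand-in for a lazily translated string."""
    def __init__(self, factory):
        self.factory = factory  # looks up the current language when called

    def __str__(self):
        return self.factory()

desc = LazyString(lambda: "translated description")

# a strict isinstance(description, str) assertion rejects it...
assert not isinstance(desc, str)

# ...but applying str() at introspection time resolves the translation
assert str(desc) == "translated description"
```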
SourceLocation, as a NamedTuple, is serialized by json.dumps() to an array instead of an object. GraphQL.js keeps SourceLocation as an object, so it just works for them.
Reproduction:

import json
from graphql import SourceLocation, GraphQLError, format_error, Source

print(json.dumps(format_error(GraphQLError(message="test", source=Source('{ test }'), positions=[2]))))
Expected result: {"message": "test", "locations": [{"line": 1, "column": 3}], "path": null}
Actual result: {"message": "test", "locations": [[1, 3]], "path": null}
Should SourceLocation be changed to something that serializes correctly? Or would it make more sense to convert it to a dict in format_error?
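The underlying behaviour is generic to NamedTuple, so it can be shown without graphql at all; the SourceLocation class below is a local stand-in, not the library's:

```python
import json
from typing import NamedTuple

class SourceLocation(NamedTuple):
    """Stand-in with the same shape as graphql's SourceLocation."""
    line: int
    column: int

loc = SourceLocation(1, 3)

# json.dumps treats a NamedTuple as a plain tuple, so it becomes an array
assert json.dumps(loc) == "[1, 3]"

# converting via _asdict() produces the object form GraphQL.js emits
assert json.dumps(loc._asdict()) == '{"line": 1, "column": 3}'
```

One possible fix along the second line of the question would be for format_error to apply such a conversion to each location before the result is handed to json.dumps.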
Is there anything in graphql-core-next like the schema directive visitor pattern mentioned in the graphql-tools documentation? How does one implement custom logic for a directive?
Apollo docs:
https://www.apollographql.com/docs/graphql-tools/schema-directives#implementing-schema-directives
Code in question:
https://github.com/apollographql/graphql-tools/blob/wip-schema-directives/src/schemaVisitor.ts#L459
I put together a new version of graphql-ws specifically designed for graphql-core-next, and one problem I've run into is that calling aclose() on an async generator doesn't cancel whatever is waiting on its __anext__() (https://bugs.python.org/issue28721).
The result is that subscriptions aren't really cancellable, and instead emit the following error:
Task was destroyed but it is pending!
task: <Task pending coro=<<async_generator_asend without __name__>()> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x10ed07d68>()]> cb=[_wait.<locals>._on_completion() at /Users/dfee/.pyenv/versions/3.7.0/lib/python3.7/asyncio/tasks.py:436]>
BTW, you can check out that repo (pre-publish) at https://github.com/dfee/graphql-next-ws
I have a custom scalar type for a numpy array:

import base64

def serialize_ndarray(value):
    return dict(
        numberType=value.dtype.name.upper(),
        base64=base64.b64encode(value).decode()
    )

ndarray_type = GraphQLScalarType("NDArray", serialize=serialize_ndarray)
is_nullish() does an inequality test on values to work out if they are NaN:

def is_nullish(value: Any) -> bool:
    """Return true if a value is null, undefined, or NaN."""
    return value is None or value is INVALID or value != value

Unfortunately, numpy arrays break this because they override __eq__ to return a pointwise equality array rather than a boolean.
The comment for is_nullish() suggests that value != value is checking for NaN. If this is the case, we could replace it with:

import math

def is_nan(value):
    """Return true if a value is NaN."""
    try:
        return math.isnan(value)
    except TypeError:
        return False


def is_nullish(value: Any) -> bool:
    """Return true if a value is null, undefined, or NaN."""
    return value is None or value is INVALID or is_nan(value)
Would this be acceptable? I can provide a PR if it is.
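The difference between the two checks can be exercised without numpy, using a tiny stand-in class whose __eq__/__ne__ return a non-boolean, as numpy arrays do (ArrayLike is hypothetical, purely for illustration):

```python
import math

class ArrayLike:
    """Stand-in for a numpy array: comparisons return pointwise results."""
    def __eq__(self, other):
        return [True, False]
    def __ne__(self, other):
        return [False, True]

def nullish_by_inequality(value):
    # the current approach: relies on NaN != NaN being true
    return value is None or value != value

def is_nan(value):
    try:
        return math.isnan(value)
    except TypeError:
        return False

# NaN is detected by both approaches
assert nullish_by_inequality(float("nan"))
assert is_nan(float("nan"))

# but the inequality test misfires on array-like values
# (a non-empty list is truthy, so the value looks "nullish")...
assert nullish_by_inequality(ArrayLike())
# ...while the math.isnan version degrades gracefully
assert not is_nan(ArrayLike())
```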
Hi,
awesome project, going Python 3.6+ and full async is a really big improvement!
For context: I'm creating a GraphQL binding to Vaex, a Python DataFrame library for large datasets, mostly exposing the aggregations and groupby/binby operations. The aggregation operations can be done async (they will give a Promise, that I can wrap with a Future for compatibility with this library). All operations will then be performed in one pass over the data, and thus all promises/futures will be resolved after that operation. This 'one pass over the data' is required for proper performance for larger than memory datasets.
I thus need to call vaex to compute the aggregation before your library tries to resolve the futures, but after all GraphQL operations are executed. In the previous version of this library, I could do this by calling my computation function in https://github.com/graphql-python/graphql-core/blob/bbbe880673e2574bc418b639f43968f96364873b/graphql/execution/executors/asyncio.py#L55 (by inheriting and overriding that method).
However, I cannot find an easy way to do this with the current library. The only way I could do it was to pass my function as the context, and call it just before this line : https://github.com/graphql-python/graphql-core-next/blob/0fb5b81fc0a1909bfe63df31657bc4de631676cf/src/graphql/execution/execute.py#L356
Is there a proper way of achieving this?
cheers,
Maarten
Hello,
not sure if it's an issue or a question, but here it is:
I'm trying to make a field with a non-nullable String as an input field, and graphql-core-next is converting it to "None", which is not what I expected.
It seems to me, reading the spec, that it's allowed to pass "" as a string argument, and it should be considered not null... did I miss something?
Here is some reproducible code to show off the issue:
import asyncio
from graphql import graphql, parse, build_ast_schema


async def resolve_hello(obj, info, query):
    print(query)
    return f'world ==>{query}<=='


schema = build_ast_schema(parse("""
type Query {
  hello(query: String!): String
}
"""))
schema.get_type("Query").fields["hello"].resolve = resolve_hello


async def main():
    query = '{ hello(query: "") }'
    print('Fetching the result...')
    result = await graphql(schema, query)
    print(result)


loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()
Thanks for the help!
I've run a very simplistic benchmark, just returning a long list of single-field objects. It seems graphql-core-next is 2.5x slower than graphql-core: https://gist.github.com/ktosiek/849e8c7de8852c2df1df5af8ac193287
Looking at flame graphs, I see isawaitable is used a lot, and it's a pretty slow function. Would it be possible to pass raw results around more? It seems resolve_field_value_or_error and complete_value_catching_error are the main offenders here.
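One direction such an optimization could take, sketched with hypothetical names (this is not the library's code): classify each resolver once, at schema-build time, with iscoroutinefunction, so the per-result isawaitable() probe disappears from the hot path for purely synchronous resolvers.

```python
import asyncio
from inspect import iscoroutinefunction

def resolve_sync(x):
    return x + 1

async def resolve_async(x):
    return x + 1

def make_completer(resolver):
    """Hypothetical sketch: decide sync vs async once, up front."""
    if iscoroutinefunction(resolver):
        async def complete(x):
            return await resolver(x)
        return complete
    return resolver  # sync resolvers are passed through untouched

# the sync path involves no awaitable checks at call time
assert make_completer(resolve_sync)(2) == 3

# the async path still works as before
loop = asyncio.new_event_loop()
try:
    assert loop.run_until_complete(make_completer(resolve_async)(2)) == 3
finally:
    loop.close()
```

This trades a little flexibility (resolvers returning awaitables without being coroutine functions would need the old check) for fewer runtime type probes.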
Version:
GraphQL-core-next 1.0.1
Python 3.7.1
Current behaviour:
Exceptions in resolvers are caught by default, and the traceback of the original_error doesn't help debugging.
Traceback output of log.exception('', exc_info=result.errors[0].original_error):
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/graphql/execution/execute.py", line 664, in complete_value_catching_error
return_type, field_nodes, info, path, result
File "/usr/local/lib/python3.6/site-packages/graphql/execution/execute.py", line 731, in complete_value
raise result
graphql.error.graphql_error.GraphQLError: My exception
Expected behaviour:
There is a flag raise_exception which switches whether exceptions are caught or raised out of the library resolver.
The traceback of the original_error is meaningful: it leads to the source line where the exception occurred.
I see the issue here complaining of a lack of traceback, apparently fixed in 1.0.2:
#23
My experience on 1.0.5 is that I only get a text representation of the error, and no line number of where the problem lies, which makes debugging very hard.
name 'error' is not defined
GraphQL request (32:2)
31:
32: {info}
I only get the place in the query where it failed, which doesn't help enough.
Code in graphql-core:
https://github.com/graphql-python/graphql-core/blob/master/graphql/execution/values.py#L71-L76
Code in graphql-core-next:
https://github.com/graphql-python/graphql-core-next/blob/master/graphql/execution/values.py#L96-L99
graphql-core-next leaks the inner Python representation of an object.
In general, after reviewing a lot of tests and code, there is a lot of usage of repr where it should be used just for debugging, not for uniform error messages.
When I validate these queries against this schema (via this code), I'm getting this error:
graphql.error.graphql_error.GraphQLError: Unknown type 'Int'.
/home/astraluma/code/gobuildit/gqlmod/testmod/queries.gql:15:30
14 |
15 | query HeroComparison($first: Int = 3) {
| ^
16 | leftComparison: hero(episode: EMPIRE) {
The results of pip freeze:
astor==0.8.0
-e [email protected]:go-build-it/gqlmod.git@6fe690f7ffb0634f4522ded57be8f40b21205c52#egg=gqlmod
graphql-core==3.0.0a2
import-x==0.1.0
pkg-resources==0.0.0
I'm pretty new to GraphQL, but as I understand it, Int is a built-in scalar that should always be available?
This race condition becomes especially apparent when the subscription is being used in an "async-iterable" sort of mode:
received = []

async def _receive(subscription):
    async for result in subscription:
        received.append(result)

task = loop.create_task(_receive(subscription))
# ...
await subscription.aclose()
# e.g. ... await asyncio.sleep(.1)
assert not task.done()

# might I suggest using something like a SENTINEL which isn't passed through...
# here, I'm manually setting it, but you could set it, listen for it on EventEmitter or
# EventEmitterAsyncIterator
SENTINEL = object()
subscription.iterator.queue.put_nowait(SENTINEL)
await asyncio.sleep(.1)
assert task.done()
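The sentinel idea proposed above can be demonstrated in isolation with a plain asyncio.Queue (the names below are illustrative, not graphql-core-next's internals): a consumer blocked in get() cannot be unblocked by closing the producer, but pushing a sentinel wakes it and lets it exit cleanly.

```python
import asyncio

# a sentinel object lets the consumer distinguish "shut down" from real events
SENTINEL = object()

async def consume(queue, received):
    while True:
        item = await queue.get()
        if item is SENTINEL:
            break          # exit cleanly instead of hanging in get()
        received.append(item)

async def main():
    queue = asyncio.Queue()
    received = []
    task = asyncio.ensure_future(consume(queue, received))
    queue.put_nowait("event-1")
    queue.put_nowait(SENTINEL)  # wakes the pending get() so the task can finish
    await task
    assert task.done()
    return received

loop = asyncio.new_event_loop()
try:
    received_items = loop.run_until_complete(main())
finally:
    loop.close()

assert received_items == ["event-1"]
```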
I raised an issue before, "Out of memory when wrong use graphql with dataloader", but now I have found this repo, and to_dict does not exist on ExecutionResult. Will it be implemented? If not, please close that issue and I will remove it from my project.
Note, however, that GraphQL-Server is using this function.
Thanks so much!
Hello,
Context: Windows 10, ariadne==0.5, graphql-core-next==1.1.0
I recently upgraded to the latest version of graphql-core-next, and I'm getting the following exception when using extend_schema:

File "C:\Anaconda3\envs\structor\lib\site-packages\graphql\utilities\extend_schema.py", line 335, in extend_schema
    type_map[existing_type_name] = extend_named_type(existing_type)
File "C:\Anaconda3\envs\structor\lib\site-packages\graphql\utilities\extend_schema.py", line 150, in extend_named_type
    return extend_scalar_type(type_)
File "C:\Anaconda3\envs\structor\lib\site-packages\graphql\utilities\extend_schema.py", line 225, in extend_scalar_type
    kwargs = type_.to_kwargs()
File "C:\Anaconda3\envs\structor\lib\site-packages\graphql\type\definition.py", line 397, in to_kwargs
    if getattr(self.parse_literal, "__func__")
AttributeError: 'function' object has no attribute '__func__'

This new version tries to get the __func__ attribute of the literal parser for a custom scalar, which assumes that the parser is a bound method. There are many libraries, incl. ariadne, that attach such parsers (and resolvers) after the scalars have been initialised. Therefore, these newly attached methods are no longer bound. I see no reason for this constraint, so could you please fix it?
Many thanks.
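The root cause is a plain Python distinction, which a small stand-in class makes visible (Scalar here is hypothetical, not the library's GraphQLScalarType): only bound methods carry __func__, and a defensive getattr with a default would tolerate both cases.

```python
class Scalar:
    """Stand-in for a scalar type with an overridable literal parser."""
    def parse_literal(self, node):
        return node

scalar = Scalar()

# a bound method exposes its underlying function via __func__ ...
assert scalar.parse_literal.__func__ is Scalar.parse_literal

# ... but a plain function attached after initialisation has none,
# which is exactly what triggers the AttributeError above
scalar.parse_literal = lambda node: node
assert not hasattr(scalar.parse_literal, "__func__")

# a defensive lookup degrades gracefully for both cases
parser = getattr(scalar.parse_literal, "__func__", scalar.parse_literal)
assert parser("node") == "node"
```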
What's the status of graphql-core-next vis-a-vis the graphene tools? Is there a Roadmap or other document tracking compatibility with, say, graphene-django?
Or are they already compatible and I'm missing that in the docs of those tools or graphql-core-next?
It would be great if this was addressed prominently in the README. Assuming it isn't already and I'm just flat out missing it.
Hello!
We have started migrating our GraphQL lib to GraphQL-core-next and things are looking great so far. The build_schema util allowed us to remove our own implementation that did the same thing. However, we've found that if you try to extend an existing type using extend, it will silently skip it and do nothing.
Looking around the code, I see that the logic for extending the schema is quite complex, and so I can understand why build_schema doesn't support it. But perhaps extend_schema should support a differences-only mode, or part of its logic could be extracted into a utility function returning a diff from a schema and a Source?
See Apollo-Server for reference of what we are trying to do.
Thank you for all the amazing work!
GraphQL.js comes with a benchmark script - a similar script should be also added to GraphQL-core-next.
Usually I want to use a custom method bound to a GraphQLObjectType, such as:

# use strawberry for example
import strawberry


@strawberry.type
class User:
    name: str
    age: int


@strawberry.type
class Query:
    @strawberry.field
    def user(self, info) -> User:
        return self.get_user()

    def get_user(self):
        return User(name="Patrick", age=100)


schema = strawberry.Schema(query=Query)

Unfortunately, self would always be None, because the root_value in ExecutionContext.execute_operation is set to None if it is the root node Query. I think modifying it as below would be fine:
def execute_operation(
    self, operation: OperationDefinitionNode, root_value: Any
) -> Optional[AwaitableOrValue[Any]]:
    """Execute an operation.

    Implements the "Evaluating operations" section of the spec.
    """
    type_ = get_operation_root_type(self.schema, operation)
    if not root_value:
        root_value = type_

Then we can use the custom method of the GraphQLObjectType, and I don't think it leads to any problems.
GraphQL-Core supports an out_name for input types, which is used by Graphene for passing parameters with transformed names to Python (because Python prefers snake_case over camelCase).
GraphQL-Core-Next should support a similar functionality that can be used by Graphene.
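The kind of name transformation involved can be sketched as follows; to_snake_case is a hypothetical helper illustrating what an out_name value would typically hold, not Graphene's actual implementation:

```python
import re

def to_snake_case(name: str) -> str:
    # insert an underscore before every interior uppercase letter,
    # then lowercase: camelCase -> camel_case
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

assert to_snake_case("camelCase") == "camel_case"
assert to_snake_case("inputValue") == "input_value"
assert to_snake_case("simple") == "simple"
```

With such a mapping, an input field declared as inputValue in the schema could carry out_name="input_value" so resolvers receive idiomatic Python keyword arguments.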