
GROQ specification

👉🏻 Published versions of the spec can be viewed at https://sanity-io.github.io/GROQ/.

This is the specification for GROQ (Graph-Relational Object Queries), a query language and execution engine made at Sanity.io, for filtering and projecting JSON documents. The work started in 2015, and the development of this open standard started in 2019. Read the announcement blog post to understand more about the specification process and see the getting started guide to learn more about the language itself.
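
For a taste of the language, the following query filters documents by type and date and projects a new shape (the post schema here is hypothetical):

*[_type == "post" && publishedAt < now()]{
  title,
  "authorName": author->name
}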

Go to GROQ Arcade to try out GROQ with any JSON data today!

Tools and resources for GROQ

Development of the specification

The specification is written using spec-md, a Markdown variant optimized for writing specifications. The source is located in the spec/ directory, which is converted into HTML and presented at https://sanity-io.github.io/GROQ/. To ensure that implementations are compatible we write test cases in the GROQ test suite.

The specification follows the versioning scheme of GROQ-X.revisionY where X (major) and Y (revision) are numbers:

  • The first version is GROQ-1.revision0.
  • Later revisions are always backwards compatible with earlier revisions in the same major version.
  • Revisions can include everything from minor clarifications to whole new functionality.
  • Major versions are used to introduce breaking changes.

License

The specification is made available under the Open Web Foundation Final Specification Agreement (OWFa 1.0).

Issues

Match operator

The match operator is currently left undefined. Some open questions (see the examples below):

  • How do we define its semantics while preserving flexibility?
  • Is *-wildcard support defined by the spec?
  • Is support for arrays (both on the LHS and the RHS) defined by the spec?
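
For context, these are the kinds of expressions whose behavior is in question (as currently implemented in Sanity; the schema is hypothetical):

*[title match "wor*"]             // *-wildcard
*[tags match "fo*"]               // array on the LHS (tags is an array of strings)
*[title match ["foo", "bar"]]     // array on the RHS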

Proposal: Array traversal by only supporting [] on attribute access

Here's a proposal to specify array traversal at a grammar level. This means that the attribute access in *[_type == "user"].name becomes syntactically different from *[_type == "user"][0].name. The rules are as follows:

  • We have two different concepts: avalue (array value) and value. An avalue is an expression which is guaranteed to return an array: *, filtering, and projection on another avalue. A value is everything else.
  • [] is not a generic operator, but instead an optional postfix on attribute access: .foo[]
  • This gives four cases for .foo:
    • value "." string: This is a plain attribute access (foo.bar)
    • value "." string "[]": This is also a plain attribute access (foo.bar[]), but the result is (syntactically) an avalue which means that further attributes will be mapped (foo.bar[].baz).
    • avalue "." string: This will map the attribute over the values (*._id).
    • avalue "." string "[]": This is a flat map over the attribute (*.books[]).
  • .foo on a value is always a plain attribute access, while .foo on an avalue is always a mapping.

Implications:

  • This means that:
    • *[_type == "user"].name is a mapping (since *[_type == "user"] is an avalue)
    • *{"firstName": name.firstName} is an attribute access
    • *{"firstName": name[].firstName} is a mapping
    • *{"firstName": name{firstName}} will filter the name-object (and return null if it's an array)
    • *{"firstName": name[]{firstName}} will filter over the entries of the name-array

Example which shows three of the different types:

Query:
  *[_type == "user"]{"a": books[].authors[].name}

AST (of the inner `a` attribute):
  (map_attribute
    (map_attribute_flatten
      (attribute_array (this) "books")
      "authors")
    "name")

Minimal grammar showing the example:

expr ::= avalue | value

value       ::= integer | attribute | wrap_object
attribute   ::= value "." string
element     ::= expr "[" integer "]"
wrap_object ::= value object

avalue   ::= star | filter | traverse | map_attribute | map_object
star     ::= "*"
filter   ::= expr "[" expr "]"
traverse ::= expr "[]"
map_attribute ::= avalue "." string
map_object    ::= avalue object


Support for type casting

Support for type casts, for example:

*[_type == $type && string(numberCode) match "133*"]

To match documents where numberCode, while being a number, starts with 133.
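
A sketch of how such casts might behave (string() is the proposed function; number() is a natural counterpart, both hypothetical here):

string(133)      // => "133"
number("133")    // => 133
*[string(numberCode) match "133*"]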

identity() function

Should identity() be defined in the GROQ spec? Is it an optional function?

Converting string to int with a query

Hello,

Am I able to convert a string to a number when making a query? E.g. I want the field "price" to be a number instead of a string when fetching data for a product.

Please add a license

Hello. Your intro blog post says GROQ is open source, but I couldn't find any license in the repo. Please consider adding one.

Uncaught (in promise) Error: Unknown function: dateTime

Hi,

when using groq.dev the dateTime function doesn't seem to work. Could you please explain what's wrong here?
query:

*[completed == true && userId == 2]{
  title,
  "now": dateTime(now())
}

error in browser console:

86527f614cf25511c4200490dbce8557fd7f1f1f.84029c43eb6a4d244e4a.js:1 Uncaught (in promise) Error: Unknown function: dateTime
    at FuncCall (86527f614cf25511c4200490dbce8557fd7f1f1f.84029c43eb6a4d244e4a.js:1)
    at x (86527f614cf25511c4200490dbce8557fd7f1f1f.84029c43eb6a4d244e4a.js:1)
    at Object (86527f614cf25511c4200490dbce8557fd7f1f1f.84029c43eb6a4d244e4a.js:1)
    at async r._generator (86527f614cf25511c4200490dbce8557fd7f1f1f.84029c43eb6a4d244e4a.js:1)
    at /async https:/groq.dev/_next/static/chunks/86527f614cf25511c4200490dbce8557fd7f1f1f.84029c43eb6a4d244e4a.js:1

Pipe function: transform(select, ...conditions) Arbitrarily replace objects with values or projections

While object projections are useful, they are not easily done with deeply nested objects. You are usually forced into one of three camps:

  1. The uber-query. You build conditional projections for every single type at every possible injection point. This usually results in a massive query that is near impossible to maintain, if it is at all possible to cover every point. Or you just give up after a certain level of nesting which may be very problematic in front ends.
  2. Post processing. You accept the limitations of GROQ and instead build an object traversal function that typically involves dereferencing those deeply nested references, which requires N+1 requests depending on how many times your references involve references. You also write transform functions to make things like @sanity/color-input's color type a simple rgba(r,g,b,a) string instead of all the values you aren't going to use.
  3. Do none of the above. You write out code to handle large responses and ignore meta data you don't care about because it's tedious. It's not great, you could be working with more concise data structures, but it's not worth your development time.

I propose a new pipe function to transform arbitrarily nested objects: transform(selector:string|string[], ...conditionals:conditional[])

An example query:

*[_type == "post"]
  | transform(
    // A path or an array of paths to select objects from. In this case, anywhere
    "**",

    // Each part is executed in waterfall fashion so we can, e.g., dereference all refs 3 levels deep before applying additional transforms
    _type == "reference" => deref(3),

    // Use this as a way of replacing objects with things like strings to be consumed more easily by frontends
    _type == "color" => format('rgba(#{r},#{g},#{b},#{a})', rgb),

    // And projections which before would have had to know exact paths
    _type == "image" => { "url": asset->url, "id": asset._ref }
  )

Each conditional should be executed in waterfall fashion over all objects selected by the selector(s), in case a conditional returns an object that can be transformed by a later conditional.

The above query should be equivalent to:

*[_type == "post"]
  | transform(
    "**",
    _type == "reference" => deref(3)
  )
  | transform(
    "**",
    _type == "color" => format('rgba(#{r},#{g},#{b},#{a})', rgb)
  )
  | transform(
    "**",
    _type == "image" => { "url": asset->url, "id": asset._ref }
  )

Note that I have also mentioned two potential functions in here that may have been proposed before:

  • deref(level:number): Dereferences the current object up to level levels deep. I like #21's thinking.
  • format(pattern:string, value:object): If value is null, return null; otherwise, replace tokens (type: path) in pattern by navigating value with the token. Whichever token format is wanted (e.g. {path.subpath}, #{path.subpath}, ${path.subpath}) is up to you.

Support for behavior like Array.join()?

I think it seems reasonable to consider this because GROQ already has ways of restructuring (and reformatting) data; i.e. projections, functions, coalescing, and certain kinds of concatenation.

It's pretty common to want to stringify an array with a separator in between (like ", "). In Python, that's ", ".join([...]). In JS, it's [...].join(", "). (Putting those here partly for the sake of searches, if someone looks for those and GROQ; that's what I searched for initially.) There's CONCAT_WS in SQL.

I figure this is a Function. I think it'd be consistent with the existing syntax to implement it like join(<array>, ", ") -- but to avoid confusion with database JOINs, maybe it would be better to name it strjoin(<array>, ", ").
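
Usage might then look like this (strjoin as proposed above; the tags attribute is hypothetical):

*[_type == "post"]{
  title,
  "tagLine": strjoin(tags, ", ")
}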

Thoughts?

Serialization format for non-JSON types

The JSON serialization of types like ranges and => (pairs) doesn't seem to be documented, as far as I can see. HyperEngine returns ranges as pairs, while this is an error in the old engine.

Pipe operators return errors, starting from API version 2021-03-25

It seems like any query including an optional pipe operator is seen as a syntax error starting from API version 2021-03-25.
So a query like:

* | []

returns

Query error: object or function expected

* | [
----^^

whereas v1 returns all documents as expected. This isn't much of a problem for any pipeline component other than the map operator, which doesn't work given the current behavior.

Edit:
Despite the breaking changes for the map operator, there are still ways of getting specific values as an array with newer API versions:

*[] { one, two } { "_": [@.one, @.two] }._

GROQ: sort by random

It would be handy to have the ability to sort entities randomly in GROQ. The syntax would be similar to order(): *[_type=="doc"][0..9] | rand() would return 10 random docs.

How to document the `[]` (postfix) operator

What should it be called?

  • "Array traversal operator": This is what we've always(?) called it
  • "Projection operator": I thought about this earlier and it makes some sense. It's very similar to a projection (in the sense that it changes a stream of data); the only difference is that the output can be any value (not only an object).

How do we specify the semantics

There are three different ways as far as I can see:

1. Syntactical transformation semantics

In the grammar the operator is a regular postfix operator, and we introduce two new functions (map and flatten) and specify the operator in terms of a syntactical transformation which happens before processing:

  • foo[].bar[0] => foo | map(bar[0])
  • (foo[].bar)[0] => (foo | map(bar))[0]
  • foo[].bar[].baz => foo | map(bar) | flatten() | map(baz)

Pros/cons:

  • We need to introduce two new functions.
  • The grammar is trivial: it's a "regular" postfix operator.
  • The LHS.foo-expression is the same type regardless of whether the LHS is a []-expression or a regular expression. (Since it's dissolved after the processing.)
  • We don't need to say anything specific in the filter/projection/slice/etc. expression spec text.

2. Grammar-level semantics

We enforce it inside the grammar:

plainexpr
  := ident
   | "@" | "*" | …

compoundexpr
  := compoundexpr "[" EXPR "]"
   | compoundexpr "." IDENT
   | plainexpr

arrexpr
  := compoundexpr "[]"
   | arrexpr "[]"
   | arrexpr "[" EXPR "]"
   | arrexpr "." IDENT

expr := compoundexpr | arrexpr

Pros/cons:

  • The grammar needs to duplicate filter/slice/attribute/etc. expressions.
  • We end up with duplicate syntactical nodes for array filter vs. regular filter. These will be duplicated in the spec.
  • I'm not 100% sure how to structure the grammar to avoid ambiguities.
  • The exact semantics are more precise (a strict EBNF grammar is far more precise than any specification text we write).

3. Value-level semantics

In the grammar the operator is just a regular postfix operator. Evaluating that operator returns a Mapper-value. The Mapper type is a subtype of Array. The specification of filter expression would then have a part like this:

If the base value (e.g. left-hand side) is a Mapper value, then it creates a new Mapper with [same semantics as previously].

Pros/cons:

  • We need to incorporate this into the filter/slice/attribute/etc. expression spec texts.
  • The grammar is trivial: it's a "regular" postfix operator.
  • It's probably not how you'd want to implement this operator in practice.
  • It's a quite naïve approach (i.e. it might be intuitive).

Specify array traversal using mappers

Introduction

Today we have two different array traversal semantics:

  • The one in production in Sanity.io today (v1)
  • An improved one that's currently implemented in GROQ-JS and the latest internal version of GROQ in Sanity.io (v2).

This is a third proposal which attempts to improve on both.

The main problem with the original proposal for improved array traversal

In v2 this will no longer work: *[_type == "user"]._id. This is because ._id accesses an attribute and arrays don't have attributes. Instead, you would have to write *[_type == "user"][]._id. This is a highly breaking change. However, the following still works: *[_type == "user"]{foo{bar}}. Why doesn't this have to be *[_type == "user"][]{foo{bar}}? (The answer is because {…} has special behavior.)

This leads to a strange situation: We are introducing a highly breaking change in one area, but we're not achieving full consistency.

It should also be mentioned that *[_type == "user"]._id doesn't have any other sensible meaning other than "apply ._id to each element". It seems unfortunate that we're discarding a syntax which is unambiguous and clear.

The new proposal

Goals

The goal of this proposal is to

  • try to break as little existing GROQ as possible.
  • be as consistent as possible.
  • enable new use cases
  • be completely determined at compile/parse time.

Overview

The main complication of supporting *[…]._id is knowing when ._id should be mapped inside the array without depending on run-time information. In *[_type == "user"]{"a": bar._id} we want ._id to mean "access _id on the bar object" and never treat it as an array projection.

The solution in this proposal is to treat traversal ([]), filtering ([_type == "user"]) and slicing ([0..5]) as array coercing markers. These will coerce the left-hand side to an array (meaning that if it's not an array it will be converted to null), and as such we know that any .foo coming after it makes sense to treat as mapping inside the array.

Details

The core idea behind this proposal is to introduce the concept of a mapper. .foo, [_type == "bar"] and ->name are all mappers. A mapper is an operation which you can apply to a value, but which is not a valid expression by itself. We can group mappers into two categories: simple mappers and array mappers, with the distinction being that array mappers work on arrays.

We have the following simple mappers:

  • Attribute access: .foo
  • Object projection: {foo} (in *{foo}, not as a standalone expression)
  • Dereferencing: -> and ->foo.
  • Group projection: .(foo).
    This is a new invention which would allow more flexibility in the way you compose mappers. This allows e.g. *[_type == "user"].(count(books)) which would return an array of numbers.

And then we have the following array mappers:

  • Slicing: [0..5]
  • Filtering: [_type == "user"]
  • Traversal: [].
    This is the same as a [true] or [0..-1].
    It acts merely as a way of marking that the value is an array.

Pipes (|) are supported in various places to handle backwards compatibility.

Here's the grammar for composing mappers:

MapSimple ::= …
MapArray  ::= …

MapSimpleMulti ::=
  MapSimple+ MapArrayMulti?

MapArrayMulti ::= 
  MapArray+ MapSimpleMulti?

ArrExpr ::= Star
BasicExpr ::= …

Expr ::=
  BasicExpr | ArrExpr |
  BasicExpr MapSimpleMulti |
  BasicExpr MapArrayMulti |
  ArrExpr MapSimpleMulti |
  ArrExpr MapArrayMulti

Explanation:

  • MapArrayMulti and MapSimpleMulti represent composed mappers. They are mappers which are built on top of other mappers.
  • MapSimpleMulti is quite simple: When applied to a value it will apply the simple mappers and the MapArrayMulti in order.
  • MapArrayMulti is a bit more complicated:
    • When applied to a value it will first coerce the value to an array. If the value is not an array then it returns null immediately.
    • Then it applies all the array mappers (e.g. filtering, slicing) on that array.
    • If there's a MapSimpleMulti it will apply that mapper on each element of the array.
  • In addition, * is interpreted as an array expression. The only impact this has is that a MapSimpleMulti applied on an ArrExpr will apply the mapping on each element instead of on the value itself. This causes *{foo} to be interpreted as intended.

Implications

  • *[_type == "user"].id returns the ID of all documents.
  • *[_type == "user"].slug.title returns slug.title.
  • *[_type == "user"].roles[].title returns a nested array of role titles. If there's two users who have the roles (A,B) and (C), then this will return [["A", "B"], ["C"]].
  • In *[_type == "user"]{foo{bar}}, foo must be an object. If it's an array then it will end up being null.
  • In *[_type == "user"]{foo[]{bar}}, foo must be an array.

How do we teach this?

Here are some phrases which can be used for explaining the behavior:

  • "When GROQ knows that you're dealing with an array then you can add .foo to get the foo attribute of all of the documents/elements/objects."
  • "We can also dereference array of objects the same way: Just add -> at the end."
  • "Here GROQ doesn't know that it's an array, so we'll have to add []."

How to deal with flattening?

There's never any flattening happening here. I propose that we separately introduce a flat-function (that's what it is called in JavaScript): flat(*[_type == "user"].roles[].title) will flatten it one level.

Using `pt::text()` function throws queryParseError

I'm trying to implement full text search. My query looks like this:

const query = `*[ _type == 'post' && pt::text(content) match "*${val}*" ] {
  title,
}`

const response = await fetch(
  `https://[project-id].apicdn.sanity.io/v1/data/query/production?query=${encodeURIComponent(
    query
  )}`
)

where val is the current value of the search input field.

But when using the new pt::text() function (anywhere in the query; I also tried this) it throws a queryParseError about a missing parenthesis.

Here is the full response for the query above:

{
  description: "expected ']' following expression",
  end: 31,
  query: "*[ _type == 'post' && pt::text(content) match \"*something*\" ] {
      title,
    }",
  start: 1,
  type: "queryParseError"
}

When omitting the pt::text() part, it works fine.

Is this an issue with URL encoding the query, or am I missing something?

Thanks in advance.

Proposal: Set function for getting distinct values

We could call it array::unique. For example, given an array of values:

array::unique(["a", "a", "a", "b", "c", "a", "c"])

it would return only distinct values:

["a", "b", "c"]

A more concrete use case:

array::unique(*[_type == "testRun"].version)

If might allow piping into functions:

*[_type == "testRun"].version | array::unique()

Changelog:

  • Updated from distinct() to array::unique().

Proposal: Keep null values in objects

The current behavior in Sanity today is to automatically remove null values from objects:

{"a": null} // This returns {}

Note however that this is only done at projection. You can have a null-value in a document and it will be returned:

*[_type == "nullExample"][0]
// => {"_type": "nullExample", "key": null }

*[_type == "nullExample"][0]{_type,key}
// => {"_type": "nullExample" }

This spec (and groq-js) also includes the same "remove all nulls" behavior, but I'm not sure that is what we want.

This is a proposal to allow null as values in objects.

Some implications:

  • Due to the way defined() and attribute lookup are defined, there is no way of distinguishing between an attribute being null and not existing. *[!defined(key)] will return both objects where key does not exist and objects where "key": null.
  • The current spec (and groq-js) had a way of removing attributes: {..., "key": null} would remove key (since setting it to null was equivalent to removing it). This "feature" will then be gone.

Initial specification of functions and operators

Operators

Spec:

  • && operator
  • || operator
  • ! operator
  • Equality operators
  • Ordering operators
  • -> Dereference operator
  • + operator
  • - operator
  • * operator
  • / operator
  • ** operator
  • % operator
  • in operator
  • Unary + operator
  • Unary - operator
  • Precedence

Tests:

  • && operator
  • || operator
  • ! operator
  • Equality operators
  • Ordering operators
  • -> Dereference operator
  • + operator
  • - operator
  • * operator
  • / operator
  • ** operator
  • % operator
  • in operator
  • match operator
  • Unary + operator
  • Unary - operator
  • Precedence

Functions

Spec:

  • coalesce
  • order
  • count
  • defined
  • length
  • references
  • round
  • select

Tests:

  • coalesce
  • order
  • count
  • defined
  • identity
  • length
  • references
  • round
  • select

Auto dereferencing pipe function

It would be really handy to have a pipe function for automatic dereferencing. I'm not quite sure what the best approach would be. One possible solution is that it just dereferences every reference it comes across up to a predefined depth, like *[ _id == "some-document" ] | deref(1) (where 1 is the depth). This would look a lot like the "raw"-prefixed property provided when you're using GraphQL.

One scenario where this would be super useful is when you're having an array of several different objects, and some of those objects have references you'll want to dereference. This is a little cumbersome when using the dereferencing operator ->.

Ordering does not work on array of attribute values

Hey Sanity team :-)

So I can't see this mentioned in the documentation, but I noticed ordering doesn't work in combination with an array of attribute values. It's actually the first time I've attempted to use this pattern, as I normally only want to order objects, but it's one I can see being helpful, if it's possible to accommodate?

For instance this works:

"artists": *[_type == "film"] | order(alphabetical asc) { artist }

But this (my ideal scenario), doesn’t seem to:

"artists": *[_type == "film"].artist | order(alphabetical asc)

I can’t find other mentions of this issue, but apologising in advance if I’m just not using correct terms!

Thank you,

Simon

Dealing with the time offset (time zone) of datetime is tricky in JavaScript

Situation: The current specification of datetime says that a datetime should maintain its time offset: https://sanity-io.github.io/GROQ/#sec-Datetime. This implies that {"a": dateTime("…+01:00")} should serialize to {"a": "…+01:00"}.

Complication: This is sensible in itself, but causes problems in JavaScript since there's no way of converting a Date to a custom time offset. The only thing you can access is the local time zone (getMinutes(), getSeconds(), …) and the UTC time zone (getUTCMinutes(), getUTCSeconds(), …). It is possible to store the offset separately from the Date and then manually adjust for it before formatting, but this will require some complicated code.

Solution 1: Keep the existing semantic, and require more complicated code in groq-js.

Solution 2: Say that all datetimes are formatted in UTC time zone.

It should be mentioned that this only matters once you (1) need to do calculations on datetimes and (2) end up returning them back to the user. For instance, in the query *[dateTime(publishedAt) < now()] it doesn't matter what the time offset is because we compare the universal time either way. It also won't matter in *{publishedAt} since this will not involve any datetimes. It will matter in *{"a": dateTime(publishedAt) + 60}.

Considering the limited use case I'd say that it's simpler to say that all datetimes are represented in UTC (+00:00).
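
Under solution 2, a computed datetime would serialize in UTC no matter which offset it was constructed with. A sketch, assuming the Number operand of + is in seconds:

*{"a": dateTime("2021-03-05T10:00:00+01:00") + 60}
// => objects of the form {"a": "2021-03-05T09:01:00Z"} (normalized to UTC)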

Support highlights in examples

I'm not sure how to make this work with Spec MD, but it would be nice to have support for highlighting (in bold) some part of an example query.

What types of ranges can be constructed?

Currently the spec has a rule (https://github.com/sanity-io/GROQ/blame/15f7f28dcccff1fbf9c7f4c784ac7a2433faae39/spec/Section%204%20--%20Data%20types.md#L236-L248) which says that it's not possible to construct a range of different types, and the groq-test-suite (https://github.com/sanity-io/groq-test-suite/blob/3a027e79b82e1a32462f1964fde19f6511b3e952/test/type/range.yml#L35-L53) additionally disallows creating ranges like true .. false.

I propose that ranges are allowed for any data type which is comparable (that is: number, string, boolean). I don't see that much value in supporting boolean, but it's at least consistent.

All other types of ranges (either ranges which have mixed types at the start/end or ranges between e.g. objects) will return null.
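
Under this proposal:

1..5            // ok (numbers)
"a".."z"        // ok (strings)
true..false     // ok (booleans), though of limited value
1.."x"          // mixed types => null
{}..{}          // non-comparable types => null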

[1]: They are also used for slicing, but that uses a custom syntax which only supports a strict subset and is handled at parse-time.

More consistent array traversal semantics

Array traversals aren't yet defined in the spec (see #2), but the intended semantics have been implemented in e.g. groq-js. As I've been thinking about and implementing these semantics I've found that they are a bit inconsistent and hard to work with. This issue intends to make them more consistent and easier to implement.

These are the new proposed rules:

  • Every expression (at a syntactical level) has a traverse behavior, which is either none, implicit or explicit.
  • * and the array constructor ([…]) have explicit traverse behavior.
  • The postfix projection operator (base[]) works as follows:
    • If the base has no or implicit traversal behavior, then it converts it to explicit traversal behavior.
    • If the base has explicit traversal behavior, then it applies a flatten operation.
  • The grouping expression ((…)) has no traverse behavior, regardless of what's inside it.
  • Filtering (base[foo > 1]) and slicing (base[1..5]) preserves the traverse behavior of the base.
  • Attribute access (base.foo), dereferencing (base->) and projection (base{bar}) have the following semantics:
    • If the base has either implicit or explicit traverse behavior, then it's being applied for each element in the base (e.g. *{bar} will project {bar} on each element). The returned expression has the same implicit/explicit traverse behavior as the base.
    • If the base has no traverse behavior, then it's being applied directly to the base.
  • Element access (base[5]) has the following semantics:
    • If the base has explicit traverse behavior, then it's being applied for each element in the base.
    • Otherwise, then it's being applied directly to the base while preserving the traverse behavior.
  • All other expressions have no traverse behavior.

Implications:

  • *[_type == "bar"]{baz} still works as expected
  • *[_type == "bar"]{users[]{name}}: The [] is needed when projecting over arrays.
  • Nested traversals still work as expected: *[_type == "bar"]{"avatars": users[].avatar.url}

How should equality work together with `null`?

After a brief discussion with @simen we made the semantics of == such that it never returns null and equality against null works as "expected" (1 == null is false and null == null is true). Essentially we're saying that equality is always well-defined for all types. This makes it consistent with most programming languages.

I know that this deviates from three-valued logic which we use for the other operators, but I'm failing to see any places where it causes any real issues. Are there any optimizations that we lose because non-null == null returns false instead of null?
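
Concretely, under these semantics:

1 == null       // false
null == null    // true
1 != null       // true
1 < null        // null (the other operators keep three-valued logic)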

"reduce" equivalent?

So far, GROQ has map (projection) and filter covered. How about the missing member of the trinity, reduce?

score() or match is not working with pt::text() when the argument of pt::text() is an item of an array

I’ve found a possible bug with score():

score(
  boost(name match $keyword, 10), 
  pt::text(biography[].text) match $keyword,
  pt::text(biography[].entries[].text) match $keyword
)

pt::text(biography[].entries[].text) works in the body of a request, just not in score() – I get: Query error – score() function received unexpected expression.

Annoyingly, I need it to work for both to get the depth of search I’d like…

It also doesn't seem to work with pt::text(biography[].text), though that one just didn't actively throw an error. Either works perfectly in the body of a request.

I am using API version 2021-03-25 and "@sanity/client": "^2.7.1"

Thank you!

Support sum() as a GROQ function

It would be handy to have the ability to sum numbers in GROQ. The syntax would be similar to count(). It should probably validate that it's a field of the type Number (or just be like JavaScript and sum whatever).

*[_type == 'stats']{
    "pagestats": sum(pageStats[].pageLoads)
}

Type errors for operators?

At present, operators just return null on type failure (e.g.: 3 + "str"). I'd much rather have this blow up with a useful error than have to check all query results for buried nulls.

Specification sections

Let's try to figure out the best way to structure the specification into sections.

Some thoughts/inspirations:

  • Go spec, C++ spec (PDF), ES2019 spec
  • We want a canonical naming scheme for sections (like C++ does). E.g. the + operator could be defined under operator.plus with variants under operator.plus.string and operator.plus.int.
  • I think it's cleaner to describe the syntax of each expression in its own section together with its semantics (instead of having duplicated sections under "Syntax"). This means that "Syntax" will just be about the common syntax.

Proposed table of contents:

  • General - general
  • Syntax - syntax
    • Introduction - syntax.intro
    • White space - syntax.whitespace
    • Comments - syntax.comments
    • Precedence - syntax.precedence
  • Data types
    • Basic data types
      • Boolean type - type.boolean
      • String type - type.string
      • Number type - type.number
      • Null type - type.null
      • DateTime type - type.datetime
    • Composite data types
      • Array type - type.array
      • Object type - type.object
      • Pair type - type.pair
      • Range type - type.range
    • Subtypes
      • Document type - type.document
      • Reference type - type.reference
  • Simple expressions
    • This expression - expr.this
    • Local attribute expression - expr.local_attr
    • Everything expression - expr.everything
    • Parent expression - expr.parent
    • Parameter expression - expr.parameter
  • Compound expressions
    • Grouping expression - expr.group
    • Attribute expression - expr.attr
    • Filter expression - expr.filter
    • Projection expression - expr.projection
    • Pipe function call expression - expr.pipecall
    • Function call expression - expr.call
  • Operators
    • Unary plus operator - operator.uplus
    • Unary minus operator - operator.uminus
    • Logical operators - operator.logical
    • Relational operators - operator.relational
    • In operator - operator.in
    • Multiplicative operators - operator.multiplicative
    • Additive operators - operator.additive
    • Exponential operator - operator.exponential
    • Ordering operators - operator.ordering
    • Dereference operator - operator.dereference
    • Projection operator - operator.projection

Proposal: add `min` and `max` functions

It would be exceedingly useful to be able to both filter on and retrieve values using minimum and maximum functions.

Example 1

Fetching a value using a max function. This would be useful as an initialValue for an auto-incrementing order number field.

{
  "nextOrderNumber": max(*[_type == "order"].orderNumber) + 1
}

Example 2

Filtering using a min function to retrieve products that have a variant under $20 (e.g. for display in a collection of low-cost add-on items). For clarity, this example assumes the product document has a variants field that is an array of objects that have a priceInDollars field on them.

*[_type == "product" && min(variants.priceInDollars) < 20]

Support access to attribute names to use them as values in projections

There are many cases where you need to convert attribute names into values. What may help is to have a keys() function that returns an array of keys.
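
For instance, a hypothetical keys() could behave like:

keys({"name": "Milk", "price": 1})
// => ["name", "price"]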

Use Case 1:
Objects described in JSON where the ID of the object is the key need to be converted into an array of objects where the ID is a property.
For example:
{
  "a": {"prop": "value1"},
  "b": {"prop": "value2"}
}

Convert to:

[
  {"_id": "a", "prop": "value1"},
  {"_id": "b", "prop": "value2"}
]

Use Case 2:
Key-value pairs need to be expanded to objects. And it should work for an arbitrary number of attributes.
For example:
{
  "name": "Milk",
  "price": 1
}

Convert to:

[
  {"prop": "name", "value": "Milk"},
  {"prop": "price", "value": 1}
]

Proposal: Define pipe operator as function composition + define more syntax as functions

We can define the pipe operator as a function composition operator, where:

expr | f(args)

is syntactic sugar for:

f(args, expr)

This is exactly like the |> operator in Elixir, or the . operator in Haskell. For example, this:

* | count()

could be equivalent to:

count(*)

This explains why order can work as a pipe operator, since:

* | order(foo)

is simply:

order(foo, *)

Of course, right now order takes multiple arguments, which will require supporting "leading varargs", but it's not too bad.

Similarly, we can invent new functions for richer data manipulation, such as:

* | groupBy(_type) | summarize({"count": count(@)})

and so on.


As an aside, this opens up the opportunity to explain more of the GROQ syntax as being syntactic sugar for functions. For example,

*[a == 1]{a}

could be equivalent to something like:

allDocuments() | filter(a == 1) | project(object(["a", a]))

which of course is the same as:

project(object(["a", a]), filter(a == 1, allDocuments()))

In other words, this "reformulates" GROQ as a simpler "canonical form", with the syntactic sugar as a layer of conveniences on top of a rich set of pipeline-oriented function calls.

While the practical benefit of doing this, from a user’s perspective, is probably limited, it has an explanatory benefit.

references() should support arrays

Currently, references(…) only works on string values, but we want it to also support an array of strings. In this case it should return true if the document references any of the given IDs.
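
In other words (the array form being the proposed addition):

*[references("person_123")]                   // current: single ID
*[references(["person_123", "person_456"])]   // proposed: true if any ID is referenced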

Non-normative section about joins

It's not technically required to write a specification for joins (since they're just an application of the parent operator), but it's nice to have one to make the specification more understandable.
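
For example, a typical join written with the parent operator, assuming a hypothetical post/category schema:

*[_type == "category"]{
  title,
  "posts": *[_type == "post" && references(^._id)]{title}
}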

DateTime type and functions

Initial specification (usage sketch below):

  • Constructor: dateTime(String) -> DateTime. The string must be formatted as RFC3339.
  • Difference operator: DateTime - DateTime -> Number
  • Addition operator: DateTime + Number -> DateTime (commutative)
  • Ordering: order(dateTime(…))
  • now() function
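
Taken together, these would allow a query like the following sketch (assuming the Number operand of + and - is in seconds):

*[dateTime(publishedAt) > dateTime(now()) - 60*60*24*7]
  | order(dateTime(publishedAt) desc)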

Date formatting in GROQ queries

Just a thought: now that portable text can be returned as plain text, other formatting options could be very useful, like declaring the format of a date.

It could also save loading date formatting libraries client-side.

Leveraging date-fns would be a nice standard.

GROQ subquery not working as expected

Hello. I'm having trouble with some GROQ. I would like to query all the posts on my blog with a specific category.

*[_type == "post"
  && publishedAt < now() 
  && categories[]._ref in *[_type=="category" && title=="Seatr"]._id
 ]
|order(publishedAt desc){
  title,
  categories[]->{title}
}

This results in the following error:
Query error - No function in() defined for arguments (array, array)

The following line: categories[]._ref in *[_type=="category" && title=="Seatr"]._id works separately, but it does not work in the GROQ above. When pasting its return value in there, it does however work. Am I doing something wrong? I have followed these examples: https://www.sanity.io/docs/query-cheat-sheet#joins-e82ab8c0925b.

Thank you.

PHP Implementation

I have been experimenting with the JS implementation and it is nice to work with. Any plans on a PHP implementation?

Ordering by field returns incorrect order

Howdy! I'm not sure how else to describe this, perhaps it's a data problem I'm not aware of with my fields.

I've pared down my GROQ query to show only the relevant bits of information, but hopefully it's enough.

*[_type == "work"]{
  _id,
  title,
} | order(title asc)

And here's a screenshot of the Vision plugin running that query with the relevant bit of incorrect results:

[screenshot: Vision plugin results showing the incorrect ordering]

You can see that ALAXSXA | ALASKA is before Aan Yatx'u Saani: Noble People of the Land but that isn't how I'd expect things to be sorted. In fact, what is additionally odd is that ordering by Title in the Studio orders things as I would expect them to be ordered:

[screenshot: Studio ordering by Title, sorted as expected]

So what is happening? Where are things going wrong? Apologies if this isn't the correct forum for this question.

GROQ vs GraphQL?

Seems like an interesting tool, but how is it better than GraphQL and the more popular servers for it?

GROQ Search in a nested array doesn't work

It looks like Sanity doesn't support search in a nested array.

I've created GROQ sandbox to demonstrate the issue: https://groq.dev/01tEB1B3ZkvP4uqt2hQLOH. I can run this query and the result is correct.

When I'm querying the Sanity GROQ endpoint there is no result:

[screenshot: Sanity GROQ endpoint returning no result]

But, if I specify an index for the first array it works:

[screenshot: the same query returning a result when an explicit index is given]

*[
  _type == "ingredientsList" &&
  ("test" in ingredients[0].names[])
]{
  _id
}

Map expression

Currently there is no specification (or tests or implementation in sanity-io/groq-js) of the general map expression: foo | (bar).

This is because our internal integration test suite didn't have any tests for it (and I based the original parser on the integration tests).
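
As described, such a map expression might look like this (hypothetical schema):

*[_type == "person"] | (firstName + " " + lastName)
// => one full-name string per person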
