plone.restapi

Introduction

plone.restapi is a RESTful hypermedia API for Plone.

Documentation

https://plonerestapi.readthedocs.io/en/latest/

Getting started

A live demo of Plone 6 with the latest plone.restapi release is available at:

https://demo.plone.org/

An example GET request on the portal root is the following.

curl -i https://demo.plone.org/++api++ -H "Accept: application/json"

An example POST request to create a new document is the following.

curl -i -X POST https://demo.plone.org/++api++ \
    -H "Accept: application/json" \
    -H "Content-Type: application/json" \
    --data-raw '{"@type": "Document", "title": "My Document"}' \
    --user admin:admin

Note

You will need an API browser application to explore the API, and you will first need to obtain a basic authorization token. We recommend Postman, which makes it easier to obtain such a token.
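Alternatively, the following is a minimal sketch, assuming the @login endpoint described in the plone.restapi documentation, of obtaining a JWT token with the Python requests library instead of Postman (admin:admin are only the demo credentials):

import requests

# Obtain a JWT token from the @login endpoint.
resp = requests.post(
    "https://demo.plone.org/++api++/@login",
    headers={"Accept": "application/json", "Content-Type": "application/json"},
    json={"login": "admin", "password": "admin"},
)
token = resp.json()["token"]

# Use the token as a Bearer authorization header on subsequent requests.
requests.get(
    "https://demo.plone.org/++api++",
    headers={"Accept": "application/json", "Authorization": "Bearer " + token},
)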

Installation

Install plone.restapi by adding it to your buildout.

[buildout]

# ...

eggs =
    plone.restapi

…and then running bin/buildout.

Python / Plone Compatibility

plone.restapi 9 requires Python 3 and works with Plone 5.2 and Plone 6.x.

plone.restapi 8 entered maintenance mode with the release of plone.restapi 9 (September 2023). We do not plan to backport any features to this version, and we highly recommend upgrading to plone.restapi 9.

Python versions that have reached their end of life, including Python 3.6 and Python 3.7, are no longer supported.

Use plone.restapi 7 if you are running Python 2.7 or Plone versions below 5.2.

Contribute

Examples

plone.restapi has been used in production since its first alpha release. It can be seen in action at the following sites:

Support

If you are having issues, please let us know via the issue tracker.

If you require professional support, here is a list of Plone solution providers that contributed significantly to plone.restapi in the past.

License

The project is licensed under the GPLv2.


plone.restapi's Issues

Include widget info in JSON Schema produced by @types endpoint

The JSON Schema produced by @types/[portal_type] currently only includes the field type, but no information about which widget should be used. This information should be included in the JSON Schema as well.

A custom widget for a field can be specified using a plone.autoform directive. That information then gets stored on the schema interface using tagged values, which should be easy to access and serialize.

However, the default widget for a particular zope.schema field type is currently determined by z3c.form, which means we might need to replicate some of z3c.form's logic.
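For illustration, a minimal sketch (not the actual implementation) of reading those tagged values, assuming plone.autoform's WIDGETS_KEY; the widget hints found there could then be serialized alongside the field type:

from plone.autoform.interfaces import WIDGETS_KEY

def widget_hints(schema):
    """Return a mapping of field name -> widget hint set via plone.autoform
    directives (empty if no widgets were customized on this schema)."""
    return dict(schema.queryTaggedValue(WIDGETS_KEY) or {})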

/cc @ebrehault

Make the API "explorable".

Create a minimal JS app that makes the API explorable. First this could be a read-only browser. At a later point we could use actions and the schema to auto-generate forms (like Django REST Framework).

CSRF handling in angular frontends

After a short discussion on #plone, I want to raise this as an issue here. Currently my POST requests to the backend fail after upgrading to plone.protect >= 3.0 because I do not ship a token within the request.

How do you guys handle this?

CORS handling

If you have a frontend app, for example running on port 9000, and it requests the Plone backend on localhost:8080, you will get:

XMLHttpRequest cannot load http://localhost:8080/plone/@@json. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:9000' is therefore not allowed access.

This is a CORS issue. The frontend address has to be registered as an allowed resource in the Plone backend (Zope server).

IMHO a TTW CORS configuration is desirable. Perhaps it is a good idea to have an isolated package, plone.cors, which offers a configlet and a server response patch.
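For illustration, a minimal sketch of the headers such a mechanism would have to emit, assuming a Zope HTTPResponse-like object and leaving open where it would be hooked in (patch, subscriber, or a dedicated package):

# Headers a CORS-enabled backend would need to send for a frontend on
# http://localhost:9000; the values are examples, not a recommendation.
CORS_HEADERS = {
    "Access-Control-Allow-Origin": "http://localhost:9000",
    "Access-Control-Allow-Methods": "GET, POST, PATCH, DELETE, OPTIONS",
    "Access-Control-Allow-Headers": "Accept, Content-Type, Authorization",
}

def add_cors_headers(response):
    """Apply the CORS headers to a Zope HTTPResponse-like object."""
    for name, value in CORS_HEADERS.items():
        response.setHeader(name, value)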

In the Pyramid world, the Cornice REST framework has to deal with this issue and can simply be configured: http://cornice.readthedocs.org/en/latest/api.html
In the Django world, a so-called middleware can be registered and configured in settings.py: https://github.com/ottoyiu/django-cors-headers

Background:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
http://www.html5rocks.com/en/tutorials/cors/
http://enable-cors.org/

Implement PUT as well for updating content

Right now only PATCH is implemented, which takes a diff of the object's representation as input for which fields should be updated.

We also want to implement PUT, which would take the object's complete representation.

Discuss: What should happen if some fields are omitted by the client?
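To illustrate the intended difference, a hedged sketch using the Python requests library against a hypothetical document at /Plone/my-document (PATCH works today, PUT is the proposal):

import requests

auth = ("admin", "admin")
headers = {"Accept": "application/json", "Content-Type": "application/json"}

# PATCH (implemented): send only the fields that should change.
requests.patch(
    "http://localhost:8080/Plone/my-document",
    json={"title": "New title"},
    headers=headers, auth=auth,
)

# PUT (proposed): send the complete representation; what happens to omitted
# fields is exactly the open question above.
requests.put(
    "http://localhost:8080/Plone/my-document",
    json={"@type": "Document", "title": "New title", "description": ""},
    headers=headers, auth=auth,
)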

Batching

{
    "@type": "...",
    "total_items": 42,
    "items": [],
    "batch_actions": [
        {
            "name": "previous",
            "title": "Previous",
            "url": "http://..."
        },
        {
            "name": "next",
            "title": "Next",
            "url": "http://..."
        },
        {
            "name": "first",
            "title": "First",
            "url": "http://..."
        },
        {
            "name": "last",
            "title": "Last",
            "url": "http://..."
        }
    ]
}

provide dependency injection for schema serialization

The schema serialization is currently quite hard-coded for the standard fields.
With custom fields, it is only possible to provide serialization by copying the serialization function and adapting it.

I'd like to change the serialization mechanism so that I'm able to change how a certain field value is represented by registering an adapter, and to easily provide custom serializers for my custom fields.

Maybe something like that:

from zope.interface import Interface


class IFieldSerializer(Interface):
    """The field serializer multi-adapter serializes the field value into
    JSON-compatible Python data."""

    def __init__(context, request, field):
        """Adapt context, request and the field."""

    def serialize():
        """Return JSON-compatible Python data."""

Although the request is not really relevant here, I usually like multi-adapters to also adapt the request, so that I'm able to easily register a more specific adapter by using a (project) request layer as a discriminator. Without adapting the request, I'm more often forced to customize with an overrides.zcml, which I do not like. This is my personal taste and experience though 😉
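For illustration, a minimal sketch of how such a multi-adapter could be registered for a hypothetical custom field type IMyCustomField, following the IFieldSerializer interface sketched above:

from zope.component import adapter
from zope.interface import Interface, implementer

@implementer(IFieldSerializer)  # the interface sketched above
@adapter(Interface, Interface, IMyCustomField)  # context, request, field
class MyCustomFieldSerializer(object):

    def __init__(self, context, request, field):
        self.context = context
        self.request = request
        self.field = field

    def serialize(self):
        # Return JSON-compatible Python data for the stored field value.
        return str(self.field.get(self.context))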

Some questions:

  1. Is there already some serialization mechanism which produces JSON-compatible output (!) for dexterity?
  2. Where should this go? Into plone.restapi or into another package?
  3. Any comments about my approach?

Evaluate JSON Schema

  • IETF draft
  • Python: python-jsonschema (Laurence has a branch to work better with colander, default field)
  • JavaScript: jsonschema
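For reference, a minimal example of what validation with the Python jsonschema package looks like:

from jsonschema import validate

schema = {
    "type": "object",
    "properties": {"title": {"type": "string"}},
    "required": ["title"],
}

# Raises jsonschema.ValidationError if the document does not match the schema.
validate({"title": "My Document"}, schema)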

respect dexterity field permission

I think the API should check whether attributes are accessible (i.e. check for field permissions set via plone.autoform.directives.read_permission).

Error Handling

When the client sends the 'application/json' Accept header and an error occurs, the ZServer should respond with a JSON encoded error message.
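A hedged sketch of what such a JSON-encoded error body could look like (the exact fields are open for discussion):

{
    "type": "NotFound",
    "message": "Resource not found: http://localhost:8080/Plone/missing-page"
}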

Support for nested resources.

We want integrators to be able to build nested resources (e.g. '/university/1/faculty/3'). In addition, we might want to model certain Plone-specific resources in a nested fashion as well (e.g. '/document1/comment/23').

@types endpoint specs

Just to record what we discussed at the Barcelona Sprint with @tisto and @vangheem:

  • basic fields: provided by JSON Schema
  • fieldsets: provided by JSON Schema (nested schemas)
  • validation: partly provided by JSON Schema (simple validation)
  • actions: provided by JSON Schema (hyper-schema links)
  • read-only fields: provided by JSON Schema
  • vocabularies: provided by JSON Schema
  • widgets (just the id): NOT PROVIDED
  • help messages: NOT PROVIDED
  • placeholders: NOT PROVIDED
  • master-slave: NOT PROVIDED
  • subschema (= datagrids): ?

Implement Image API

{
    "contentType": "...",
    "size": ...,
    "data/download": "...",
    "filename": "...",
    "scale": {
        "mini": {
            "href": "...",
            "width": 400,
            "height": 200
        }
    }
}

plone.restapi breaks Plone 5.0.4 File bulk uploads

Having plone.restapi (plone.restapi 0.1, plone.rest 1.0a5) installed breaks the file upload functionality in a stock Plone 5 (at least in versions 5.0.3/4). Dragging files into the upload area and pressing upload yields the text "The file upload failed" just below the filename, and nothing happens.


I have installed plone.restapi with:

auto-checkout = plone.restapi

eggs =
    Plone
    Pillow
    plone.restapi

[sources]
plone.restapi = git https://github.com/plone/plone.restapi

and by running buildout -c develop.cfg.

Disabling the restapi egg restores file uploads.

The Plone error log shows two errors (after clearing "Ignored exception types"):

Traceback (innermost last):
  Module ZPublisher.Publish, line 127, in publish
  Module ZPublisher.BaseRequest, line 523, in traverse
  Module ZPublisher.HTTPResponse, line 727, in debugError
NotFound:

Site Error

An error was encountered while publishing this resource.

Debugging Notice

Zope has encountered a problem publishing your object.

Cannot locate object at: http://localhost:8080/testing/my-docs/fileUpload

Troubleshooting Suggestions

  • The URL may be incorrect.
  • The parameters passed to this resource may be incorrect.
  • A resource that this resource relies on may be encountering an error.

For more detailed information about the error, please refer to the error log.

If the error persists please contact the site maintainer. Thank you for your patience.

and:

Traceback (innermost last):
  Module ZPublisher.Publish, line 127, in publish
  Module ZPublisher.BaseRequest, line 508, in traverse
  Module ZPublisher.BaseRequest, line 332, in traverseName
  Module zope.traversing.namespace, line 112, in namespaceLookup
  Module Products.CMFPlone.traversal, line 39, in traverse
  Module plone.resource.traversal, line 27, in traverse
NotFound

Content negotiation in Plone via dynamic layers

Are there any existing plone.* packages that can provide content negotiation? This is needed to provide web services within the constraints set forth by HATEOAS (which, I noted, is finally included as a consideration, which satisfied some of my original concerns about Plone's approach to web services).

One way to get around this Plone-specific issue in one of my projects is to have a subscriber for the IBeforeTraverseEvent event on the site root. The subscriber then dynamically marks the request with layer(s) from matching rule(s) (i.e. the MIME type at hand) as set forth by any of the registered layer utilities. These layer utilities can then be provided by the implementation packages for any of the JSON web-services oriented content types to register their availability within Plone.

Anyway, this issue is mostly a question about what to do, as I noticed that the design documentation hasn't really addressed this point or any possible approaches to solving it yet, or whether an approach is already being considered on some other discussion list elsewhere.

Implement Basic Search API

{
    "@type": "PortalRoot",
    "global_actions": [
        {
            "name": "search",
            "title": "search",
            "href": "http://nohost/search",
            "schema": "our own search schema (schema.org)"
        }
    ]
}

Search endpoint - request format

I'd like to start off the discussion about implementing a service that handles searching / querying the Plone site, and hope that we can come to a decision on the fundamental points so I can start with an implementation.

I already took a stab at implementing search the way it is currently outlined in docs/source/collections/searching.rst. However, I've encountered a couple of issues with this approach. So I evaluated several different possible implementations, each with their own set of issues.

Common pros/cons:

GET:

  • Proper HTTP method for an idempotent read-only operation
  • If query string is used for parameters:
    • Query needs to be serialized in some way
    • URL length limitations (probably a non-issue in practice)
    • Typing of parameters is tricky

POST:

  • Wrong HTTP method / not RESTful
  • No way to build a URL that points to search results (relevant for batching)

Note that in all examples that use a query string, the parameters would need to be URL escaped.


1) GET with query params in query string (no type hints)

Example:

search?SearchableText=lorem&path.query=/Plone/my-folder&path.depth=2

This approach uses GET requests with parameters in query strings, without Zope-style type hints (:record, :list, etc.). Parameters that contain a dot in their key will be merged into corresponding dictionaries.

Advantages:

  • Simplified style for query string params - at least somewhat readable
  • Easy to construct for API consumers, at least for trivial cases

Disadvantages:

  • Arguments to index-specific query options can't be typed. This is the big one. Imagine a query like ?path.query=/Plone&path.depth=0. The 0 value for the depth option needs to be an integer when passed to catalog.searchResults(), otherwise the ExtendedPathIndex will fail with a TypeError. Given the query string syntax without the Zope type hints, there is no way for the API consumer to declare a type for these arguments; they'll all end up as strings in request.form.

    This means that, even ignoring validation for now, with this approach the server side needs to do some type conversions. This is tricky, because there's no programmatic way to determine the required types for these kinds of index-specific options. All you get is a listing of query_options.

    So we'd need to maintain a list of information objects that describe the required types of query options, not unlike what @vangheem has done in collective/elasticsearch/indexes.py. These would need to cover at least the index types present in a standard Plone, and support some kind of extension mechanism (adapter lookup) to deal with other index types.

    I've gotten an approach like this to work, but it's not pretty, and I wouldn't be looking forward to maintaining that.


2) GET with query params in query string (plus Zope type hints)

This would be the exact same style of query string that Plone currently uses for its @@search view.

Example:

search?SearchableText=lorem&path.query:record=/Plone/my-folder&path.depth:record:int=2

Advantages:

  • Typing is covered

Disadvantages

  • Rather ugly, hard to read
  • Those Zope type hints won't mean anything to anybody outside the Plone/Zope community, they're obscure and cumbersome
  • They're hardly documented. The only actual documentation I could find for them is on old.zope.org. How would we document this and explain it to developers that need to write API consumers?
  • Error handling for the Zope type hints is pretty limited. A TypeError with the value (but no key) is usually all you get

3) GET with a URL-encoded JSON doc as a single query string param

Example:

search?q={"path": {"query": "/plone/folder", "depth": 0}}

(Obviously would need to be URL encoded)

For the query, there's a single query string parameter q that contains a URL-encoded JSON document that, when deserialized to a Python dictionary, contains a query that can be handed off to catalog.searchResults().

Advantages

  • Typing because of JSON
  • Relatively easy to produce for consumers
  • Easy to document

Disadvantages

  • Ugliest of them all (basically unreadable when URL encoded)
  • Two nested escaping contexts: JSON and URL encoding
  • Somewhat uncommon I think

4) POST with query as JSON document in body

Example Request:

POST /Plone/search

{
  "path": {
    "query": "\/plone\/folder",
    "depth": 0
  }
}

Advantages:

  • Ability to retain some typing information because of the use of JSON (but not all: datetime)
  • Much more readable than query string parameters, and more in line with the style of the rest API
  • Easier to produce (and less error-prone) for consumers (for anything other than trivial queries). Prepare a Python dictionary with a query that fits searchResults() and send it on its way with requests.post(url, json=query).

Disadvantages:

  • If we consider a search query a read-only and idempotent operation, POST is the wrong HTTP method. This might
    • affect cacheability of responses
    • trigger form resubmission prevention in browsers
  • Cannot generate meaningful prev/next batching links for POST requests

5) GET with query as JSON document in body

I briefly considered this as an approach (with a POST alternative), but the fact is, HTTP clients can't deal with GET + body very well, and ZPublisher can't either. So I think this is already out of the question, just mentioning it here for completeness.

Advantages:

  • Ability to retain some typing information because of the use of JSON (but not all: datetime)

Disadvantages:

  • Limited client support
  • Not doable in ZPublisher without some trickery
  • In violation of the HTTP spec, according to this post.

@tisto @bloodbare
So, moving forward, do you guys see any other options that I haven't covered? Should we swallow the red pill and continue implementing search as outlined in docs/source/collections/searching.rst, accepting that we need to maintain a list of index descriptions? Or do you see one of the mentioned approaches as a viable alternative?

Framing

  • object (minimal, default)
  • folder_listing
  • folder_full_view
  • ...

Race conditions with batching

(I'm just dumping this here to not have the conversation in #45 get too convoluted - for now I see this as low to medium priority).

Once we implement some sort of batching / pagination, there's some inherent race conditions that can occur:

Imagine a search query. Because fetching a batch page happens in a separate request, the extent and order of the resultset for a given query can change between retrieving batch pages if another client modified the DB in between. This can lead to either duplicate entries or entries that got dropped between batch pages when a consumer simply iterates over all entries in all batch pages.

ElasticSearch addresses this in a rather elegant way with its Scroll API:

  • The first request just creates a server side, persistent search context that has a certain time to live (TTL).
  • That request is answered with a response that basically just contains a _scroll_id that uniquely identifies the resultset created by the query at that point in time
  • To fetch the results, the client issues subsequent requests to fetch a particular batch page from that search context by referencing it via _scroll_id. On each of those requests the TTL for the search context is reset, so it is kept alive for another $TTL minutes.

I could see a similar concept working for us in order to provide stable resultsets for batched sequences, particularly search results.


I'm just brainstorming here, but maybe something along these lines could work:

POST /Plone/search

{"portal_type": "Document"}

This would create a server side, persistent search context. In terms of search results, this could maybe mean persisting a list of brain RIDs [1] for the resultset that matched the query at that point in time.

Returns a response with a scroll_id:

{"scroll_id": "f40dba5"}

The client then can retrieve result batches via GET requests:

GET /Plone/search?scroll_id=f40dba5&page=1&per_page=20

The link to the first batch page can also be provided in a hypermedia fashion as part of the response to the POST that creates the search context.

Search contexts that exceeded their TTL would be destroyed with the next POST. In addition, they could be actively cleared by the client using DELETE or PURGE.

Compared to a simple, stateless GET implementation, I see these pros/cons:

Advantages:

  • Stable resultsets
  • Appropriate use of HTTP methods (IMHO)
  • Allows for complex queries by using JSON in POST body
  • Still allows for hypermedia batching links because those requests are GET with query string params

Disadvantages:

  • Stateful - REST / HATEOAS?
  • Requires at least two requests for even the most trivial search
  • DB write for search / query operations
  • The returned metadata from the brains would still be up to date (not frozen in time). This could still lead to some surprising results if an object that matched at the time of query is included in the resultset, but has been changed later, and now according to its metadata wouldn't match the query any more.

[1] Is there a way to get the brain RIDs from a catalog resultset (LazyMap) without destroying its laziness? If not, that would at least partly defeat the purpose of batching 😢

It should be possible to list the content type of items in a container/collection

Current behaviour

If I do an HTTP GET on a collection (which I have called "hey") with two items, it returns information about the two items in the "member" array. However, for each member I only get something like this:

 "member": [
    {
      "@id": "https://mysite.example.com/hello",
      "description": "is it me you're looking for (upspeak)",
      "title": "hello"
    },
    {
      "@id": "https://mysite.example.com/front-page",
      "description": "Congratulations! You have successfully installed Plone.",
      "title": "Welcome to Plone"
    }
  ],

Suggested (improved) behaviour

I would want the content type added as @type to the metadata.

  "member": [
    {
      "@id": "https://mysite.example.com/hello",
     "@type":"News Item",
      "description": "is it me you're looking for (upspeak)",
      "title": "hello"
    },
    {
      "@id": "https://mysite.example.com/front-page",
      "@type":"Document",
      "description": "Congratulations! You have successfully installed Plone.",
      "title": "Welcome to Plone"
    }
  ],

Full JSON output (for reference)

For completeness I've included the full JSON output below

{
  "@context": "http://www.w3.org/ns/hydra/context.jsonld",
  "@id": "https://mysite.example.com/hey",
  "@type": "Collection",
  "UID": "c650b8b9a05343abaa65cade44150c1f",
  "allow_discussion": null,
  "contributors": [],
  "created": "2016-04-17T14:23:01+00:00",
  "creators": [
    "manager"
  ],
  "customViewFields": [
    "Title",
    "Creator",
    "Type",
    "ModificationDate"
  ],
  "description": "",
  "effective": "2016-04-17T14:23:00",
  "exclude_from_nav": false,
  "expires": null,
  "id": "hey",
  "item_count": 30,
  "language": "en-gb",
  "limit": 1000,
  "member": [
    {
      "@id": "https://mysite.example.com/hello",
      "description": "is it me you're looking for (upspeak)",
      "title": "hello"
    },
    {
      "@id": "https://mysite.example.com/front-page",
      "description": "Congratulations! You have successfully installed Plone.",
      "title": "Welcome to Plone"
    }
  ],
  "modified": "2016-04-17T14:51:22+00:00",
  "parent": {
    "@id": "https://mysite.example.com",
    "description": "",
    "title": "Site"
  },
  "query": [
    {
      "i": "portal_type",
      "o": "plone.app.querystring.operation.selection.any",
      "v": [
        "Document"
      ]
    }
  ],
  "relatedItems": [],
  "rights": null,
  "sort_on": null,
  "sort_reversed": false,
  "subjects": [],
  "text": {
    "content-type": "text/html",
    "data": "<p>here<strong> is</strong> some <em>tect</em> for you</p>",
    "encoding": "utf-8"
  },
  "title": "hey"
}

Support for relation fields

We should support relation fields.
I think we should have a general short form for representing content, which we can use when representing relations but also for children, parents, and more.
How do we do that?
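A hedged sketch of what such a short form could look like, mirroring the summary style already used for member and parent entries plus the @type discussed in the collection issue above:

{
    "@id": "http://localhost:8080/Plone/related-page",
    "@type": "Document",
    "title": "Related page",
    "description": ""
}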

Check field permissions for GET request

Currently the script doesn't check if the user is permitted to view the fields on a page (custom permission). Is something like this implemented or on the todo list already?
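For illustration, a minimal sketch (not the actual implementation) of filtering fields by their read permission, assuming plone.autoform's READ_PERMISSIONS_KEY tagged value:

from AccessControl import getSecurityManager
from plone.autoform.interfaces import READ_PERMISSIONS_KEY
from zope.component import queryUtility
from zope.security.interfaces import IPermission


def readable_fields(obj, schema, fields):
    """Yield only the (name, field) pairs the current user may read."""
    sm = getSecurityManager()
    read_permissions = schema.queryTaggedValue(READ_PERMISSIONS_KEY) or {}
    for name, field in fields:
        permission_id = read_permissions.get(name)
        if permission_id is None:
            yield name, field
            continue
        # The tagged value stores a Zope 3 style permission id; map it to the
        # Zope 2 permission title before asking the security manager.
        permission = queryUtility(IPermission, name=permission_id)
        if permission is not None and sm.checkPermission(permission.title, obj):
            yield name, field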

Question about iterating dexterity fields

I'm currently changing the serialization as discussed in #38.

While looking at the current implementation, I was wondering why iterating over dexterity fields is implemented the way it is.
See get_object_schema:

from plone.behavior.interfaces import IBehaviorAssignable
from zope.interface import providedBy
from zope.schema import getFields


def get_object_schema(obj):

    # Iterate over all interfaces that are provided by the object and filter
    # out all attributes that start with '_' or 'manage'.
    for iface in providedBy(obj).flattened():
        for name, field in getFields(iface).items():
            no_underscore_method = not name.startswith('_')
            no_manage_method = not name.startswith('manage')
            if no_underscore_method and no_manage_method:
                yield name, field

    # Iterate over all behaviors that are assigned to the object.
    assignable = IBehaviorAssignable(obj, None)
    if assignable:
        for behavior in assignable.enumerateBehaviors():
            for name, field in getFields(behavior.interface).items():
                yield name, field

I would have used plone.app.dexterity.utils.iterSchemata with zope.schema.getFieldsInOrder (or zope.schema.getFields).
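For comparison, a minimal sketch of that alternative, assuming iterSchemata is importable from plone.dexterity.utils (the exact location may differ by version):

from plone.dexterity.utils import iterSchemata
from zope.schema import getFieldsInOrder

def iter_fields(obj):
    # iterSchemata yields the content type schema plus all behavior schemas.
    for schema in iterSchemata(obj):
        for name, field in getFieldsInOrder(schema):
            yield name, field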

@tisto, what is the advantage of this approach compared to using the dexterity utility iterSchemata?

First alpha release

As discussed at the Barcelona Sprint, there's going to be a first alpha release of plone.restapi soon.

These are the current blockers that are left:

  • JWT based authentication (PR #109)
  • Batching (PR #99)
  • File / Image download (issues #9 / #10, needs discussion)

Deferred for now:

  • "Framing" the default GET response differently, with fields in their own nested dictionary (issue #6, needs discussion)

For reference, this is the state the Kanban board was in when we left the sprint yesterday:
(screenshot: kanban_barcelona)

@tisto @buchi @sneridagh did I forget anything?

Hypermedia

Make links "discoverable" for a possible client. This will allow a loose coupling between client and server (e.g. we can change the URLs without breaking the client).

A JSON-LD schema allows us to define which parts of the JSON document are actually links. A client can then expand the JSON-LD document and follow links (with something like has_link, follow_link, take_action).
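A hedged sketch of a response with discoverable links; the exact vocabulary (hydra context, link relation names) is still open:

{
    "@context": "http://www.w3.org/ns/hydra/context.jsonld",
    "@id": "http://localhost:8080/Plone/folder",
    "@type": "Folder",
    "parent": {"@id": "http://localhost:8080/Plone"},
    "member": [
        {"@id": "http://localhost:8080/Plone/folder/front-page"}
    ]
}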

Support for custom media types.

It should be possible to configure custom media types in addition to using 'application/json'.

Not sure if this makes sense without a custom serializer.

Types endpoint that returns JSON Schema

Discussion notes from REST API sprint in Barcelona. (@vangheem @ebrehault @tisto)

JSON Schema currently supports:

  • Base fields
  • Fieldsets (nested schema)
  • Validation (basic validation)
  • Actions (hyper schema links)
  • Read-only fields

In addition we need:

  • Widgets (just provide the name/id for the widget that should be used)
  • Help messages
  • Placeholders
  • Vocabularies (values & labels)
  • Master-slave fields
  • Subschema

First implementation:

If the client sends a GET request to the "@types" endpoint with the name of the content type and an HTTP header "Accept: application/schema+json", the server will respond with a JSON Schema document.
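For illustration, a minimal sketch of the described request using the Python requests library (URL and credentials are examples):

import requests

resp = requests.get(
    "http://localhost:8080/Plone/@types/Document",
    headers={"Accept": "application/schema+json"},
    auth=("admin", "admin"),
)
json_schema = resp.json()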

Model parent/children relationship.

There are different ways to model the parent/child relationship for our JSON objects. Possible attribute names:

  • parent / children (pro: parent is already used in Zope/Plone, con: possible name collisions with existing attributes)
  • __parent__ / __children__ (pro: parent is already used in Zope/Plone, no name collisions, con: ugly?)
  • container / items (cons: too generic? not used in Zope/Plone)
  • container / contents (cons: not used in Zope/Plone)
  • collection / member (from hydra) (pro: hydra is somehow a standard, cons: collection has another meaning in Plone, member is very generic)
