OpenFeature Specification

OpenFeature is an open specification that provides a vendor-agnostic, community-driven API for feature flagging that works with your favorite feature flag management tool or in-house solution.

This repository describes the requirements and expectations for OpenFeature.

Design Principles

The OpenFeature specification must be designed with:

  • compatibility with existing feature flag offerings.
  • simple, understandable APIs.
  • vendor agnosticism.
  • language agnosticism.
  • low/no dependency.
  • extensibility.

SDKs and Client Libraries

The project aims to provide a unified API and SDK for feature flag evaluation across popular technology stacks. The OpenFeature SDK provides a mechanism for interfacing with an external evaluation engine in a vendor-agnostic way; it does not itself handle the flag evaluation logic.

An up-to-date SDK compatibility overview can be found here.

Tooling

This specification complies with RFC 2119 and seeks to conform to the W3C QA Framework Guidelines.

In accordance with this, some basic tooling (donated graciously by Diego Hurtado) has been employed to parse the specification and output a JSON structure of concise requirements, highlighting the particular RFC 2119 verb in question.

To parse the specification, run `make`. Please review the generated JSON files, which will appear as siblings to any of the markdown files in the /specification folder.

Style Guide

  • Use code blocks for examples.
    • Code blocks should be pseudocode, not any particular language, but should be vaguely "Java-esque".
  • Use conditional requirements for requirements that only apply in particular situations, such as particular languages or runtimes.
  • Use "sentence case" enclosed in ticks (`) when identifying entities outside of code blocks (ie: evaluation details instead of EvaluationDetails).
  • Do not place line breaks into sentences, keep sentences to a single line for easier review.
  • String literals appearing outside of code blocks should be enclosed in both ticks (`) and double-quotes (") (ie: "PARSE_ERROR").
  • Use "Title Case" for all titles.
  • Use the imperative mood and passive voice.


Issues

Evaluation Context Propagation

Summary


Evaluation context may be used to conditionally control the value returned from a flag evaluation. You could, for example, enable a feature for all users with a specific email domain. Using evaluation context propagation, application developers can set evaluation context where it's convenient (e.g., an auth service) and have it persist for the length of the request.

Objective


Define a specification for evaluation context propagation without introducing any third party dependencies in the OpenFeature SDK.
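For illustration, here's a rough sketch of how request-scoped propagation might look in Node.js using the built-in AsyncLocalStorage. This is only a sketch under that assumption; withTransactionContext and resolveContext are hypothetical names, not part of any spec.

import { AsyncLocalStorage } from 'node:async_hooks';

// Hypothetical sketch: a request-scoped store for evaluation context.
type EvaluationContext = Record<string, unknown>;
const store = new AsyncLocalStorage<EvaluationContext>();

// Called where it's convenient, e.g. from auth middleware at the start of a request.
function withTransactionContext(ctx: EvaluationContext, fn: () => void): void {
  store.run(ctx, fn);
}

// The SDK merges the propagated context into the context passed at the call site.
function resolveContext(invocationCtx: EvaluationContext): EvaluationContext {
  return { ...(store.getStore() ?? {}), ...invocationCtx };
}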


Optional Targeting Key Handling

Problem

The spec defines an optional targeting key property in the evaluation context. This can be problematic for some providers that require a consistent value representing a target (e.g., a user). This is typically handled by requiring a targeting key to be provided during a flag evaluation. OpenFeature allows context to be merged, and an application developer may choose to set the targeting key elsewhere in the application (e.g., auth middleware). That means OpenFeature can't guarantee that a targeting key is available when a flag is being evaluated.

Proposal

The issue can be mitigated in a few ways:

  • Providers that require a targeting key should clearly state that requirement in their documentation.
  • OpenFeature should promote supplying a targeting key as a best practice.
  • Providers would throw a targeting key missing error when the targeting key is absent. The OpenFeature SDK would catch this error and return the default value, and would also expose this information in the details response and hooks (see the sketch below).
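A rough TypeScript sketch of that third option; the error class and the resolveBooleanValue signature are assumptions for illustration, not spec'd:

type EvaluationContext = { targetingKey?: string; [key: string]: unknown };

// Hypothetical error class; the code string matches the draft error codes.
class TargetingKeyMissingError extends Error {
  readonly code = 'TARGETING_KEY_MISSING';
}

// Stand-in for a registered provider that requires a targeting key.
declare const provider: {
  resolveBooleanValue(key: string, def: boolean, ctx: EvaluationContext): boolean;
};

function getBooleanValue(key: string, defaultValue: boolean, ctx: EvaluationContext): boolean {
  try {
    // the provider throws TargetingKeyMissingError when ctx.targetingKey is absent
    return provider.resolveBooleanValue(key, defaultValue, ctx);
  } catch (err) {
    // surface the error code via evaluation details and error hooks,
    // then return the application-supplied default rather than throwing
    return defaultValue;
  }
}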

Finally hook errors?

What happens when a finally hook errors?

Here's some example evaluation logic:

try {
  run_before_hooks()
  get_value()
  run_after_hooks()
} catch() {
  run_error_hooks()
} finally {
  run_finally_hooks()
}

If the finally hooks throw errors, they won't be caught, which breaks requirement 1.19 from the spec.

Otherwise, our code looks like this (which is very janky).

try {
  try {
    run_before_hooks()
    get_value()
    run_after_hooks()
  } catch() {
    run_error_hooks()
  } finally {
    run_finally_hooks()
  }
} finally {
  run_error_hooks() # again?
}
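One way to square this with 1.19, sketched below in TypeScript, is to contain failures inside the finally stage itself so nothing can escape to the caller. The hook-runner functions are placeholders from the snippets above:

declare function runBeforeHooks(): void;
declare function getValue(): void;
declare function runAfterHooks(): void;
declare function runErrorHooks(): void;
declare function runFinallyHooks(): void;
declare function log(err: unknown): void;

function evaluate(): void {
  try {
    runBeforeHooks();
    getValue();
    runAfterHooks();
  } catch (err) {
    runErrorHooks(); // failures here could be logged and swallowed the same way
  } finally {
    try {
      runFinallyHooks();
    } catch (err) {
      // swallow: a throwing finally hook must not abnormally terminate evaluation
      log(err);
    }
  }
}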

Define Version Mismatch Behavior

Problem

The global singleton used in OpenFeature provides a number of benefits. However, it may introduce some challenges in the future. For example, if a library is using an older version of OpenFeature it may not be compatible with the provider interface.

Objective

Define the expected behavior in the spec when this situation occurs.

1.4.7 & 1.4.8: reason vs error code

Now that we've changed the wording around reason vs. error code, things are a bit foggy for me.

How are they different? Without the error code being an enum, I don't really see how 1.4.7 differs from 1.4.8.


Requirement 1.4.7

In cases of abnormal execution, the evaluation details structure's error code field MUST contain a string identifying that an error occurred during flag evaluation, and the nature of the error.

Some example error codes include: "TARGETING_KEY_MISSING", "PROVIDER_NOT_READY", "FLAG_NOT_FOUND", "PARSE_ERROR", "TYPE_MISMATCH", or "GENERAL".

Requirement 1.4.8

In cases of abnormal execution (network failure, unhandled error, etc) the reason field in the evaluation details SHOULD indicate an error.

backend only?

Hi, this is a really interesting initiative! I just briefly read through the specs, and it seems to me that this is primarily targeted at backend apps, since it relies on sidecar apps, CRDs, etc.
How about frontends that might run directly off of a CDN coupled with S3 or similar static storage? Or mobile apps?

It would be good if the spec at least mentioned how "non-backend" apps would integrate with this.

Expected lifecycle of flags

We think of flags in two ways.

First, temporary flags. These help us ramp traffic onto a feature slowly and safely using targeting rules. This is the main use-case. These flags happen to have an expected end date and we plan to nag people if the flag is still active after that date.

Second, there are "long-lived flags", which are expected to follow a kill-switch use-case. As an example, we have a latency-sensitive homepage. If the "recommended products" module is causing us to breach our latency SLAs, we want to toggle the flag and disable that module from rendering, so we can get the site back under the latency budget.

From an API standpoint, they are identical.

Is this how others are thinking about flags?

Migrate research

Migrate research to the new research repo. Update any references found in the readme.

Previously created clients should always use currently registered provider

I was making a change to the Node SDK to remove the provider accessor from the API (#84, #93) and in moving things around, I briefly changed behavior in a pretty nefarious way. Basically, I created a situation wherein clients maintained the provider that was registered at the time of their creation:

OpenFeature.setProvider(providerX)
clientA = OpenFeature.getClient();
OpenFeature.setProvider(providerY)
clientB = OpenFeature.getClient();

clientA.getBooleanValue() // this was still using providerX
clientB.getBooleanValue() // this was using providerY

I think this is not good ™️ . It means authors would have to "keep track" of the creation time of certain clients, and causes a lot of other surprising knock-on effects. I don't think it technically violates our singleton requirement (the API object is still a singleton) but it feels like it violates it practically...

Should we mention the expectation that all clients use the currently registered provider in the spec? Is such a requirement already implied? Can we just add some verbiage to the singleton requirement?

@justinabrahms @davejohnston @beeme1mr

Typing of flag values

Most providers also allow for flags to return some sort of value as well as a boolean.

How these values are typed can be problematic, as you have to be aware of the consistency of types and coercion across both strongly and weakly typed languages. I feel that how flag values are typed should be part of the spec.

There is a similar issue around user properties. Many providers allow user properties/metadata to provide context to the flag engine. Again, how these values are typed needs to be considered.

Provider Specification

The provider spec defines the API a provider must conform to in order to perform flag evaluation in OpenFeature.

[Proposal] Remove Get Provider from the Flag Evaluation API

Proposal

The flag evaluation API requirement 1.3 states that a function to retrieve the provider implementation is required. This was originally defined in the spec to allow hooks to access the provider. However, this causes confusion because the application developer is not meant to interact with the provider directly.

This issue could be solved by replacing Get Provider with a provider name getter.


CI Processes for spec

I'm interested in creating some CI processes for the spec repo. Some functionality I think would be helpful:

I think we can do all of this with some simple packages, enhancements to the Makefile, and GitHub Actions.

Serialization Exceptions with Eval Context?

How should we handle serialization/deserialization exceptions around adding structures to an evaluation context?

I'm thinking: cyclical json, un-serializable type provided, etc.

I'm thinking as a user, I wouldn't want it to silently not add my context. That leaves throwing an exception? Don't love it, tbh.

How are we thinking about PII?

We should be careful with the Flag Evaluation Context so that we don't shove tons of PII in there for cases where it's not needed.

Define OTel Semantic Convention for Feature Flagging

OpenTelemetry has a concept of semantic conventions. This is a form of metadata that can be applied to a span. The attribute names should comply with their spec.

Here's an initial draft of properties that may be valuable.

  • feature_flag.flag_key (string, required): The unique identifier of the feature flag. Example: show-new-logo
  • feature_flag.provider.name (string, optional): The name of the provider performing the flag evaluation. Example: Flag Manager
  • feature_flag.provider.management_url (string, optional): The URL used to manage the feature flag in the provider. Example: http://localhost:8080/flags/
  • feature_flag.evaluated.variant (string, see [1]): The name associated with the evaluated value. Example: reverse
  • feature_flag.evaluated.value (string, see [2]): A string representation of the evaluated value. Example: true

[1]: A variant should be used if it is available. If the variant is present, feature_flag.evaluated.value should be omitted.

[2]: The value should only be used if the variant is not available. How the value is represented as a string should be determined by the implementer.
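For illustration, here's how these draft attributes might be attached to a span using the OpenTelemetry JS API. The attribute names are the draft above, not an adopted convention:

import { trace } from '@opentelemetry/api';

const tracer = trace.getTracer('openfeature-example');

// Record a flag evaluation using the draft attribute names above.
const span = tracer.startSpan('feature flag evaluation');
span.setAttribute('feature_flag.flag_key', 'show-new-logo');
span.setAttribute('feature_flag.provider.name', 'Flag Manager');
// per note [1]: prefer the variant when available and omit the raw value
span.setAttribute('feature_flag.evaluated.variant', 'reverse');
span.end();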

Additional span attributes that already exist that may be useful in OpenFeature.

Events Specification

Summary

Many vendors support an event-based API that allows developers to subscribe to events. This can be used to detect when the SDK is ready to perform flag evaluation or when a flag configuration changes.

Events

The following events should be considered when defining the spec.

  • PROVIDER_STATUS: Emits events around the lifecycle of a provider, such as "ready". This could be used to ensure that the provider has started and is ready to perform flag evaluation.
  • FLAG_CONFIG_UPDATE: Emits events when a flag configuration changes. This could be used to detect that a flag configuration has changed and instantiate a new object.
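A sketch of what a subscription API shaped around these events might look like; the client.on signature is an assumption for illustration:

// Hypothetical subscription API; event names mirror the list above.
type EventName = 'PROVIDER_STATUS' | 'FLAG_CONFIG_UPDATE';

declare const client: {
  on(event: EventName, handler: (payload: unknown) => void): void;
};

client.on('PROVIDER_STATUS', (payload) => {
  // e.g. hold flag evaluations until the provider reports "ready"
});

client.on('FLAG_CONFIG_UPDATE', (payload) => {
  // e.g. rebuild any objects derived from the changed flag configuration
});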


flag type in definition

I think it would be better to define the flag type in the flag definition, like this:

"myBoolFlag": {
      "state": "ENABLED",
      "type": "BOOLEAN",
      "variants": {
        "on": true,
        "off": false
      },
      "defaultVariant": "on"
    },

This would let us abstract the flagd endpoints

"/flags/{flag-key}/resolve/boolean"
"/flags/{flag-key}/resolve/number"
"/flags/{flag-key}/resolve/object"
"/flags/{flag-key}/resolve/string"

to one endpoint

"/flags/{flag-key}/resolve/"

Hook ordering & state

We need to have a concept of hook ordering. In our case, we have a client hook, EmitMetrics. Sometimes devs don't want to emit metrics, so we want to selectively override it.

client.registerHook(EmitMetrics())

client.getBooleanValue('key', defaultValue) # This emits metrics

client.getBooleanValue('key', defaultValue, FlagOptions(DONT_EMIT_METRICS)) # This doesn't emit metrics.

We're thinking that we'll build this like:

class EmitMetrics:
  def after(ctx, details):
    if not ctx.state['dont-emit-metrics']:
      emitMetrics()

class DONT_EMIT_METRICS:
  def before(ctx):
    ctx.state['dont-emit-metrics'] = True

This means we'll need precise control around when hooks run (with associated APIs to insert at specific ordering) as well as a change to HookContext to hold some state.

Unable to cleanly dispose of a provider instance

The specification doesn't appear to define a way to cleanly dispose of provider instances. Some provider implementations use vendor SDKs, and these SDKs might register event listeners, create watchers, or start timers. These need to be cleanly disposed of when the client is dismissed.

I think it would be useful to add an async shutdown function to the client to allow shutting down and stopping any potentially leaky objects.
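A minimal sketch of that suggestion; the shutdown name and optional-method shape are the suggestion above, nothing spec'd:

// Hypothetical lifecycle method a provider could implement.
interface Provider {
  resolveBooleanValue(key: string, def: boolean): boolean;
  // release event listeners, watchers, timers, connections, etc.
  shutdown?(): Promise<void>;
}

async function disposeClient(provider: Provider): Promise<void> {
  await provider.shutdown?.(); // no-op when the provider has nothing to clean up
}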

Event Tracking API Specification

Summary

More advanced feature flagging use cases often require capturing events that can be related to a particular flag evaluation. That allows teams to more easily track whether a new feature had the intended impact.

This concept does not exist in all vendors/providers. However, we could potentially introduce the concept of a tracking provider. The provider could either be a feature flag vendor that supports this concept or another event tracking tool (e.g. Segment).

A general event tracking API is out of scope.


Metadata standardization

Metadata can be used to make runtime routing decisions. It's important to document some common properties and how they should be used, while maintaining flexibility for end users.

The following examples could be used as a reference:

Adopting elements of OpenTelemetry's semantic conventions could be beneficial for future interoperability.

Hook Specification

We need to draft a specification for hooks. It must define things such as:

  • basic hook concepts
  • hook parameters and return values
  • how hooks can be attached (globally on API singleton, on client, and on a particular flag evaluation)
  • the order of hook execution
  • the behavior of hooks with respect to error conditions and abnormal execution
  • storing state within a hook
  • whether or not it's appropriate for hooks to mutate state on the hook context

And possibly more.

Scope

Define initial scope

The purpose of this issue is to document the current status of the project and propose a high-level roadmap for achieving general availability.

Spec

The spec is an important aspect of OpenFeature. We need to reach a level of stability as quickly as possible so that SDKs, provider implementations, and hooks can be created.

Research

Server-side research

Research is nearly complete for the server-side. The outcome from that research can be seen below:

Areas that may still require additional research:

  • Request-based context propagation using something like continuation-local storage or thread-local storage. A proof of concept was added to the playground.

Client-side research

Client-side feature flagging will be a requirement for a GA release. Conceptually, they're similar to server-side feature flags but they introduce additional complexities that need to be considered. However, the majority of our effort so far has been on server-side feature flagging.

Web

Research for feature flagging client-side on the web is currently in progress.

Mobile

Research for feature flagging client-side on mobile has not been started.

Sections of the spec

Flag Evaluation

Status: Experimental
Release target: Alpha

The flag evaluation section is available in an experimental state here.

Hooks

Status: Experimental
Release target: Alpha

The hook section is available in an experimental state here.

There are still some ongoing discussions including:

Provider

Status: Experimental
Release target: Alpha

The provider section is available in an experimental state here. It's important that vendors keep a close eye on this part of the spec in order to ensure OpenFeature compatibility with their product.

Evaluation Context

Status: In Progress
Release target: Alpha

The evaluation context is currently in progress. This part of the spec simply provides semantic meaning to some properties on the context object. This will provide consistency between different providers. For example, one vendor may have a userId property and another may have userKey. OpenFeature needs to provide enough structure to providers in order to map context properly.

There's an open PR to add evaluation context to the spec. That PR can be found here.
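To illustrate the mapping problem, here's a hypothetical provider translating standard context fields into its own user model. The function and field names (toVendorUser, userKey) are made up for this example:

type EvaluationContext = { targetingKey?: string; email?: string; [key: string]: unknown };

// A provider translating standard context fields into its own user model.
function toVendorUser(ctx: EvaluationContext): { userKey: string; attributes: Record<string, unknown> } {
  const { targetingKey, ...attributes } = ctx;
  return {
    // one vendor may call this userKey while another expects userId
    userKey: targetingKey ?? 'anonymous',
    attributes,
  };
}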

Event Subscription

Status: Proposal
Release target: Unknown

Event subscription is at the proposal stage. Many vendors provide an event-based API that allows developers to subscribe to events like onReady or flagUpdate.

Bulk evaluations

Status: Proposal
Release target: Unknown

Bulk evaluation is at the proposal stage. This may be useful on the client, or when a developer wants to include the results of multiple flag evaluations on a request.

Tracking Events

Status: Proposal
Release target: Unknown

Tracking events are at the proposal stage. Events can be associated with flag evaluations, which is particularly useful for A/B testing. This concept does not exist in all providers. However, we could potentially introduce the concept of a tracking provider. The provider could either be a provider that supports this concept or another event tracking tool, e.g. Segment.


SDKs

Server-side

Java

Status: In Progress
Release target: Alpha

The Java SDK is under active development. You can see the repo here.

Node.js

Status: In Progress
Release target: Alpha

The Node.js SDK is under active development. You can see the repo here.

Golang

Status: In Progress
Release target: Beta

The Golang SDK is under active development. You can see the repo here.

Python

Status: In Progress
Release target: Beta

The Python SDK is under active development. You can see the repo here.

.NET

Status: Proposal
Release target: Beta

Development hasn't started on the .NET SDK, but it is expected to begin soon.

Client-side

Web

Status: Proposal
Release target: Beta

The client-side web SDK depends on the spec research listed above. It's unclear at this time how similar this SDK will be to the NodeJS SDK. Code reuse may be an option.

Mobile

Android

Status: Proposal
Release target: Unknown

The client-side Android SDK depends on the spec research listed above. Its priority still needs to be determined.

iOS

Status: Proposal
Release target: Unknown

The client-side iOS SDK depends on the spec research listed above. Its priority still needs to be determined.

React Native

Status: Proposal
Release target: Unknown

The client-side React Native SDK depends on the spec research listed above. Its priority still needs to be determined.


Providers and Plugins

Providers and plugins will need to be developed for each of the supported languages. A contrib repo for each language should be created. It can act as a central location for community-managed providers and plugins.

Maintainers and workflows will need to be defined to ensure that only high quality providers and plugins are included.


Documentation

Documentation will need to be created and available online. Docusaurus seems to be the popular choice.


Cloud Native

The cloud native operator and provider should be moved into a separate section. Perhaps a dedicated Special Interest Group (SIG) should be defined for this initiative. The goal is to provide a spec compliant provider that utilizes Kubernetes constructs. Progress can be monitored here.

Add requirements/clarification specifying context can be provided globally, at client creation, and flag evaluation

Similarly to hooks, I think we want to allow setting global, client, and evaluation context. We will also need to describe how they are merged. This should be a fairly minor change to the spec, unless we want to get fancy with how to merge the contexts. We can mention it in the Evaluation Context spec, and link to the new requirements in the Flag Evaluation spec.

I'm proposing a simple merge where the invocation context overwrites properties in the client context, which overwrites properties in the global context:

    // merge global, client, and evaluation context
    const mergedContext = {
      ...globalContext,
      ...clientContext,
      ...invocationContext,
    };

Another option would be to define a merge function so that app-authors could have full control of how the merge works, perhaps with the above as a default.
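A sketch of that option, with the spread-based merge above as the default; setContextMerger is a hypothetical name for the registration point:

type EvaluationContext = Record<string, unknown>;
type ContextMerger = (
  globalCtx: EvaluationContext,
  clientCtx: EvaluationContext,
  invocationCtx: EvaluationContext
) => EvaluationContext;

// Default: invocation wins over client, which wins over global (as above).
const defaultMerger: ContextMerger = (globalCtx, clientCtx, invocationCtx) => ({
  ...globalCtx,
  ...clientCtx,
  ...invocationCtx,
});

// Hypothetical registration point for an app-author-supplied merge function.
declare const OpenFeature: { setContextMerger(merge: ContextMerger): void };
OpenFeature.setContextMerger(defaultMerger);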

Flag Evaluation Lifecycle: An Extensibility Proposal

There have been a few questions and thoughts around integrations and extensibility, especially around telemetry, visibility, custom parsing, validation, and logging. OpenTelemetry support, for instance, has been a goal from the start. To maintain flexibility and keep our API dependencies as low as possible (preferably none), I propose a model for extensibility that defines a "life-cycle" for feature flag evaluation, and exposes life-cycle "hooks" for the purposes of extensibility and customization.

"Hooks" might be classes/functions that implement a SDK-defined interface, perform arbitrary logic at well-defined points during flag evaluation, which can be "registered" by the first party feature flag API consumer. For example, Open Telemetry implementation might look something like this, in psuedocode:

class OpenTelemetryHook implements OpenFeatureHook {

private Tracer tracer;

...

  before(Context context, FlagId flagId): Context {
    // optionally handle actions to do before the flag value is retrieved in this life-cycle hook, such as initializing traces/logging.
    // this might not be the API we want, we want to consider the risk associated with mutating the context
    context.span = tracer.startSpan(flagId);
    return context; 
  }

  after(Context context, FlagId flagId, FlagValue flagValue): FlagValue {
    // optionally handle actions to do after the flag value is retrieved in this life-cycle hook, such as parsing, validation, etc.
    // this could be a good place for custom parsing of the flag response, validation, etc
    context.span.end();
    return flagValue;
  }

  finally (Context context, FlagId flagId) {
    // optionally do some cleanup task regardless of success/failure of flag evaluation.
  }

  error (Context context, FlagId flagId, Error err) {
    // optionally handle errors with flag resolution in this life-cycle hook.
  }

}

Then, in the OpenFeature API, the first party application author would register this hook (note: they did not WRITE the OpenTelemetryHook, they are just consuming it and adding it to the flag evaluation life-cycle):

openFeatureClient.registerHook([ new OpenTelemetryHook() ]);

When the flag is evaluated, the API internals might run the defined hooks within the context of that flag evaluation:

class OpenFeatureClient {

  private Provider provider;

  getFlagValue(FlagId flagId, Context context) {

    // perform "before" life-cycle hooks, merging each hook's returned context
    Context mergedContext = this.hooks.reduce((accumulated, hook) => merge(accumulated, hook.before(accumulated, flagId)), context);

    // call the provider abstraction to actually get the feature flag value
    var flagValue = this.provider.getFlagValue(flagId, mergedContext);

    // perform "after" life-cycle hooks, threading the value through each hook
    flagValue = this.hooks.reduce((accumulated, hook) => hook.after(mergedContext, flagId, accumulated), flagValue);

    return flagValue;
  }
}

I think that an optional configuration object with optional before and after callbacks, having the same method signatures, could also be added to feature flag evaluation calls, so that we could implement hooks on a per-flag basis as well:

getFlagValue('my-flag', context, { before: (Context context, FlagId flagId) => { // do stuff } })

Thread safety, object mutability, sync vs. async behavior, and error handling, as well as the overall API, should all be carefully considered here.

Add architecture summary

There are some architecture sketches that could be added to the repository. It would be nice to do so.

"reasoning" concept for a flag evaluation

When we evaluate a flag, we hand over a pile of context and a key name. In response, we get a value. For many reasons (logging, debugging, etc), we might like to know why we got that value. The spec should provide an API that will allow us to introspect this stuff.

Key info, in my mind, is:

  • was there an error? What kind?
  • is this the default I provided in my API call or something from the evaluator implementation?

Maybe also:

  • which rule led to this value being chosen?
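A sketch of a details structure that could answer those questions; the field names are illustrative, not settled:

// Illustrative shape for an introspectable evaluation result.
interface EvaluationDetails<T> {
  value: T;
  reason?: string;     // e.g. "TARGETING_MATCH", "DEFAULT", "ERROR"
  errorCode?: string;  // set when reason indicates an error
  variant?: string;    // which variant/rule produced the value, if known
}

declare const client: {
  getBooleanDetails(key: string, def: boolean): EvaluationDetails<boolean>;
};

const details = client.getBooleanDetails('new-ui', false);
if (details.reason === 'ERROR') {
  // details.value is the default we passed in, not a provider decision
}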

2.8, curious how we expect folks to communicate back?

So I'm writing a test case for 2.8. I'm not entirely clear exactly what we're trying to accomplish with it.

So, ProviderEvaluation already has a mechanism for providing an error status. It sounds like the requirement is saying: "If things go bad, you can throw exceptions, but the exceptions must also somehow communicate a string code which the framework will pick up." Is that the expectation? I think this means we will have some custom exceptions that providers can throw with an error code, but we can also handle any sort of exception.

    @Specification(number="2.8", text="In cases of abnormal execution, the provider MUST indicate an " +
            "error using the idioms of the implementation language, with an associated error code having possible " +
            "values PROVIDER_NOT_READY, FLAG_NOT_FOUND, PARSE_ERROR, TYPE_MISMATCH, or GENERAL.")
    @Disabled("I don't think we expect the provider to do all the exception catching.. right?")
    @Test void error_populates_error_code() {
        AlwaysBrokenProvider broken = new AlwaysBrokenProvider();
        ProviderEvaluation<Boolean> result = broken.getBooleanEvaluation("key", false, new EvaluationContext(), FlagEvaluationOptions.builder().build());
        assertEquals(ErrorCode.GENERAL, result.getErrorCode());
    }

Typed API proposal

A few conversations and issues (#22), as well as the survey of some existing vendor SDKs (https://github.com/open-feature/spec/blob/main/research/api-comparision.md) suggest that typesafe functions/methods are likely a requirement for the OpenFeature SDK. Below I have a very simple proposal, with some in-line considerations, for some typesafe interfaces for getting flag values.

Pseudocode:

interface OpenFeatureClient {

  /**
  * Get a boolean flag value.
  *
  * Note: As discussed, we may want to expose a "friendlier" boolean flag method,
  * called something like "isEnabled()", which would be a shortcut for this method.
  */
  boolean getBooleanValue(string flagId, boolean defaultValue, Context? context);

  /**
  * Get a string flag value.
  */
  string getStringValue(String flagId, string defaultValue, Context? context);

  /**
  * Get a number flag value.
  */
  number getNumberValue(String flagId, number defaultValue, Context? context);

  /**
  * Get a object (JSON) flag value.
  *
  * Note: Generics support is not universal, so we may need other
  * mechanisms to support some degree of type-safety in languages without generics support.
  *
  * Note: Parsing is an interesting question here. Some SDKs simply return stringified JSON,
  * while others return parsed "map-like" objects in the case of JSON.
  * "Hooks" (https://github.com/open-feature/spec/issues/25), configured globally or per flag,
  * may provide a means to configure custom parsing of objects, but we may not want to do any parsing by default at all.
  * This question also has implications on dependencies. To keep the SDK as lightweight as possible, we likely wouldn't
  * want to implement parsing there, but instead require it to be done in the provider implementation.
  */
  T getObjectValue<T>(String flagId, T defaultValue, Context? context);
}

Open questions:

  • If the type returned from the feature flag backend is not of the expected type, should the default be returned?
  • Should parsing of JSON be implemented? If so, should it be implemented in the provider? If not, would we stringify JSON values for feature flag SDKs that provide already-parsed objects (as in the case of Harness)?

Feel free to poke any holes in this proposal you can, or alternatively, add your own proposal. I'll work on implementing something resulting from our discussion here in the SDK-research repo.

Two spec-compliant SDKs

There are unknown unknowns within the spec. We'll know more through SDK implementation. As such, a prerequisite for a true alpha release would be two spec-compliant SDKs. I think Java is a reasonable proxy for the typed side of things. Python or JavaScript would be a good second candidate.

  • Spec-compliant Java SDK
  • Spec-compliant non-Java SDK (Python? JavaScript?)

[Proposal] Spec Versioning

Problem

The spec in its current form is not versioned. There's a concept of experimental and alpha statuses but it's impossible to say an SDK is spec compliant because the spec is a moving target.

Solution

Adopt semantic versioning, use git tags, and perform a GitHub release. Since we're in an alpha state, it's important that the major version is 0, which means anything MAY change. While breaking changes are still possible, they should be clearly communicated and documented.

SDK version numbers do not need to match the spec version they conform to. However, SDKs should clearly state which version of the spec they comply with.

Questions

  • How can we automate the change log?
  • Do we want to automate versioning (e.g., based on commit messages)?

Evaluation Context Specification

We need to draft a specification for the Evaluation Context. It must define things such as:

  • standard fields on the context (these must be defined so they can be predictably mapped to standard fields in some SDKs, such as ip-address, email, http-method, etc)
  • where context attributes can be provided (at the API level (for static data such as runtime or application name), per client, or as a parameter for flag evaluation)
  • how context provided at different levels mentioned above will be merged
  • implicit propagation mechanisms (continuations, thread local storage, etc)

And possibly more.

Related discussions:

Provide split between specification and implementation on Kubernetes

It would be nice to have a split between the specification and an implementation on Kubernetes.

This would allow third-party feature management services such as LaunchDarkly and others to conform to the specification.

This would be similar to how CloudEvents has approached it.

Ambiguity around hook context and evaluation context.

I'm attempting to implement merging before() hook values.

The logic is:

...
new_hook_ctx = run_before_hooks(hook_ctx, hook_hints)
eval_flag(new_hook_ctx.get_evaluation_context())
run_after_hooks(new_hook_ctx, hook_hints)
...

run_before_hooks will have an odd implementation because we want the before hooks to be able to alter the EvaluationContext. I don't think a before hook should be altering the other things in the HookCtx (flag key, default value, etc).

hook_ctx;
for hook in before_hooks:
  new_hook_ctx = hook(hook_ctx, hook_hints)
  hook_ctx.merge(new_hook_ctx)
return hook_ctx

Should the BeforeHook be returning a HookContext or the EvaluationContext? I'm thinking the latter, given that's the only thing I think we want to change.

That would make the code:

...
new_eval_context = run_before_hooks(hook_ctx, hook_hints)
new_hook_context = hook_ctx.with_context(new_eval_context)
eval_flag(new_hook_context)
run_after_hooks(new_hook_ctx, hook_hints)
...

run_before_hooks will have an odd implementation:

eval_ctx;
for hook in before_hooks:
  eval_ctx = hook(hook_ctx.with(eval_ctx), hook_hints)

return eval_ctx

This has implications for https://github.com/open-feature/spec/blob/main/specification/flag-evaluation/hooks.md#requirement-32
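For concreteness, here's a sketch of the second option in TypeScript, where before hooks return only an evaluation context and the SDK merges the results. The merge semantics (shallow spread, later hooks win) are an assumption:

type EvaluationContext = Record<string, unknown>;

interface HookContext {
  flagKey: string;
  defaultValue: unknown;
  evaluationContext: EvaluationContext;
}

interface Hook {
  before?(hookCtx: HookContext, hints: Record<string, unknown>): EvaluationContext | void;
}

// Each before hook may return an evaluation context, which is merged into the
// running context; the rest of the hook context stays read-only.
function runBeforeHooks(
  hooks: Hook[],
  hookCtx: HookContext,
  hints: Record<string, unknown>
): EvaluationContext {
  let evalCtx = hookCtx.evaluationContext;
  for (const hook of hooks) {
    const returned = hook.before?.({ ...hookCtx, evaluationContext: evalCtx }, hints);
    if (returned) {
      evalCtx = { ...evalCtx, ...returned };
    }
  }
  return evalCtx;
}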

Don't allow exceptions during client creation

I just noticed that the current draft spec has:

Requirement 1.19
No methods, functions, or operations on the client should ever throw exceptions, or otherwise abnormally terminate. Flag evaluation calls must always return the default value in the event of abnormal execution. Exceptions include functions or methods for the purposes of configuration or setup.

I understand the motivation for never throwing exceptions, but I think we're failing to meet the spirit of this for client-side JS if we allow exceptions when creating a client, because of the unique lifecycle of browser-based apps. For a client-side app the client is often created just before an evaluation is done, so an exception thrown during client creation is pretty much just as disruptive as an exception when you call a method.

Because of this, I would propose that the spec not allow exceptions during client creation.

Side-by-side API Comparison

It's difficult to quickly do a side-by-side comparison of all the existing vendors' APIs. Create a new page that makes this easier.

add flag key to the Glossary

We reference flag key in a few parts of the spec, and it's caused some confusion/discussion in the past. It seems wise to nail down exactly what this is.
