rfcs's Issues

Include all layer metadata in final image

Meta

  • Name: Include all layer metadata in final image
  • Start Date: 2020-03-12
  • CNB Pull Request:
  • CNB Issue:
  • Supersedes: N/A

Summary

Currently, the only layer metadata that is included in a final image
is the layer metadata for that image's launch layers. It would benefit
image debugging and build reproducibility to include all layer
metadata in the final image.

Motivation

At the highest level, non-launch layer metadata can tell us about the
non-launch layers that participated in an image build; this is
information we might need in order to reproduce the build. Buildpacks
users might also want to know about such non-launch layers when they
discover a problem in an image or in a build, since some such problems
will depend on non-launch layers that participated in the build.

Having implemented this feature, buildpack users and authors will have
more information with which to debug and reproduce builds.

Consider a build that can only be reproduced if the state of the cache
is reproduced (the @ notation means some git repo at some SHA):

  1. I build some-app @ 000000.
  2. I build some-app @ 111111.
  3. I notice a problem in the build for some-app @ 111111.
  4. I build some-app @ 111111 without a cache, but cannot reproduce
    the problem.
  5. I look at the layer metadata for the image associated with
    some-app @ 111111 and find that all of the layers used in its
    weird build were generated using some-app @ 000000.
  6. I build some-app @ 000000 without a cache (to "seed" the cache).
  7. I build some-app @ 111111 with the cache from some-app @ 000000
    and reproduce the issue.

This example conflates the related issue of knowing what version of an
application a particular layer arose from; however, the first step
towards surfacing that information could be including all layer
metadata in an image. Given that the layer metadata will be preserved,
the buildpack author, the buildpack user, or the platform could add
the application's version (e.g. a git SHA) to the layer metadata.

Along similar lines, non-launch layer metadata could include the
SHA256 ID of the non-launch layers that participated in a build, which
could allow users to know precisely which layers participated in a
build.

Conceivably, associating a non-launch layer with an application
version could have a first-class representation. However, as is, the
buildpacks system cannot tell a user all of the layers that were used
in a build. This feature would enable users to address this need, and
unanticipated and specialized needs around non-launch layer metadata,
for themselves.

What it is

The contents of <layer>.toml files will be reflected in resulting
images' labels.

Consider the following <layer>.toml for some layer that participates
in the build of an image sample-app:latest:

build = false
cache = false
launch = false

[metadata]
  [metadata.application.git]
    sha = "b4ddf00d"

As is, the image sample-app:latest would not contain this metadata
in its labels, or in its filesystem, because the metadata does not
contain the top-level .launch = true key/value.

Having implemented this feature, the key
.metadata.application.git.sha, and its value, would exist in
sample-app:latest's labels.
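For illustration only, the label might then include an entry like the following; the nesting (buildpack key, then layer name, with metadata under data) mirrors the existing io.buildpacks.lifecycle.metadata layout but is an assumption, not something this RFC prescribes:

```json
{
  "buildpacks": [
    {
      "key": "example/buildpack",
      "layers": {
        "some-layer": {
          "build": false,
          "cache": false,
          "launch": false,
          "data": {
            "application": {
              "git": {"sha": "b4ddf00d"}
            }
          }
        }
      }
    }
  ]
}
```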

How it Works

Layer metadata for all layers should be included in the existing
io.buildpacks.lifecycle.metadata label, as this is already the place
where launch-layer metadata is stored. Doing so would be mostly
backwards compatible; the layer metadata for all layers is a superset
of the layer metadata for launch layers.

Drawbacks

  • It's not perfectly backwards compatible; users who expect to be able
    to use the value of the io.buildpacks.lifecycle.metadata label to
    enumerate an image's launch layers will get different results;
    having implemented this feature, that enumeration would reveal all
    of the layers that participated in that image's build. Still, it
    should be simple to refactor any such code to further filter by
    launch = true.
  • Image labels will grow; is this an abuse of image labels?
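To illustrate the mitigation mentioned in the first drawback: a consumer that used the label to enumerate launch layers can keep working by filtering on the preserved launch flag. The label excerpt below is hypothetical (the field names are assumptions, not taken from the spec):

```python
import json

# Hypothetical excerpt of an io.buildpacks.lifecycle.metadata label after
# this change: every layer appears, each keeping its launch/build/cache flags.
label = json.loads("""
{
  "buildpacks": [
    {
      "key": "example/buildpack",
      "layers": {
        "deps":    {"launch": false, "build": true,  "cache": true},
        "runtime": {"launch": true,  "build": false, "cache": false}
      }
    }
  ]
}
""")

def launch_layers(label):
    # Recover the pre-change view by filtering on the launch flag.
    return [
        (bp["key"], name)
        for bp in label["buildpacks"]
        for name, layer in bp["layers"].items()
        if layer.get("launch")
    ]
```

Calling launch_layers(label) on the excerpt above returns only the runtime layer, matching what pre-change consumers would have seen.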

Alternatives

One alternative might be to leave the <layer>.toml files on disk; I
don't think that this would result in any technical conflict, but it
might be confusing to see the <layer>.toml files with no other trace
of those layers.

Even if this feature isn't implemented, some users will want to know
this information, and might be inclined to implement this feature for
themselves; homegrown implementations might take the form of a
buildpack that extracts metadata from all layers, or an additional
step that directly inspects layers after a build.

Unresolved Questions

  • Should Buildpacks have an opinion about what an application version
    is, and should that information be represented first-class instead
    of in arbitrary metadata?
  • Should non-launch layer IDs be exposed at all? If so, should it have
    a first-class representation?

Default value of $PORT

It's a common source of confusion that the containers we produce don't start, because they're waiting for a port to bind to. Should we have a way for there to be a default value of $PORT?

Basically, we always require folks to invoke their container with some version of -e PORT=3000. It would be nice if containers would just turn on without this needing to be set, and for the default to be something sane.

My vote is for PORT=8080
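One existing mechanism worth noting: the buildpack spec's <name>.default launch env files set a variable only when it is not already set, so a buildpack (or a builder owner) could ship a fallback today. A minimal bin/build sketch, where the port layer name and the 8080 value are illustrative and the [types] table assumes buildpack API 0.6+:

```shell
#!/usr/bin/env bash
set -euo pipefail

# A buildpack's bin/build receives the layers directory as its first
# positional argument; the fallback path is only so this sketch runs
# standalone.
layers_dir="${1:-/tmp/demo-layers}"

# PORT.default is applied at launch only when PORT is not already set,
# so `docker run -e PORT=3000 ...` still wins.
mkdir -p "${layers_dir}/port/env.launch"
printf '8080' > "${layers_dir}/port/env.launch/PORT.default"

# Mark the layer as a launch layer so its env files reach the run container.
cat > "${layers_dir}/port.toml" <<'EOF'
[types]
launch = true
EOF
```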

RFC: Buildpack Distribution Specification [WIP]

NOTE: Work in progress

Meta

  • Name: Buildpack Distribution Specification
  • Start Date: 2019-04-12
  • CNB Pull Requests: (spec PR to follow)
  • CNB Issues: (lifecycle issues to follow)

Motivation

This proposal enables both decentralized, manual distribution of buildpacks from Docker registries as well as automatic distribution of buildpacks from a centralized buildpack registry. It allows individual buildpacks to be distributed via a Docker registry, and it makes dynamic assembly of builders on a Docker registry efficient. It provides a developer-friendly interface that abstracts away complex buildpack configurations.

What it is

This RFC proposes an official way to distribute buildpacks that conform to the CNB buildpack v3 specification. Changes would consist of a new Buildpack Distribution specification, modifications to the lifecycle's builder, and modifications to the pack CLI.

It affects all personas that interact with buildpacks.

How it Works

CNB Package Format

A CNB package may exist as an OCI image on an image registry, an OCI image in a Docker daemon, or a .cnb file.

A .cnb file is an uncompressed tar archive containing an OCI image. Its file name should end in .cnb.

Each FS layer blob in the image must contain a single file or single populated directory in one of the following formats:

Buildpack

/cnb/buildpack/<buildpack ID>/<buildpack version>/

Default Buildpack Version Symlink

/cnb/buildpack/<buildpack ID>/default -> <buildpack version>/

Note that this symlink replaces the existing latest symlink.

Order

/cnb/order/<order ID>.toml

In comparison to the order.toml format, the <order ID>.toml format that replaces it additionally accepts an order ID in the id field of any given entry in [[groups.buildpacks]], provided that the version and optional fields are not specified.

To determine which buildpacks run, the detector processes the default.toml file (or <order ID>.toml file if manually specified) the same as the current order.toml format, with each order ID expanded as such:

Where:

  • O and P are objects containing order IDs
  • A through H are objects containing buildpack IDs
  • L and M are group (row) labels

Given:


O =
\begin{bmatrix}
A_L, & B_L \\
C, & D
\end{bmatrix}


P =
\begin{bmatrix}
E_M, & F_M \\
G, & H
\end{bmatrix}

We propose:


\begin{bmatrix}
E, & O, & F
\end{bmatrix} = 
\begin{bmatrix}
E_L, & A_L, & B_L, & F_L \\
E, & C, & D, & F \\
\end{bmatrix}


\begin{bmatrix}
O, & P
\end{bmatrix} = 
\begin{bmatrix}
A_{LM}, & B_{LM}, & E_{LM}, & F_{LM} \\
A_{L}, & B_{L}, & G_{L}, & H_{L} \\
C_{M}, & D_{M}, & E_{M}, & F_{M} \\
C, & D, & G, & H \\
\end{bmatrix}

(@hone: this is my interpretation of the logic you suggested at summit. Let me know if I misinterpreted.)
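Setting the row labels aside (they only track which rows came from which source group), the expansion above amounts to a cartesian product over each entry's rows. A hypothetical sketch — expand and the orders mapping are illustrative names, not lifecycle code:

```python
from itertools import product

def expand(entries, orders):
    """Expand one group's entries into concrete buildpack rows.

    entries: a list of IDs; an ID found in `orders` references an order,
    anything else is a buildpack ID. orders: order ID -> list of groups,
    each group itself a list of entries.
    """
    per_entry = []
    for entry in entries:
        if entry in orders:
            # An order contributes every row of every group it contains.
            per_entry.append(
                [row for group in orders[entry] for row in expand(group, orders)]
            )
        else:
            per_entry.append([[entry]])
    # One output row per combination of choices, spliced in place.
    return [[bp for part in combo for bp in part] for combo in product(*per_entry)]
```

With O = [[A, B], [C, D]] and P = [[E, F], [G, H]], expand(["E", "O", "F"], ...) yields the two rows [E, A, B, F] and [E, C, D, F], and expand(["O", "P"], ...) yields the four rows shown in the second matrix.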

Default Order Symlink

/cnb/order/default.toml -> <order ID>.toml

CNB Package Metadata

All supported stacks must be provided in the OCI image metadata.

Label: io.buildpacks.cnb.metadata
JSON:
{
  "stacks": [
    {
      "id": "io.buildpacks.stacks.bionic",
      "mixins": ["mysql"]
    }
  ]
}

For a CNB package to be valid, each buildpack.toml must support all listed stacks. Each stack ID should only be present once, and the mixins list should enumerate all the mixins required for that stack by all included buildpacks.
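The metadata rules above can be sketched as a merge: keep only stacks supported by every buildpack, and union the mixins per stack. The function name and input shape are illustrative assumptions:

```python
def package_stacks(buildpack_stacks):
    """buildpack_stacks: one dict per buildpack, stack ID -> list of mixins.

    A stack is listed only if every included buildpack supports it, and its
    mixins are the union of the mixins required for it across buildpacks.
    """
    shared = set(buildpack_stacks[0])
    for bp in buildpack_stacks[1:]:
        shared &= set(bp)  # drop stacks any buildpack lacks
    return {
        "stacks": [
            {
                "id": stack_id,
                "mixins": sorted(set().union(*(bp[stack_id] for bp in buildpack_stacks))),
            }
            for stack_id in sorted(shared)
        ]
    }
```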

User Interface

App Developer

pack build should accept a list of buildpacks and group IDs via the --buildpack flag. Additionally, it should accept a list of labels via the --label flag. Labels filter such that all specified labels must be present on a group for it to be considered for detection.

Buildpack Developer

pack create-cnb will package a selection of:

  • all buildpacks from selected CNB packages
  • all <order ID>.toml files from selected CNB packages
  • additional buildpacks
  • additional <order ID>.toml files that reference any of the above

into a .cnb file, OCI image in a registry, or OCI image in a Docker daemon. [TBD: format for cnb.toml file that specifies the location of these artifacts, default symlinks, default.toml symlinks, and stacks]

Instead of builder.toml, pack create-builder will generate a builder image from a CNB package and stack ID.

Unanswered Questions

  • Should order definitions be versioned?

Drawbacks

Adding multi-group order definitions together is complex.

Alternatives

No other RFCs are proposed.

[RFC #000] - Multi-arch support in CNB Ecosystem

The purpose of this issue is to track the work related to supporting multi-architecture images in the CNB ecosystem. It is a complex project, so we will divide it into phases; in each phase we will try to deliver something valuable for the community and gather feedback.

Phase 1

We started with a project developed during the LFX 2023 term; the goal for this phase is to release a set of commands for handling Image Indexes in pack.

Update May 2024

All the required PRs were merged! We expect the feature to be included in pack 0.34.0.

Update April 2024

Unfortunately we haven't yet merged the PRs developed during the LFX 2023 term because they were missing some test coverage. We are working on adding tests and polishing the code a bit, and the expectation is to include these features in pack 0.34.0 or 0.35.0. Once this code is merged into main, we will keep working on Phase 2.

RFC

Pack

Imgutil

Documentation:

Phase 2

After implementing the primitives to handle an Image Index, we will focus on packaging builders and buildpacks for different os/arch combinations.

The focus of this phase will be:

  • pack buildpack package multi-arch support
  • pack builder create multi-arch support

Update May 2024

We decided to push back our 0.34.0 release date a little, but include this feature in that version!

Update April 2024

This phase requires the code from phase 1 to be available, but during KubeCon EU 2024 we presented a demo with a PoC implementation of the RFC.

RFC

Pack

Documentation:

Phase 3

In this phase, we assume the existence of Builders and Buildpacks addressable by Image Index

We want to focus on enabling pack build to output application images for different os/arch combinations.

  • pack build multi-arch support

Update March 2024

We started working with an LFX mentorship to develop a proof of concept that runs pack build using BuildKit behind the scenes.

RFC

  • TODO - create an RFC to discuss this option

Pack

Documentation:

Maintainers: when closing this issue as completed, submit a PR to update the Status of the RFC to Implemented.

[RFC #0125] - Export App Image and Cache Image in Parallel

RFC #0125 - Export App Image and Cache Image in Parallel

Spec:

Lifecycle:

Pack:

Documentation:

Maintainers: when closing this issue as completed, submit a PR to update the Status of the RFC to Implemented.

Idea: provide in-container env var interpolation for direct processes

From RFC 0093 (see https://github.com/buildpacks/rfcs/pull/259/files for the full content):

Using Environment Variables in a Process

One upside to our previous execution strategy was that it enabled users to include environment variable references in arguments that were later evaluated in the container. To preserve this feature we can instead adopt the Kubernetes strategy for environment variable interpolation. If a buildpack or user includes $(<env>) in the command or args and <env> is the name of an environment variable set in the launch environment, the launcher will replace this string with the value of the environment variable after applying buildpack-provided env modifications and before launching the process.
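The replacement rule described above can be sketched as a small function. The names and the exact token grammar are assumptions, and the $$ escape handling from the Kubernetes syntax is deliberately omitted:

```python
import re

# Matches $(NAME) references, per the Kubernetes-style syntax the RFC adopts.
_ENV_REF = re.compile(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)")

def interpolate(args, env):
    """Replace $(NAME) with env[NAME]; unknown references pass through."""
    return [
        _ENV_REF.sub(lambda m: env.get(m.group(1), m.group(0)), arg)
        for arg in args
    ]
```

For example, interpolate(["--urls", "http://0.0.0.0:$(PORT)"], {"PORT": "8080"}) yields ["--urls", "http://0.0.0.0:8080"], while a reference to an unset name is left as a literal string.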

How it Works

Buildpack-provided process types

Example 1 - A Shell Process

The Paketo .NET Execute Buildpack generates shell processes similar to the following:

[[processes]]
type = "web"
command = "dotnet my-app.dll --urls http://0.0.0.0:${PORT:-8080}"
direct = false

NOTE: the buildpack API used by this buildpack (0.5) predates the introduction of default.

Using the new API this process could look like:

[[processes]]
type = "web"
command = ["dotnet", "my-app.dll", "--urls", "http://0.0.0.0:$(PORT)"] # the default value of PORT would need to be provided in a layer
default = true

Things to note:

  • In the above example I have eliminated the dependency on Bash, rather than explicitly adding it to the command, because it is likely unnecessary.
  • If the buildpack authors believed that --urls should be overridable, they could move the last two arguments from command to args.

User Provided Processes

Currently, a user can specify a custom process dynamically at runtime by setting the container entrypoint to launcher directly (rather than using a symlink to the launcher), then providing a custom cmd. This custom command is executed directly if cmd is an array and the first element is --. Otherwise the custom command is assumed to be a shell process. In the interest of removing complexity we should do away with the special -- argument and execute all custom commands directly.

Example 1 - A Direct process

The following direct commands:

docker run --entrypoint launcher <image> -- env
docker run --entrypoint launcher <image> -- echo hello '$WORLD' 

will become the following, using the new platform API

docker run --entrypoint launcher <image> env
docker run --entrypoint launcher <image> echo hello '$(WORLD)'

Previously, in the second command in this example, $WORLD would not have been interpolated because this is a direct process; instead the output would include the literal string $WORLD. With the changes proposed, $(WORLD) will now be evaluated, even though the process is direct.

Example 2 - A Shell Process

The following custom shell commands:

docker run --entrypoint launcher <image> echo hello '${WORLD}'
docker run --entrypoint launcher <image> echo hello '${WORLD:-world}'

will become the following, using the new platform API

docker run --entrypoint launcher <image> echo hello '$(WORLD)'
docker run --entrypoint launcher <image> bash -c 'echo hello "${WORLD:-world}"'

The first command in this example needed to adopt the new environment variable syntax to behave as expected with the new API. Previously it was necessary to use a shell process in order to evaluate ${WORLD}. Now, the shell is unnecessary.

If the user wishes, they may explicitly invoke a shell and let Bash handle the interpolation, which provides a richer feature set.

Example 4 - A Script Process in Kubernetes

Because we have adopted the Kubernetes environment variable notation here, users may need to escape some references in their PodSpec in specific situations. This is necessary only if all of the following are true:

  • The user is providing a command or args which contain an environment variable reference.
  • The variable is explicitly initialized in the env section of the PodSpec.
  • The user wishes for the variable to be interpolated after buildpack-provided env modifications have been applied.
apiVersion: v1
kind: Pod
metadata:
  name: env-example
spec:
  containers:
  - name: env-print-demo
    image: bash
    env:
    - name: IN_CONTAINER_1
      value: "k8s-val"
    - name: IN_K8S
      value: "val2"
    command: ["bash", "-c", "echo $$(IN_CONTAINER_1) $(IN_CONTAINER_2) $(IN_K8S) ${IN_BASH}"]

In the above example the environment variables will be interpolated as follows:

  • $IN_CONTAINER_1 - Interpolated by the launcher after buildpack-provided modifications (e.g. k8s-val:buildpack-appended-val); the $$ escape prevents Kubernetes from interpolating it first.
  • $IN_CONTAINER_2 - Interpolated by the launcher after buildpack-provided modifications. No escaping is required here because $IN_CONTAINER_2 is not set in env.
  • $IN_K8S - Interpolated by Kubernetes before the container runs. Buildpack-provided modifications will not affect the resulting value.
  • $IN_BASH - Interpolated by Bash.

[RFC #0110] - Deprecate APIs

RFC #0110 - Deprecate APIs

Spec:

  • buildpack/0.X and platform/0.X branches of github.com/buildpacks/spec marked deprecated

Lifecycle:

  • Older APIs marked deprecated in lifecycle version 0.16
  • Older APIs removed in lifecycle version 0.18

Idea: CA Certs

Enable support for CA certs at buildtime and runtime.

Idea: Buildpacks should be able to contribute the equivalent of Dockerfile's EXPOSE to the created image

Currently, GitLab has a feature that automatically starts created images and attempts to connect to them to verify health after build. The port chosen is the first value in ExposedPorts, which would traditionally be contributed via the Dockerfile EXPOSE keyword. We don't have this keyword, and there are all sorts of problems with it to start with (strongest among them is that applications are not bound to the ports in that list), but it would help users of buildpacks on GitLab if a buildpack could contribute a hint.

See discussion with @bhuism for more context.

Idea: Add support for cosign

cosign has quickly become the container signing solution of choice. We should write an RFC detailing the various integration points between cosign and buildpacks (image signing, SBOM attachment, SBOM signing), either with pack or the lifecycle directly.

Idea: Support for a single 'fat' binary supporting multiple buildpacks

A common problem across buildpacks is that when combining related buildpacks into a docker image, many small binaries are created, each with almost the same content. The request is for a mechanism to be created for collapsing multiple buildpacks down into a single binary and then have the individual buildpacks link to this binary and be invoked through some heuristic (perhaps detecting based on filename).

Idea: Add support for OCI layout / deprecate daemon support

As we integrate further into the cloud native ecosystem, with features like attached SBOMs, cosign, etc., the daemon is proving harder to support. Daemon support has also been a pain point in our BuildKit PoC (buildpacks/pack#1314 and https://github.com/EricHripko/cnbp), which requires us to reimplement the exporter in order to use it with BuildKit. All of these things would be easier if we supported the OCI image layout instead. GGCR, the core image manipulation library behind the lifecycle, already supports loading and saving the OCI layout format. Tools like podman support OCI layout natively as well.

Given all of the points above, supporting OCI layout in favor of the daemon seems to make a lot of sense.

We should write out an RFC detailing how this would work while preserving backwards compatibility for tools like pack.

[RFC #0105] - Dockerfiles

RFC #0105 - Dockerfiles

📖 🌎 For a brief tour of what has shipped so far, consult these docs

Phase 1 - switching the run image

In this phase of the implementation, image extensions may output run.Dockerfiles in order to switch the runtime base image based on which buildpacks detected.
The detector binary should run /bin/detect for buildpacks and extensions, run /bin/generate for extensions, and determine the new run image from the generated Dockerfiles.

Spec:

Lifecycle:

Phase 2 - extending the build image

In this phase of the implementation, image extensions may output build.Dockerfiles in order to extend the build time base image.
The extender binary, running as root, should apply the Dockerfiles in the order determined during detect, and then drop privileges before executing the build phase.

Spec:

Lifecycle:

Pack:

Samples:

Documentation:

Phase 3 - extending the run image

In this phase of the implementation, image extensions may output Dockerfiles OR run.Dockerfiles in order to extend the runtime base image.
The extender binary, running as root, should apply the Dockerfiles in the order determined during detect, and then provide a reference to the extended run image to the platform, so that the platform can provide this during the export phase.

Spec:

Lifecycle:

Libcnb:

  • TBD
  • Released in libcnb version TBD

Pack:

Samples:

Documentation:

Optimizations

[RFC #0093] - Remove Shell Processes

RFC #0093 - Remove Shell Processes

Spec:

Lifecycle:

Libcnb:

Pack:

Documentation:

Profile Buildpack:

Depends on System Buildpacks RFC implementation (for Profile Buildpack):

  • #209
  • Released in lifecycle version TBD

[RFC #0096] Remove Stacks and Mixins

RFC #0096 - Remove Stacks & Mixins

Spec:

Lifecycle:

Libcnb:

  • TBD

Pack:

Samples:

Documentation:

Enhancements:

Adding definitions to RFCs

Could we add a ## Definitions section to the RFC template? I think this would be helpful for some of the buildpacks project's terminology that may not be easily google-able.

Idea: How to enable Buildpack built images to have a valid PID1 in unix terms

Problem

Background

PID1 on Unix has special responsibilities. Mainly, it needs to adopt orphaned child processes and reap zombie processes.

One important thing to note is that the number of PIDs on a machine is limited, and using all PIDs will prevent new processes from being created. Reaping zombies helps ensure that no PIDs are wasted.

When working with containers and/or pods, since they usually have their own PID namespace, the pod/container's PID1 is responsible for this handling.

However, how this is done depends on the platform to which the container is deployed and how it is deployed.

There are mainly two ways to do this:

  1. Using the pause container as part of the pod and enabling PID sharing for the whole pod. Then the pause container will be PID1 and handle those responsibilities.
  2. Having the entrypoint of the container that will be PID1 handle those responsibilities.

What happens if this is not done?

Failing to do so can lead a misbehaving application to own all the PIDs on a node and make this node unusable or crash.

There are ways of mitigating this, e.g. ensuring each application is limited in the number of PIDs it can create. However, this is not ideal.

Our needs

We currently need to deploy to platforms where 1. is not possible for us, and we therefore need to ensure 2. works.

Currently, ensuring this with Buildpacks is tricky:

  • A buildpack's processes cannot be overridden by another buildpack, so we cannot "just add" a new buildpack that would install a valid init as PID1.
  • We do not want users of our buildpacks to have to know about this and do special handling; we would like only the builders' owners to have to care about this.

Question

Would CNB want to, and if so, how could we enable owners of buildpack builders to ensure that buildpack built images have a valid PID1 so they can safely be deployed to any platform regardless of how it is configured?

Potential solutions

  1. Having the launcher in the lifecycle install itself as PID1 and reap orphaned processes automatically
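For illustration, the core of potential solution 1 is a reaper loop. The sketch below is a minimal example of zombie reaping in Python (not the lifecycle's actual language or code): it installs a SIGCHLD handler that reaps every exited child; a real init would additionally forward signals to, and exit with, the process it supervises.

```python
import os
import signal

def reap_children(signum, frame):
    """SIGCHLD handler: reap every exited child so no zombie holds a PID."""
    while True:
        try:
            pid, _status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            return  # no children at all
        if pid == 0:
            return  # children exist, but none have exited yet

# A PID1 init would install this before spawning the process it supervises.
signal.signal(signal.SIGCHLD, reap_children)
```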

Request: Stack Removal Migration Plan

Create an RFC describing migration steps and compatibility strategies/limitations for stack removal.

This should include migration paths and any gotchas for the following personas
a) buildpack author
b) stack author
c) platform author

It should also explicitly address:

  1. any new API dependencies (e.g. does the lifecycle need to know about the distribution API?)
  2. any potential limitations to our previous compatibility equation (any builder + any platform + any buildpack = success IFF the lifecycle supports the platform API and the buildpack API of all provided buildpacks).
  3. a discussion of how compatibility will be maintained in the future given a change to any one of the specs. This may involve either:
    a) normalizing the specs to remove any potential for conflict
    b) assigning responsibility (e.g. to the platform or the lifecycle) for translating between specs to ensure compatibility
    c) explicitly accepting some amount of coupling and calling out any implications

This request is motivated by some specific tactical questions I added to the issue tracking stack removal implementation.

Add support for a structured SBOM

Currently, since container images built by CNB don't follow a generic filesystem layout, they are not easily scannable by container scanning tools. We do, however, provide a BOM, which should greatly help sidestep the entire manual scanning process and speed up things like CVE detection by directly providing the CVE scanner with a BOM.

However, since we currently do not impose any standard for what the BOM should look like, and since the metadata table in the BOM is a freeform table, it is very hard to produce consistent BOMs that can be consumed by CVE scanners.

There are various standards for specifying an SBOM (Software Bill of Materials); the primary ones include SPDX and CycloneDX.

It might be worth investigating these formats as a standard recommendation for the BOM.

Creating this issue to track the creation of an RFC for this.


From current investigation in buildpacks/community#82, CycloneDX seems to be the front-runner in terms of tooling. We do however want interoperability with SPDX, which CycloneDX also seems to provide.

[RFC #0119] - Export to OCI format

RFC #203 - Export to OCI format

Phase 1

The goal for this phase is to deliver an experimental version of the feature, wait for the community to use it and give feedback, and keep improving it.

Spec

Imgutil

Lifecycle

Pack:

Documentation:

Phase 2

The initial idea of this phase was to work on the rebase capability in pack for an image exported on disk in OCI layout format. We are not sure whether this is useful or worth the effort; we would like to hear from the community whether this is something valuable to work on.

Imgutil

Lifecycle
