Dogma

Build message-based applications in Go.


Overview

Dogma is a toolkit for building message-based applications in Go.

In Dogma, an application implements business logic by consuming and producing messages. The application is strictly separated from the engine, which handles message delivery and data persistence.

Features

  • Built for Domain Driven Design: The API uses DDD terminology to help developers align their understanding of the application's business logic with its implementation.

  • Flexible message format: Supports any Go type that can be serialized as a byte slice, with built-in support for JSON and Protocol Buffers.

  • First-class testing: Dogma's testkit module runs isolated behavioral tests of your application.

  • Engine-agnostic applications: Choose the engine with the best messaging and persistence semantics for your application.

  • Built-in introspection: Analyze application code to visualize how messages traverse your applications.

Related repositories

  • testkit: Utilities for black-box testing of Dogma applications.
  • projectionkit: Utilities for building projections in popular database systems.
  • example: An example Dogma application that implements basic banking features.

Concepts

Dogma leans heavily on the concepts of Domain Driven Design. It's designed to provide a suitable platform for applications that make use of design patterns such as Command/Query Responsibility Segregation (CQRS), Event Sourcing and Eventual Consistency.

The following concepts are core to Dogma's design, and should be well understood by any developer wishing to build an application:

Message

A message is a data structure that represents a command, event or timeout within an application.

A command is a request to make a single atomic change to the application's state. An event indicates that the state has changed in some way. A single command can produce any number of events, including zero.

A timeout helps model business logic that depends on the passage of time.

Messages must implement the appropriate interface: Command, Event or Timeout. These interfaces serve as aliases for dogma.Message, but may diverge in the future.

Message handler

A message handler is part of an application that acts upon messages it receives.

Each handler specifies the message types it expects to receive. These messages are routed to the handler by the engine.

Command messages are always routed to a single handler. Event messages may be routed to any number of handlers, including zero. Timeout messages are always routed back to the handler that produced them.

Dogma defines four handler types, one each for aggregates, processes, integrations and projections. These concepts are described in more detail below.

Application

An application is a collection of message handlers that work together as a unit. Typically, each application encapsulates a specific business (sub-)domain or "bounded context".

Engine

An engine is a Go module that delivers messages to an application and persists the application's state.

A Dogma application can run on any Dogma engine. The choice of engine brings with it a set of guarantees about how the application behaves, for example:

  • Consistency: Different engines may provide different levels of consistency guarantees, such as immediate consistency or eventual consistency.

  • Message delivery: One engine may deliver messages in the same order that they were produced, while another may process messages out of order or in batches.

  • Persistence: The engine may offer a choice of persistence mechanisms for application state, such as in-memory, on-disk, or in a remote database.

  • Data model: The engine may provide a choice of data models for application state, such as relational or document-oriented.

  • Scalability: The engine may provide a choice of scalability models, such as single-node or multi-node.

This repository is not itself an engine implementation. It defines the API that engines and applications use to interact.

One example of a Dogma engine is Veracity.

Aggregate

An aggregate is an entity that encapsulates a specific part of an application's business logic and its associated state. Each instance of an aggregate represents a unique occurrence of that entity within the application.

Each aggregate has an associated implementation of the dogma.AggregateMessageHandler interface. The engine routes command messages to the handler to change the state of specific instances. Such changes are represented by event messages.

An important responsibility of an aggregate is to enforce the invariants of the business domain. These are the rules that must hold true at all times. For example, in a hypothetical banking system, an aggregate representing a customer's account balance must ensure that the balance never goes below zero.

The engine manages each aggregate instance's state. State changes are "immediately consistent", meaning that the changes made by one command are always visible to future commands routed to the same instance.

Aggregates can be a difficult concept to grasp. The book Domain Driven Design Distilled, by Vaughn Vernon, offers a suitable introduction to aggregates and the other elements of domain driven design.

Process

A process automates a long-running business process. In particular, a process can coordinate changes across multiple aggregate instances, or between aggregates and integrations.

Like aggregates, processes encapsulate related logic and state. Each instance of a process represents a unique occurrence of that process within the application.

Each process has an associated implementation of the dogma.ProcessMessageHandler interface. The engine routes event messages to the handler, which may produce commands to execute.

A process may use timeout messages to model business processes with time-based logic. The engine always routes timeout messages back to the process instance that produced them.

Processes use command messages to make changes to the application's state. Because each command represents a separate atomic change, the results of a process are "eventually consistent".

Integration

An integration is a message handler that interacts with some external non-message-based system.

Each integration is an implementation of the dogma.IntegrationMessageHandler interface. The engine routes command messages to the handler which interacts with some external system. Integrations may optionally produce event messages that represent the results of their interactions.

Integrations are stateless from the perspective of the engine.

Projection

A projection builds a partial view of the application's state from the events that occur.

Each projection is an implementation of the dogma.ProjectionMessageHandler interface. The engine routes event messages to the handler which typically updates a read-optimized database of some kind. This view is often referred to as a "read model" or "query model".

The projectionkit module provides engine-agnostic tools for building projections in popular database systems, such as PostgreSQL, MySQL, DynamoDB and others.


Issues

Use an interface for application definitions.

It's currently a struct with slices for each handler type, which is simple, but in the interest of future-proofing and interoperability, this should be an interface with a Configure() method just like the handlers.

Restrict the types allowed to be used as messages.

So far I've been keeping this as broad as possible, but there are some types that simply don't make sense to use as messages (channels, for example), and others where it's simply unclear about how they should be handled.

Thoughts so far:

  • disallow anonymous types (how are we supposed to switch on them anyway?)
  • disallow channels
  • disallow functions
  • disallow interfaces
  • disallow pointers-to-pointers

This would leave as valid all scalar types as well as:

  • structs
  • arrays
  • slices
  • maps
  • pointers to any of the above

This really isn't any different to what's likely to actually work right now, but it should definitely be documented somewhere.

Taken to the extreme, I almost feel that we could require all messages to be struct or pointer-to-struct, as in reality this is likely all that will ever be used - but I am reluctant to impose unnecessary restrictions while the cost of supporting these types is low.

Consider including a "sender" interface.

This would be the in-code representation of the "external" column in "the matrix" described in #35.

We would need to include such an interface in order to make integrations portable across multiple engines.

Move fixtures to enginekit repo.

These various "fixtures" packages have been quite handy while developing the other packages in the organisation, but as time goes on I am increasingly less sure that they really belong in the "application-developer-facing" APIs. They are a tool for engine developers, so I wonder if they should instead live in some kind of blahtest repository, similar to the existing sqltest.

Consider allowing handlers to specify their own timeouts / deadlines.

This discussion came up because the inter-replica messaging implementation in verity needs to invoke a gRPC API that in turn invokes a specific Dogma handler.

I was discussing with @koden-km what value the client-side of that gRPC API should use, and we came to the conclusion that the best result would be for the client-side to know what timeout the server-side will be using to compute its deadline, and to configure its own deadline to be slightly later.

For this reason, it's not adequate to allow the handler methods to simply setup their own deadline via a context, as there's no way for the client-side to query that before making the RPC request.

What we're proposing is adding a ComputeHandleTimeout(dogma.Message) time.Duration method to all handlers except aggregates. Aggregates do not use a context/timeout as their handle-methods are in-memory, deterministic operations.

If ComputeHandleTimeout() returns 0, the engine must apply some default timeout. We can provide a "behavior" struct that returns 0 unconditionally.

The name of this method is definitely still up for discussion. It needs to be named in such a way that it's clear that it relates to the calculation of deadlines, as opposed to anything specifically to do with the "timeout messages" that a process schedules.

Define and document the comparison semantics for identity names and keys.

Current engine implementations use case-sensitive (or more specifically, byte-wise) comparisons. This is probably the only sane approach; otherwise the spec needs to define case-folding and normalisation behavior which I'd rather avoid if possible.

The spec, however, currently nominates that these identifiers must be UTF-8 strings and I'm wondering now if this is a mistake and should instead be treated as opaque byte sequences.

Either way, the recommendation about UUIDs stands, and needs to be clarified to indicate that they should be provided as lower-case, hyphen-separated, hex-encoded ASCII strings (the canonical RFC 4122 text format), for example: 1bff04c5-3fc5-4d18-be94-313529c57b99.

@ezzatron I would be keen to get your opinion about this at some point.

Rethink keeping behaviors in `dogma`.

This was "decided" to include these in the dogma module (unilaterally and hastily by me) in #23, but I think I may have made the wrong decision.

Now that I am looking at #11, where I'm adding IsEqual() and Clone() methods to AggregateRoot and ProcessRoot, it becomes obvious that there's no sensible default "behavior" that makes sense to include in dogma itself. For example, any such behavior that was useful in our own applications would need to support protobuf, which I definitely don't want as a dependency of dogma.

Consider adding a Time() method to ProcessTimeoutScope

I think it's pretty reasonable to want to know the time that a timeout is scheduled to occur. However, this hasn't been done yet as the argument is that the message itself should contain the time if it is relevant to the domain. This is certainly true for commands and events, but I wonder if this is just an annoyance for timeouts, which are essentially an implementation detail of the process they belong to.

Define interfaces for implementing stateless message handlers.

These are those handlers that are neither for aggregates nor workflows. "Integration handlers" is the name that @danilvpetrov and I have been using to refer to message handlers that do not implement business logic themselves, but instead serve as a gateway between the message-based domain layer and third party APIs. We've had experience using these handlers to integrate with systems such as Docker, Hashicorp Vault, etc.

I'm not certain that we want to label "generic message handlers" in Dogma as "integration" handlers specifically, but I would like them to have a more specific name than just "Handler", so that we can always distinguish them from AggregateMessageHandler and ProcessMessageHandler (or whatever we end up naming it, see #9).

Update all godoc.org URLs to pkg.go.dev.

This applies to all repositories, not just dogmatiq/dogma.

One thing that's understandable, but kind of annoying, about pkg.go.dev is that it doesn't even show repositories that have no tags, as far as I can tell. If we can solve this so we still have linkable docs for projects that are actively working towards their first tag, that would be good.

Add Create and Destroy methods to AggregateScope.

As per #1, we decided that such methods are necessary, as otherwise there would be no way to manage the lifetime of CRUD aggregates (as opposed to those running on an ES engine).

Were we happy with the names Create() and Destroy()? The equivalent methods for a workflow are named Begin() and End().

I feel that Destroy is marginally more applicable to both CRUD and ES implementations, whereas Delete does imply more forcefully to me that the data would be removed, which is not true under ES. I think Create is fine in any scenario.

We might also consider naming the Begin/End just like workflows. Any thoughts?

/cc @danilvpetrov @koden-km

Integration handlers, and the split between domain and integration messages.

We've had a few chats now about the "integration handler", and what kind of messages it can accept. While trying to probe out what we need I came up with a little matrix of which handlers can send/receive which "classes" of messages.

Remember that there is still only one Message interface, so this is largely conceptual -- but it could perhaps be enforced by certain engine implementations.

In the matrix below, I'm experimenting with the idea of splitting the IntegrationMessageHandler concept into separate CommandHandler and EventHandler interfaces.

The columns are the various types of message consumers/producers. "External" refers to any mechanism that could be used to send messages to a Dogma application from the outside. The rows are the various categories of messages.

                     Aggregate   Process       Command Handler   Event Handler   Projection   External Sender (#40)
Domain Command       recv        send          -                 -               -            send
Domain Event         send        recv          -                 -               recv         -
Non-Domain Command   -           send          recv              -               -            -
Non-Domain Event     -           recv          -                 send            recv (?)     send
Timeout              -           send & recv   -                 -               -            -

Consider allowing integration handlers to produce commands.

See also #89

Any code can already produce commands by using the CommandExecutor interface. The question here specifically is about whether it makes sense to allow integration handlers to atomically produce new commands as a result of handling some other command (or event, as of #90).

Comparators, cloners and handler sets.

In my experience implementing the previous Dogma/Gospel prototypes, as well as Ax, I've found it necessary sometimes to compare and clone messages and aggregate/workflow state.

I am tossing up whether a system for defining how to do this belongs in Dogma itself, or whether each engine implementation should have its own system for doing so.

I had considered adding IsEqual() and Clone() methods to the Message, AggregateRoot, etc, interfaces, but I am reluctant to do so for two reasons:

  1. A given engine implementation may not require these operations at all
  2. It may not be convenient to add these methods to these types in all cases. I'm thinking specifically of message types that are generated, like when using protobufs.

We could have IsEqual() and Clone() on separate interfaces, which could be optionally implemented by the message or aggregate root, and fall back to some engine-defined default in their absence.

One such system that will need the ability to compare and clone is the "test engine" that will be bundled with Dogma. Since this "should" be used to test all domain logic, I feel this lends some credibility to the idea that the compare and clone semantics should be defined in the domain layer; but otherwise I think I would prefer to keep it entirely engine-specific.

Specify some basic requirements on the format of handler and app names.

The engine implementation may place its own requirements on the structure of names, but I think we should place some basic requirements in the dogma "spec" itself, something fairly basic like

  • no whitespace
  • non-empty

I don't want to preclude using UTF-8 names or anything like that, but this at least sets a baseline for interoperability.

Consider allowing integration handlers to consume events.

See also #90.

The idea here is that integrations with other systems may need to respond to events that occur, without necessarily involving a business process.

One hypothetical example would be sending a notification to online users about completion of a transaction.

ApplyEvent behavior with event-sourcing and deprecated messages.

The documentation for AggregateRoot.ApplyEvent() currently includes this clause:

// It MUST NOT be called with a message of any type that has not been
// configured for production by a prior call to Configure().

However, this may present problems in an event-sourcing engine when the event has been deprecated, insofar as it is no longer produced, but remains in the aggregate's history. Just because it's no longer produced does not mean that it's not relevant to reconstructing aggregate state.

In order to resolve this we could:

  • Change AggregateConfigurer to allow separate control over which events may be produced and which need to be passed to ApplyEvent().
  • Place a hard requirement that ApplyEvent() MUST always support being called with any event the aggregate has ever produced. Seems a bit nonsensical outside of ES.
  • Recommend that ApplyEvent() SHOULD support being called with historical events, but document as engine-specific.
  • Document it as engine-specific and make no recommendation whatsoever.
  • Recommend never removing messages from the "produced" list if they were ever produced. I don't love this idea, but every other approach will make it hard to test for deprecated events in testkit. Perhaps testkit needs support for injecting arbitrary historical events even if they are no longer allowed to be produced.

Counsel against performing writes in processes.

Blocked by #43 - we should finalize naming before working on documentation further.

We should document that process message handlers are given access to the context, and are able to return an error, because they may need to query information from read models, etc. The documentation should strongly discourage the user from performing "writes" of any kind, or implementing integration logic directly inside processes (instead use integration handlers).

We need a way to ignore messages if an aggregate instance does not exist.

In order to properly use Destroy() we need to be able to ignore a message that is handled after an aggregate instance has been destroyed without recreating the instance.

At the moment the only way to tell if an instance exists is to call Create() and then call Destroy() again if it did not already exist:

if s.Create() {
    s.Destroy()
    return
}

actualMessageHandlingLogic()

This works OK, but doesn't express intent very well and may have persistence/cache overhead in some engine implementations (though not so in dogmatiq/infix, FWIW).

There's quite a few different ways we could accomplish this, and I'm not yet sure if we should be trying to tackle this specific problem or simply adding a generic way to test if an instance exists. Such a feature has been omitted from the design so far to try to discourage use of existence/non-existence as a basis for actual business logic.

As a side note, this decision may also make sense for processes, as Begin() and End() are direct analogues to Create() and Destroy(), respectively.

Below I list some approaches we could take by modifying the AggregateCommandScope interface.

  1. Add an Exists() bool method. This is probably the simplest solution, which definitely lends it some credibility. BC break for engines.
  2. Change the existing Root() method signature to Root() (r AggregateRoot, exists bool). BC break for engines and applications.
  3. Add a separate TryRoot() (r AggregateRoot, exists bool) method. BC break for engines.
  4. Add a IgnoreMessageIfNotExists() method. This would need to panic with some value that the engine can catch. BC break for engines.

And one more approach that would involve modifying AggregateMessageHandler

  1. Change RouteCommandToInstance() to allow black-holing of messages if the instance does not already exist. IIRC jmalloc/ax has a feature like this in its saga subsystem. BC break for both engines and applications. This approach might have some advantages in engines like dogmatiq/verity which perform routing on a separate cluster node before dispatching the message to another node for handling.

Consider using a UUID to identify handlers.

One concern we have about using "names" to identify handlers is that those names are often used as keys for persisted data. It seems likely that a given handler may evolve over time such that it retains the original state, but the original name is no longer appropriate.

One solution to this is for the engine to provide the ability to rename a handler, but perhaps a better approach would be to require each handler to be given its own UUID. The UUID would be generated by the developer at "design time" and stay with the implementation as long as it's appropriate that the handler continues to use the same state.

We could still supply a "name" for human consumption, or otherwise use the type name of the implementation.

Missing context parameters.

IntegrationMessageHandler.HandleCommand() and ProjectionMessageHandler.HandleEvent() are missing their ctx parameters.

Changes to projections to take advantage of event sourcing / event streams.

One of the promises of event-sourcing is that projections can be quite simple to implement because you have a total ordering of events from a given source that you can process linearly. Thus, issues related to event order and idempotence can be managed generically.

In order to do this, we need to be able to store the current offset in the event stream atomically with the write to the projection's data store.

This is difficult to abstract without requiring the engine to support this kind of "offset-based" message identity, which is potentially nonsensical in engine implementations that do not use event sourcing.

As of now, we've avoided making any additions or decisions by requiring all projection implementations to handle ordering and idempotence themselves. The only helpful information provided to the implementor is the value returned by ProjectionEventScope.MessageKey() which, failing all else, could be used to exclude each message by comparing it to a set of previously handled messages. This is not optimal.

Should Dogma provide "helpers"?

For example, I could imagine it being useful if Dogma provided an embeddable struct like so:

type NoTimeoutSupport struct{}

func (NoTimeoutSupport) HandleTimeout(context.Context, ProcessScope, Message) error {
    panic(UnexpectedMessage)
}

Which could be embedded into ProcessMessageHandler implementations that do not make use of timeouts, which I imagine will be very common.

This would be the first actual executable code that made its way into the dogma package itself.

Change routing documentation regarding "zero" values.

In the AggregateMessageHandler.RouteCommand() and ProcessMessageHandler.RouteEvent() methods, the documentation mentions that, when a routing probe is being performed, the message will be a zero-value.

I think this needs to be relaxed to state simply that the content of the message is undefined, for a few reasons:

  • There's no real reason it has to be a zero-value, since the documentation is instructing the implementor to only use the message's type, not its value.
  • Attempting to enforce zero-values in the engine implementation requires a bunch of unnecessary reflection.
  • How do we deal with pointer values, do we really want to require that the engine pass a nil pointer to a message if the message implementation uses pointer receivers? What if it's not a struct at all?

/cc @danilvpetrov @koden-km thoughts?

Define interfaces for implementing business processes.

We need to define an interface for "workflows", and their associated instances and scopes; much like aggregates.

I'd like to use this opportunity to consider a return to terminology more inline with that of DDD wherein a workflow is actually called a "process".

  • Workflow would become ProcessMessageHandler (to mirror AggregateMessageHandler)
  • WorkflowScope would become ProcessScope
  • WorkflowInstance would become ProcessState, or perhaps ProcessRoot if we wanted to mirror aggregate terminology.

It's also possible to have "stateless processes" - that is, processes that do not own any state, but may look to read-models, etc. It might be a good idea to be able to define these in a standard way, distinct from a generic "message handler" (#10) so that the notion that they only accept events and produce commands can still be enforced by the engine.

Consider disallowing recreation of an aggregate instance during the same message by which it is destroyed.

Currently the spec permits engines to allow AggregateCommandScope.Create() to be called after Destroy() is called during the handling of a single message. This also applies to "re-beginning" a process instance after ending it.

This is rather complicated to implement in some cases (infix), and so I have chosen to disallow this behaviour there. My initial concern is that testkit will permit such logic, as it is allowed by the spec, only to see it fail under the real engine.

Note: this does not affect the ability to create, then destroy within a single message, nor to recreate the same instance in a subsequent message.

Choose a software license.

Everything I've released on GitHub in the past has been done under an MIT license. I don't see any particular reason to change that, but I thought I'd throw it out there.

There's a bunch listed here, there's also the Creative Commons CC0 "no rights reserved" license and the Unlicense.

Move enginekit fixtures to this repo.

I was initially against this idea because the fixtures are typically (always?) used by engine developers, not application developers, but everything in fixtures is an implementation of the interfaces in this repo, after all.

Thoughts?

Consider requiring `Configure()` to specify the produced messages.

The Configure() method on all handler types must currently specify the message types that are to be routed to that handler. This is a necessity for implementing any engine.

This issue proposes adding the requirement that Configure() implementations also specify the types of messages that the handler will produce. That is, the types of events that may be recorded by an aggregate or integration handler, or the types of commands executed by a process handler. Timeout messages should probably not be included in this requirement, as they are "private" to a specific process, and not routed between handlers.

While having this information is not a necessity for any engine implementation, it does provide valuable information such as a definitive list of the events that are produced. This information has a few potential use-cases, such as building message routing tables for inter-app communication, or to enforce requirements that a given events are only produced by a single aggregate.

If we were to add these methods to the interface, my current stance is that it should be mandatory to advertise the produced messages. That means that if any handler were to produce a message of a type that had not been specified in Configure(), the engine should (or perhaps must) treat this as a logic error.

My main concern about whether to implement this at all is that it appears to be the first time we'd be adding a feature that does not directly serve interoperability, although perhaps the argument could be made that having this information and not using it (in a given engine implementation) opens more doors than not having it at all.

Add a panic value to be used by handlers when a message is invalid.

Generally, handlers should be written expecting pre-validated messages. Nonetheless, they still need a way to signal that a message is invalid and hence that retrying will never work.

@koden-km suggested dogma.MalformedMessage or dogma.InvalidArgument. I think the name mostly needs to communicate that the message will never work, more so than the reason why.

As Kev rightly pointed out, this panic value needs to be distinct from returning an error value, which will always be retried under the existing engine implementations.

Add the "banking" example.

It would be fantastic if we could do this in such a way that it ran under the in-memory test engine (#15) and implemented a simple API, or even a very basic web front-end, so that we could see the example work without requiring a more substantial backend.

Should testing utilities be in a separate module?

#12, #13, #14 and #15 could be implemented in a separate Go module. This might also affect our decision making for #11.

Pros (arguments for a separate module)

  1. They can be versioned separately. A Dogma API release can be made without having to update the testing tools. This is a pro for the releaser, but possibly a con for the users.
  2. Users that opt not to test, or not to test using our tools, do not need to include them in their project. This is a fairly weak argument imo, given that the additional download would be minuscule.
  3. We can make BC breaking changes to the testing tools, without having to make a new major release of the Dogma API.

Cons (arguments to keep in same module)

  1. It's a barrier to entry. If we want to encourage users to test their domain logic using these utilities, they should be easy to access.
  2. They can be versioned in lock-step with any API changes. When a user upgrades they would know they are getting compatible testing tools.
  3. As mentioned in #16, splitting the testing tools would mean that the example code would also need to be moved elsewhere, otherwise we would have a cyclic dependency between the API module (via the example's tests) and the test utilities module.

I would say I'm currently leaning towards keeping the testing tools in the same module, though Pro #3 is quite compelling.
