pyeventsourcing / eventsourcing
A library for event sourcing in Python.
Home Page: https://eventsourcing.readthedocs.io/
License: BSD 3-Clause "New" or "Revised" License
Hi there, I'm currently playing around with the library and I'm hitting the following runtime error when running two commands in a row and then attempting to look up the aggregate root ID within app.repository:
python3.7/site-packages/eventsourcing/domain/model/events.py", line 139, in check_hash
raise EventHashError()
eventsourcing.exceptions.EventHashError
The code is available here https://github.com/AlanFoster/eventsourcing and the error is reproducible with:
pipenv shell
pipenv install
pipenv run python main.py
From what I can tell this might be a bug within the library, but I'm not sure just yet! I'd be keen to know your thoughts :)
This will raise an exception if nonce is empty ([]), which is possible since the length of the ciphertext is not checked.
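A minimal sketch of the missing guard, assuming a layout where a fixed-size nonce is prepended to the stored ciphertext (the 12-byte size and the function name here are illustrative, not the library's API):

```python
NONCE_LENGTH = 12  # assumed nonce size; adjust to the cipher actually used

def split_nonce(data: bytes):
    """Split a stored value into (nonce, ciphertext), rejecting short inputs."""
    if len(data) < NONCE_LENGTH:
        # Without this check, data[:NONCE_LENGTH] silently yields a short
        # (possibly empty) nonce and decryption fails confusingly later.
        raise ValueError("stored value too short to contain a nonce")
    return data[:NONCE_LENGTH], data[NONCE_LENGTH:]
```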
I am currently experimenting with eventsourcing and django and I am getting the following error;
AttributeError: type object has no attribute __qualname__
from the ObjectJSONEncoder; it expects obj.__class__.__qualname__ to exist.
It would be nice to be able to inject my own ObjectJSONEncoder/Decoder
I'm still trying to tackle an example app and making progress in some directions, but I'm still failing to understand the fundamentals of this library, so please know that I'm super appreciative of your patience. Once I figure this out I hope to make the example library very clear and runnable.
I'm trying to understand how best to construct classes using either WithReflexiveMutator or TimestampedVersionedEntity. I've created the same basic class, User, in both styles and I'm getting different errors.
I'd love to understand the errors, yes. But I'd also like to get a decent representation of both so I'm 100% open to feedback, critique, even eye-rolling and muttered swears.
Here's what I have so far: https://github.com/dgonzo/eventsourcing-kanban. See the README for the errors I'm seeing and the links to the relevant files.
using v2.1.1
When setting up a Cassandra EventSourcedApplication, how is the CassandraDatastore used?
In the README (and in the code), for SQLAlchemy it is an argument to SQLAlchemyActiveRecordStrategy, but it is not part of CassandraActiveRecordStrategy. As such, I cannot figure out how to set up and use a datastore for Cassandra.
Rewrite the documentation to follow the structure of the library rather than the list of features.
Firstly, document the core event sourcing persistence mechanism ("given infrastructure is setup, when an event is stored, the event can be retrieved"). Use domain event classes from the library to show how to store and retrieve a sequence of events. The aspects of the persistence mechanism are:
Secondly, describe how each of the DDD concepts can be implemented using the library classes:
And then include some example applications.
Anything else?
- detect random mutation in stored records
- validate aggregate event sequence
- validate application (somehow use application log?)
- sequenced item mapper could hash and check individual records?
- event class could hash values, or check values when constructed with a hash?
- event store could get the previous event before appending the next?
- aggregate could hash the event with the last hash when triggered
- aggregate mutator could set the last hash (maybe in a validate method?)
- (*) aggregate could track the hash of events, setting the last hash in the current event; such an event could be validated by an aggregate against the previous event before being applied
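The starred idea can be sketched as a simple hash chain, where each stored event carries the hash of its predecessor, so any mutation or reordering of stored records is detectable on replay (all names here are illustrative, not the library's API):

```python
import hashlib
import json

def _hash(body: dict) -> str:
    """Deterministic hash of an event body."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_event(events: list, data) -> None:
    """Append an event whose hash covers its data and the previous event's hash."""
    previous_hash = events[-1]["event_hash"] if events else ""
    body = {"data": data, "previous_hash": previous_hash}
    events.append({**body, "event_hash": _hash(body)})

def validate_chain(events: list) -> None:
    """Raise if any stored record was mutated, dropped, or reordered."""
    previous_hash = ""
    for event in events:
        body = {"data": event["data"], "previous_hash": event["previous_hash"]}
        if event["previous_hash"] != previous_hash or event["event_hash"] != _hash(body):
            raise ValueError("hash chain broken")
        previous_hash = event["event_hash"]
```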
Would be great to extend the example module with sample ProcessManager code. Maybe showcase several aggregates/entities and a process manager in a real-world scenario.
Might you consider a switch to LGPL licensing for the library (e.g., toward improving/encouraging participation by commercial software developers), or is this something that has already been ruled out on principle?
Best,
K
I'm using pip version 9.0.1, and I tried Python 3.5, PyPy 5.8, and Python 2.7.
But every time I try to install eventsourcing with either the cassandra or sqlalchemy extra, it gives me an error like the following:
$ pip install eventsourcing[cassandra]
zsh: no matches found: eventsourcing[cassandra]
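This is a zsh globbing issue rather than a pip problem: zsh treats the unquoted square brackets as a filename pattern and, finding no matching file, aborts before pip ever runs. Quoting or escaping the requirement passes the extras spec through unchanged:

```shell
# Quote the requirement so zsh does not treat [...] as a glob:
pip install 'eventsourcing[cassandra]'
# or escape the brackets:
pip install eventsourcing\[cassandra\]
```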
Is there a way that I could install eventsourcing with the specified event store?
Any chance you could release the 1.10 version to PyPI? 1.09 doesn't seem to work with MySQL and SQLAlchemy.
sqlalchemy.exc.CompileError: (in table 'stored_events', column 'event_id'): VARCHAR requires a length on dialect mysql
This is already fixed in the main branch.
I'm not sure what I'm doing wrong with either my entity class or my projection policy but I can't get my discard event to remove that entity from the collection.
A simplified version of my entity class looks like this:
class User(WithReflexiveMutator, AggregateRoot):
    """Aggregate root for user.

    A user is a namespace for accessing all workflow platform resources.
    """

    def __init__(self, user_id, name, password, email, default_domain, **kwargs):
        super(User, self).__init__(**kwargs)
        self.user_id = self._validate_user_id(user_id)
        self.default_domain = self._validate_domain(default_domain)
        self.domains = set()

    class Created(Event, AggregateRoot.Created):
        """Published when a user is created."""

        @property
        def user_id(self):
            return self.__dict__['user_id']

        @property
        def name(self):
            return self.__dict__['name']

        @property
        def password(self):
            return self.__dict__['password']

        @property
        def email(self):
            return self.__dict__['email']

        @property
        def default_domain(self):
            return self.__dict__['default_domain']

        def mutate(self, cls):
            entity = cls(**self.__dict__)
            entity.domains.add(self.default_domain)
            entity.increment_version()
            return entity

    class Discarded(Event, AggregateRoot.Discarded):
        """Published when a user is discarded."""

        @property
        def domain_namespace(self):
            return self.__dict__['domain_namespace']

        def mutate(self, entity):
            entity._is_discarded = True
            return None

    @staticmethod
    def create(name, password, email, default_domain, **kwargs):
        """Creates a new user."""
        user_id = uuid4()
        event = User.Created(
            originator_id=user_id,
            user_id=user_id,
            default_domain=default_domain,
            **kwargs
        )
        entity = event.mutate(cls=User)
        publish(event)
        return entity

    def discard(self):
        self._apply_and_publish(
            self._construct_event(
                User.Discarded,
                domain_namespace=self.default_domain
            )
        )
Here's my projection policy:
class UserProjectionPolicy:
    """Updates user collection whenever a user is created or discarded."""

    def __init__(self, user_collections):
        self.user_collections = user_collections
        subscribe(self.add_user_to_collection, self.is_user_created)
        subscribe(self.remove_user_from_collection, self.is_user_discarded)

    def close(self):
        unsubscribe(self.add_user_to_collection, self.is_user_created)
        unsubscribe(self.remove_user_from_collection, self.is_user_discarded)

    def is_user_created(self, event):
        if isinstance(event, (list, tuple)):
            return all(map(self.is_user_created, event))
        return isinstance(event, User.Created)

    def is_user_discarded(self, event):
        if isinstance(event, (list, tuple)):
            return all(map(self.is_user_discarded, event))
        return isinstance(event, User.Discarded)

    def add_user_to_collection(self, event):
        assert isinstance(event, User.Created), event
        domain_namespace = event.default_domain
        collection_id = make_user_collection_id(domain_namespace)
        try:
            collection = self.user_collections[collection_id]
        except KeyError:
            collection = register_new_collection(collection_id=collection_id)
        assert isinstance(collection, Collection)
        collection.add_item(event.originator_id)

    def remove_user_from_collection(self, event):
        if isinstance(event, (list, tuple)):
            return map(self.remove_user_from_collection, event)
        assert isinstance(event, User.Discarded), event
        domain_namespace = event.domain_namespace
        collection_id = make_user_collection_id(domain_namespace)
        try:
            collection = self.user_collections[collection_id]
        except KeyError:
            pass
        else:
            assert isinstance(collection, Collection)
            collection.remove_item(event.originator_id)
I can verify that when I use User.create the user is instantiated and added to the collection.
Would be great if you could put a .editorconfig file in the repo.
When trying to get all the entities in a repo, I have been seeing that several domain events were missing. (version 2.1.1, cassandra)
the code in question is:
class FooRepo(EventSourcedRepo):
    ...
    def get_foo_entities(self):
        foos_created = filter(
            lambda x: isinstance(x, CreatedFoo),
            self.event_store.all_domain_events())
        return [self.get_entity(f.entity_id) for f in foos_created]
When investigating, event_store.all_domain_events() was returning 27 events, while there are 32 in the Cassandra table. Switching to filtering on unique entity_ids seems to give the correct output:
def get_foo_entities(self):
    foo_ids = set(f.sequence_id for f in self.event_store.active_record_strategy.active_record_class.objects.value_list('s').distinct())
    return [self.get_entity(f) for f in foo_ids]
Getting the entity with get_entity for the missing events works correctly.
I have no idea why this is happening. Also, as random rows seem to be missing from the all_items calls, I'm leery about using either of these methods (which is why I'm using the Cassandra model class rather than all_items).
The documentation now does a good job of explaining how to build your own custom parts into a toy application, but it doesn't document how to use all of the parts you have created, such as EventSourcedApplication and the various built-in Event types.
It would be nice to get the documentation onto Read the Docs, with more in-depth docs on usage.
With the rise of ES/CQRS with DynamoDB Streams-->AWS Lambda, do you plan to add support for reading events from DynamoDB? ^_^
When using the combined persistence policy, there are undocumented assumptions about the class hierarchy of the events. This should probably be documented.
(If you haven't noticed, I'm adding notes for the documentation.)
When I have snapshotting enabled, I need to set up a mutator for the initial state of the replay:
https://github.com/johnbywater/eventsourcing/blob/v2.1.1/eventsourcing/infrastructure/eventplayer.py#L69
which is None. With the default handler, you get an "unsupported" exception. This should probably be added to the docs, as it is a per-entity-type handler:
@example_mutator.register(type(None))
def example_none_mutator(event, _):
    """Due to the snapshotting - you need to return the class on empty."""
    return ExampleEntity
Hi @johnbywater! I hope this is an appropriate place to discuss features. :) If not, maybe create a Wiki page?
I am interested in your plans for these forthcoming features:
- Base class for event sourced projections or views (forthcoming)
- In memory event sourced projection, which needs to replay entire event stream when system starts up (forthcoming)
- Persistent event sourced projection, which stored its projected state, but needs to replay entire event stream when initialized (forthcoming)
- Event sourced indexes, as persisted event source projections, to discover extant entity IDs (forthcoming)
I'd like to help out.
Which projections implementation from d5-kanban-python were you thinking of borrowing? Thanks!
Hi! When I use eventsourcing against MySQL, it gives an error on migrating the database:
sqlalchemy.exc.CompileError: (in table 'stored_events', column 'event_id'): VARCHAR requires a length on dialect mysql
Should we add a length in the library, or do we need to handle schema creation manually?
Thanks!
Hi John,
I am learning both your excellent eventsourcing library and event sourcing in general, and just came across a potential implementation issue regarding "amended events"/"point in time database". Unless I am missing something.
As the event data in the event store (database storage) ideally should be immutable, there seems to be no graceful way to handle the situation when an event handler (class) that, for example, reconstructs an entity must be amended (due to an error) or extended (due to a change in the requirements).
It is of course possible to amend the class itself, but then the ability to look "back in time" will be lost, as the state/entity will be reconstructed according to the latest code that is running, not the code that was active at the given time. And the "point in time" aspect is one of the major benefits provided by event sourcing.
As a "topic" is effectively a fully qualified "class path" binding the data record in the data storage (repository) to the event class, it would probably be useful to introduce some type of "topic cut-off point"/"topic version"/"event revision" that allows binding an old (existing) data item to the most recent active class implementation by default, or to another one in a standard/customizable way.
This will also allow dealing in a standard way with classes being moved across modules (which is a lesser issue IMHO, but still).
Thanks.
Regards.
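One common pattern for the "event revision" idea described above is upcasting: register a transform per (topic, version) that is applied as old records are read back, so the stored data stays immutable while the code moves on. A minimal sketch, with illustrative names that are not the library's API:

```python
# Registry of transforms: (event_type, schema_version) -> function on the
# stored attributes, producing the next version's attributes.
UPCASTERS = {
    ("UserCreated", 1): lambda attrs: {**attrs, "email": attrs.get("email", "")},
}

def upcast(event_type: str, version: int, attrs: dict) -> dict:
    """Apply registered transforms until the attributes reach the latest version."""
    while (event_type, version) in UPCASTERS:
        attrs = UPCASTERS[(event_type, version)](attrs)
        version += 1
    return attrs
```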
hi
2 questions
Serialising aggregate commands would avoid concurrency errors. Application state could be more easily kept in memory, avoiding lots of reads. Thespian seems promising, since it has "convention" mode that runs across a cluster of nodes. Would like to write an application class that uses an actor framework: the repository could return aggregate actors. Could scale by application partition, with each partition having an application actor.
Hi John!
I'm working on a new storage layer for the library using the Django ORM, for easy Django integration. However, I find it hard to do in the current setup.
Currently, when we save a domain event we go through DomainEvent -> StoredEvent -> DB implementation. It works great if every DB implementation uses a schema that follows the StoredEvent schema. However, if I'd like to have a different schema, I need to somehow "deserialize" back to the domain event, since many DomainEvent fields are combined into one in StoredEvent.
My proposal is to move the abstraction a layer up, from StoredEventRepository to EventStore (and maybe rename "EventStore" to "EventRepo" to follow the repository pattern). Then the developer has the flexibility to transcode the domain event into whatever schema they like, and the downstream can be the default implementation of the repo.
The lib works great in most cases, as long as we use it as-is and don't care about the actual storage. However, there are cases where we do care about how the data is stored.
Let me know what you think, thank you!
Here is an example of the code I have to write to implement a different schema:
def write_version_and_event(self, new_stored_event, new_entity_version=None, max_retries=3, artificial_failure_rate=0):
    domain_event = self.deserialize(new_stored_event)  # :(
    dt = datetime.datetime.fromtimestamp(timestamp_from_uuid(new_stored_event.event_id))
    Event.objects.create(
        event_id=domain_event.event_id,
        event_type=type(domain_event).__name__,
        event_data=new_stored_event.event_attrs,
        aggregate_id=domain_event.entity_id,
        aggregate_type=id_prefix_from_event(domain_event),
        aggregate_version=new_entity_version,
        create_date=make_aware_if_needed(dt),
    )
# models.py
class Event(models.Model):
    event_id = models.CharField(max_length=255, primary_key=True)
    event_type = models.CharField(max_length=255)
    event_data = models.TextField()
    aggregate_id = models.CharField(max_length=255, db_index=True)
    aggregate_type = models.CharField(max_length=255)
    aggregate_version = models.IntegerField()
    metadata = models.TextField()
    create_date = models.DateTimeField(db_index=True)

    class Meta:
        unique_together = ('aggregate_id', 'aggregate_version',)
I think the EventStore is a good line of abstraction. It takes in domain events on write and returns domain events on read.
Bo
Having the following aggregate root:
class Order(AggregateRoot):
    class SomethingHappened(AggregateRoot.Event):
        pass
Subscribing to the event:
@subscribe_to(Order.SomethingHappened)
def on_something(e):
    pass
Subscriber never gets called.
After debugging subscribe_to, I see that event_class is <class 'domain.order.SomethingHappened'> while the event var is a list, so event_type_predicate is returning False.
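A hedged workaround, assuming events are sometimes published batched in a list rather than one at a time (the predicate and class below are illustrative stand-ins, not the library's API):

```python
class SomethingHappened:
    """Stand-in for Order.SomethingHappened."""

def is_something_happened(event) -> bool:
    """Match a single event or a non-empty batch of events of the right type."""
    if isinstance(event, (list, tuple)):
        return bool(event) and all(isinstance(e, SomethingHappened) for e in event)
    return isinstance(event, SomethingHappened)
```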
Hey @johnbywater
First of all a big thanks, this library is awesome!
I've got a question about rebuilding the aggregate root.
I've got this simple hangman web API, and I make different calls to that API for guessing letters. I just noticed that every time I make a call to the API, I get a new aggregate root ID, with the consequence that I can never guess the word, letters, etc.
Is there a way to rebuild the aggregate to its latest state?
Already many thanks in advance!
I've been struggling with the simple use case of needing to store an email lookup alongside the event store.
Maybe I'm thinking of this wrong but in my mind, the simplest way to store an account email would be an "email" table that enforces uniqueness of the email. This would be a plain-jane table with an entity_id primary key and an email column.
However, I'm hitting escalating complexity trying either to work with the eventsourcing global scoped session or to work around it with a separate session (e.g. records are colliding, or I'm hitting a locked database).
What is the recommended way to do something like an email store? The email is stored in the event store as well, but it's in a TEXT blob, and sorting through versions and then doing a full-text search seems like a huge over-invention when standard RDBMS tables do this "for free".
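One way to sidestep the session contention is to keep the lookup table on its own connection and let the database's unique constraint do the work, updating it from a projection policy. A minimal sketch with sqlite standing in for the RDBMS (table and function names are illustrative):

```python
import sqlite3

# In practice this would be a separate session/engine from the event store's.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE emails (entity_id TEXT PRIMARY KEY, email TEXT UNIQUE NOT NULL)"
)

def register_email(entity_id: str, email: str) -> bool:
    """Insert the mapping; return False if the email is already taken."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO emails VALUES (?, ?)", (entity_id, email))
        return True
    except sqlite3.IntegrityError:
        return False
```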
Is there an example somewhere of how to use it with EventStore (geteventstore.com), i.e. hydrating an aggregate from events stored in EventStore?
I wonder, in the case of rebuilding all projections, is there an easy way to get all aggregates and rebuild their projections? We do have repository.get_entity(), but no repository.get_entities().
Specifically, I'm interested in how I would use the app in a django/uwsgi stack. I'm not sure of the thread safety, or where the context manager should be set up.
my ideas are:
Hey @renou @julianpistorius @subokita @yarbelkgazia @rogoman @pouledodue @danielccyr @jellevandehaterd @leonh @lukaszb!
In an attempt to improve communications around this project (perhaps we could organise meetings about this project) there's a new Slack channel: https://eventsourcinginpython.slack.com
There's a sign up page which is available for 7 days. Please sign up here:
https://eventsourcinginpython.slack.com/join/shared_invite/MjA2ODA4MTI0OTYyLTE0OTg5ODM1MDMtZjUyNTI1ZWIyZg
Sorry this has taken so long!
There are five approaches... it might be useful for the library to support them.
The library's Aggregate classes should be renamed to be just "entity" classes. Then the Aggregate classes in the docs should be replicated in the library, so that there is a library class that reflects and perhaps refines the style of saving several pending events introduced in the docs.
Hello,
I am new to your library.
I want to create an event-sourced aggregate with a service injected into the constructor of the aggregate.
Say I want to create an aggregate that handles a command containing a password.
I need to hash the password before constructing the event.
Something like this:
class Registration(AggregateRoot):
    def __init__(self, *args, **kwargs):
        # injected
        self._id = kwargs.pop('id')
        self._hash_password = kwargs.pop('hash_password')
        super().__init__(*args, **kwargs)
        # aggregate state
        self._hashed_password = None

    def set_password(self, **kwargs):
        if self._hashed_password:
            raise InvalidOperation
        self.__trigger_event__(
            PasswordSet,
            **PasswordSet.validate({
                'id': self._id,
                'hashed_password': self._hash_password(kwargs.pop('password')),
            })
        )

    def on_event(self, event):
        if isinstance(event, PasswordSet):
            self._hashed_password = event.hashed_password
As you can see, here I inject a "hash_password" service, which is actually a Python method. However, when I try to construct the aggregate:
test_registration_aggregate = Registration.__create__(
    id='abcd',
    hash_password=lambda x: x,
)
I get the error:
Traceback (most recent call last):
File "/home/runner/app/user_registrations/tests/practice.py", line 20, in setUp
hash_password=lambda: 'new_password',
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/domain/model/entity.py", line 229, in __create__
return super(EntityWithHashchain, cls).__create__(*args, **kwargs)
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/domain/model/entity.py", line 63, in __create__
**kwargs
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/domain/model/events.py", line 167, in __init__
self.__dict__['__event_hash__'] = self.__hash_object__(self.__dict__)
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/domain/model/events.py", line 145, in __hash_object__
return hash_object(cls.__json_encoder_class__, obj)
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/utils/hashing.py", line 12, in hash_object
cls=json_encoder_class,
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/utils/transcoding.py", line 133, in json_dumps
cls=cls,
File "/usr/local/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/utils/transcoding.py", line 23, in iterencode
return super(ObjectJSONEncoder, self).iterencode(o, _one_shot=_one_shot)
File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/utils/transcoding.py", line 65, in default
return JSONEncoder.default(self, obj)
File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type builtin_function_or_method is not JSON serializable
So I assume that whatever is passed into __create__ must be serializable; but how else do I pass domain services into aggregates?
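A common workaround is to keep the non-serializable service off the event payload entirely, for example by attaching it at the class level at wiring time, so only plain data goes through the Created event. A hedged sketch of the idea, not the library's prescribed pattern (the class below is an illustrative stand-in, not a real AggregateRoot):

```python
import hashlib

class Registration:
    """Illustrative stand-in for the aggregate; the service lives on the class."""

    hash_password = staticmethod(lambda pw: pw)  # replaced at application start-up

    @classmethod
    def configure(cls, hash_password):
        # Inject the service once, at wiring time, instead of passing a
        # function through the (serialized) event attributes.
        cls.hash_password = staticmethod(hash_password)

    def __init__(self, registration_id):
        self.id = registration_id
        self.hashed_password = None

    def set_password(self, password):
        # Only the already-hashed string would go into the event payload.
        self.hashed_password = type(self).hash_password(password)
```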
Hi John! Is there any ability to discard old snapshots and rebuild new ones? In the case of a change to the mutator function, the old snapshots become invalid and need to be discarded and rebuilt.
I've tried to contribute to the project to fix some issues, and found that some unit tests fail.
Environment: Windows 10 x64, Python 3.7.0
Run command: python -m unittest
Failed tests:
eventsourcing.tests.core_tests.test_events.TestEventWithTimestamp
AssertionError: Decimal('1563540932.143897') not greater than Decimal('1563540932.143897')
eventsourcing.tests.core_tests.test_events.TestEventWithTimestampAndOriginatorID
AssertionError: Decimal('1563540932.144888') not less than Decimal('1563540932.144888')
And sometimes there is a flaky assertion in this test:
eventsourcing.tests.core_tests.test_utils.TestUtils
AssertionError: Decimal('1563542114.857965') not greater than Decimal('1563542114.857965')
Is it OK?
I update my domain model state in the AggregateRoot by using mutators in events, so my state is consistent in the domain model.
I find quite often that the read model which I rebuild in projections is exactly the same as the state I have in the domain model. But because I am subscribed to an event, I have to do roughly the same logic of calculation and assignment of state.
Ideally my projection would just dump the domain model state into a read store (ES in my particular case) and deal with some versioning conflicts.
I'm not sure how to structure it.
# domain
class SomethingHappend(Event, AggregateRoot.Event):
    def mutate(self, state):
        state.calculated = self.data * 100

# projection
@subscribe_to(SomethingHappend)
def on_something(event):
    state = get_from_read_db(event.originator_id)
    state.calculated = event.data * 100
    save_to_read_db(state)
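One way to avoid duplicating the calculation above is to factor it into a single function that both the aggregate's mutator and the projection handler delegate to (the name below is illustrative):

```python
def apply_something_happened(state: dict, data: int) -> dict:
    """Single source of truth for this state transition, shared by the
    domain-model mutator and the read-model projection."""
    state["calculated"] = data * 100
    return state
```

With this, the read model cannot drift from the domain model's logic, since both call the same transition function.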
After playing about with Robert Smallshire's kanban example, I was very interested in trying out your more generic event sourcing project.
However, I know nothing of the Cassandra DB. I ran the test suite and noticed tests were failing due to my missing Cassandra dependency. I tried to split the Cassandra tests and SQLAlchemy tests into different modules, so that my lack of Cassandra was not hampering my ability to test SQLAlchemy:
leonh@919fad7
I've been reading through the documentation and I'm quite keen to try using the library but one thing caught my eye, which was the references to having to be careful only to instantiate one copy of the application.
"If your eventsourcing application object has any policies, for example if it has a persistence policy that will persist events whenever they are published, then constructing more than one instance of the application causes the policy event handlers to be subscribed more than once, so for example more than one attempt will be made to save each event, which won't work."
"When deploying an event sourcing application with Django, just remember that there must only be one instance of the application in any given process, otherwise its subscribers will be registered too many times."
Is this duplication in publish and subscribe the only problem that you are aware of?
From a superficial reading of the code, it looks as if it would be possible to fix that, as you are using a global subscribe method and a global publish method, which are both hooked to a global store of event handlers. Could these be moved to the Application object? Or is there an architectural reason for having it this way?
It certainly feels like it would make things cleaner and easier by not having these globals, but I realise I may be missing something from the bigger picture (I'm not deeply experienced with event sourcing in general).
I'd be happy to make the changes and do a PR if you think it's a good idea.
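For illustration, an instance-scoped publish/subscribe registry along these lines might look like the following (a sketch, not the library's implementation):

```python
class EventBus:
    """Per-application pub/sub registry, replacing module-level globals."""

    def __init__(self):
        self._handlers = []

    def subscribe(self, predicate, handler):
        self._handlers.append((predicate, handler))

    def publish(self, event):
        for predicate, handler in self._handlers:
            if predicate(event):
                handler(event)
```

Each application instance would own its own bus, so constructing two applications would no longer double-register the persistence policy.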
Thanks
Ed
Hi John!
Do you want to start a slack channel for this project? For project discussion. :)
Bo
I noticed that in the code for EntityWithHashchain.Created, the super call does not pass the class of self, but of its parent:
def __mutate__(self, entity_class=None):
    # Call super method.
    obj = super(EntityWithHashchain.Event, self).__mutate__(entity_class)
    # Set entity head from event hash.
    obj.__head__ = self.__event_hash__
    return obj
This means the __mutate__ of EntityWithHashchain.Event is skipped. In fact, fixing the super call makes the example code in the quickstart guide fail. Probably the fix should be to remove the method.
With two entity classes and two repositories, one repository for each entity class, the ID of an aggregate of one type will exist in the repository for the other class of entity. It probably shouldn't.
Solution: either map IDs into a namespace (which gets messy), or expand the stored event classes to have a sequence type (backwards incompatible changes).
Compromises scalability at a certain level, but makes getting all events quite simple. Should be an option, and an earlier version had this. So reintroduce an active record class that does this, and revisit the method to get all events, making sure it returns everything in order.
Hi, thanks for this excellent library.
I was wondering what I would do if I wanted to restructure my codebase and move a domain entity class from one module to another. As far as I can tell, the eventsourcing library uses get_topic to determine the topic used in the DB to store the type of an event or entity.
Is there a way to swap the implementation of get_topic and retrieve_topic (maybe via dependency injection) in order to have a custom mapping in place that keeps backwards compatibility?
As an example, in the original implementation the entity World lives in myapp.domain.world.World and therefore gets the topic myapp.domain.world#World. Now, when relocating to myapp2.world.World, the topic becomes myapp2.world#World, which is a mismatch.
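Until such a hook exists in the library, one pattern is a substitution table consulted when old topics are resolved back to classes. A sketch, reusing the example's hypothetical module names:

```python
# Old topic -> new topic, maintained as classes move between modules.
TOPIC_MAP = {
    "myapp.domain.world#World": "myapp2.world#World",
}

def remap_topic(topic: str) -> str:
    """Translate a stored topic to its current location before resolving it."""
    return TOPIC_MAP.get(topic, topic)
```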