bakwc / pysyncobj

A library for replicating your Python class across multiple servers, based on the Raft protocol

License: MIT License

Python 100.00%
raft raft-protocol distributed-systems fault-tolerance replication python

pysyncobj's Introduction

PySyncObj


PySyncObj is a Python library for building fault-tolerant distributed systems. It provides the ability to replicate your application data between multiple servers. It has the following features:

  • Raft protocol for leader election and log replication
  • Log compaction - it uses fork for copy-on-write while serializing data to disk
  • Dynamic membership changes - you can do it with the syncobj_admin utility or directly from your code
  • Zero-downtime deploy - no need to stop the cluster to update nodes
  • In-memory and on-disk serialization - you can use in-memory mode for small data and on-disk mode for big ones
  • Encryption - you can set a password and use it over an external network
  • Python2 and Python3 on Linux, macOS and Windows - no dependencies required (only optional ones, e.g. cryptography)
  • Configurable event loop - it can work in a separate thread with its own event loop, or you can call the tick function from your own loop (see the sketch below)
  • Convenient interface - you can easily transform an arbitrary class into a replicated one (see the example below).
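
For example, the configurable event loop from the list above can be driven manually. The following is a minimal sketch assuming a SyncObjConf(autoTick=False) option and a public doTick method (the feature list calls this onTick; verify the exact name against the API documentation for your version):

from pysyncobj import SyncObj, SyncObjConf

conf = SyncObjConf(autoTick=False)  # assumed option: do not start the internal tick thread
syncObj = SyncObj('serverA:4321', ['serverB:4321', 'serverC:4321'], conf=conf)

while True:
	# drive pysyncobj from your own loop (method name assumed; see API docs)
	syncObj.doTick(0.05)
	# ... do your own per-iteration work here ...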

Content

Install

PySyncObj itself:

pip install pysyncobj

Cryptography for encryption (optional):

pip install cryptography

Usage

Suppose you have a class that implements a counter:

class MyCounter(object):
	def __init__(self):
		self.__counter = 0

	def incCounter(self):
		self.__counter += 1

	def getCounter(self):
		return self.__counter

So, to transform your class into a replicated one:

  • Inherit it from SyncObj
  • Initialize SyncObj with a self address and a list of partner addresses. E.g. if you have serverA, serverB and serverC and want to use port 4321, you should use the self address serverA:4321 with partners [serverB:4321, serverC:4321] for your application running at serverA; the self address serverB:4321 with partners [serverA:4321, serverC:4321] for your application at serverB; and the self address serverC:4321 with partners [serverA:4321, serverB:4321] for the app at serverC.
  • Mark all methods that modify your class fields with the @replicated decorator. Your final class will then look like:
class MyCounter(SyncObj):
	def __init__(self):
		super(MyCounter, self).__init__('serverA:4321', ['serverB:4321', 'serverC:4321'])
		self.__counter = 0

	@replicated
	def incCounter(self):
		self.__counter += 1

	def getCounter(self):
		return self.__counter

And that's all! Now you can call incCounter on serverA and check the counter value on serverB - they will be synchronized.
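
Optional behaviors from the feature list above - encryption, on-disk serialization and dynamic membership changes - are configured through a SyncObjConf object passed to the SyncObj constructor. Below is a minimal sketch; the parameter names (password, journalFile, fullDumpFile, dynamicMembershipChange) and the conf keyword are assumptions based on the library's documentation and the issues further down, so check the API docs for your version:

from pysyncobj import SyncObj, SyncObjConf

conf = SyncObjConf(
	password='mySecretPassword',   # assumed option: encrypt traffic, e.g. over an external network
	journalFile='./journal.bin',   # assumed option: keep the raft journal on disk instead of only in memory
	fullDumpFile='./dump.bin',     # assumed option: file used for full dumps during log compaction
	dynamicMembershipChange=True,  # assumed option: allow adding/removing cluster nodes at runtime
)
syncObj = SyncObj('serverA:4321', ['serverB:4321', 'serverC:4321'], conf=conf)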

Batteries

If you just need some distributed data structures - try the built-in "batteries". A few examples:

Counter & Dict

from pysyncobj import SyncObj
from pysyncobj.batteries import ReplCounter, ReplDict

counter1 = ReplCounter()
counter2 = ReplCounter()
dict1 = ReplDict()
syncObj = SyncObj('serverA:4321', ['serverB:4321', 'serverC:4321'], consumers=[counter1, counter2, dict1])

counter1.set(42, sync=True) # set the initial value to 42; 'sync' means the operation is blocking
counter1.add(10, sync=True) # add 10 to the counter value
counter2.inc(sync=True) # increment the counter value by one
dict1.set('testKey1', 'testValue1', sync=True)
dict1['testKey2'] = 'testValue2' # basically the same as the previous line, but asynchronous (non-blocking)
print(counter1, counter2, dict1['testKey1'], dict1.get('testKey2'))
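
Replicated calls also accept a callback keyword argument for asynchronous completion and error handling (the same mechanism the issue reports below rely on). A minimal sketch, assuming the callback receives the result and an error code as its two arguments:

def onSet(result, error):
	# called once the cluster has processed the command; inspect 'error' for failures
	print('set finished:', result, error)

dict1.set('testKey3', 'testValue3', callback=onSet) # non-blocking; completion is reported via onSet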

Lock

from pysyncobj import SyncObj
from pysyncobj.batteries import ReplLockManager

lockManager = ReplLockManager(autoUnlockTime=75) # the lock will be released if the connection is dropped for more than 75 seconds
syncObj = SyncObj('serverA:4321', ['serverB:4321', 'serverC:4321'], consumers=[lockManager])
if lockManager.tryAcquire('testLockName', sync=True):
  # do some actions
  lockManager.release('testLockName')
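
In practice you will usually want the lock released even if the protected actions raise an exception; here is the same example wrapped in try/finally:

if lockManager.tryAcquire('testLockName', sync=True):
	try:
		pass # do some actions while holding the lock
	finally:
		lockManager.release('testLockName')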

You can look at the batteries implementation, examples and unit tests for more use cases. There is also API documentation. Feel free to create proposals and/or pull requests with new batteries, features, etc. Join our gitter chat if you have any questions.

Performance

15K RPS on 3 nodes; 14K RPS on 7 nodes; 22K RPS with 10-byte requests; 5K RPS with 20 KB requests.

Publications

pysyncobj's People

Contributors

aaliddell, bakwc, betanummeric, chadlung, cyberdem0n, eguven, ellipses, fabaff, fengxuduke, gitter-badger, justanotherarchivist, mcassaniti, roninsc2, sandwichs-del, schmidtfx, tangruize, troyhy, weii41392, werat


pysyncobj's Issues

Not possible to mix nodes with In-memory and File serializers...

It happens due to some incompatibility between zlib and gzip.
I think it makes sense to unify the compression algorithm, i.e. either use zlib or gzip when doing serialization during log compaction.
Not sure whether it is a bug or feature though.

I can prepare a fix if you want.

Windows compatibility?

The documentation says the library is supported on Windows, but when I try to install PySyncObj on Windows (Python 2.7.x) I see the following error:

      . . .
      File "pysyncobj\__init__.py", line 1, in <module>
        from .syncobj import SyncObj, SyncObjException, SyncObjConf, replicated, replicated_sync,\
      File "pysyncobj\syncobj.py", line 29, in <module>
        from .pipe_notifier import PipeNotifier
      File "pysyncobj\pipe_notifier.py", line 2, in <module>
        import fcntl
    ImportError: No module named fcntl

My understanding is that fcntl is only supported on Unix-like platforms.

Assertion error when node reconnects

If I run two instances of counter, and kill and restart one while the other is operating, I can fairly reliably get this error:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/local/lib/python2.7/dist-packages/pysyncobj/syncobj.py", line 177, in _autoTickThread
    self._onTick(self.__conf.autoTickPeriod)
  File "/usr/local/lib/python2.7/dist-packages/pysyncobj/syncobj.py", line 247, in _onTick
    self.__poller.poll(timeToWait)
  File "/usr/local/lib/python2.7/dist-packages/pysyncobj/poller.py", line 96, in poll
    self.__descrToCallbacks[descr](descr, eventMask)
  File "/usr/local/lib/python2.7/dist-packages/pysyncobj/node.py", line 79, in __processConnection
    assert descr == self.__conn.fileno()
AssertionError

A great project idea though!

Unable to reliably run three counters on localhost

When I try to run three instances of counters.py:

python counters.py 2000 2001 2002
python counters.py 2001 2000 2002
python counters.py 2002 2000 2001

killing and restarting the processes randomly I get:

[EXCEPTION] (/usr/local/lib/python2.7/dist-packages/pysyncobj/syncobj.py, 104):
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/pysyncobj/syncobj.py", line 95, in __initInTickThread
    self.__bind()
  File "/usr/local/lib/python2.7/dist-packages/pysyncobj/syncobj.py", line 398, in __bind
    self.__socket.bind((host, int(port)))
  File "/usr/lib/python2.7/socket.py", line 228, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 98] Address already in use

I'm wondering if it's just that you need to close the listener more reliably?

onReady

We need an onReady method, called after the SyncObj is created, once the initial synchronization with the leader has completed.

Cannot Create Leader

Hi bakwc,

This may be a stupid question but I have been stuck on this for some time:
I am trying to run the example counter.py.
I am using self_port: localhost:4000
and partner ports: localhost:4001 localhost:4002 localhost:4003

However, when I run the code, it gets stuck at the leader election part.
When I run o._getLeader(), it keeps returning None.

Do you know what's happening here?

Thanks a ton

SSL support

Allowing optional SSL support on top of the existing encryption.

how to scale with the log

hi experts,
As far as I know from experimenting, when conf.journalFile is not set, the log records are kept in memory, and when peers restart they synchronize with the other peers to recover.

So, could I clean the logs periodically in case they get too large?

regards
Jie

How to use disk to store data

Does pysyncobj support storing data on disk, for the case where the data is too big to fit in RAM?

I modified the example code and used a bsddb object to store data, instead of self.__data, which is a dict.

It works well when I just read and insert some key-values. But I am afraid it cannot replicate a snapshot if I add a new node. What is the best practice for implementing this with PySyncObj?

Cluster of machines on different networks

How would I use this to create clusters with machines on different networks? For the counter.py example, I'd like to do something like python counter.py localhost:8080 123.123.123.123:8080. Also, how can I use this to create and join arbitrary clusters?

broken leader re-election after killing most of cluster nodes

I was randomly killing nodes of a 3-node cluster to verify an issue with leader re-election, checking the status using

➜  wspace cat clusterstatus.sh 
syncobj_admin -conn 127.0.0.1:6000 -status
syncobj_admin -conn 127.0.0.1:6001 -status
syncobj_admin -conn 127.0.0.1:6002 -status

➜  wspace bash clusterstatus.sh | egrep 'leader:|self:'
leader: localhost:6000
self: localhost:6000
leader: localhost:6000
self: localhost:6001
leader: localhost:6000
self: localhost:6002
➜  wspace bash clusterstatus.sh | egrep 'leself:|leader:'  
leader: localhost:6000
self: localhost:6000
leader: localhost:6000
self: localhost:6001
leader: localhost:6000
self: localhost:6002
➜  wspace bash clusterstatus.sh | egrep 'self:|leader:'
leader: localhost:6000
self: localhost:6000
leader: localhost:6000
self: localhost:6001
leader: localhost:6000
self: localhost:6002
➜  wspace bash clusterstatus.sh | egrep 'self:|leader:'
leader: localhost:6001
self: localhost:6001
leader: localhost:6001
self: localhost:6002

Here the leader should have been set to 6001, but I got a None value instead:

➜  wspace bash clusterstatus.sh | egrep 'self:|leader:'
leader: None
self: localhost:6000
leader: localhost:6001
self: localhost:6001
leader: localhost:6001
self: localhost:6002
➜  wspace bash clusterstatus.sh | egrep 'self:|leader:'
leader: localhost:6001
self: localhost:6000
leader: localhost:6001
self: localhost:6001
leader: localhost:6001
self: localhost:6002
➜  wspace bash clusterstatus.sh | egrep 'self:|leader:'
leader: localhost:6002
self: localhost:6000
leader: localhost:6002
self: localhost:6001
leader: localhost:6002
self: localhost:6002
➜  wspace bash clusterstatus.sh | egrep 'self:|leader:'
leader: localhost:6001
self: localhost:6000
leader: localhost:6001
self: localhost:6001

And I reached this very interesting state where node 6000 has leader 6001, but that leader isn't even active:

➜  wspace bash clusterstatus.sh | egrep 'self:|leader:'
leader: localhost:6001
self: localhost:6000

it was fixed after launching 6002

➜  wspace bash clusterstatus.sh | egrep 'self:|leader:'
leader: localhost:6000
self: localhost:6000
leader: localhost:6000
self: localhost:6002

Read-only nodes

Need to add nodes that don't participate in leader elections and only receive updates.

recursively calling replicated method.

hi bakwc,
could you help me with two more questions below:

1. Could I call a replicated method from within another replicated method, i.e.

@replicated
def A():
	B()

@replicated
def B():
	pass

Is this legitimate?

2. How could I get info about the callback, and whether the invocation is synchronous, from inside a @replicated-decorated method? Is it possible to do that?

regards
Jie

Auto-add new dynamic nodes

For now, to add a new dynamic node to a cluster you need to explicitly send an add command before launching that node (from one of the existing nodes or using syncobj_admin). We need to add the ability to add a node to a cluster by just launching it with some initial addresses to connect to. The node should connect to the cluster as a read-only one, sync the journal, and then add itself as a raft member.

change notification

I'd like to have some kind of notification mechanism that I can tap into when synchronised objects change. I suppose the PipeNotifier may be something that does this. Unfortunately there is no documentation or example available for it.

If the PipeNotifier is not the right way, how would one be able to trigger a mechanism (e.g. a callback function) when a particular synchronised object managed by the SyncObj is updated? Maybe an example for this (I presume relatively common) use case would be good to provide, too.

separate network connection code from raft proper

I'm trying to build a fuzz checker that would let us run many millions of simulated runs cheaply. I started out with lots of processes and signals (STOP/CONTINUE for network latency, KILL for dead machines), but the network code is a lot of overhead, so I'm wondering whether it would be better to just simulate the network in a single process.

It might also pay off to use UDP, given we already assume unreliable transfer.

So it might make sense to pull out a class that provides listen/accept/send/recv.

Dynamic reconfiguration

How about adding the ability to change the cluster configuration on the fly? I know that raft handles this issue very nicely: it uses itself to reach consensus about the new configuration of the cluster.

Pre-configured cluster with a single node start

Having a config where we start a cluster node-by-node (or, for example, one node succeeded to start and the others failed), we need to configure SyncObj in such a way that it knows all the nodes at start time. But if only one node has started, it does not fall back to a single-node cluster. How can this be enforced when __raftNextIndex is not empty?

Examples:
This is getState() output after 10 seconds of a node's operation, in the case where another node was pre-configured at startup (127.0.0.1:9001):

{
  "version": "0.3.3",
  "revision": "1899fe752bde334787dbfa54bb51bbd9fcf2826c",
  "self": "127.0.0.1:9000",
  "state": 0,
  "leader": null,
  "partner_nodes_count": 1,
  "partner_node_status_server_127.0.0.1:9001": 0,
  "readonly_nodes_count": 0,
  "unknown_connections_count": 0,
  "log_len": 1,
  "last_applied": 1,
  "commit_idx": 1,
  "raft_term": 0,
  "next_node_idx_count": 1,
  "next_node_idx_server_127.0.0.1:9001": 2,
  "match_idx_count": 1,
  "match_idx_server_127.0.0.1:9001": 0,
  "leader_commit_idx": null,
  "uptime": 10,
  "self_code_version": 0,
  "enabled_code_version": 0
}

And this is the same snapshot 10 seconds later, but with the initial nodes config empty at startup:

{
  "version": "0.3.3",
  "revision": "1899fe752bde334787dbfa54bb51bbd9fcf2826c",
  "self": "127.0.0.1:9000",
  "state": 2,
  "leader": "127.0.0.1:9000",
  "partner_nodes_count": 0,
  "readonly_nodes_count": 0,
  "unknown_connections_count": 0,
  "log_len": 2,
  "last_applied": 2,
  "commit_idx": 2,
  "raft_term": 1,
  "next_node_idx_count": 0,
  "match_idx_count": 0,
  "leader_commit_idx": 2,
  "uptime": 10,
  "self_code_version": 0,
  "enabled_code_version": 0
}

As you can see, the second case leads to successful leadership acquisition, since there are no other nodes to connect to and run a formal election procedure with.

How could one force a node to fall back to single-node cluster mechanics when the other configured nodes cannot be connected to?

logCompaction data restriction

As of https://github.com/bakwc/PySyncObj/blob/master/pysyncobj/syncobj.py#L1282, logCompaction dumps all the properties of a SyncObj instance that are not listed in SyncObj.__properies (note the typo). That dataset, being collected via self.__properies, surely includes the subclass's own properties.

Since the base SyncObj class is designed to be subclassed, with some business logic implemented on top of it, there may well be many properties related to that logic. In my case they include some data receiving/emitting sockets. Since sockets can't be pickled or otherwise serialized, this leads to a logCompaction failure.

  1. Note the attribute typo.
  2. Can the data included in the logCompaction mechanism be restricted programmatically so that some data is excluded? Another (and I think preferable) way: restrict logCompaction to collect only the data fields that are explicitly registered to be included.

For now I have to use the ugly workaround of self.__dict__['_SyncObj__properies'].update({'myfield1', 'myfield2', ...})

Questions about using the library

Hello. I am trying to figure out how to use the project based on your examples. Could you clarify a few points about using the library:

  1. As I understand from the sources, synchronization is achieved by calling the same methods with the same parameters on all nodes. In that case, how is the initial state of a dynamically added node handled? I.e., if in the counter example we count up to, say, 42 and then connect a new node, what will the counter value be on the new node: 0 or 42? If it is 42, how does it find out about it?

  2. In the kvstorage_http.py example I could not solve this problem: if both nodes are up and then one is shut down, a newly added value will not be added to the storage (which is logical), but the client still receives a 201 Created response, even though logically the response code should be from the 50X family. I tried to apply a different decorator to the methods, using replicated_sync instead of replicated, but got an error:

Traceback (most recent call last):
  File "kvstorage_http.py", line 13, in <module>
    class KVStorage(SyncObj):
  File "kvstorage_http.py", line 21, in KVStorage
    @replicated_sync
NameError: name 'replicated_sync' is not defined

Did I apply the decorator correctly?

class TestObj(SyncObj):

    def __init__(self, selfNodeAddr, otherNodeAddrs):
        super(TestObj, self).__init__(selfNodeAddr, otherNodeAddrs)
        self.__counter = 0

    @replicated_sync
    def incCounter(self):
        self.__counter += 1
        return self.__counter

    @replicated_sync
    def addValue(self, value, cn):
        self.__counter += value
        return self.__counter, cn

    def getCounter(self):
        return self.__counter

Please also suggest a way to find out whether a node has accepted a change passed to it.

  1. How can I determine whether the quorum is available? I found the isReady function, which reports whether the node is synchronized. But how do I find out whether more than (N + 1) / 2 of all cluster nodes are synchronized, i.e. whether the quorum is available? In the lock.py example it is not entirely clear how the lock guarantee is achieved in the acquire method. When calling this method, if the client gets True, it assumes the resource is locked. But when the quorum is unavailable, acquire is not replicated to the cluster and the client does not find out about it (the call is asynchronous), so it ends up mistakenly believing it owns the resource?

Thanks in advance for your help.

Make callers block until the command has been accepted by the quorum

As far as I understand, the callback mechanism for calls to a replicated method can be used to block until the quorum has accepted the change. As an example, the counter.py code uses the callback mechanism to print the updated value. Furthermore, the callback can also be used for error handling, e.g. if no leader is present.

Currently, I'm achieving the blocking through the following mechanism:

@replicated
def __setValue(self, key, value):
  self.__data[key] = value

def setValue(self, key, value):
  class local:
    result = None
    error = None
  def set_callback(res, err, event):
    local.result = res
    local.error = err
    event.set()
  event = threading.Event()
  self.__setValue(key, value, callback = partial(set_callback, event = event))
  # wait for the callback to be called
  event.wait()
  # callback has been called - now can do post processing
  return local.result

Thinking about making this easier to use, a future-like mechanism would be very helpful. Here is some sample code:

def setValue(self, key, value):
  future = self.__setValue(key, value, waitForQuorum = True)
  res, err = future.get() # potentially blocking call
  return res

Furthermore, it would be nice if it could work with asyncio as well, using a generator style in order to avoid having the caller thread block all the time:

def setValue(self, key, value):
  yield from self.__setValue(key, value, waitForQuorum = True)

Write journal to disk

For now the journal is stored only in memory. We need to write all entries to a file, not only the full dump.

How to replicate dynamic nested dictionaries

I am evaluating pysyncobj for state replication in our cluster. We have a dictionary of resource states, where the state corresponding to each resource is itself a dictionary. For example:
resource = {  # Sample resource state
	'id': 'resourceId1',
	'maxCapacity': 100,
	'allocatedCapacity': 20,
	'resourceCapabilities': {
	}
}
resources = {  # Dictionary of resource states
	'id1': resource1,
	'id2': resource2
}

Background about the service:
We have a resource allocation service to which resources register dynamically when they come up, providing their capabilities. This state needs to be replicated across the cluster. When one of the servers gets a job, it allocates the least-loaded resource that can handle the job and also updates the resource's capacity utilization.

I have created resources as an instance of ReplDict. It is added as a consumer while creating the SyncObj. When a resource registers with the service dynamically, I create a resource object which again is an instance of ReplDict. Then I add this object into the resources dictionary with the key being the resource id. When I do this, it fails with an exception that looks like this:

"/usr/local/lib/python3.6/site-packages/pysyncobj/syncobj.py", line 1387, in newFunc
funcName = self._syncObj._getFuncName((consumerId, func.name))
AttributeError: 'NoneType' object has no attribute '_getFuncName'

To me it looks like you can only update a consumer that has been registered while creating the SyncObj. Is that the case? Is there a way to replicate a dynamically updated nested dictionary?

Cannot detect that the tcp address is in use.

The syncobj constructor just starts an async thread that polls while starting the tcp server, and if the tcp port is in use, it just loops forever.

Since in my project I want to start a cluster (forked processes) in a daemon process, and I want the nodes to pick an appropriate port automatically, I need an exception when creating the tcp server fails, so the node knows it should switch to another port.

I wonder if you are going to support this.
Thank you for this wonderful project.

onLeader

We need an onLeader method, called after a leader appears, changes, or is lost.

Tree-like replication for read-only nodes

Instead of pulling the commands that go into the journal from the master, we should automatically build a tree-like structure of nodes, based on ping, and pull commands from parent nodes.
In addition, there should be an option to automatically disconnect from the master if there have been no write commands for a long time (i.e., disconnect upon receiving a new command).

timeout for async mode

Currently the timeout argument for replicated functions works only in sync mode; we need to add it for the async one.

Traceback on tick

On the latest release I'm hitting the following issue after modifying the sync object.

Traceback (most recent call last):
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/Library/Python/2.7/site-packages/pysyncobj/syncobj.py", line 228, in _autoTickThread
    self._onTick(self.__conf.autoTickPeriod)
  File "/Library/Python/2.7/site-packages/pysyncobj/syncobj.py", line 300, in _onTick
    self.__tryLogCompaction()
  File "/Library/Python/2.7/site-packages/pysyncobj/syncobj.py", line 783, in __tryLogCompaction
    cluster = self.__otherNodesAddrs + [self.__selfNodeAddr]
TypeError: can only concatenate tuple (not "list") to tuple

Add encryption

Encryption is required to use PySyncObj not only in an internal network (single data center) but over the internet too.

Check for corrupted dump files

It seems there could be a case where dump files get corrupted. Need to check.
@schmidtfx could you please provide more details? Which file is corrupted, the journal or the full dump?

multithread issue

hi bakwc,
could you please help me with the questions below:
1. Is a 'replicated'-decorated function thread safe, i.e. can it be shared among threads?
2. If syncobj is in async mode (a callback is set), can I detect a timeout somehow?
3. If I call A() in sync mode and call A() in async mode from another thread, do these two invocations conflict?

regards
Jie
