libraft's Issues

Fail all outstanding command futures when a leader loses its leadership

Currently command futures issued by a leader are failed only when the log entry they pertain to is known to have been overwritten. This prevents clients from responding to failures quickly. A more pessimistic option is to fail all outstanding futures immediately when the leader loses its leadership, so that clients can run their error-handling logic sooner.
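A minimal sketch of the pessimistic approach using Guava's SettableFuture; the OutstandingCommands holder and the onLeadershipLost hook are hypothetical, not existing libraft API:

import com.google.common.util.concurrent.SettableFuture;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not existing libraft API: track outstanding command futures
// by log index and fail them all as soon as leadership is lost, instead of waiting
// for the corresponding log entries to be overwritten.
final class OutstandingCommands {

    private final Map<Long, SettableFuture<Void>> pendingFutures = new ConcurrentHashMap<>();

    SettableFuture<Void> register(long logIndex) {
        SettableFuture<Void> future = SettableFuture.create();
        pendingFutures.put(logIndex, future);
        return future;
    }

    // called from a (hypothetical) role-change hook when the leader steps down
    void onLeadershipLost() {
        for (SettableFuture<Void> future : pendingFutures.values()) {
            future.setException(new IllegalStateException("leadership lost; command outcome unknown"));
        }
        pendingFutures.clear();
    }
}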

Implement DistributedStore health-check for KayVee

Right now there's no way to determine if KayVee has a connection to the cluster. The health check should submit a NOP to the cluster and return OK/DISCONNECTED depending on whether the command was applied or not.
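A minimal sketch of such a health check using the Metrics HealthCheck base class; the DistributedStore facade and its nop() method are hypothetical, not actual KayVee API, and the 500ms timeout simply mirrors the command-commit timeout mentioned elsewhere in these issues:

import com.codahale.metrics.health.HealthCheck;

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: submit a NOP to the cluster and report OK/DISCONNECTED
// depending on whether it commits within the timeout.
final class DistributedStoreHealthCheck extends HealthCheck {

    interface DistributedStore { // hypothetical facade over the Raft cluster
        Future<Void> nop();
    }

    private final DistributedStore store;

    DistributedStoreHealthCheck(DistributedStore store) {
        this.store = store;
    }

    @Override
    protected Result check() {
        try {
            store.nop().get(500, TimeUnit.MILLISECONDS); // fails if the NOP is not applied in time
            return Result.healthy("OK");
        } catch (Exception e) {
            return Result.unhealthy("DISCONNECTED: " + e.getMessage());
        }
    }
}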

LeaderElection never ends

For some reason, libraft nodes repeatedly handle election timeouts and change role FOLLOWER->CANDIDATE, even after the term number exceeds 100. The YAML files used are the sxx-kayvee.yml files under the testenv dir.

KayVee can hang on shutdown

Once in a blue moon it appears that KayVee can hang on shutdown. The problem has been traced to a failure of the underlying Netty NioWorkerPool to shut down cleanly (it appears to be waiting for a CountDownLatch to reach 0, a condition that, for some reason, never occurs).

The full stack is at: KayVee 0.1.1 Shutdown Hang Stack

Calling RaftNetworkClient.start() after RaftNetworkClient.stop() will fail

Ideally, the caller should be able to start/stop a component multiple times. This may not be possible with RaftNetworkClient because the internal ChannelFactory instances have shutdown called on them. It's unclear to me whether the caller would have to re-initialize the system in order for everything to work.

Generate and publish a "testlib" artifact in libraft-core

libraft-core defines a class called TestLoggingRule that is used in both libraft-agent and kayvee. It would be ideal for libraft-core to define a separate artifact called libraft-core-testlib that is published to Maven along with the other artifacts. While I'm able to generate this artifact, I can't publish it because the publishing task in Gradle 1.8 fails with an error. Either I'm using the task incorrectly, or the task itself is incomplete. Either way, my current workaround is to create copies of that rule in the other sub-projects, which is not ideal. The build should be fixed so that a separate libraft-core-testlib jar, sources jar, and javadoc jar are built and published to Maven Central.

Channels with no activity should be closed

Right now it's possible for an inactive connection to RaftNetworkClient to persist forever. While this isn't a serious issue, it would be better to close a channel once it has seen no network activity for longer than a configured timeout.
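One way to do this with Netty 3's built-in idle detection, as a sketch; the 60-second threshold, the handler names, and the placement in RaftNetworkClient's pipeline are assumptions:

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.handler.timeout.IdleState;
import org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler;
import org.jboss.netty.handler.timeout.IdleStateEvent;
import org.jboss.netty.handler.timeout.IdleStateHandler;
import org.jboss.netty.util.Timer;

// Hypothetical sketch: close a channel once it has seen no reads or writes for 60 seconds.
final class IdleChannelCloser extends IdleStateAwareChannelHandler {

    static void addTo(ChannelPipeline pipeline, Timer timer) {
        // fires an IdleStateEvent after 60 seconds with no read or write activity
        pipeline.addFirst("idle-detector", new IdleStateHandler(timer, 0, 0, 60));
        pipeline.addAfter("idle-detector", "idle-closer", new IdleChannelCloser());
    }

    @Override
    public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
        if (e.getState() == IdleState.ALL_IDLE) {
            e.getChannel().close(); // no activity within the timeout - drop the connection
        }
    }
}

The Timer could be a shared org.jboss.netty.util.HashedWheelTimer, likely the same one already used for other timeouts.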

Javadoc creates build issues

When I run "./gradlew build" I get build errors in the javadoc section. Obviously this doesn't affect much, but it would be nice if the entire build process worked.

I am compiling on Ubuntu 14.04.1 LTS with the Oracle Java 8 JDK (not openjdk). I've been out of the loop with Java for about 4 years, so I suspect this is a situation of PEBCAK.

===========gradlew output=================
:libraft-core:compileJava UP-TO-DATE
:libraft-core:processResources UP-TO-DATE
:libraft-core:classes UP-TO-DATE
:libraft-core:jar UP-TO-DATE
:libraft-agent:compileJava UP-TO-DATE
:libraft-agent:processResources UP-TO-DATE
:libraft-agent:classes UP-TO-DATE
:libraft-agent:jar UP-TO-DATE
:libraft-core:javadoc
/home/bdeetz/libraft/libraft-core/src/main/java/io/libraft/algorithm/Log.java:51: error: bad use of '>'
* @param index index >= 0 of the {@code LogEntry} to get
^
/home/bdeetz/libraft/libraft-core/src/main/java/io/libraft/algorithm/Log.java:54: warning: no @throws for io.libraft.algorithm.StorageException
@nullable LogEntry get(long index) throws StorageException;
...
...
...
75 errors
21 warnings
:libraft-core:javadoc FAILED

FAILURE: Build failed with an exception.

  • What went wrong:
    Execution failed for task ':libraft-core:javadoc'.

    Javadoc generation failed.

  • Try:
    Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 11.978 secs
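For reference, the first error ("bad use of '>'") comes from the stricter doclint checks in JDK 8's javadoc, which reject bare '<' and '>' characters in comments; the usual fix is to wrap the expression in {@code} (or disable doclint). A sketch of the corrected comment for Log.get, with the @throws warning addressed as well; the surrounding javadoc text is abbreviated and the @throws wording is illustrative:

/**
 * ...
 * @param index index of the {@code LogEntry} to get; must be {@code >= 0}
 * @throws StorageException if the entry cannot be read from the underlying store
 */
@Nullable LogEntry get(long index) throws StorageException;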

Create hooks to clean up old snapshots

Currently OnDiskSnapshotStore has methods to remove old snapshots and ensure that the snapshot database and the files on the filesystem match. I need to expose these to the user (perhaps using DropWizard ConfiguredCommand instances or DropWizard tasks).

is this project still under development?

I'm curious about this project; it seems promising, but the last commit was 2 years ago. Is anything wrong, or has the project simply been abandoned?

Maximum cluster size is 7

RaftAlgorithm enforces a maximum cluster size of 7. I can't remember making any assumptions based on cluster size, so I'm at a loss for the rationale. I suspect that I didn't want an unbounded maximum, so I simply enforced a 'reasonable' number. But, just in case, this should be investigated and confirmed before a change is made.

Duplicate heartbeats may prevent a Raft follower from timing out a Raft leader

If you have a really poor network that duplicates a lot of packets (and specifically, heartbeat packets) it's possible for a follower to believe that it's still in communication with a leader. This is because a heartbeat packet does not contain any information that would move time forward. This could mean that a leader failure goes undetected, and could prevent the Raft cluster from making progress.

This is highly unlikely in practice. It's much more likely that the network will drop packets, not duplicate them. Moreover, even a few duplicates don't matter: what matters is that the duplicates continue, which is unlikely.

That said, this should be mitigated. One solution would be to use periodic NOOPs instead of heartbeats to verify that the leader is still alive.

Simplify KayVee member and Raft member configuration

Right now both KayVee's ClusterMember and Raft's cluster member specify host/port separately. They can be combined into a single endpoint/address field.

Moreover, for KayVee's ClusterMember, we actually want the full HTTP address (including whether it's http or https) because that's what we'll use when returning a redirect for a NotLeaderException.

Observing transient timeouts during KayVee distributed-store healthcheck

The KayVee distributed-store health check has a command-commit timeout of 500ms. This should be more than enough to reach consensus and commit log entries to disk for a 3-machine cluster. I notice, however, transient timeouts when running a health-check loop (one check every second). There are sometimes spikes when the check is run against the leader and the leader has to deal with a faulty machine (see #11).

Add Randomized AppendEntriesReply test

Similar to the AppendEntries test. It would be useful to come up with a randomized AppendEntriesReply test that exercises:

  1. Out of order replies.
  2. A mix of nack and ack replies.
  3. Varying numbers of added entries (to ensure that the nextIndex doesn't shrink).

...

Processing a large incoming AppendEntriesReply in I/O thread can trigger an election timeout on the Receiver

I'm not sure of the best way to put the trace logging into a GitHub issue, so I'll just leave that messiness until the end.

The situation I have encountered is that, with a small enough election timeout (I am using 300ms), when a node tries to re-enter the cluster there can be enough entries in the first AppendEntries message it receives that the I/O thread blocks long enough to trigger an election timeout. This seems to cause that node to then send AppendEntriesReplies to all of the AppendEntries messages that backed up (due to heartbeats), but with the new term (since the node started an election). I was able to "fix" this by adding a call to scheduleElectionTimeout() at the end of each iteration of the for(LogEntry entry : entries) loop in onAppendEntries. Not a particularly elegant solution. Changing config params will also fix it, but I thought it was worth reporting.

I think a follower could also just ignore AppendEntriesReply RPCs instead of failing on the precondition of being a leader. However, I'm sure you have spent more time with the algorithm than me and may be able to think of a reason why that would be a bad idea.

Here is some evidence of the issue.

The exception on the current leader

[New I/O worker #4] TRACE io.libraft.algorithm.RaftAlgorithm - agent2: RequestVote from agent1: term:2 lastLogIndex:15 lastLogTerm:1
[New I/O worker #4] INFO io.libraft.algorithm.RaftAlgorithm - agent2: changing role LEADER->FOLLOWER in term 2
[New I/O worker #4] INFO io.libraft.algorithm.RaftAlgorithm - agent2: leader changed from agent2 to null
[New I/O worker #4] TRACE io.libraft.algorithm.RaftAlgorithm - agent2: AppendEntriesReply from agent1: term:2 prevLogIndex:6 entryCount:9 applied:false
[New I/O worker #4] ERROR org.jboss.netty.channel.SimpleChannelUpstreamHandler - agent2: uncaught exception processing rpc:AppendEntriesReply{source=agent1, destination=agent2, term=2, prevLogIndex=6, entryCount=9, applied=false} from agent1
java.lang.IllegalStateException: role:FOLLOWER

The follower timing out while processing log entries

[New I/O worker #3] TRACE io.libraft.algorithm.RaftAlgorithm - agent1: AppendEntries from agent2: term:1 commitIndex:15 prevLogIndex:6 prevLogTerm:1 entryCount:9
[New I/O worker #3] INFO io.libraft.algorithm.RaftAlgorithm - agent1: leader changed from null to agent2
[New I/O worker #3] TRACE io.libraft.algorithm.RaftAlgorithm - agent1: add entry:ClientEntry{type=CLIENT, index=7, term=1, command=PrintCommand{commandId=7613830809909165162, toPrint=a string 6}}
[New I/O worker #3] TRACE io.libraft.algorithm.RaftAlgorithm - agent1: add entry:ClientEntry{type=CLIENT, index=8, term=1, command=PrintCommand{commandId=5002286235719145647, toPrint=a string 7}}
[New I/O worker #3] TRACE io.libraft.algorithm.RaftAlgorithm - agent1: add entry:ClientEntry{type=CLIENT, index=9, term=1, command=PrintCommand{commandId=-1668250811023977474, toPrint=a string 8}}
[New I/O worker #3] TRACE io.libraft.algorithm.RaftAlgorithm - agent1: add entry:ClientEntry{type=CLIENT, index=10, term=1, command=PrintCommand{commandId=3591929921586279742, toPrint=a string 9}}
[New I/O worker #3] TRACE io.libraft.algorithm.RaftAlgorithm - agent1: add entry:ClientEntry{type=CLIENT, index=11, term=1, command=PrintCommand{commandId=7008605882065303194, toPrint=a string 10}}
[New I/O worker #3] TRACE io.libraft.algorithm.RaftAlgorithm - agent1: add entry:ClientEntry{type=CLIENT, index=12, term=1, command=PrintCommand{commandId=13354652848121111, toPrint=a string 11}}
[New I/O worker #3] TRACE io.libraft.algorithm.RaftAlgorithm - agent1: add entry:ClientEntry{type=CLIENT, index=13, term=1, command=PrintCommand{commandId=-6877329815755806333, toPrint=a string 12}}
[New I/O worker #3] TRACE io.libraft.algorithm.RaftAlgorithm - agent1: add entry:ClientEntry{type=CLIENT, index=14, term=1, command=PrintCommand{commandId=-5953431672926618152, toPrint=a string 13}}
[New I/O worker #3] TRACE io.libraft.algorithm.RaftAlgorithm - agent1: add entry:ClientEntry{type=CLIENT, index=15, term=1, command=PrintCommand{commandId=7768522539884141905, toPrint=a string 14}}
At index 7 type PRINT
At index 8 type PRINT
At index 9 type PRINT
At index 10 type PRINT
At index 11 type PRINT
At index 12 type PRINT
At index 13 type PRINT
At index 14 type PRINT
At index 15 type PRINT
[Timer-0] INFO io.libraft.algorithm.RaftAlgorithm - agent1: handle election timeout
[Timer-0] INFO io.libraft.algorithm.RaftAlgorithm - agent1: changing role FOLLOWER->CANDIDATE in term 2
[Timer-0] INFO io.libraft.algorithm.RaftAlgorithm - agent1: leader changed from agent2 to null

Remove persistent KayVee database

Applied commands are currently stored twice: once in the Log and once in LocalStore. Now that getNextCommittedCommand is implemented, it's possible to remove the database from LocalStore and simply use an in-memory key-value store.

This may depend on implementing snapshot support first.

Process KayVee reads out of the local database

Right now all KayVee reads are processed only after a GET or ALL command is issued to the Raft cluster. This provides read-after-write consistency for a client and is equivalent to Zookeeper's sync primitive. This prevents fast reads, however.

It would be ideal for GET or ALL to be serviced by any follower out of its local database, unless the client explicitly requests read-after-write consistency. This requires the following enhancements: #1 and #6.

Update KayVee Javadoc

The KayVee javadoc is incomplete, with many public functions undocumented. All public functions (especially in the store package) should be documented to the same level as the other libraft components.

KayVee command timeout can be triggered if a node is too far behind

If a node is extremely far behind and is not responding to heartbeats, it's possible for a busy server to start timing out legitimate requests. This is because:

  • heartbeats are generated frequently
  • RaftAlgorithm creates a heartbeat message with all missing entries
  • heartbeat messages require entryCount calls to Log.get, which results in a large volume of disk accesses

This can cause current requests to be starved of time by multiple pending heartbeats, all of which do a large volume of disk access.
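A hypothetical mitigation sketch: cap how many missing entries any single AppendEntries message (including heartbeats) carries, so one slow follower cannot trigger an unbounded burst of Log.get disk reads. The constant and method below are illustrative, not libraft API:

// Hypothetical sketch, not libraft API: bound the work done per outgoing message.
final class EntryBatcher {

    static final int MAX_ENTRIES_PER_MESSAGE = 256; // illustrative cap

    // nextIndex: first entry the follower is missing; lastLogIndex: leader's last entry.
    // Returns the inclusive [from, to] range of indices to load from the Log for this message.
    static long[] entriesFor(long nextIndex, long lastLogIndex) {
        long to = Math.min(lastLogIndex, nextIndex + MAX_ENTRIES_PER_MESSAGE - 1);
        return new long[] { nextIndex, to };
    }
}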

Failed to include maven dependency to libraft

I'm trying to include libraft in my project, by adding the following maven dependency:

<dependency>
  <groupId>io.libraft</groupId>
  <artifactId>libraft-agent</artifactId>
  <version>0.1.1</version>
</dependency>

When I run mvn install, the project's build fails with missing dependencies for el-api:jar:2.2.+, slf4j-api:jar:1.7.+, and guava:jar:14.0.+. I also see the following in the logs:

[WARNING] The POM for org.glassfish.web:el-impl:jar:2.2.+ is missing, no dependency information available
[WARNING] The POM for javax.el:el-api:jar:2.2.+ is missing, no dependency information available
[WARNING] The POM for org.slf4j:slf4j-api:jar:1.7.+ is missing, no dependency information available
[WARNING] The POM for com.google.guava:guava:jar:14.0.+ is missing, no dependency information available
[WARNING] The POM for org.slf4j:log4j-over-slf4j:jar:1.7.+ is missing, no dependency information available

Why are those '+' signs next to the jar versions?

It takes too long for a node to catch up if it's seriously behind

Configuration: 3-machine cluster. Machines 1 and 2 were left to run for a long period of time with NOPCommands being applied every second. Machine 3 was left offline. Eventually there was a backlog of over 7000 entries for Machine 3 to apply. On starting up, I observed that Machine 3 was not catching up quickly. This was traced to two factors:

  1. On receiving a negative AppendEntriesReply, a new AppendEntries is not sent immediately. Instead, we wait for the next heartbeat timeout. On KayVee the heartbeats are sent after multi-second intervals, which means it can take forever for the backlog to be cleared.
  2. The leader rolls back its prefix one index position at a time. Perhaps the optimization described in the Raft paper, where the follower reports information about its log entries, would be useful (a sketch follows below).

It's also possible that this will be mitigated through the use of snapshots.
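A minimal sketch of the fast-backtracking hint from factor 2 above. These fields and methods do not exist on libraft's AppendEntriesReply; they only illustrate what the follower could report so the leader can skip a whole conflicting term at once:

// Hypothetical sketch, not libraft API: the follower reports the term of its
// conflicting entry and the first index it holds for that term, so the leader
// can rewind nextIndex past the whole term instead of one entry per round trip.
final class AppendEntriesNack {

    final long conflictTerm;     // term of the follower's entry at prevLogIndex
    final long firstIndexOfTerm; // first index the follower holds for conflictTerm

    AppendEntriesNack(long conflictTerm, long firstIndexOfTerm) {
        this.conflictTerm = conflictTerm;
        this.firstIndexOfTerm = firstIndexOfTerm;
    }

    // leader side: jump nextIndex straight past the conflicting term
    static long nextIndexAfter(AppendEntriesNack nack) {
        return Math.max(1, nack.firstIndexOfTerm);
    }
}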

ISE when AppendEntriesReply received from a peer

Reported by @ZymoticB and noticed in testing on AWS.

Trace from #40

[New I/O worker #4] TRACE io.libraft.algorithm.RaftAlgorithm - agent2: RequestVote from agent1: term:2 lastLogIndex:15 lastLogTerm:1
[New I/O worker #4] INFO io.libraft.algorithm.RaftAlgorithm - agent2: changing role LEADER->FOLLOWER in term 2
[New I/O worker #4] INFO io.libraft.algorithm.RaftAlgorithm - agent2: leader changed from agent2 to null
[New I/O worker #4] TRACE io.libraft.algorithm.RaftAlgorithm - agent2: AppendEntriesReply from agent1: term:2 prevLogIndex:6 entryCount:9 applied:false
[New I/O worker #4] ERROR org.jboss.netty.channel.SimpleChannelUpstreamHandler - agent2: uncaught exception processing rpc:AppendEntriesReply{source=agent1, destination=agent2, term=2, prevLogIndex=6, entryCount=9, applied=false} from agent1
java.lang.IllegalStateException: role:FOLLOWER

This situation can be triggered by the following sequence of events:

Initial State:

  • P1 = LEADER
  • P2 = FOLLOWER
  • Term = 1
  1. P1 sends AppendEntries to P2 in term 1.
  2. P2 has an election timeout and advances to term 2.
  3. P2 sends a REQUEST_VOTE to P1.
  4. P1 receives REQUEST_VOTE and transitions to term 2 and becomes FOLLOWER.
  5. P2 receives AppendEntries sent in step 1.
  6. P2 responds with AppendEntriesReply with term 2 and applied = false.
  7. P1 receives AppendEntriesReply and crashes.
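As suggested in the earlier report, one mitigation is for a node that has stepped down to simply drop stale AppendEntriesReply messages instead of asserting that it is still the leader. A minimal guard sketch with stand-in names; this is not the actual RaftAlgorithm code:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: ignore AppendEntriesReply messages that were in flight
// when this node stepped down, rather than throwing an IllegalStateException.
final class ReplyGuardSketch {

    enum Role { FOLLOWER, CANDIDATE, LEADER }

    private static final Logger LOGGER = LoggerFactory.getLogger(ReplyGuardSketch.class);

    private final String self;
    private volatile Role role = Role.FOLLOWER;

    ReplyGuardSketch(String self) { this.self = self; }

    void onAppendEntriesReply(String server, long term, long prevLogIndex, long entryCount, boolean applied) {
        if (role != Role.LEADER) {
            // stepped down (e.g. after granting a RequestVote) before this reply arrived; it is stale
            LOGGER.debug("{}: ignoring AppendEntriesReply from {} while in role {}", self, server, role);
            return;
        }
        // ... leader-side processing of the reply would go here ...
    }
}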

Server channels never have their attachment set

All RaftNetworkClient client channels have an attachment (a string) listing the unique id of the server they're connecting to. This makes debugging simpler: it's easier to investigate "Connection to SERVER_02 (localhost:6095) failed" instead of "Connection to localhost:6095 failed".

Unfortunately, server channels do not have their attachment set, making debugging more difficult, especially when an exception is thrown inside the channel pipeline. This should be fixed.

WireConverter fails if frame length > 1400 bytes

Apparently the default value used for the WireConverter (1400-byte max frame size) is too low, and causes the RaftAgents to fail as follows:

WARN  [2013-11-16 17:43:23,703] io.libraft.agent.rpc.FinalUpstreamHandler: SERVER_02: caught exception - closing channel to null
! org.jboss.netty.handler.codec.frame.TooLongFrameException: Adjusted frame length exceeds 1400: 1428 - discarded
! at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:417) ~[netty-3.6.6.Final.jar:na]
! at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:405) ~[netty-3.6.6.Final.jar:na]
! at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:370) ~[netty-3.6.6.Final.jar:na]
! at io.libraft.agent.rpc.WireConverter$Decoder.decode(WireConverter.java:65) ~[libraft-agent/:na]
! at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425) ~[netty-3.6.6.Final.jar:na]
! at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) ~[netty-3.6.6.Final.jar:na]
! at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) ~[netty-3.6.6.Final.jar:na]
! at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) ~[netty-3.6.6.Final.jar:na]
! at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) ~[netty-3.6.6.Final.jar:na]
! at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109) ~[netty-3.6.6.Final.jar:na]
! at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312) ~[netty-3.6.6.Final.jar:na]
! at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90) ~[netty-3.6.6.Final.jar:na]
! at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) ~[netty-3.6.6.Final.jar:na]
! at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) [na:1.6.0_65]
! at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) [na:1.6.0_65]
! at java.lang.Thread.run(Thread.java:695) [na:1.6.0_65]

This stack trace describes a follower that is unable to parse a message from the leader. It's unclear to me why this happens to only one follower.
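One mitigation, assuming WireConverter keeps length-prefixed framing, is to raise the decoder's maximum frame length. A hedged sketch; the 1 MiB limit and the 4-byte length field are assumptions and should be checked against WireConverter's actual framing parameters:

import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder;

// Hypothetical sketch: a decoder configured well above the current 1400-byte default.
final class GenerousFrameDecoder extends LengthFieldBasedFrameDecoder {

    private static final int MAX_FRAME_BYTES = 1024 * 1024; // 1 MiB instead of 1400 bytes

    GenerousFrameDecoder() {
        // maxFrameLength, lengthFieldOffset, lengthFieldLength, lengthAdjustment, initialBytesToStrip
        super(MAX_FRAME_BYTES, 0, 4, 0, 4);
    }
}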

Resolve unresolved addresses on a non-IO thread

AddressResolverHandler resolves unresolved InetSocketAddress instances on the network I/O thread. This can block the I/O thread and starve already-connected channels if the name-resolution service is unavailable. One option is to defer name resolution to a separate executor and fire the connect event down once that executor has resolved the address.
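A minimal sketch of that approach with Netty 3 primitives; error handling is omitted and this is not the actual AddressResolverHandler implementation:

import java.net.InetSocketAddress;
import java.util.concurrent.Executor;

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelState;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.DownstreamChannelStateEvent;
import org.jboss.netty.channel.SimpleChannelDownstreamHandler;

// Hypothetical sketch: resolve on a separate executor, then re-fire the connect
// event down the pipeline with the resolved address.
final class DeferredResolutionHandler extends SimpleChannelDownstreamHandler {

    private final Executor resolutionExecutor;

    DeferredResolutionHandler(Executor resolutionExecutor) {
        this.resolutionExecutor = resolutionExecutor;
    }

    @Override
    public void connectRequested(final ChannelHandlerContext ctx, final ChannelStateEvent e) {
        final InetSocketAddress remote = (InetSocketAddress) e.getValue();
        if (!remote.isUnresolved()) {
            ctx.sendDownstream(e); // already resolved - nothing to do
            return;
        }
        resolutionExecutor.execute(new Runnable() {
            @Override
            public void run() {
                // blocking DNS lookup happens here, off the I/O thread
                InetSocketAddress resolved = new InetSocketAddress(remote.getHostName(), remote.getPort());
                ctx.sendDownstream(new DownstreamChannelStateEvent(
                        e.getChannel(), e.getFuture(), ChannelState.CONNECTED, resolved));
            }
        });
    }
}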

0.2.1 Release

Cleanups, consolidation and basic performance improvements.

Send new AppendEntries with updated prefix immediately on receiving an unapplied AppendEntriesReply

Right now, when we receive a negative AppendEntriesReply, we simply modify the server's nextIndex and wait until the heartbeat timeout to send the next AppendEntries message with the updated prefix. This is one of the causes of #13 because, combined with the long KayVee heartbeat interval and the 7000-message backlog, the follower is not being caught up fast enough. A way to mitigate (but not completely solve) this problem would be to send out a new AppendEntries with the updated prefix immediately on receiving a NACK.

Note: Many of the tests are sensitive to message order, so they will have to be modified.

Implement log compaction

The libraft log can grow without bound. Log compaction (i.e. snapshots) is necessary to address this. With compaction, the current state is dumped to disk and log entries subsumed by that state are removed.

There are two high-level options here:

  • Snapshots are triggered without coordination on each server. Logs and snapshots may be different on different servers. Leaders have to send snapshots and followers have to process them. Snapshots have to be loaded on startup to prime the caller's state.
  • Leader triggers a snapshot for the cluster. This maintains a high degree of log coherency, but I don't think it adds much value. It requires coordination, and you still need all the messages and logic required for the first option.

Tasks

  • Take snapshots.
  • Load a mix of snapshots and log entries on startup.
  • Automatically trigger taking a snapshot.
  • Send snapshot to followers (RPCSender API, ordering, tests, etc.).
  • Receive snapshot from leader (RPCReceiver API, ordering, tests, etc.).
  • Store the new snapshot.
  • Truncate log entries on receiving snapshot.
  • Clean up stale snapshots.
  • Truncate log entries on taking snapshot.

RaftNetworkClient.stop() and RaftNetworkClient.start() can be interleaved

Neither stop() nor start() is atomic, which can cause problems if calls to them are made 'at the same time'. Right now this is a non-issue because of the time lag between start() and stop(), and because KayVee:

  1. Does not restart components.
  2. Only stops and starts components once after a significant delay.
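If they ever do need to be safe against interleaving, a simple state guard would be enough. A sketch, not the actual RaftNetworkClient code:

import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: allow only the legal transitions NEW -> STARTED -> STOPPED,
// so start() and stop() cannot interleave.
final class LifecycleGuard {

    private enum State { NEW, STARTED, STOPPED }

    private final AtomicReference<State> state = new AtomicReference<>(State.NEW);

    void start(Runnable doStart) {
        if (!state.compareAndSet(State.NEW, State.STARTED)) {
            throw new IllegalStateException("cannot start from state " + state.get());
        }
        doStart.run();
    }

    void stop(Runnable doStop) {
        if (!state.compareAndSet(State.STARTED, State.STOPPED)) {
            throw new IllegalStateException("cannot stop from state " + state.get());
        }
        doStop.run();
    }
}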

Remove specification of machine names in KayVee/libraft cluster configuration

Currently both the KayVee and libraft cluster configuration files require that you specify the names of all machines in the cluster. For example:

members:
    - id: SERVER_00
      kayVeeUrl: http://localhost:6080
      raftEndpoint: localhost:9080
    - id: SERVER_01
      kayVeeUrl: http://localhost:6085
      raftEndpoint: localhost:9085
    ...

This is unnecessary, especially now that each machine does a handshake in which it provides some identifying information.

Create an automated KayVee API test

Currently KayVee testing is manual. I have to spin up a cluster and make HTTP calls to it via curl to ensure that the API and underlying operations have not had any regressions. This should be automated away into a simple test (maybe even a shell script?)

libraft fails to compile using JDK 7

Builds on JDK 1.7.0_45 fail with:

"MockDriver.java:59: error: MockDriver is not abstract and does not override abstract method getParentLogger() in Driver"

RPCException results in massive spamming of the logs

The default action for catching an RPCException is to print the stack trace. During agent startup this can result in thousands of lines of useless stack traces because the connection doesn't exist. Perhaps the standard logback.xml config could simply suppress the stack traces for this class when in production mode.

Implement leader leases

Currently a leader remains so until it receives an AppendEntries informing it of a new leader, or a message with a newer term. During network partitions this is insufficient. Because the old leader does not experience a leadership change, clients that connect to this leader continue to submit commands, believing that they will be processed.
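A minimal sketch of the lease idea; the lease duration, method names, and call sites are assumptions rather than libraft API:

import java.util.concurrent.TimeUnit;

// Hypothetical sketch: the leader only treats itself as authoritative while it has
// heard from a majority of the cluster within the lease window. A partitioned leader
// therefore stops accepting client commands once the lease expires.
final class LeaderLease {

    private final long leaseNanos;
    private volatile long lastMajorityContactNanos;

    LeaderLease(long leaseMillis) {
        this.leaseNanos = TimeUnit.MILLISECONDS.toNanos(leaseMillis);
        this.lastMajorityContactNanos = System.nanoTime();
    }

    // called whenever heartbeat replies arrive from a majority of the cluster
    void onMajorityContact() {
        lastMajorityContactNanos = System.nanoTime();
    }

    // reject or redirect client commands when this returns false
    boolean holdsLease() {
        return System.nanoTime() - lastMajorityContactNanos < leaseNanos;
    }
}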

Proxy all client requests to leader

Right now, if issueCommand is called by a client on a follower node, the node throws a NotLeaderException. It would be nice if this request could be auto-forwarded to the master.

Pass TimeoutHandle into TimeoutTask

Right now there's no simple way for a TimeoutTask to tell whether it's been cancelled or not. The TimeoutHandle does not expose an "isCancelled" method because that's not available with java.util.Timer. A reasonable (but incomplete) solution is for the TimeoutHandle to be passed to the TimeoutTask so that it can check the reference for the handle against the one stored in RaftAlgorithm.
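A minimal sketch of that shape; the names mirror the issue, but the actual libraft Timer/TimeoutHandle/TimeoutTask signatures may differ:

// Hypothetical sketch of the proposed interfaces, not the existing libraft ones.
interface TimeoutHandle {
    boolean cancel();
}

interface TimeoutTask {
    // selfHandle is the handle returned when this task was scheduled, so the task
    // can compare it against the handle currently stored in RaftAlgorithm and
    // bail out if it has been superseded or cancelled.
    void run(TimeoutHandle selfHandle);
}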
