distributedlock's Issues

Question: How might I wrap the library to provide the same semantics?

Hello,

I wanted to decorate SqlDistributedLock in such a fashion that I can run other code to validate (beyond the acquired lock) that I truly have a lock on a business object.

If I can acquire both the lock from SqlDistributedLock and the business object, then I truly hold the lock and can return your handle. I like the usage of the using statement with Acquire and TryAcquire and have started with this interface:

public interface IAWDLock : IDisposable
{
    IDisposable Acquire();
    IDisposable Acquire(TimeSpan timeout);
}

but I'm unsure (I haven't read enough of your code) whether this looks OK and, more importantly, whether the implementation is sound. Is the IDisposable handed out from the Acquire method what would trigger the Dispose call on the class? I'm a bit confused, as you can tell. Here's what I have so far.

public class AWDSqlLock : IAWDLock
{
    public IDisposable Acquire()
    {
    }

    public IDisposable Acquire(TimeSpan timeout)
    {
    }

    public void Dispose()
    {
    }
}

So at this point I'm re-reading the docs as a refresher on how the using statement translates into try/finally.
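For readers of this issue, here is one way such a decorator could be shaped: a minimal sketch assuming the 1.x SqlDistributedLock API, with EnsureBusinessLock as a hypothetical placeholder for the extra validation. Note that the IDisposable returned by Acquire is a handle whose Dispose releases the SQL lock; it does not call Dispose on the wrapper class itself.

using System;
using Medallion.Threading.Sql; // SqlDistributedLock namespace in recent 1.x packages; adjust for your version

public class AWDSqlLock : IAWDLock
{
    private readonly SqlDistributedLock innerLock;

    public AWDSqlLock(string name, string connectionString)
    {
        this.innerLock = new SqlDistributedLock(SqlDistributedLock.GetSafeLockName(name), connectionString);
    }

    public IDisposable Acquire() => this.Acquire(TimeSpan.FromSeconds(30)); // hypothetical default timeout

    public IDisposable Acquire(TimeSpan timeout)
    {
        // Acquire throws (e.g. TimeoutException) if the lock cannot be acquired in time
        var handle = this.innerLock.Acquire(timeout);
        try
        {
            if (this.EnsureBusinessLock()) // hypothetical business-object validation
            {
                return handle; // the caller disposes this handle to release the SQL lock
            }
        }
        catch
        {
            handle.Dispose();
            throw;
        }

        handle.Dispose();
        throw new InvalidOperationException("The business object is already locked.");
    }

    private bool EnsureBusinessLock() => true; // placeholder for the extra business validation

    public void Dispose()
    {
        // Nothing to release here: the lock is held by the handle returned from Acquire,
        // not by this wrapper instance.
    }
}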

Thank you,
Stephen

Throw a specific exception for deadlocks

The method SqlApplicationLock.ParseExitCode throws an InvalidOperationException when a deadlock is detected. It would be very useful if a specific exception type were thrown instead.
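For illustration, a dedicated exception type could be as simple as the following sketch (the name is hypothetical; deriving from InvalidOperationException would keep existing catch blocks working while letting callers handle deadlocks specifically):

using System;

// Hypothetical dedicated exception type for detected deadlocks.
public class DistributedLockDeadlockException : InvalidOperationException
{
    public DistributedLockDeadlockException(string message, Exception innerException = null)
        : base(message, innerException)
    {
    }
}

// Callers could then retry or back off on deadlocks without catching
// every InvalidOperationException:
//
//     try { using (myLock.Acquire()) { /* ... */ } }
//     catch (DistributedLockDeadlockException) { /* retry with backoff */ }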

Use AwaitableDisposable or ValueTask for all async returns

The idea for this is to avoid letting incorrect code like this compile:

using (@lock.AcquireAsync())
{
    ...
}

This compiles with Task since Task is IDisposable. It does not compile with ValueTask or a custom awaitable since those can't be disposed, which forces you to notice the missing await.
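For contrast, here is the correct form under the current Task-returning API (a minimal sketch; the handle, not the task, is what gets disposed). With a ValueTask or custom awaitable return type, the un-awaited version above would simply fail to compile.

using System.Threading.Tasks;
using Medallion.Threading.Sql; // assumed namespace; adjust for your package version

public static class LockUsageExample
{
    public static async Task DoProtectedWorkAsync(SqlDistributedLock @lock)
    {
        // Await the acquisition, then dispose the returned handle when done.
        using (await @lock.AcquireAsync())
        {
            // protected work ...
        }
    }
}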

Support within-connection/transaction locks

Right now, each SQL lock creates a new connection. We could additionally offer constructors taking in a DbConnection / DbTransaction. In this case, disposal would call release instead of just closing the connection.
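A rough usage sketch of what the proposed overload could look like (the constructor shown is hypothetical; System.Data.SqlClient is used here, but any provider's connection would apply):

using System;
using System.Data.SqlClient;
using Medallion.Threading.Sql; // assumed namespace; adjust for your package version

public static class ConnectionScopedLockExample
{
    public static void DoWorkUnderLock(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Hypothetical proposed overload: the lock piggybacks on the caller-owned
            // connection, and disposing the handle releases the lock (sp_releaseapplock)
            // rather than closing that connection.
            var @lock = new SqlDistributedLock("my-lock", connection);
            using (@lock.Acquire())
            {
                // run commands on the same connection while holding the lock
            }
        } // the connection's lifetime stays under the caller's control
    }
}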

IMHO: IDistributedLock should be public

IMHO, IDistributedLock should be public, because:

  • The caller of the lock shouldn't have to know which lock implementation it's using
  • Unit testing: if the library already exposes an interface, I don't have to abstract DistributedLock any further
  • The interface is not an internal implementation detail; it is the public API of the lock

By the way, thanks for the good work on this project.

Document that SystemDistributedLock does not work on Linux

When running on Linux, you will get this exception when trying to acquire the lock:

System.PlatformNotSupportedException : Access Control List (ACL) APIs are part of resource management on Windows and are not supported on this platform.
  Stack Trace:
    at System.Security.AccessControl.ObjectSecurity..ctor()
    at System.Security.AccessControl.CommonObjectSecurity..ctor(Boolean isContainer)
    at System.Security.AccessControl.NativeObjectSecurity..ctor(Boolean isContainer, ResourceType resourceType)
    at System.Security.AccessControl.EventWaitHandleSecurity..ctor()
    at Medallion.Threading.SystemDistributedLock.CreateEvent()
    at Medallion.Threading.SystemDistributedLock.InternalTryAcquireAsync(Int32 timeoutMillis, CancellationToken cancellationToken)

I'm still determining a workaround; I only use this for unit tests, so I'll probably fall back to a process-level lock.

Add tests for TransactionScope.Current

Add tests for the presence of a TransactionScope (Transaction.Current): in particular, Postgres's transaction detection to avoid statement_timeout leaks, and multiplexing (what happens if the shared connection gets caught up in multiple transactions?). It seems that in Postgres this is a connection string option.
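A sketch of the shape such a test could take (hypothetical; CreateTestLock is a placeholder for constructing whichever lock type, e.g. Postgres, is under test, and the v2 IDistributedLock surface is assumed):

using System;
using System.Threading.Tasks;
using System.Transactions;
using Xunit;

public class AmbientTransactionTests
{
    [Fact]
    public async Task AcquireWorksInsideAmbientTransactionScope()
    {
        using (new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
        {
            var @lock = CreateTestLock(); // placeholder: the lock implementation under test

            await using (await @lock.AcquireAsync())
            {
                // the ambient transaction should be unaffected by the lock's own connection work
                Assert.NotNull(Transaction.Current);
            }
        }
    }

    private static Medallion.Threading.IDistributedLock CreateTestLock() =>
        throw new NotImplementedException("construct a Postgres/SqlServer/etc. lock here");
}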

Remove PDBs for Microsoft.Data.SqlClient.SNI.x64.dll and Microsoft.Data.SqlClient.SNI.x86.dll

When installing the NuGet package DistributedLock.SqlServer v1.0.0-alpha01, there are two DLLs in the output folder that come with PDB files (Microsoft.Data.SqlClient.SNI.x64.dll and Microsoft.Data.SqlClient.SNI.x86.dll).

Could you please remove those PDB files? Our pipeline that validates artifacts doesn't expect any third-party assembly to have PDB files and fails (it also seems like those PDBs are invalid).

Update:
This issue is with Microsoft.Data.SqlClient.SNI NuGet itself: dotnet/SqlClient#644 (comment)

Could you please update it to version 2.1.1?

Consider supporting in-memory locking above the distributed layer (composite locking)

Right now, if multiple callers in the same process try to claim the same lock, we push all of that contention out to the distributed locking layer (e.g. SQL).

We could reduce resource usage by first checking an internal lock (e.g. a SemaphoreSlim). We have to be very careful about how this interoperates with lock modes, though.

Current thinking on this: rather than build this into multiplexing, we could offer a wrapper lock for any IDistributedLock that would add an in-process synchronization layer (see the sketch after the list below).

Some thoughts on composite locks:

  1. This should be more complex than just "take lock A, then take lock B". For example, let's say we have N local waiters for the lock and the first of those acquires the distributed lock. We shouldn't release it until all N local waiters have gotten to hold the lock or given up (new local waiters that come in after we got the distributed lock shouldn't get to join in; otherwise we might hog the distributed lock indefinitely). The benefit of this system is that it prevents a service using composite locking from always losing out to a service that doesn't, and it also decreases the number of distributed lock operations (more efficient). The downside is that it reduces fine-grained interleaving of lock requests between services. Note that for R/W locks where we have both Write and UpgradeableRead, we have to be careful that the underlying lock is the right type.

  2. We need to be careful with R/W locks if they have writer-jumps-reader behavior. We wouldn't want a scenario where writers are queued up behind a local read lock hold which is then queued up waiting for a distributed write lock hold to be released. We might want to only use the local lock for writes and upgradeable reads in order to prevent this.

  3. We need to be careful with upgradeable reads. If we are holding an upgradeable read lock and try to upgrade, we might succeed locally but fail remotely, leaving ourselves with no way to back out of the local upgrade. To solve this, we can have the upgrade operation be remote-only.
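To make the wrapper idea concrete, here is a minimal sketch of a local SemaphoreSlim gate in front of a distributed acquire. It deliberately shows only the simple take-local-then-distributed composition that point 1 above argues should be refined; the distributed acquire is passed in as a delegate so the sketch does not assume a particular handle type.

using System;
using System.Threading;
using System.Threading.Tasks;

public sealed class LocallyGatedLock
{
    private readonly SemaphoreSlim localGate = new SemaphoreSlim(1, 1);
    private readonly Func<CancellationToken, Task<IDisposable>> acquireDistributed;

    public LocallyGatedLock(Func<CancellationToken, Task<IDisposable>> acquireDistributed)
    {
        this.acquireDistributed = acquireDistributed;
    }

    public async Task<IDisposable> AcquireAsync(CancellationToken cancellationToken = default)
    {
        // Wait on the cheap in-process gate first...
        await this.localGate.WaitAsync(cancellationToken).ConfigureAwait(false);
        try
        {
            // ...and only then pay for the distributed acquisition.
            var distributedHandle = await this.acquireDistributed(cancellationToken).ConfigureAwait(false);
            return new Handle(this.localGate, distributedHandle);
        }
        catch
        {
            this.localGate.Release();
            throw;
        }
    }

    private sealed class Handle : IDisposable
    {
        private SemaphoreSlim localGate;
        private IDisposable distributedHandle;

        public Handle(SemaphoreSlim localGate, IDisposable distributedHandle)
        {
            this.localGate = localGate;
            this.distributedHandle = distributedHandle;
        }

        public void Dispose()
        {
            // Release the distributed lock first so other processes are not blocked
            // while the local gate is handed to the next in-process waiter.
            Interlocked.Exchange(ref this.distributedHandle, null)?.Dispose();
            Interlocked.Exchange(ref this.localGate, null)?.Release();
        }
    }
}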

Question: How to integration-test competing consumers

@madelson It's me again.

I'm wondering how you might use, say, VS 2017 with xUnit to create multiple consumers that will all try to grab a SqlDistributedLock. This question has some very good advice in it from you, and I'm not opposed to spinning up LocalDB/SqlExpress, or even reaching out to our development instance.

The behavior I saw when I tried a naive test, where everything was contained in the fixture and the test, was that the locking code in the fixture's constructor blocked and the actual test never executed... or something like that.

With xUnit you can use a fixture to ensure setup and tear-down via the constructor and Dispose. The constructor is where I grabbed the lock, and then I wanted to verify in the test method whether I could grab the handle and whether it was real or null.

So what I'm doing right now is spinning up a console app that acquires the lock and sleeps inside the using statement; then I can run the unit test and try to get the lock too. This has started yielding results and making me more aware of the behavioral differences between Acquire(timeout) and TryAcquire(timeout).
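For reference, here is one way to contend for the lock inside a single xUnit test without a separate console app (a hypothetical sketch; the connection string is a placeholder and the namespace may differ by package version):

using System;
using System.Threading.Tasks;
using Medallion.Threading.Sql; // assumed namespace; adjust for your package version
using Xunit;

public class CompetingConsumersTests
{
    private const string ConnectionString = "..."; // placeholder

    // Two "consumers" (two lock instances with the same name, one used from another
    // task) contend inside one test, so no fixture constructor needs to block.
    [Fact]
    public async Task SecondConsumerCannotAcquireWhileFirstHoldsLock()
    {
        var firstConsumer = new SqlDistributedLock("competing-consumers-test", ConnectionString);
        var secondConsumer = new SqlDistributedLock("competing-consumers-test", ConnectionString);

        using (firstConsumer.Acquire(TimeSpan.FromSeconds(5)))
        {
            // while the first consumer holds the lock, the second must time out
            var handle = await Task.Run(() => secondConsumer.TryAcquire(TimeSpan.FromSeconds(1)));
            Assert.Null(handle);
        }

        // once released, the lock is available again
        using (var handle = secondConsumer.TryAcquire(TimeSpan.FromSeconds(1)))
        {
            Assert.NotNull(handle);
        }
    }
}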

Maybe I should also explain my use case: only one consumer can grab the lock while the others are blocked. All the clients of this code use Polly retry policies, in particular retries with back-off and TimeoutException support.

private IDisposable GetLockInternal(string key, TimeSpan timeout)
{
    /* Safe(r) by default... but someone can still abuse this if they want to */
    if (timeout <= default(TimeSpan) || timeout > this.DefaultTimeout)
    {
        timeout = this.DefaultTimeout;
    }

    string safeLockName = SqlDistributedLock.GetSafeLockName(key);
    SqlDistributedLock sqlLock = new SqlDistributedLock(safeLockName, this.config.AWDConnectionString);

    /* var handle = sqlLock.TryAcquire(timeout); // Does not enforce timeout */
    var handle = sqlLock.Acquire(timeout);

    if (handle != null)
    {
        try
        {
            /* Just in case we do want to run some other business logic */
            if (this.EnsureLock())
            {
                return handle;
            }
        }
        catch
        {
            /* throw; */ // propagate the biz logic error?
        }

        // business validation failed or threw: release the SQL lock
        handle.Dispose();
    }

    return null;
}

Comments and/or suggestions are truly welcome if you find the time.

Thank you,
Stephen

Lock management across services

I have a scenario where Service A will need to take out a lock on "124578".

Then, 20ish seconds later Service B will get a notification that the lock can be cleared.

Can Service B clear the lock?

From what I am seeing, the lock must be taken and released in the same "instance" (inside a using block preferably).

Is there a way for one system to take the lock and another to release it?

Update With Full Scenario:

Just in case it is useful, here are the steps in my scenario:

  1. Service A gets a notification to create a Container on order 124578.
  2. Service A takes out a lock on order 124578
  3. Service A sends out a message to an external system to create the container. This can take 15 seconds.
  4. The message processing system (Service B) gets a message saying that the container was created.
  5. Service B clears the lock.
  6. Other containers can now be made on order 124578.

Consider adding support for acquiring/releasing multiple SQL Server locks in a batch

Requested by @jsbattig (see #49).

Desired functionality:

  • Specify N (> 0) lock names
  • (Try)Acquire attempts to acquire all. If it fails it acquires NONE
  • Acquire timeouts are cumulative for the process of acquiring all locks, not per lock
  • Locks are acquired in the order specified

Design option 1: static utility method:

public class SqlDistributedLock
{
    public static SqlDistributedLockHandle AcquireMultiple(IEnumerable<SqlDistributedLock> locks, TimeSpan timeout, CancellationToken cancellationToken);
}

Note that this would only be able to batch the acquisition of locks that had the same approach for connecting. That should be fine, though.

Design option 2: new lock class

public class SqlBatchDistributedLock : IDistributedLock
{
     public SqlBatchDistributedLock(IEnumerable<string> names, ...);
}

Other questions:

  • Which design should we use? I probably favor option 1 because this feels more like a special operation on a set of locks than it does like a new lock type. If you are using this, you are probably using the individual locks elsewhere in the application. Another advantage of option 1 is that I could imagine a similar API which would allow you to acquire "as many as you can" from the list; this would fit well with the static helper pattern but not with the lock type pattern.
  • Is this safe to apply with multiplexing? What if one of the locks was already acquired on that connection?
  • How do we make sure we unwind safely if canceled part-way through the acquire?

Provider interface for SQL locks

When using the library with IoC patterns, it's often desirable to inject connection information separately from the type of lock or the lock name. A centralized provider API would simplify this:

class SqlDistributedLockProvider
{
    // factory methods
    SqlDistributedLock CreateLock(string name);
    SqlDistributedLock CreateLockWithExactName(string name); // does not use GetSafeName()
    SqlDistributedReaderWriterLock CreateReaderWriterLock(string name);
    ...

    // we may also want to offer convenience methods for locking directly
    IDisposable TryAcquireExclusiveLock(string name, TimeSpan timeout, CancellationToken cancellationToken);
}
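Hypothetical usage with an IoC container, assuming the provider shape sketched above plus an invented connection-string constructor:

using Microsoft.Extensions.DependencyInjection;

public static class LockRegistration
{
    // Register the provider once at the composition root; only connection
    // information is configured here. (The constructor shape is assumed.)
    public static IServiceCollection AddSqlLocks(this IServiceCollection services, string connectionString) =>
        services.AddSingleton(new SqlDistributedLockProvider(connectionString));
}

public class OrderProcessor
{
    private readonly SqlDistributedLockProvider locks;

    public OrderProcessor(SqlDistributedLockProvider locks) => this.locks = locks;

    public void Process(string orderId)
    {
        // Consumers only supply the lock name; no connection details are needed here.
        using (this.locks.CreateLock("order:" + orderId).Acquire())
        {
            // protected work ...
        }
    }
}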

Assembly is not strong-named

Repro steps:

  1. Download the latest version from NuGet
  2. Unpack the .nupkg file
  3. Run sn -vf DistributedLock.1.5.0\lib\net45\DistributedLock.dll

Expected:
The sn.exe tool would output

Microsoft (R) .NET Framework Strong Name Utility  Version 4.0.30319.0
Copyright (c) Microsoft Corporation.  All rights reserved.

Assembly 'DistributedLock.1.5.0\lib\net45\DistributedLock.dll' is valid

Actual:
The sn.exe tool outputs

Microsoft (R) .NET Framework Strong Name Utility  Version 4.0.30319.0
Copyright (c) Microsoft Corporation.  All rights reserved.

DistributedLock.1.5.0\lib\net45\DistributedLock.dll does not represent a strongly named assembly

Observation:
This blocks strong-named .NET assemblies from depending on DistributedLock. See Why strong-name your assemblies?

Connections leak with SqlDistributedLockConnectionStrategy.Azure

SqlDistributedLock.TryAcquire with the Azure strategy creates an instance of AzureSqlDistributedLock and invokes its TryAcquire/TryAcquireAsync with no contextHandle argument, thus opening a new connection using the provided connection string. However, when the lock is released and the component is disposed, the connection it opened is not disposed, leading to a connection leak.

Since the outside-connection vs. newly-opened-connection case is already distinguished by the "ownsKeepalive" field, one way to go could be to pass the connection instance to the LockScope when a new connection is being opened and have it disposed together with the LockScope instance.

Consider implementing destructor for guaranteed cleanup-on-abandonment

Today, an abandoned lock handle will cause the underlying connection to be GC'd, but in a pooling scenario this doesn't necessarily lead to the connection being closed (it returns to the pool until someone needs that connection again).

We could implement a finalizer that guarantees proper closing. We must be careful when doing so, since we can't safely access managed fields in a finalizer. Here's an example of how this might work:

using System;
using System.Runtime.CompilerServices;

static class FinalizerDemo
{
    static void Main()
    {
        var foo = new Foo();
        foo = null;
        GC.Collect();
        GC.WaitForPendingFinalizers();
    }

    // Associates extra state with each Foo without requiring the finalizer to
    // touch the instance's own managed fields.
    private static readonly ConditionalWeakTable<Foo, object> t = new ConditionalWeakTable<Foo, object>();

    private class Foo
    {
        public Foo()
        {
            var key = Guid.NewGuid().ToString();
            Console.WriteLine("create " + key);
            t.Add(this, key);
        }

        ~Foo()
        {
            if (!Environment.HasShutdownStarted
                && !AppDomain.CurrentDomain.IsFinalizingForUnload()
                && t.TryGetValue(this, out var key))
            {
                Console.WriteLine("destroy " + key);
            }
        }
    }
}

How can I mock the IDistributedLockProvider method AcquireLockAsync in an xUnit unit test?

In a .NET Core 3.1 project, I'm using the following to create a singleton in the dependency injection container:

var blobContainerClient = new BlobContainerClient(storageConnectionString, locksContainerName);

builder.Services.AddSingleton<IDistributedLockProvider, AzureBlobLeaseDistributedSynchronizationProvider>((provider) =>
    new AzureBlobLeaseDistributedSynchronizationProvider(blobContainerClient)
);

I'm then injecting this into my target class's private field like so:

private readonly IDistributedLockProvider _distributedLockProvider;

My class then uses code such as this to utilize the Distributed Lock:

using (await _distributedLockProvider.AcquireLockAsync(lockName)) {
    // stuff to be protected ...
}

My question is: how can I mock the call to AcquireLockAsync in my Moq-powered xUnit unit test? The problem is that this method is an extension method, and Moq is unable to mock it. Is there an alternative means of doing this, and do you have any examples? Note that the unit tests must pass as part of the CI build pipeline on Azure DevOps, so using the storage emulator is not really an option; besides, that would make it not a real unit test.

For the moment I've had to remove the unit tests, but I'd like to reinstate them as soon as I can, if there's a viable solution.
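One common workaround is not to mock the extension method at all, but to hide the provider behind a small application-owned interface and mock that instead. A minimal sketch, assuming Moq and xUnit; IOrderLockService and MyService are hypothetical names, and the production implementation of the interface would simply call _distributedLockProvider.AcquireLockAsync(name) as shown above.

using System;
using System.Threading.Tasks;
using Moq;
using Xunit;

// Hypothetical application-owned abstraction: the production implementation delegates
// to IDistributedLockProvider.AcquireLockAsync; tests mock this small interface instead
// of the library's extension method.
public interface IOrderLockService
{
    Task<IAsyncDisposable> AcquireAsync(string lockName);
}

public class MyService
{
    private readonly IOrderLockService locks;

    public MyService(IOrderLockService locks) => this.locks = locks;

    public async Task DoProtectedWorkAsync(string lockName)
    {
        await using (await this.locks.AcquireAsync(lockName))
        {
            // stuff to be protected ...
        }
    }
}

public class MyServiceTests
{
    [Fact]
    public async Task DoProtectedWork_AcquiresAndReleasesLock()
    {
        var handle = new Mock<IAsyncDisposable>();
        var locks = new Mock<IOrderLockService>();
        locks.Setup(l => l.AcquireAsync("order-123")).ReturnsAsync(handle.Object);

        await new MyService(locks.Object).DoProtectedWorkAsync("order-123");

        locks.Verify(l => l.AcquireAsync("order-123"), Times.Once);
        handle.Verify(h => h.DisposeAsync(), Times.Once);
    }
}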

Thanks
Graham

Add mechanism for tracking when a lock's underlying connection dies

There is some discussion of this idea here: #5

The most natural way to expose this is through the returned handle:

SqlDistributedLock myLock = ...
using (var handle = await myLock.AcquireAsync())
{
    handle.GetConnectionBrokenToken().Register(() => Console.WriteLine("oh no, the connection died!"));
}

This can be implemented using a cancelable WAITFOR under the hood which should hopefully fail if the SPID dies. It will need to be merged with keepalive in cases where we do that.

Potentially this could be a function on all lock handles, not just SQL. Some would potentially return CancellationToken.None or throw NotSupportedException.

However, this would be a breaking API change (right now handles are simply IDisposable), so this should be left for V2.

Use default isolation level in transactions

It appears that transactions "leak" isolation level, so we should use the default throughout to avoid any weirdness (at least make it the same as if the client had issued a simple transaction call).

Expose a method to force a lock to be released.

We have a scalable application that makes tens of thousands - perhaps hundreds of thousands - of calls to DistributedLock every hour from dozens of processes running in parallel. For the most part it has been robust, but we have encountered a couple of instances where a lock is acquired and never released. When this happens, there is a cascading effect and several other processes hang or fail because they can never get the lock on that resource.

We've run this app for months, and the problem has only happened twice. So far, we have no clear way to intentionally replicate the problem. We will continue to investigate, but in the meantime, as a workaround, would it be possible for you to expose a method that would force all locks on a given resource to be released? I realize this is a dangerous tool that would have to be used carefully. However, we rarely hold a lock for more than a couple of seconds, so it would only be used in cases where a process could not acquire a lock after an hour or so.

Clarification - when to create a lock?

Hi

I've been using this fine package for a while, and it is working great. I have one question though: how often should I be creating a lock? Right now I create a lock each time I need to protect a piece of code, like this:

public async Task SomeMethod()
{
    var myLock = new SqlDistributedLock("Lockkey" + tenantId, GetSystemDbConnectionString(), SqlDistributedLockConnectionStrategy.Azure);
    using (var handle = await myLock.TryAcquireAsync(TimeSpan.FromMilliseconds(_settings.SharepointSyncLockTimeout)).ConfigureAwait(false))
    {
    }
}

This lock is needed per tenantId, and I'm wondering if I should only create one lock per tenantId and reuse the same lock for that specific tenant to acquire the lock. This code is called more than 30 times per second across different tenants. Does a new SqlDistributedLock require a separate SQL Server connection?

I can't find any best practice regarding this in the documentation or examples. Please advise

UPDATE:
After looking through the code, it seems that I'm right about one connection per created lock when using the Azure strategy, and that I should be using OptimisticConnectionMultiplexing. But I need the Azure strategy, because sometimes one call to the method may run for an hour or more. Would it be possible to combine the two strategies?
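For what it's worth, here is a minimal sketch of reusing one lock object per tenant (the class and member names are hypothetical). Constructing a SqlDistributedLock does not itself open a connection; connections are only used while acquiring and holding the lock, so either approach works, but caching avoids re-constructing the lock on every call:

using System;
using System.Collections.Concurrent;
using Medallion.Threading.Sql; // assumed namespace; adjust for your package version

public class TenantLockFactory
{
    private readonly ConcurrentDictionary<string, SqlDistributedLock> locksByTenant =
        new ConcurrentDictionary<string, SqlDistributedLock>();

    private readonly string connectionString;

    public TenantLockFactory(string connectionString) => this.connectionString = connectionString;

    // Returns the same lock object for a given tenant; acquiring it still goes
    // to SQL Server each time, so cross-instance mutual exclusion is unchanged.
    public SqlDistributedLock GetTenantLock(string tenantId) =>
        this.locksByTenant.GetOrAdd(
            tenantId,
            id => new SqlDistributedLock("Lockkey" + id, this.connectionString, SqlDistributedLockConnectionStrategy.Azure));
}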

Support for semaphore

This may be a bit out there for a "lightweight" library, but have you thought of implementing a distributed counting semaphore?

In my scenario, I have a SQL Azure database with many customers. Customers can have a lot of data.
The .net code is hosted in several azure web instances.

I regularly want to process every customer data. However, I don't want to process all customers at the same time because it would overload the database.
I also don't want to process each customer serially (1 by 1) because it would take too long.
Ideally I would use a distributed semaphore with a count of N to process N customers at a time...

It could be built on top of SqlDistributedLock and I wonder if you've come across this scenario before?
Do you see that fitting within the vision of your library?
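The desired usage might look roughly like the following sketch (SqlDistributedSemaphore and its constructor shape are assumptions here, shown only to illustrate the requested API; the name, count, and connection string are placeholders):

using System;
using System.Threading.Tasks;
using Medallion.Threading.Sql; // assumed namespace; adjust for your package version

public class CustomerProcessor
{
    private readonly SqlDistributedSemaphore semaphore;

    public CustomerProcessor(string connectionString)
    {
        // Allow at most 5 concurrent holders across all web instances.
        // (name, max count, connection string)
        this.semaphore = new SqlDistributedSemaphore("process-customers", 5, connectionString);
    }

    public async Task ProcessCustomerAsync(Func<Task> processCustomer)
    {
        using (await this.semaphore.AcquireAsync())
        {
            await processCustomer();
        }
    }
}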

Azure Blob Lease implementation

Looking at distributed locking purely from an Azure perspective (I have an Azure subscription).

After looking at what mechanisms were available for distributed locking on Azure, one thing that came up was Azure Blob Leases.

It seems "Azure Web Jobs" actually uses this mechanism behind the scenes for singleton jobs that shouldn't be run in parallel - mentioned on answer here:
http://justazure.com/azure-blob-storage-part-8-blob-leases/

Perhaps there is room for an Azure Blob Lease based implementation in this repo in future?

I mention this because I think Azure Blob Leases is probably the simplest way to start using distributed locking (ahead of sql server) for azure subscribers, because all it requires is an azure storage account, which you often have to create anyway to get started.

SQL is the probably the next easiest option then - as that then requires you to add Sql Azure to your azure subscription which adds an extra fee. Not all apps may require Sql server, but most apps do require disk / file storage.

Lastly, third party services like Zookeeper and others are probably the hardest as they require setup of that third party system first.

With that in mind - adding an Azure Blob Lease based distributed lock implementation might be a great addition to this repo.

How to dispose SqlDistributedLock asynchronously?

The using pattern is nice and reads very well.

However, the implicit call to Dispose makes a synchronous DB call and blocks the calling thread, so I'd love to see an option to dispose asynchronously, even if it means I can't use the using keyword in my calling code.

I guess SqlDistributedLock.TryAcquireAsync could return an interface like this:

public interface IAsyncDisposable : IDisposable
{
    Task DisposeAsync();
}

What do you think?

2.0 Release Miscellaneous tasks

Solution restructuring:

  • remove old target frameworks
  • remove System.Data.SqlClient
  • separate out System locks and rename to EventHandle or similar
  • separate out SqlLocks
  • separate out Core
  • set up DistributedLock umbrella package

Things to revisit

  • efficient sync-over-async using ValueTask and flowing context through AsyncLocal
  • safe naming by default
  • remove any obsolete APIs
  • look at dotnet/SqlClient#663
  • use latest Microsoft.Data.SqlClient (1.1.1 or 2.0)
  • namespace audit (add test)
  • add test that checks for sealed
  • clean up namespaces in tests
  • waithandles semaphore
  • waithandle permissioning: https://stackoverflow.com/questions/1784392/my-eventwaithandle-says-access-to-the-path-is-denied-but-its-not
  • API review test: check static APIs like GetName/MaxNameLength, check that IInternal APIs are explicitly implemented, etc
  • ConfigureAwait(false) review test
  • review all try-finally's and disposes for ordering issues
  • revisit handles nulling things out. Use RefBox where needed
  • consider getting rid of reentrancy for everything (e.g. by checking the type)
  • after cancellation of SQL lock, do a release to avoid races
  • look into CI tests
  • add tests for presence of TransactionScope (Transaction.Current). In particular Postgres's transaction detection to avoid statement_timeout leaks and multiplexing (what happens if the shared connection gets caught up in multiple transactions?). Seems like in Postgres this is a connection string option
  • update third-party dependencies to latest versions (need to consider npgsql/npgsql#3443 and npgsql/npgsql#3442)
  • review for places to use SafeTaskCreate
  • revisit cases where public API idempotent disposal doesn't block on the second call
  • Add test for lingering console usages
  • interpackage dependency version bounds

Final steps:

  • resolve all in-code todos
  • all tests passing (release mode)
  • update docs
  • FileSystem docs
  • Postgres docs
  • Redis docs
  • SqlServer docs
  • WaitHandles docs
  • Locks docs
  • Other topics docs
  • Reader-writer locks docs
  • Semaphores docs
  • update release notes
  • update versions
  • all issues closed
  • publish!

SqlDistributedLock timeout issue?

Hi there.

Thank you for creating a cool library!
Good code too...

I use it in my web app with SQL Azure to make sure that certain long-running operations can only run one at a time.
It's been working fine, but I just had a weird case, and I thought maybe you would know something about it.

I have a block like this:

using (var @lock = new SqlDistributedLock("hello", connectionString).TryAcquire())
{
    if (@lock == null)
        return;

    await doLongRunningOperationAsync();
}

doLongRunningOperationAsync can take quite a long time and I had a case in production where 2 requests managed to both acquire the exclusive lock!
The second one acquired the lock about 45 minutes after the first one had acquired the lock (it can take over an hour for this operation to complete...)

I use version 1.1.0.0, and as I understand it, when providing a connection string, a new connection and transaction will be created and the transaction will be the owner of the lock.

So it got me thinking. Maybe transactions have a timeout after which they are automatically closed? Maybe even the connection? In particular, SQL Azure can suffer transient issues, and it is always a best practice to retry operations since they may fail because of temporary internal Azure issues.

So, long story short, is it OK to use SqlDistributedLock for long-running operations?

Where is DistributedLock.WaitHandles (>= 1.0.0-alpha01)

Installing 2.0 returns the following error:

Error NU1102: Unable to find package DistributedLock.WaitHandles with version (>= 1.0.0-alpha01)
  - Found 1 version(s) in nuget.org [ Nearest version: 0.0.0-alpha000 ]
  - Found 0 version(s) in Microsoft Visual Studio Offline Packages

Implement ability to recursively acquire a SqlDistributedLock

In some scenarios it would be handy to recursively acquire the same lock instance (according to this article, it should be quite possible to implement: https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-getapplock-transact-sql?redirectedfrom=MSDN&view=sql-server-ver15).
For example:

using (var handle = taskProcessingLock?.TryAcquire(TimeSpan.FromSeconds(0)))
{
    // ...
    using (var handle2 = taskProcessingLock?.TryAcquire(TimeSpan.FromSeconds(0)))
    {
        // handle2 is always null here
    }
}
