
haidra-org / ai-horde

985 stars · 117 forks · 17.11 MB

A crowdsourced distributed cluster for AI art and text generation

License: GNU Affero General Public License v3.0

Python 94.76% HTML 4.80% JavaScript 0.38% Dockerfile 0.06%
ai distributed-computing flask-api gpt opt stable-diffusion volunteer-computing

ai-horde's People

Contributors

7flash, aessedai, cubox, danparizher, db0, ebolam, eltociear, evguu, executivebees24601, fripe070, jamdon2, johnnypeck, junland, lostruins, maikotan, motolies, mrseeker, natanjunges, ndahlquist, phineas-pta, pyroserenus, residentchief, rondlite, scenaristeur, tazlin, tijszwinkels, whjms



ai-horde's Issues

Show worker names and IDs conditionally on user GET

  1. When the user's API key is in the GET header
  2. When the user has flipped a switch to show it. The switch will be set with a PUT on the username and can only be flipped by the user themselves. When this switch is flipped, workers will also show which user owns them in their own info

Auto-benchmark and assign power levels

The idea is, when first setting up a worker, to run a benchmark test on the card.

When benchmarking, generate a set of images at different sizes (using a standard prompt).

NOTE: it's important to generate a few extra test images first, as some cards take ~20 sec extra on the first generation.

Find the maximum image size that can be generated in a reasonable time-frame (maybe 20 seconds).

Automatically assign a recommended power level for the card.

This would allow us to place GPUs at a power level where most generation calls resolve in around 20 seconds, instead of possibly taking minutes because someone has set their card to a power level that can only handle 0.1 it/s.

It could also allow us to notify users if their card is generally just not powerful enough to do things in a timely manner, possibly set an option to mark their card low priority, and let users opt in to using low-priority cards.
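The selection step could be sketched like this. The `timings` shape, the 20-second budget, and the area-based power mapping are illustrative assumptions, not the horde's actual formula:

```python
def recommend_power(timings, budget_s=20.0):
    """Pick the largest image size whose measured generation time fits the
    time budget, and map it to a power level.

    timings: {side_length_px: seconds}, measured AFTER a couple of warm-up
    generations (some cards take ~20 sec extra on the first one, so those
    runs should be discarded).
    """
    fitting = [size for size, secs in timings.items() if secs <= budget_s]
    if not fitting:
        return None  # card too slow even at the smallest benchmarked size
    best = max(fitting)
    # Assumption for illustration: power scales with pixel area in 64x64 tiles.
    return (best * best) // (64 * 64)
```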

Incorrect kudos error message for "too many steps"

Request: 512x512, Steps: 51

ERROR: Error received from the Stable Horde: For requests over 1024x1024 or over 100 steps, the client needs to already have the required kudos. This request requires 857.4 kudos to fulfil

Client: v0.10.0

It seems to error on steps > 50, but the message describes the threshold as 100 steps.
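A hypothetical reconstruction of the server-side check (names and constants are assumptions, not the horde's actual code), illustrating the fix: derive the error text from the same constant the condition uses, so the message cannot drift out of sync with the code:

```python
MAX_FREE_PIXELS = 1024 * 1024
MAX_FREE_STEPS = 50  # the limit the report suggests is actually enforced

def check_upfront_kudos(width, height, steps, user_kudos, required_kudos):
    # Reject oversized/over-stepped requests unless the user already holds
    # the kudos; the message quotes MAX_FREE_STEPS rather than hardcoding 100.
    if (width * height > MAX_FREE_PIXELS or steps > MAX_FREE_STEPS) and user_kudos < required_kudos:
        raise ValueError(
            f"For requests over 1024x1024 or over {MAX_FREE_STEPS} steps, the client "
            f"needs to already have the required kudos. This request requires "
            f"{required_kudos} kudos to fulfil"
        )
```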

Docker container?

Could a docker container be made to make it easier to contribute your processing power?
Everything I run is containerized and so I tend to avoid things that aren't.

page to list what kudos are used for

from reddit

One idea that springs to mind is adding a community projects page where people can say what they're using donated kudos for. It would be cool to support people who are creating community resources, like all those style guides starting to pop up where they've procedurally generated a series of prompts with varying keywords, settings, etc., and compiled them into a guide we can use to learn how to prompt more effectively. It would work like an upvote system for good ideas, giving those who produce the most interesting results more power to create more. Because it's tied to kudos, the people with the most interest in the project are the ones who decide, so it's almost impossible for spammers and bot accounts to bias it.

Add an additional API endpoint to view information about pending requests

It would be nice to be able to view the sort of requests that are pending in the queue.

Information that might be exposed could include:

  • Number of requests pending for each supported model
  • Number of requests pending at different power levels
  • etc.

This would be useful for both visualizing the current state of the horde, as well as providing those who run workers information as to how to configure their workers, or if additional workers of a particular model/strength are required.
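A sketch of the aggregation such an endpoint could return (in the horde's Flask app this would be wrapped in a read-only GET route; the queue structure and the "model"/"power" field names are assumptions for illustration):

```python
from collections import Counter

def summarize_pending(pending_requests):
    # `pending_requests` stands in for the horde's in-memory queue.
    by_model = Counter(req["model"] for req in pending_requests)
    by_power = Counter(req["power"] for req in pending_requests)
    return {"by_model": dict(by_model), "by_power": dict(by_power)}
```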

Prevent txt2img requests asking for the stable_diffusion_inpainting model

I'm currently getting a crash when the stable_diffusion_inpainting model is enabled, along with voodoo. This is due to receiving the following job:

DEBUG      | 2022-11-02 04:43:20 | __main__:bridge:189 - txt2img (stable_diffusion_inpainting) request with id d017d82f-be95-4274-941d-b9fcd6dd2ed3 picked up. Initiating work...

The code then crashes in load_from_plasma:


ValueError: 'object_refs' must either be an object ref or a list of object refs.
Traceback (most recent call last):
  File "bridge.py", line 447, in <module>
    bridge(args.interval, model_manager, bd)
  File "/home/crypticwit/code/SD-Horde-Server/conda/envs/linux/lib/python3.8/site-packages/loguru/_logger.py", line 1220, in catch_wrapper
    return function(*args, **kwargs)
  File "bridge.py", line 247, in bridge
    generator.generate(**gen_payload)
  File "/home/crypticwit/code/SD-Horde-Server/nataili/util/voodoo.py", line 14, in wrapper
    return torch.cuda.amp.autocast()(torch.no_grad()(f))(*args, **kwargs)
  File "/home/crypticwit/code/SD-Horde-Server/conda/envs/linux/lib/python3.8/site-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "/home/crypticwit/code/SD-Horde-Server/conda/envs/linux/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/crypticwit/code/SD-Horde-Server/nataili/inference/compvis/txt2img.py", line 85, in generate
    with load_from_plasma(self.model, self.device) as model:
  File "/home/crypticwit/code/SD-Horde-Server/conda/envs/linux/lib/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/home/crypticwit/code/SD-Horde-Server/nataili/util/voodoo.py", line 59, in load_from_plasma
    skeleton, weights = ray.get(ref)
  File "/home/crypticwit/code/SD-Horde-Server/conda/envs/linux/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
    return func(*args, **kwargs)
  File "/home/crypticwit/code/SD-Horde-Server/conda/envs/linux/lib/python3.8/site-packages/ray/_private/worker.py", line 2268, in get
    raise ValueError(
ValueError: 'object_refs' must either be an object ref or a list of object refs.

This might need to be fixed in the nataili codebase, but it would be good if the horde protected us from the invalid request until then.
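Until the server stops handing txt2img jobs to inpainting-only checkpoints, the bridge could guard against them; a minimal sketch with illustrative field names:

```python
# Models that can only serve inpainting requests.
INPAINTING_ONLY_MODELS = {"stable_diffusion_inpainting"}

def job_is_runnable(request_type, model):
    # Reject the mismatched job up front instead of crashing later
    # inside load_from_plasma.
    if request_type == "txt2img" and model in INPAINTING_ONLY_MODELS:
        return False
    return True
```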

[REQUEST] Easy Setup guide for a Horde node, under linux

I suspect we'd have a lot more nodes if we had a quick/easy guide on how to take that machine with the older 2G/3G nvidia GPU, and bring it online to help the horde. Can we get a guide, and if one already exists, can we link it on the README?

Assume Ubuntu 20-22, since that seems to be the most common now?

Create discord bot which rewards accounts with kudos based on the reactions their posts receive

We should have a bot that looks for specific reactions. When it detects one, it picks the post owner's ID and transfers X amount of kudos from the user issuing the reaction.

This could work by default for users who have signed in using Discord, but the bot should also accept a command for a Discord user to set an API key for sending and a username for receiving. This will allow even users with pseudonymous accounts to send/receive.

Error when downloading SD model

I get this error when I try to download the SD model

requests.exceptions.ConnectionError: HTTPSConnectionPool(host='96ekachfg', port=443): Max retries exceeded with url: /?[email protected]/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001CAD04FBAC0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
Traceback (most recent call last):
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\urllib3\connection.py", line 174, in _new_conn
    conn = connection.create_connection(
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\urllib3\util\connection.py", line 72, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "D:\nataili-main\conda\envs\windows\lib\socket.py", line 918, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\urllib3\connectionpool.py", line 386, in _make_request
    self._validate_conn(conn)
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\urllib3\connectionpool.py", line 1042, in _validate_conn
    conn.connect()
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\urllib3\connection.py", line 358, in connect
    self.sock = conn = self._new_conn()
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\urllib3\connection.py", line 186, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x000001CAD04FBAC0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\requests\adapters.py", line 439, in send
    resp = conn.urlopen(
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\urllib3\util\retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='96ekachfg', port=443): Max retries exceeded with url: /?[email protected]/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001CAD04FBAC0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "bridge.py", line 789, in <module>
    bridge(args.interval, model_manager, bridge_data)
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\loguru\_logger.py", line 1220, in catch_wrapper
    return function(*args, **kwargs)
  File "bridge.py", line 425, in bridge
    bd.check_models(model_manager)
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\loguru\_logger.py", line 1220, in catch_wrapper
    return function(*args, **kwargs)
  File "bridge.py", line 377, in check_models
    if not mm.download_model(model):
  File "D:\nataili-main\nataili\model_manager.py", line 475, in download_model
    self.download_file(download_url, file_path)
  File "D:\nataili-main\nataili\model_manager.py", line 391, in download_file
    r = requests.get(url, stream=True, allow_redirects=True)
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\requests\api.py", line 76, in get
    return request('get', url, params=params, **kwargs)
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\requests\api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\requests\sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\requests\sessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
  File "D:\nataili-main\conda\envs\windows\lib\site-packages\requests\adapters.py", line 516, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='96ekachfg', port=443): Max retries exceeded with url: /?[email protected]/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001CAD04FBAC0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

Allow horde to have trusted users

These users would be allowed to bypass the blacklist, on the understanding that they will not use these concepts for evil. This feature would be opt-in for workers, in exchange for some extra reward during uptime.

Add user property which gives monthly kudos

This property can be set by admins and mods

It gives that amount of kudos when first set, and again on the same date each month thereafter.

This will be used to reward consistent contributors, such as moderators or Patreon supporters.
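The recurrence could be computed like this; a minimal sketch where the function names and the handling of month-end dates are assumptions:

```python
from datetime import datetime

def months_elapsed(last_award, now):
    # Calendar-month arithmetic so "same date next month" behaves sanely
    # (e.g. an award set on Jan 31 is not due again on Feb 28).
    months = (now.year - last_award.year) * 12 + (now.month - last_award.month)
    if now.day < last_award.day:
        months -= 1
    return max(months, 0)

def due_kudos(monthly_amount, last_award, now):
    # Kudos owed since the last award was recorded.
    return monthly_amount * months_elapsed(last_award, now)
```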

Crash when nsfw = false and censor_nsfw = true and toggles defined

When invoking:

curl -X 'POST' \
  'https://stablehorde.net/api/v2/generate/async' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'apikey: 0000000000' \
  -d '{
  "prompt": "parrot dancing in room",
  "params": {
    "sampler_name": "k_lms",
    "toggles": [1, 4],
    "realesrgan_model_name": "string",
    "ddim_eta": 0,
    "batch_size": 1,
    "cfg_scale": 5,
    "seed": "string",
    "height": 512,
    "width": 512,
    "fp": 512,
    "variant_amount": 0,
    "variant_seed": 0,
    "steps": 50,
    "n": 1
  },
  "nsfw": false,
  "trusted_workers": true,
  "censor_nsfw": true,
  "workers": ["string"]
}'

Get an error: {"message": "Internal Server Error"}

Client interface does not work under Firefox

The client interface at https://dbzer0.itch.io/stable-horde-client seems to load on Chrome but not on Firefox for me.

Using Firefox 105.0 under Linux.

Here is an error in the console:

Uncaught (in promise) ReferenceError: SharedArrayBuffer is not defined
    Godot https://html.itch.zone/html/6508475-635713/index.js:9
    Engine https://html.itch.zone/html/6508475-635713/index.js:625
    promise callback*SafeEngine/init/doInit/Engine< https://html.itch.zone/html/6508475-635713/index.js:623
    doInit https://html.itch.zone/html/6508475-635713/index.js:622
    init https://html.itch.zone/html/6508475-635713/index.js:639
    startGame https://html.itch.zone/html/6508475-635713/index.js:738
    <anonymous> https://html.itch.zone/html/6508475-635713/index.html:224
    <anonymous> https://html.itch.zone/html/6508475-635713/index.html:244

Rating system for workers

Another tool in the toolbox to deal with potential malicious workers which might appear.

  • People can rate generations in the GUI.
  • Each rating will provide 1 kudos to the user
  • If a worker accumulates enough negative ratings from users, they are automatically paused temporarily
  • Each time a worker is paused again by bad ratings, the pause period becomes longer
  • The threshold to pause a worker is directly proportional to the age of the API key used. New API keys can be paused faster
  • If an API key has more than X amount of workers disabled due to negative ratings, it's put in permanent pause mode. This might also be proportional to how many bad servers it deploys.
  • People who negatively rate workers that everyone else is rating positively (and vice-versa) silently lose rating privileges

This is not a perfect solution, as a dedicated malicious actor could create a bot to generate new accounts and malicious workers every time they're paused. If that happens, the only solution is a worker whitelist. But let's hope it never comes to that.
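The pause escalation and age-proportional threshold above could be sketched as follows; every constant here is an illustrative assumption, not a tuned value:

```python
def pause_hours(times_paused_before, base_hours=1):
    # Each repeat pause doubles the previous duration.
    return base_hours * (2 ** times_paused_before)

def should_pause(negative_ratings, api_key_age_days):
    # Older API keys tolerate more negative ratings before pausing,
    # so brand-new keys can be paused faster.
    threshold = 5 + api_key_age_days // 7
    return negative_ratings >= threshold
```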

Switch the AI Horde from in-memory dictionaries, to a DB

With the rapid growth of the Stable Horde, this might become necessary sooner than later.

The current build is just storing users and workers in memory as python dicts. But as our users reach thousands or more, this might become unfeasible.

So I want to plan ahead and start developing the way to store user and worker objects to a Redis DB instead. I have a lot of other features that have more priority though, so I want some help!

If someone sends me a PR for this, there's a 500K Kudos bounty attached: https://discord.com/channels/781145214752129095/1028974305326932020/1028974305326932020

Cancel requests which are not possible

Currently, if you request an image using a model and that model becomes unavailable (the last worker offering it disconnected), you will be stuck with an estimated time oscillating between ~5 and 10 seconds, with no indication that you're waiting for nothing.

Feature requests:

  • Directly refuse any request for a model which is not available
  • Cancel all pending requests for a model if that model becomes unavailable
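Both requests above could be sketched like this, assuming queued requests are dicts with a "model" key (an illustrative assumption):

```python
def validate_request(model, available_models):
    # Refuse up front if no worker currently serves the model.
    if model not in available_models:
        raise ValueError(f"No workers currently serve model '{model}'")

def cancel_orphaned(queue, available_models):
    # Periodic sweep: split the queue into requests that can still be
    # served and requests whose model has disappeared.
    kept = [r for r in queue if r["model"] in available_models]
    cancelled = [r for r in queue if r["model"] not in available_models]
    return kept, cancelled
```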

Have evaluating workers pool

New workers would go to the eval pool, but all clients use the trusted pool by default. Users can opt in to the eval pool if they want its GPU power, but they accept that there may be malicious workers who slipped past the countermeasures. After an X period, evaluating workers turn into trusted workers.

Evaluating workers will not receive uptime kudos, and will receive slightly fewer normal kudos per generation.

Allow setting a worker to a special mode which only accepts requests where that worker is requested explicitly

I tried to see which power level I could allow with my RTX 3080 10GB with the inpainting model, but once I start accepting requests from the horde, it inevitably starts generating images for other people and will crash if the requested image is too big.

For such experimentation, it would be nice if we could set the worker to a special mode where it would only accept requests destined for this worker explicitly (i.e. where it is present in the workers field of the API request).

Joining the horde with a 4GB GPU

My GPU is an NVIDIA 1050 Ti with 4 GB. My PC has 16 GB of memory. I can run some, but not all, stable diffusion projects.

Can I join the horde as a worker with those specs? Are there "minimum requirements" for running the worker?

Create Application APIs

Normally, an IP can only be used by 3 user accounts at the same time. We should deploy an endpoint where a user can register an App Name and get an API key for it to increase this limit.

The application will be stored in the DB and will track how many users are using it. It will default to 1, but can be increased by admins and mods of the horde. Setting it to -1 will allow infinite amounts of accounts from the same IP

When this app key is sent along with the payload, it counts as just 1 of the accounts allowed per IP. So any number of requests from different accounts using the app can be served, and the app will count as just 1 IP. However, only 1 app can be active per IP, so other apps might need to wait their turn, or arrange to share a common app account.

Achievements

Idea for adding achievements to users of the horde. Could just be a string on a list in their user info

Ideas

  • Early Adopter: Generated X amount of kudos/mps in the first month of the Horde's operation
  • name_tbd: Generated X amount of kudos/mps (plus various enhanced/repeated tiers of this)

Would it make sense to use hivemind for distributed training/generation?

Splitting this out from the unrelated issue:

    Not sure if the implementation/etc would be compatible, but here's another distributed StableDiffusion training project a friend recently linked me to:

Originally posted by @0xdevalias in #12 (comment)

Basically, I stumbled across the hivemind lib and thought that it could be a useful addition to AI-Horde. I'm not 100% sure how the current distributed process is implemented, but from a quick skim it looked like perhaps you had rolled your own.

Not sure if it's something you already considered and decided against, but wanted to bring it to your attention in case you hadn't seen it before.

Make prompts cost proportional to parallelism

Currently requesting dozens of prompts at the same time has the potential to clog the queue.

I want to make it so that each parallel job increases the kudos cost.

The calculation I'm thinking of is kudos_cost = normal_cost * (1 + (0.1 * n)), where n is the number of other concurrent requests found when the request was deployed, so the cost of each request increases by 10% per concurrent request. This should also take other prompts into account, to avoid people splitting their request manually to get around it.

So a standard request of 512x512x50 costs 10 kudos.
2x of that in parallel is 11 + 11 = 22 kudos.
20x is 29 * 20 = 580 kudos (instead of 200 as it is now).
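The proposed formula can be written down directly (with n counted as the number of other concurrent requests, matching the 2x example):

```python
def parallel_cost(normal_cost, concurrent_others):
    # n = number of OTHER concurrent requests found when this one was
    # deployed (0 for a lone request); each adds 10% to the base cost.
    return normal_cost * (1 + 0.1 * concurrent_others)
```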

This will also help provide some deflationary pressure on the overall Kudos economy: since user balances cannot go negative, the total supply keeps increasing, which might become a problem (not sure).

This will incentivize people sending smaller batches, which will allow other users of the horde to get their own generations handled in between

As an addendum, we need to prevent people from creating sockpuppets to parallelize without the extra cost. By default this brings no benefit to 0-kudos accounts, as they will still be at the bottom of the priority queue.

However, a potential abuse vector is someone making multiple sockpuppet accounts and gifting them kudos from their main worker account, then using those sockpuppets to parallelize without penalty. To avoid this I will need to prevent multiple accounts from the same IP, as in #43.

Inpainting on cliRequests.py

I've tried various types of masks, but no luck.

I assume the mask image needs to be transparent where it is to be inpainted.

It just seems to create a new image.

New responsive frontend now available at diffusionui.com

I implemented the Stable Horde async rest api on my diffusion-ui frontend.

Advantages over the current web client:

  • works under Firefox
  • you can save the images; they are stored in the browser (until you close it) in a gallery in the right panel
  • it's a responsive design, allowing you to use it on your smartphone
  • regenerating the same images (to retry with a slightly changed parameter, for example) is very easy

To use it:

  • go to https://diffusionui.com
  • open the panel on the left
  • select Stable Horde in the dropdown at the top
  • optionally you can set your API key in the model info tab (ⓘ icon)

Send me a GitHub star if you like it!

CORS issue

I'm trying to implement the Stable Horde API for the diffusion-ui frontend.

Using the https://stablehorde.net/api/v2/generate/sync endpoint, my POST data looks like this:

  {
    "payload": {
        "sampler_name": "k_lms",
        "batch_size": 1,
        "cfg_scale": 7,
        "seed": -1,
        "height": 512,
        "width": 512,
        "steps": 50,
        "n": 1
    },
    "prompt": "a horde of cute stable robots in a sprawling server room repairing a massive mainframe",
    "nsfw": false,
    "censor_nsfw": false
  }

It seems I was able to receive a few images, but now sometimes I receive the following error:

Access to fetch at 'https://stablehorde.net/api/v2/generate/sync' from origin 'http://localhost:5173' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
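Intermittent CORS failures like this often mean the server's error path skips the CORS headers, so a fix is to attach them to every response, including errors. In the horde's Flask app this logic would live in an `after_request` hook; the sketch below is a plain function with assumed header values:

```python
ALLOWED_ORIGIN = "*"  # the horde is a public API

def add_cors_headers(headers):
    # Attach CORS headers to a response's header mapping; calling this on
    # every response (success or error) avoids the intermittent failures.
    headers["Access-Control-Allow-Origin"] = ALLOWED_ORIGIN
    headers["Access-Control-Allow-Headers"] = "Content-Type, apikey"
    headers["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
    return headers
```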

Prevent more than 3 accounts from the same IP

This is to prevent potential attack vectors through sockpuppet client accounts. This is already handled on the bridge-side, but it makes sense on the client side as well.

The default should be 3, to allow a family to use it together on their own accounts.

This limit will be lifted for users who have been marked as "trusted". Trusted users will not count toward the per-IP limit.

However, it also means that if there is a legitimate reason for dozens or hundreds of accounts on the same IP (such as some sort of web UI which people can use with their own API keys), each user will need to be trusted. To avoid trusting everyone, people can request an app name as described in #42.

Selecting the 'streamlit' option when starting the webui ignores the --bridge option.

When launching the sd-webui with webui.sh/bat, the user is now queried as to whether they wish to launch the gradio or streamlit version. It seems that the streamlit version (webui_streamlit.py) of the sd-webui doesn't support the '--bridge' flag.

The documentation might need a quick update in the 'Join the Horde' section to note this is the case, or the webui_streamlit.py script updated to properly launch in bridge mode.

Using stable horde to generate dreambooth models

Google's Dreambooth has come to SD, but it currently requires 24 GB of VRAM, making it only accessible to people who pay for a server.
As SD itself now runs on even 2 GB cards, wouldn't this be the perfect additional use for a server farm like this?

Worker Rewards

Sorry, I just got here. Beyond Kudos, could there be some kind of cryptocurrency token system built on top of an existing blockchain as an additional monetary incentive and/or charging users (perhaps for bulk requests)? This idea came to me from earning Banano (readily exchanged for Nano) for getting points for the Banano team for Folding@Home. I don't think this is critical, but it'd definitely draw more people into the fold. This could also be a good idea since I remember a similar project charging small amounts of Bitcoin on Discord for using it. Is anything like this in the cards? I think you have an excellent platform here for this kind of thing (scalable like Folding is) so this could make it quite popular.
