rbbrdckybk / dream-factory
Multi-threaded GUI manager for mass creation of AI-generated art with support for multiple GPUs.
License: MIT License
Similar to !MIN_SCALE / !MAX_SCALE, please allow a variable setting for the step count.
Also, consider whether it would be simpler to represent random-range settings with the same syntax as [prompts 1-3]. There could be a single !SCALE variable that accepts either a fixed value or a range in the form !SCALE = [6.0-11.0].
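A parser for such a value might be sketched as follows (a minimal illustration only; parse_scale is a hypothetical name, not Dream Factory's actual syntax handling):

```python
import random
import re

def parse_scale(value):
    """Hypothetical parser: accept a fixed value like "7.5" or a range like
    "[6.0-11.0]" and return a concrete scale, drawing uniformly within a range."""
    match = re.fullmatch(r"\[\s*([\d.]+)\s*-\s*([\d.]+)\s*\]", value.strip())
    if match:
        low, high = float(match.group(1)), float(match.group(2))
        return random.uniform(low, high)
    return float(value)
```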
I just set up Dream Factory on top of my automatic1111 and noticed that Lora files seem to not be working correctly. Using Auto1111 on the same machine I can use the same prompts and seeds with a lora model and get drastically different results from what dream factory produces. It appears that the loras are not being applied at all?
I have tried inserting them in the prompt in the following ways:
<lora:[filename]:[strength]> ((the way it works in Auto1111 by default & the way it's copied out of the clipboard))
<[filename]:[strength]> ((as suggested in issue #28 ))
lora:[filename]:[strength] ((just to test))
Also, I noticed that in the gallery, when reviewing the prompts, the LoRA part is missing from the preview, as if it has been commented out in some way. I have verified that the files & prompts work correctly in Auto1111 but not in Dream Factory. Is there a config that needs to be added for Dream Factory for LoRAs to work properly?
Any help is appreciated; I really enjoy being able to use Dream Factory so far.
Because this is my first time using dream-factory, I'm not sure if I'm on the right track.
I'm trying to run it in a Kaggle Notebook without using setup.py, because I already had the dependencies installed from running Automatic1111, and then launching dream-factory.py with Python in a Mamba environment.
At first, I was stuck at "waiting for SD instance to be ready":
[controller] >>> reading configuration from config.txt...
[controller] >>> starting webserver (http://localhost:8080/) as a background process...
[controller] >>> webserver listening for external requests...
[controller] >>> detected 2 total GPU device(s)...
[controller] >>> initialized worker 'cuda:0': Tesla T4
[controller] >>> initialized worker 'cuda:1': Tesla T4
[cuda:0] >>> starting new SD instance via: /kaggle/working/stable-diffusion-webui/df-start-gpu-0.sh
[cuda:0] >>> waiting for SD instance to be ready...
As a workaround, I manually launch the Automatic1111 web UI via threading, and then it works:
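The threading workaround might look roughly like this in a notebook cell (a sketch only; the script path is taken from the log above, and launch_webui is a hypothetical name):

```python
import subprocess
import threading

def launch_webui(script):
    """Hypothetical helper: run a per-GPU start script in the background
    so the notebook cell returns immediately."""
    subprocess.Popen(["bash", script])

# path taken from the log output above
thread = threading.Thread(
    target=launch_webui,
    args=("/kaggle/working/stable-diffusion-webui/df-start-gpu-0.sh",),
    daemon=True,
)
# thread.start() would launch the instance in the background
```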
[controller] >>> reading configuration from config.txt...
[controller] >>> starting webserver (http://localhost:8080/) as a background process...
[controller] >>> webserver listening for external requests...
[controller] >>> detected 2 total GPU device(s)...
[controller] >>> initialized worker 'cuda:0': Tesla T4
[controller] >>> initialized worker 'cuda:1': Tesla T4
[cuda:0] >>> starting new SD instance via: /kaggle/working/stable-diffusion-webui/df-start-gpu-0.sh
[cuda:0] >>> waiting for SD instance to be ready...
[cuda:0] >>> SD instance finished initialization; ready for work!
[cuda:1] >>> starting new SD instance via: /kaggle/working/stable-diffusion-webui/df-start-gpu-1.sh
[cuda:0] >>> passing initial setup options to SD instance...
[cuda:1] >>> waiting for SD instance to be ready...
[cuda:0] >>> querying SD for available samplers...
[cuda:0] >>> querying SD for available models...
[cuda:0] >>> querying SD for available hypernetworks...
[cuda:0] >>> received sampler query response: SD indicates 19 samplers available for use...
[cuda:0] >>> received hypernetwork query response: SD indicates 0 hypernetworks available for use...
[cuda:0] >>> received model query response: SD indicates 1 models available for use...
[controller] >>> No more work in queue; waiting for all workers to finish...
[controller] >>> All work done; pausing server - add some more work via the control panel!
Hi, my question is: can I use Dream Factory to generate images through the Stable Diffusion web UI only?
I want to use the Stable Diffusion UI to generate images, but use Dream Factory just for its multi-GPU support, like here:
https://github.com/NickLucche/stable-diffusion-nvidia-docker
Can Dream Factory act purely as a multi-GPU layer on top of the Stable Diffusion UI, or do I need to use its internal web UI to generate images?
When I try to run dream-factory, it shows me the error below: something is missing (image). My Stable Diffusion install works just fine on its own. Am I doing something wrong?
(dream-factory) C:\Users*\dream-factory>python dream-factory.py --prompt_file prompts/example-standard.prompts
Traceback (most recent call last):
  File "C:\Users*\dream-factory\dream-factory.py", line 30, in <module>
    from torch.cuda import get_device_name, device_count
  File "D:.coding\anaconda\envs\dream-factory\lib\site-packages\torch\__init__.py", line 139, in <module>
    raise err
OSError: [WinError 126] The specified module could not be found. Error loading "D:.coding\anaconda\envs\dream-factory\lib\site-packages\torch\lib\shm.dll" or one of its dependencies.
The JPEG compression artifacts are too extreme. Is there a way to generate and keep the uncompressed PNG files?
In addition, having control over the level of JPEG compression would also help.
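For reference, Pillow (which the project uses for image handling) supports both options; a minimal sketch with illustrative filenames:

```python
from PIL import Image

# minimal sketch: keep a lossless PNG alongside a quality-controlled JPEG
img = Image.new("RGB", (64, 64), "gray")
img.save("sample.png")              # PNG is lossless; no compression artifacts
img.save("sample.jpg", quality=95)  # Pillow JPEG quality (1-95 recommended)
```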
Sorry, I'm sure this is strange to ask, but it would be amazing to run this on Kaggle. However, file persistence doesn't work well there, so everything has to be reinstalled every time. Is there anything I (or you) can do to improve installation times? Installing PyTorch and all other components with Automatic1111 takes only about 1/6 of the time.
Thank you!
The job shuts down on me, and the webserver isn't starting either.
(dream-factory) PS C:\dream-factory> python .\dream-factory.py
[controller] >>> reading configuration from config.txt...
[controller] >>> starting webserver (http://localhost:8080/) as a background process...
[controller] >>> detected 1 total GPU device(s)...
[controller] >>> initialized worker 'cuda:0': NVIDIA GeForce RTX 2070 SUPER
[controller] >>> No more work in queue; waiting for all workers to finish...
[controller] >>> All work done; pausing server - add some more work via the control panel!
[controller] >>> Server shutdown requested; cleaning up and shutting down...
Did I do something wrong? I couldn't find an option to make the process stay idle or anything similar.
Hi, so I got through all the steps until the last one, testing to make sure it works.
When I enter python dream-factory.py --prompt_file prompts/example-standard.prompts
I get the following
(dream-factory) C:\Users\blade\dream-factory>python dream-factory.py --prompt_file prompts/example-standard.prompts
Traceback (most recent call last):
  File "C:\Users\blade\dream-factory\dream-factory.py", line 19, in <module>
    import scripts.utils as utils
  File "C:\Users\blade\dream-factory\scripts\utils.py", line 22, in <module>
    from PIL import Image
ModuleNotFoundError: No module named 'PIL'
As far as I know, I did the setup correctly so I'm kind of at a loss.
I'm using Windows 10 and appreciate any help I can get.
If possible please explain any fixes simply, I'm very new at this so my knowledge is quite limited. Thanks in advance!
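This error means Pillow isn't installed in the interpreter that is actually running dream-factory (typically fixed with `pip install pillow` inside the activated conda env). A quick sanity check, using a hypothetical has_module helper:

```python
import importlib.util

def has_module(name):
    """Return True if `name` is importable by the interpreter running this code."""
    return importlib.util.find_spec(name) is not None

# Pillow installs its module under the name "PIL"
print("Pillow found" if has_module("PIL") else "missing - run: pip install pillow")
```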
Hello, love the project. Are there any plans to support AMD GPUs like the Radeon RX580 in the future or should I finally go ahead and buy an NVIDIA GPU?
When I attempt to use the low-memory card option, it returns:
*** WARNING: prompt file command not recognized: SD_LOW_MEMORY (it will be ignored)! ***
I know that in ai-art-generator I had to specify !PROCESS = stablediff as well, but !PROCESS isn't recognized either. I've tried both the [config] and [prompts] sections.
I tried setting the working GPU to GPU 1, but I'm still getting the same error; I could not get simultaneous generation going.
This is the error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument index in method wrapper__index_select)
This is what I've found online about it, but I don't know where to begin troubleshooting:
https://discuss.pytorch.org/t/runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-1-and-cuda-0-when-checking-argument-for-argument-index-in-method-wrapper-index-select-while-using-dataparallel-class/143566/8
I feel like I'm getting a bit outside the program-specific realm now, which is why I'm first wondering whether there's some sort of discussion board for Dream Factory users, just to share experiences, for example about formatting prompts.
I'm stuck with a problem where the higher the resolution, the more duplicates appear (for example, in a portrait). I've tried different samplers, and now also upscaling (which doesn't give satisfactory quality, like any AI upscale), plus different scale and step values, all without results. Does anyone have any good advice on wording a prompt? Maybe someone has found a suitable negative-prompt wording to avoid these multiplications? I'd like to get a 1088px image with ONE object, as at 512px, rather than more copies of the same object the bigger the image gets.
First off, thank you so much.
I am trying to set up my 6x3080 rig. I got everything installed and running fine with just one GPU connected; I tested it, and DF ran well.
I'm running Windows 10.
When I added the second GPU, I get this error when I try to launch DF:
(base) C:\Users\ryan\dream-factory>python dream-factory.py
Traceback (most recent call last):
  File "C:\Users\ryan\dream-factory\dream-factory.py", line 30, in <module>
    from torch.cuda import get_device_name, device_count
ModuleNotFoundError: No module named 'torch'
Here is my config file info:
Hi, I stumbled upon this a couple of minutes ago and got everything set up, but...
[controller] >>> configuration file 'config.txt' doesn't exist; using defaults...
[controller] >>> starting webserver (http://localhost) as a background process...
[controller] >>> detected 0 total GPU device(s)...
[controller] >>> ERROR: unable to initialize any GPUs for work; exiting!
I have an Nvidia RTX 3060 Ti; why isn't Dream Factory finding it? Do I need to specify it somewhere?
Thanks
I have a standard prompt with the following fields
!CKPT_FILE = all
!SEED = 1178461960
I want to use the same seed for all prompts across all models; however, I'm noticing the seed incrementing by 1 with each new model.
If I switch to random mode, the seed increments between images when batch or samples is used. Would it be possible to get a flag to keep, increment, decrement, or randomize the seed for each generation?
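The requested flag might behave like this hypothetical next_seed policy (illustrative only, not an existing Dream Factory option):

```python
import random

def next_seed(seed, mode):
    """Hypothetical per-generation seed policy for the requested flag:
    'keep', 'increment', 'decrement', or 'random'."""
    if mode == "keep":
        return seed
    if mode == "increment":
        return seed + 1
    if mode == "decrement":
        return seed - 1
    return random.randint(0, 2**32 - 1)  # 'random': any 32-bit seed
```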
Hello,
I'm attempting to run dream-factory on an Ubuntu machine. It appears to be running: the web interface comes up and all GPUs show status "dreaming". However, after some time the errors below start to get thrown. It appears that something is wrong with the folder creation.
After a bit more time, only two of the devices (which seem to be random) continue running; the other devices get stuck in the "+exif data" state. While two of the devices keep dreaming and generating files, the files are not stored/copied into the proper directories (e.g. gpu_0); they remain in the root of the output/<date-config_name> directory.
I will attempt to debug this and make changes, however if you have any idea or guidance what could cause this that would be helpful. I really like this project and would like to contribute.
Please see the error below, followed by a screenshot and system info.
Exception in thread Thread-17:
Traceback (most recent call last):
  File "/home/lol/anaconda3/envs/dream-factory/lib/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/home/lol/dream-factory/dream-factory.py", line 143, in run
    new_files = os.listdir(samples_dir)
FileNotFoundError: [Errno 2] No such file or directory: 'output/2022-11-28-example-standard/gpu_4'
Exception in thread Thread-15:
Traceback (most recent call last):
  File "/home/lol/anaconda3/envs/dream-factory/lib/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/home/lol/dream-factory/dream-factory.py", line 143, in run
    new_files = os.listdir(samples_dir)
FileNotFoundError: [Errno 2] No such file or directory: 'output/2022-11-28-example-standard/gpu_2'
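A possible mitigation for the missing-directory race, sketched here as a hypothetical list_new_files helper (assuming the per-GPU directory may simply not exist yet when the watcher thread first lists it):

```python
import os

def list_new_files(samples_dir):
    """Sketch of a defensive watcher step: ensure the per-GPU directory
    exists before listing, so the watcher thread cannot race ahead of
    the worker process that creates it."""
    os.makedirs(samples_dir, exist_ok=True)
    return os.listdir(samples_dir)
```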
Below are the system specs
(dream-factory) lol@lol-H110-D3A:~/dream-factory$ lsb_release -a
...
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
(dream-factory) lol@lol-H110-D3A:~/dream-factory$ sudo lshw -short
...
H/W path Device Class Description
========================================================
system H110-D3A (Default string)
/0 bus H110-D3A-CF
/0/0 memory 64KiB BIOS
/0/3d memory 20GiB System Memory
/0/3d/0 memory 16GiB DIMM DDR4 Synchronous 2133 MH
/0/3d/1 memory [empty]
/0/3d/2 memory 4GiB DIMM DDR4 Synchronous 2133 MHz
/0/3d/3 memory [empty]
/0/43 memory 128KiB L1 cache
/0/44 memory 512KiB L2 cache
/0/45 memory 2MiB L3 cache
/0/46 processor Intel(R) Celeron(R) CPU G3930 @ 2.9
/0/100 bridge Xeon E3-1200 v6/7th Gen Core Proces
/0/100/1 bridge 6th-10th Gen Core Processor PCIe Co
CUDA devices
(dream-factory) lol@lol-H110-D3A:~/dream-factory$ nvidia-smi
Mon Nov 28 15:32:21 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 On | N/A |
| 0% 46C P0 35W / 180W | 386MiB / 8192MiB | 5% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce ... Off | 00000000:03:00.0 Off | N/A |
| 0% 49C P8 24W / 220W | 6MiB / 8192MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 NVIDIA GeForce ... Off | 00000000:04:00.0 Off | N/A |
| 0% 35C P8 6W / 180W | 6MiB / 8192MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 NVIDIA GeForce ... Off | 00000000:05:00.0 Off | N/A |
| 0% 37C P8 7W / 180W | 6MiB / 8192MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 4 NVIDIA GeForce ... Off | 00000000:06:00.0 Off | N/A |
| 0% 49C P8 23W / 240W | 6MiB / 8192MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 5 NVIDIA GeForce ... Off | 00000000:07:00.0 Off | N/A |
| 0% 45C P8 17W / 240W | 6MiB / 8192MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
In scripts/utils.py:
+ " --n_iter " + str(command.get('samples')) \
+ " --n_samples " + str(command.get('batch_size')) \
The code above is wrong and needs to be fixed accordingly.
Hey!
How is that upscale supposed to work?
When I have:
!USE_UPSCALE = yes
!UPSCALE_AMOUNT = 2.0
and the original is 768x768, should the final result be 1536x1536? (That doesn't happen.) Or does it mean something else?
Hello,
I want to put the web UI behind a reverse proxy so I can access it remotely, but I always get a 504 Gateway Timeout error. Is any special configuration needed to set this up?
The reverse proxy is NGINX Proxy Manager, which works fine with a bunch of other websites.
If more information is needed, just ask.
BR
Terrorwolf
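A 504 from the proxy usually means its upstream timeout expires before the backend responds. A sketch of raising the timeouts in plain nginx (NGINX Proxy Manager accepts similar directives in a host's advanced configuration; the port and timeout values here are assumptions):

```nginx
# assumed: Dream Factory webserver on localhost:8080; durations illustrative
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_read_timeout  600s;   # raise beyond the 60s default to avoid 504s
    proxy_send_timeout  600s;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```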
As most of my images are generated with euler_a, this would be an awesome addition! Thanks.
I mainly use AUTO1111 for its API, but I need more images faster. Unfortunately, I can't generate the prompts beforehand, so I can't use DF for this task.
If only it had some kind of API that I can query and it manages the generation on my multi GPU server.
:)
I want to be able to reproduce images in Auto1111 that Dream Factory creates.
I was trying to figure out why I wasn't able to reproduce images from Dream Factory in Auto1111. It looks like LoRAs are not being applied the same way: without LoRAs I can reproduce the same image, but with LoRAs I cannot.
The prompt:
(takara miyuki, 1girl, glasses, pink hair, long hair, purple eyes), clean lines, (best quality), rich colors, saturated colors, ((Lucky Star anime)), 1girl, (looking at viewer, looking at camera, eye contact),(beautiful yukata), (anime aesthetic), standing in a forest at night, lora:luckystarv2:1.0
negative prompt: ((worst quality, low quality, photo, photorealistic, photo realistic)), (cropped head), (close up:1.6), (3d:1.6)
size: 768x512 | model: AnythingV5_v5PrtRE.safetensors | sampler: DPM++ 2M Karras | steps: 30 | scale: 7.0 | seed: 410192554
This produces a very similar image in Dream Factory with or without the LoRA; however, in Auto1111 with the LoRA it's a totally different image.
The latest version is throwing errors when loading models:
Exception in thread Thread-201:
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/mnt/data/AI/dream-factory/dream-factory.py", line 169, in run
    print(control.model_trigger_words.get(self.command.get('ckpt_file')))
TypeError: unhashable type: 'dict'
I reverted to commit c160045 and all is well again.
Installing Pytorch (this may take a few minutes):
Traceback (most recent call last):
  File "C:\Users\undisclosed\anaconda3\envs\dream-factory\dream-factory\setup.py", line 152, in <module>
    install_pytorch(verbose)
  File "C:\Users\undisclosed\anaconda3\envs\dream-factory\dream-factory\setup.py", line 55, in install_pytorch
    exec(cmd, verbose)
  File "C:\Users\undisclosed\anaconda3\envs\dream-factory\dream-factory\setup.py", line 27, in exec
    subprocess.run(command.split(' '), stdout=subprocess.DEVNULL)
  File "C:\Users\undisclosed\anaconda3\envs\dream-factory\lib\subprocess.py", line 505, in run
    with Popen(*popenargs, **kwargs) as process:
  File "C:\Users\undisclosed\anaconda3\envs\dream-factory\lib\subprocess.py", line 951, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "C:\Users\undisclosed\anaconda3\envs\dream-factory\lib\subprocess.py", line 1420, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified
- Line 25 subprocess.run(command.split(' '))
+ Line 25 subprocess.run(command.split(' '), shell=True)
- Line 27 subprocess.run(command.split(' '), stdout=subprocess.DEVNULL)
+ Line 27 subprocess.run(command.split(' '), stdout=subprocess.DEVNULL, shell=True)
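An alternative sketch that avoids mixing a pre-split argument list with shell=True: when shell=True, passing the whole command string lets the shell (cmd.exe on Windows) resolve pip/conda from PATH. exec_cmd is an illustrative helper, not the project's actual function:

```python
import subprocess

def exec_cmd(command, verbose=False):
    """Run a command string through the shell, optionally suppressing stdout.
    With shell=True, pass the full string rather than a split list so the
    shell performs PATH lookup of the executable."""
    out = None if verbose else subprocess.DEVNULL
    return subprocess.run(command, shell=True, stdout=out)
```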
Seeing these errors since the fixes yesterday.
[cuda:0] >>> received hypernetwork query response: SD indicates 0 hypernetworks available for use...
[cuda:0] >>> received model query response: SD indicates 0 models available for use...
[cuda:0] >>> received LoRA query response: SD indicates 0 LoRAs available for use...
Exception in thread Thread-159:
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/mnt/data/AI/dream-factory/scripts/sdi.py", line 98, in run
    self.callback(response)
  File "/mnt/data/AI/dream-factory/scripts/sdi.py", line 515, in sampler_response
    samplers.append(i['name'])
TypeError: string indices must be integers
Exception in thread Thread-163:
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/mnt/data/AI/dream-factory/scripts/sdi.py", line 158, in run
    self.callback(response)
  File "/mnt/data/AI/dream-factory/scripts/sdi.py", line 678, in script_response
    for i in r['txt2img']:
KeyError: 'txt2img'
Exception in thread Thread-164:
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/mnt/data/AI/dream-factory/scripts/sdi.py", line 170, in run
    self.callback(response)
  File "/mnt/data/AI/dream-factory/scripts/sdi.py", line 696, in upscaler_response
    upscalers.append(i['name'])
TypeError: string indices must be integers
It's all in the title.
When running the Automatic1111 webui, each generated image gets a text file with the same name containing the generation parameters, per the options set in the UI.
Since Dream Factory bulk-creates images, it's even more important that it writes that file too. I can write a script to extract the metadata for part of it and note the run parameters for the rest, but it should adhere to the options set in Automatic1111, IMO.
Hello there, fantastic project!
Is support for InvokeAI being considered?
Thank you,
e.g. for people: ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), [out of frame], extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), (((more than 2 nipples))), out of frame, ugly, extra limbs, gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), (cross-eyed), body out of frame, (closed eyes), (mutated), (bad body)
For non-people, something completely different.
This has been helping me a lot, but I would like to save the prompt/seed used to generate each picture, which in A1111 was easy to do with the PNG-saving option. Is there an API I can call to save metadata into the pictures?
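For the scripting route, Pillow can embed A1111-style metadata in a PNG text chunk (a sketch assuming A1111's convention of a "parameters" key; the filename and parameter string are placeholders, not a Dream Factory API):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# embed generation parameters the way A1111 does, under a "parameters" key
meta = PngInfo()
meta.add_text("parameters", "prompt | seed: 123")  # placeholder data
Image.new("RGB", (8, 8)).save("meta.png", pnginfo=meta)

# the text chunk survives the round trip
print(Image.open("meta.png").text["parameters"])
```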
I pulled the new version of Dream Factory, and !CKPT_FILE = all does not appear to work in random mode; the model never changes. It does still appear to work in standard mode once all of the prompt combinations have been exhausted.
From error FileNotFoundError: [Errno 2] No such file or directory: 'output/2022-10-18-2022-10-18-prompts-standard/gpu_1'
If I have --device-id in the command-line params of my primary webui-user.bat, then when Dream Factory copies it, it keeps that flag and adds a second --device-id flag. This appears to result in the VRAM of one card being used while the processing is split between the two; everything should be split between them instead.
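A fix might deduplicate the flag before appending the per-worker value, sketched here with a hypothetical set_device_flag helper:

```python
def set_device_flag(args, device_id):
    """Hypothetical fix: drop any pre-existing --device-id pair from the
    copied command line before appending the per-worker one."""
    out, skip = [], False
    for arg in args:
        if skip:                  # swallow the value that followed --device-id
            skip = False
            continue
        if arg == "--device-id":
            skip = True
            continue
        out.append(arg)
    return out + ["--device-id", str(device_id)]
```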
RT
No further explanation is given; it simply prints this message every time it queries something. This is on my Ubuntu installation.
What's even stranger is that the same prompt file works on my Windows installation.
First off, thanks so much for your contributions & support of this project!
Is it possible to adjust the DPI of images being rendered?
Hello,
First of all thanks for coding the dream-factory - a really cool tool!
I have an NVIDIA RTX 3070 with 8GB of memory and am running into an issue: when I use larger dimensions for the output format (e.g. 3000x4000) or seed images (2400x3000), I get the CUDA out-of-memory error below.
I tried setting the PYTORCH_CUDA_ALLOC_CONF environment variable but it didn't solve the issue.
On the forums they suggested to reduce the batch size but I checked the dream-factory source code and it is already set to 1.
Do you have any suggestions to create larger images?
Thanks!
RuntimeError: CUDA out of memory. Tried to allocate 65.48 GiB (GPU 0; 8.00 GiB total capacity; 2.74 GiB already allocated; 2.88 GiB free; 2.81 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
[cuda:0] >>> finished job #1 in 22.55 seconds.
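For context on why dimensions like 3000x4000 overwhelm 8GB even at batch size 1: Stable Diffusion 1.x denoises a latent grid at 1/8 the pixel resolution, and self-attention memory grows roughly quadratically with the token count. A rough back-of-the-envelope sketch (the 1/8 factor is the standard SD 1.x VAE downscale):

```python
# rough illustration: token count of the latent grid that attention operates on
def latent_tokens(width, height):
    return (width // 8) * (height // 8)

print(latent_tokens(512, 512))    # tokens at the native training size
print(latent_tokens(3000, 4000))  # ~46x more tokens; attention cost grows ~46^2
```

In practice, generating near the training resolution and upscaling afterwards is usually the workable path on an 8GB card.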
This is more of a request!
Can we get this to work with sd-dynamic-thresholding as well? =)
https://github.com/mcmonkeyprojects/sd-dynamic-thresholding
I see in the readme that this is possible, but I don't know where to begin. Any tips on how to run Dream Factory via the command line?
I am using Linux (Fedora) and need help: is there any way I can test this on an AMD video card? I don't know if there is a way to make it work with ROCm.
Do you plan to have a Docker version of this repo?
Fix: symlink stable_diffusion/scripts to scripts_mod
Do you think it would be easy to support Stable Diffusion 2? https://github.com/Stability-AI/stablediffusion
I'll try and make a PR if I can get it to work.
When you enable multiple servers starting at the same time, the tool will not run. The reason this occurs (as far as I can tell) is that the tool attempts to update the venv folder, hitting the same files, which leads to whichever instance is slightly slower crashing silently. In a WinForms project similar to this that I'm working on, I get around the issue by hardlinking a duplicate of the venv folder and pointing the second instance at that duplicate. Since this is Python and will run on Linux, symlinks would probably work similarly. Additionally, just copying the folder will work, or just setting the path without doing anything else will also work; it will simply recreate everything it's missing.
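The symlink workaround described above might be sketched like this on Linux (link_venv is a hypothetical helper; paths are illustrative):

```python
import os

def link_venv(src, dst):
    """Hypothetical workaround: point a second instance at a symlinked copy
    of the venv so parallel startups don't race on the same files."""
    if not os.path.exists(dst):
        os.symlink(os.path.abspath(src), dst)
```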
Automatic1111 requires Python 3.10.9, but this project uses Anaconda with Python 3.9. Attempting to run Automatic1111 manually (e.g. to train a model) after installing this fails because it complains that xformers is incompatible with 3.9. Please switch the project to use pip instead.
GPU: 3080
OS: Win10
Starting up the script, it does not detect my GPU at all.
python dream-factory.py
[controller] >>> reading configuration from config.txt...
[controller] >>> starting webserver (http://localhost:8080/) as a background process...
[controller] >>> detected 0 total GPU device(s)...
[controller] >>> ERROR: unable to initialize any GPUs for work; exiting!