Speaking of using hivemind for distributed training/etc, just stumbled across the following on:
SD Training Labs is going to conduct the first global public distributed training on November 27th
- Distributed training information provided to me:
- An attempt to combine the compute power of 40+ peers worldwide to train a finetune of Stable Diffusion with Hivemind
- This is an experimental test that is not guaranteed to work
- This is a peer-to-peer network.
- You can use a VPN to connect
- Run inside an isolated container if possible
- Developer will try to add code to prevent malicious scripting, but nothing is guaranteed
- Current concerns with training like this:
- Concern 1 - Poisoning: A node can connect and train on a malicious dataset, thereby affecting the averaged gradients. As in a blockchain network, a single bad node will have only a small effect on the averaged weights; the more malicious nodes connect, the more influence they have on the result. At the moment we are implementing super basic (and vague) discord account verification.
- Concern 2 - RCE: Pickle exploits should not be possible but haven't been tested.
- Concern 3 - IP leak & firewall issues: Due to the structure of hivemind, IPs will be seen by other peers. You can avoid this by setting client-only mode, but that limits the network's reach. It should be possible to use IPFS to avoid firewall and NAT issues, but this doesn't work at the moment.
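To make Concern 1 concrete, here is a toy sketch of how a single poisoned contribution shifts a plain unweighted average (the actual averaging hivemind performs is more sophisticated; the peer counts and numbers here are purely illustrative):

```python
def average_gradients(gradients):
    """Unweighted element-wise mean of per-peer gradient vectors."""
    n = len(gradients)
    dim = len(gradients[0])
    return [sum(g[i] for g in gradients) / n for i in range(dim)]

# 39 honest peers each report a gradient near [1.0, 1.0];
# one malicious peer reports a poisoned gradient of [-100.0, -100.0].
honest = [[1.0, 1.0] for _ in range(39)]
malicious = [[-100.0, -100.0]]

avg = average_gradients(honest + malicious)
# A single attacker shifts the mean by (poison - honest) / n, so its
# influence is diluted by the peer count but grows linearly with the
# fraction of malicious peers -- exactly the dynamic Concern 1 describes.
```

With 40 peers the poisoned average lands at -1.525 per element instead of 1.0, so one attacker already drags the update well off course; naive mean averaging has no robustness to outliers, which is why the thread discusses peer verification.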
Doing some further googling/etc, it seems that the 'SD Training Labs' discord is:
And things are being coordinated in the #distributed-training
channel, which has a few pinned messages about the training, and links to the following repos:
- https://github.com/learning-at-home/hivemind
  - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
- https://github.com/chavinlo/distributed-diffusion
  - Train a Stable Diffusion model over the internet with Hivemind
- https://github.com/bigscience-workshop/petals
  - Decentralized platform for running 100B+ language models. Work in progress
It looks like the chavinlo/distributed-diffusion repo is based on this one:
- https://github.com/Mikubill/naifu-diffusion
  - Train stable diffusion model with Diffusers, Hivemind and Pytorch Lightning
A couple of snippets from skimming that discord channel:
Could you tell me what are the minimum hardware requirements to participate?
at the moment, any GPU with 20.5 GB of VRAM, so an RTX 3090
yeah, you can connect and disconnect at any time
It basically works like this:
When a training session starts there is one peer which the other peers are going to connect to; this one usually has two ports opened, one for TCP and one for UDP connections (TCP works most of the time while UDP doesn't). Then the rest of the peers connect to the first peer. They can either choose to open their ports too, so more people can connect to them, extending the network reach and reducing global latency, or choose to just be a client, meaning that no other peers can connect to them.
Then all of the peers train individually (in a federated manner) on the provided dataset (provided by a dataset server). Once a certain number of iterations has been reached, all peers stop training and start exchanging data with one another; this usually takes 3 minutes in very ideal conditions, but it can take up to 15 or 20.
If a peer joins while this is happening, or has outdated weights, it will have to wait and download the weights again.
If a peer exits while this is happening, or before it shares its locally-trained weights, the network loses some potential learning, and if the dataset that was assigned to that peer isn't reported back (a timeout of 30 minutes) it will be reassigned to another peer later. Once all the peers have synchronized, they resume training and repeat the process until they reach the set number of iterations again.
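The train-then-sync round structure described above can be sketched as a toy simulation. Here each peer holds a single scalar "weight", trains locally toward its own data shard, and the sync step is a plain mean; hivemind's real all-reduce over model tensors is far more involved, so treat this only as an illustration of the round cadence:

```python
def local_train(weight, data_mean, steps, lr=0.1):
    """Toy local SGD: nudge this peer's scalar weight toward its shard's mean."""
    for _ in range(steps):
        weight += lr * (data_mean - weight)
    return weight

def sync(weights):
    """All-reduce step: every peer adopts the mean of all local weights."""
    avg = sum(weights) / len(weights)
    return [avg] * len(weights)

# Three peers, each assigned a different shard of the dataset.
weights = [0.0, 0.0, 0.0]
shards = [1.0, 2.0, 3.0]

for _ in range(5):  # five train-then-sync rounds
    weights = [local_train(w, d, steps=10) for w, d in zip(weights, shards)]
    weights = sync(weights)

# After each sync all peers hold the identical averaged value, which is
# pulled toward the overall data mean (2.0 here) round by round.
```

This also shows why a peer that skips the sync matters: its shard's pull never enters the average, which is the "lost potential learning" mentioned above.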
One potential concern is the security of the network, since basically anyone can connect and send garbage data. I was thinking of adding basic discord account auth for now; I have read some PRs containing security network features but I am not sure
I am also testing right now the effects of compression during sync
I will prob just use the old unoptimized codebase and stick hivemind to it
it's so complicated to port the diffusers thing into lightning
I will try to "port" it to lightning and see if it works, because there's another repo (naifu) that is also doing training with diffusers, very similar to the current trainer, but does some weird things in the back
they got hivemind working I think
but I'm not sure because they don't even use the DHT (they have the modules though)
okay, and is this just for the group project, or also to offer GPU to individual artists?
I was also planning to do a distributed dreambooth like horde for everyone so yeah
from ai-horde.
That would take quite a bit of work to onboard to the horde, but it's a promising thing. The problem is that the horde is asynchronous, so it could be that the latency would be prohibitive, but I would be willing to consider it, especially if someone sends a PR.
from ai-horde.
I think the horde is primarily used for inference, not training. Do any jobs actually do training, or is that planned for the future? If not, it seems like this may provide limited benefit.
from ai-horde.