eazel7 / flexgen
This project is forked from fminference/flexgen.
Running large language models like ChatGPT/GPT-3/OPT-175B on a single GPU. Up to 100x faster than other offloading systems.
License: Apache License 2.0