Name: John K. Happy
Type: User
Company: Worked as a server-side engineer, cloud engineer, software engineer, and AI engineer
Bio: AI/Data Engineer
Cloud Engineer, Azure/AWS/GCP Architect.
Major: Computational Neuroscience
Interests: Ruby/Python, Kaggle, DL, ML, Blockchain, Robotics
Twitter: manjiroukeigo
Location: Japan
John K. Happy's Projects
Instruct-tune LLaMA on consumer hardware
Locally run an Instruction-Tuned Chat-Style LLM
The simplest way to run LLaMA on your local machine
Deep Learning (Python, C, C++, Java, Scala, Go)
Running large language models on a single GPU for throughput-oriented scenarios.
4-bit quantization of LLaMA using GPTQ
Keras implementations of Generative Adversarial Networks.
Implementation of the LLaMA language model based on nanoGPT. Supports quantization, LoRA fine-tuning, pre-training. Apache 2.0-licensed.
Inference code for LLaMA models
Implementation of Reinforcement Learning Algorithms. Python, OpenAI Gym, Tensorflow. Exercises and Solutions to accompany Sutton's Book and David Silver's course.
RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
A latent text-to-image diffusion model
Code and documentation to train Stanford's Alpaca models, and generate the data.
Source code for Twitter's Recommendation Algorithm
Remote repository of local settings