khankindle's Projects
The open-source language model computer
MIT iQuHACK 2022 x Microsoft x IonQ Challenge
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
A library of reinforcement learning components and agents
Open-Source AI wearable device and software
12 Weeks, 24 Lessons, AI for All!
SOTA weight-only quantization algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs"
🤗 AutoTrain Advanced
A list of AI autonomous agents
Awesome Devin-inspired AI agents
An awesome repository of local AI tools
An Extensible Deep Learning Library
Video+code lecture on building nanoGPT from scratch
Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR.
An all-in-one LLM chat UI for Apple Silicon Macs using the MLX framework.
VSCode inspired watch face for Sense and Versa 3
Copybara: A tool for transforming and moving code between repositories.
This repository is a curated collection of links to various courses and resources about Artificial Intelligence (AI)
DarkGS: Building 3DGS in the dark with a torch. [actively updating]
GitHub repositories for all the data science material I consider important. Updated daily.
Frees data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks.
[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
Devika is an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective. Devika aims to be a competitive open-source alternative to Devin by Cognition AI.
Code for Discovering Preference Optimization Algorithms with and for Large Language Models
Tensor parallelism is all you need. Run LLMs on weak devices or make powerful devices even more powerful by distributing the workload and dividing the RAM usage.
Official Repository for "DrEureka: Language Model Guided Sim-To-Real Transfer"