Tugrul Konuk's Projects
Contrastive Language-Image Pretraining
Best practices, code samples, and documentation for computer vision.
Recently updated with 50 new notebooks! Data science Python notebooks: deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and command-line tools.
Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
Distributed preprocessing and data loading for language datasets.
Large Language-and-Vision Assistant built towards multimodal GPT-4 level capabilities.
Ongoing research on training transformer models at scale.
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.
NeMo: a toolkit for conversational AI.
Scalable toolkit for efficient model alignment.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Fine-tune a BART model for spelling correction.
A library for accelerating Transformer models on NVIDIA GPUs, including support for 8-bit floating point (FP8) precision on Hopper and Ada GPUs, providing better performance and lower memory utilization in both training and inference.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
[NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training.
A high-throughput and memory-efficient inference and serving engine for LLMs.