
Bigram Word-Level Language Model

This repository contains the code for a bigram word-level language model implemented using a Transformer architecture. The model is designed to predict the next word in a sequence based on the previous word (bigram).
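The repository's model is Transformer-based, but as a conceptual reference the bigram objective itself can be captured by a single embedding table in which row i holds the next-word logits for word i. The sketch below is illustrative only (class and variable names are not from this repository):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BigramLanguageModel(nn.Module):
    """Each row of the embedding table holds the next-word logits for one
    vocabulary entry, so the prediction depends only on the previous word --
    the defining property of a bigram model."""

    def __init__(self, vocab_size):
        super().__init__()
        self.logits_table = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx, targets=None):
        logits = self.logits_table(idx)  # (batch, time, vocab_size)
        if targets is None:
            return logits, None
        B, T, V = logits.shape
        loss = F.cross_entropy(logits.view(B * T, V), targets.view(B * T))
        return logits, loss
```

With word-level inputs, vocab_size would equal the number of lines in the vocabulary file.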

Requirements:

- Python 3.x
- PyTorch

(`mmap` and `argparse` are part of the Python standard library, so they do not need to be installed separately.)

Instructions:

Clone this repository.

Install the required dependencies:

```bash
pip install torch
```

Prepare your data:

Create a vocabulary file listing each unique word from your training data, one word per line. Split your text data into training and validation sets (e.g., with a script or manually) and save each split as a plain text file.
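One way to produce these files is sketched below; the input file name corpus.txt and the 90/10 split ratio are assumptions, not something the repository prescribes:

```python
# prepare_data.py -- illustrative helper, assuming a raw corpus in corpus.txt
with open("corpus.txt", "r", encoding="utf-8") as f:
    text = f.read()

# Split the raw text 90/10 into training and validation portions.
split_at = int(0.9 * len(text))
with open("train_split.txt", "w", encoding="utf-8") as f:
    f.write(text[:split_at])
with open("val_split.txt", "w", encoding="utf-8") as f:
    f.write(text[split_at:])

# Write one unique word per line, the format the vocabulary file expects.
vocab = sorted(set(text[:split_at].split()))
with open("vocab_words.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(vocab))
```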

Run the training script:

```bash
python train.py -batch_size 32  # adjust batch size as needed
```

The script assumes your vocabulary file is named vocab_words.txt and your split files are named train_split.txt and val_split.txt. You can modify the script to use different file names.
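The requirements list `mmap` and `argparse`, which suggests the script memory-maps the split files and reads the batch size from the command line. A minimal sketch of how that combination typically looks (the actual implementation in train.py may differ):

```python
import argparse
import mmap
import random

parser = argparse.ArgumentParser()
parser.add_argument("-batch_size", type=int, default=32)
args = parser.parse_args()

def random_chunk(path, chunk_size=256):
    """Memory-map the file and read one random chunk without loading
    the whole split into RAM."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            start = random.randint(0, max(0, len(mm) - chunk_size))
            return mm[start:start + chunk_size].decode("utf-8", errors="ignore")

batch = [random_chunk("train_split.txt") for _ in range(args.batch_size)]
```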

(Optional) Evaluate the trained model on the validation set. You can modify the train.py script to add evaluation functionality.
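If you add evaluation, a common pattern is to average the loss over a number of validation batches with gradients disabled. A minimal sketch, assuming a model with the (inputs, targets) forward signature above and a hypothetical get_batch helper:

```python
import torch

@torch.no_grad()
def estimate_val_loss(model, get_batch, eval_iters=100):
    """Average the loss over eval_iters validation batches."""
    model.eval()
    losses = torch.zeros(eval_iters)
    for i in range(eval_iters):
        xb, yb = get_batch("val")   # assumed helper returning (inputs, targets)
        _, loss = model(xb, yb)
        losses[i] = loss.item()
    model.train()
    return losses.mean().item()
```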

Project Structure:

- train.py: Script for training the bigram language model.
- vocab_words.txt: Vocabulary file containing unique words, one per line (replace with your actual file name).
- train_split.txt: Text data split for training (replace with your actual file name).
- val_split.txt: Text data split for validation (replace with your actual file name).

Further Considerations:

This is a basic implementation for demonstration purposes. You can explore advanced techniques for improving performance, such as:

- Subword tokenization (e.g., Byte Pair Encoding) for handling unknown words.
- Pre-trained word embeddings (e.g., Word2Vec, GloVe) for richer word representations.
- Optimization techniques such as gradient accumulation or mixed-precision training for memory efficiency (see the sketch below).

Experiment with different hyperparameters (e.g., vocabulary size, embedding size, batch size, learning rate) to find the best configuration for your dataset.
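As an illustration of the last optimization point, a training step combining gradient accumulation with mixed precision might look like the following sketch. The names model, optimizer, and get_batch are assumed to exist; the torch.cuda.amp API shown is standard PyTorch:

```python
import torch

scaler = torch.cuda.amp.GradScaler()
accum_steps = 4  # effective batch size = batch_size * accum_steps

for step in range(1000):
    xb, yb = get_batch("train")                  # assumed helper
    with torch.cuda.amp.autocast():              # forward pass in mixed precision
        _, loss = model(xb, yb)
    scaler.scale(loss / accum_steps).backward()  # accumulate scaled gradients
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)                   # unscale gradients, apply update
        scaler.update()
        optimizer.zero_grad(set_to_none=True)
```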

