eurus-holmes / gemma_pytorch

This project is forked from google/gemma_pytorch.


The official PyTorch implementation of Google's Gemma models

Home Page: https://ai.google.dev/gemma

License: Apache License 2.0


Gemma in PyTorch

Gemma is a family of lightweight, state-of-the-art open models built from the research and technology used to create Google Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. For more details, please see the Gemma home page at https://ai.google.dev/gemma.

This is the official PyTorch implementation of Gemma models. We provide model and inference implementations using both PyTorch and PyTorch/XLA, and support running inference on CPU, GPU and TPU.

Updates

[April 9] Support for CodeGemma. You can find the checkpoints on Kaggle and Hugging Face.

[April 5] Support for Gemma v1.1. You can find the v1.1 checkpoints on Kaggle and Hugging Face.

Download Gemma model checkpoint

You can find the model checkpoints on Kaggle.

Alternatively, you can find the model checkpoints on the Hugging Face Hub. To download a model, go to the repository of the model of interest, open the Files and versions tab, and download the model and tokenizer files. For programmatic downloading, if you have huggingface_hub installed, you can also run:

huggingface-cli download google/gemma-7b-it-pytorch
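
If you prefer the Python API, here is a minimal sketch using huggingface_hub's snapshot_download; the repo id matches the CLI command above, and everything else is illustrative:

# Download the full model repository via the huggingface_hub Python API.
from huggingface_hub import snapshot_download

ckpt_dir = snapshot_download(repo_id="google/gemma-7b-it-pytorch")
print(ckpt_dir)  # local directory containing the model and tokenizer files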

Note that you can choose among the 2B, 7B, and 7B int8 quantized variants.

VARIANT=<2b or 7b>
CKPT_PATH=<Insert ckpt path here>

Try it free on Colab

Follow the steps at https://ai.google.dev/gemma/docs/pytorch_gemma.
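
Those docs walk through loading the model directly in Python. A condensed sketch is below; the module and function names follow this repo's gemma/config.py and gemma/model.py, but treat the exact signatures and file names as assumptions:

# Sketch: load a Gemma checkpoint and generate text on CPU.
import torch

from gemma.config import get_config_for_2b
from gemma.model import GemmaForCausalLM

model_config = get_config_for_2b()
model_config.tokenizer = "tokenizer/tokenizer.model"  # tokenizer file from the checkpoint download
model_config.quant = False  # set True for the int8 quantized variant

torch.set_default_dtype(model_config.get_dtype())
device = torch.device("cpu")
model = GemmaForCausalLM(model_config)
model.load_weights("gemma-2b-it.ckpt")  # illustrative checkpoint file name
model = model.to(device).eval()

print(model.generate("The meaning of life is", device=device, output_len=60))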

Try it out with PyTorch

Prerequisite: make sure you have set up Docker permissions properly so you can run it as a non-root user.

sudo usermod -aG docker $USER
newgrp docker

Build the docker image.

DOCKER_URI=gemma:${USER}

docker build -f docker/Dockerfile ./ -t ${DOCKER_URI}

Run Gemma inference on CPU.

PROMPT="The meaning of life is"

docker run -t --rm \
    -v ${CKPT_PATH}:/tmp/ckpt \
    ${DOCKER_URI} \
    python scripts/run.py \
    --ckpt=/tmp/ckpt \
    --variant="${VARIANT}" \
    --prompt="${PROMPT}"
    # add `--quant` for the int8 quantized model.

Run Gemma inference on GPU.

PROMPT="The meaning of life is"

docker run -t --rm \
    --gpus all \
    -v ${CKPT_PATH}:/tmp/ckpt \
    ${DOCKER_URI} \
    python scripts/run.py \
    --device=cuda \
    --ckpt=/tmp/ckpt \
    --variant="${VARIANT}" \
    --prompt="${PROMPT}"
    # add `--quant` for the int8 quantized model.

Try it out with PyTorch/XLA

Build the docker image (CPU, TPU).

DOCKER_URI=gemma_xla:${USER}

docker build -f docker/xla.Dockerfile ./ -t ${DOCKER_URI}

Build the docker image (GPU).

DOCKER_URI=gemma_xla_gpu:${USER}

docker build -f docker/xla_gpu.Dockerfile ./ -t ${DOCKER_URI}

Run Gemma inference on CPU.

docker run -t --rm \
    --shm-size 4gb \
    -e PJRT_DEVICE=CPU \
    -v ${CKPT_PATH}:/tmp/ckpt \
    ${DOCKER_URI} \
    python scripts/run_xla.py \
    --ckpt=/tmp/ckpt \
    --variant="${VARIANT}" \
    # add `--quant` for the int8 quantized model.

Run Gemma inference on TPU.

Note: be sure to use the docker container built from xla.Dockerfile.

docker run -t --rm \
    --shm-size 4gb \
    -e PJRT_DEVICE=TPU \
    -v ${CKPT_PATH}:/tmp/ckpt \
    ${DOCKER_URI} \
    python scripts/run_xla.py \
    --ckpt=/tmp/ckpt \
    --variant="${VARIANT}" \
    # add `--quant` for the int8 quantized model.

Run Gemma inference on GPU.

Note: be sure to use the docker container built from xla_gpu.Dockerfile.

docker run -t --rm --privileged \
    --shm-size=16g --net=host --gpus all \
    -e USE_CUDA=1 \
    -e PJRT_DEVICE=CUDA \
    -v ${CKPT_PATH}:/tmp/ckpt \
    ${DOCKER_URI} \
    python scripts/run_xla.py \
    --ckpt=/tmp/ckpt \
    --variant="${VARIANT}" \
    # add `--quant` for the int8 quantized model.

Tokenizer Notes

99 unused tokens are reserved in the pretrained tokenizer model to assist with more efficient training/fine-tuning. The unused tokens take the form `<unused[0-98]>`, with token ids in the range [7, 105].

"<unused0>": 7,
"<unused1>": 8,
"<unused2>": 9,
...
"<unused98>": 105,

Disclaimer

This is not an officially supported Google product.
