
Comments (12)

lucataco commented on April 26, 2024

Oh cool it works!
I changed this line in fastertransformer_backend to be: cmake -DSM=61 \ and built the image with:
docker build -t lucataco/triton_with_ft:22.06 -f docker/Dockerfile .
If you want to try it out, just change this line in fauxpilot's docker-compose to image: lucataco/triton_with_ft:22.06
and then run the usual ./setup.sh and ./launch.sh
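
Roughly, the full sequence looks like this (just a sketch; the exact location of the cmake line and the Dockerfile path are assumptions, so adjust them to your checkout):

  # build a Triton + FasterTransformer image for SM 6.1 (Pascal)
  git clone https://github.com/triton-inference-server/fastertransformer_backend.git
  cd fastertransformer_backend
  # edit the cmake invocation in docker/Dockerfile so it reads: cmake -DSM=61 \
  docker build -t lucataco/triton_with_ft:22.06 -f docker/Dockerfile .

  # point FauxPilot at the new image, then set up and launch as usual
  cd ../fauxpilot
  # in docker-compose.yaml, set the triton service image to: lucataco/triton_with_ft:22.06
  ./setup.sh
  ./launch.sh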


moyix commented on April 26, 2024

Very nice! I think it should also be possible to do builds with all architectures enabled via -DSM=60,61,70,75,80,86. Will try to get a new image pushed up for that soon :)
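
For example, something like this (a sketch; the tag is a placeholder, and as above the cmake line to change lives in the backend's docker/Dockerfile):

  # change the cmake invocation in docker/Dockerfile to: cmake -DSM=60,61,70,75,80,86 \
  docker build -t triton_with_ft:all-archs -f docker/Dockerfile .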


leemgs commented on April 26, 2024

I have figured out the cause of this issue.

Error message:

triton_1 | terminate called after throwing an instance of 'std::runtime_error'
triton_1 | what(): [FT][ERROR] CUDA runtime error: invalid device function /workspace/build/fastertransformer_backend/build/_deps/repo-ft-src/src/fastertransformer/kernels/sampling_topp_kernels.cu:1057
triton_1 |

Reason:

Compute Capability 7.0 or higher is required to run FauxPilot.
However, the NVIDIA Titan Xp only supports Compute Capability 6.1.
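
If you are unsure which Compute Capability a given card supports, reasonably recent NVIDIA drivers let you query it directly, for example:

  # prints the Compute Capability of each GPU (e.g., 6.1 for a Titan Xp)
  nvidia-smi --query-gpu=name,compute_cap --format=csv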

Discussion:

I am wondering if there is a way to run FauxPilot using an Nvidia GPU card with Compute Capability 6.x.
Any comments are welcome. :)


moyix commented on April 26, 2024

It seems this is probably an issue with the FasterTransformer library. One thing you may want to try is building FasterTransformer on your host machine and testing whether you get the same errors when following the GPT-J example from the documentation:

https://github.com/NVIDIA/FasterTransformer/blob/main/docs/gptj_guide.md

Note that if you don't want to download and convert GPT-J just to test this, you can also point FasterTransformer at one of the CodeGen models you've downloaded with FauxPilot by using the configuration file here:

https://github.com/moyix/FasterTransformer/blob/main/examples/cpp/gptj/gptj_config.ini

That will help narrow it down to a bug in FasterTransformer or a problem with the NVIDIA Docker container environment.
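
Concretely, the test might look roughly like this (a sketch: it assumes gptj_example accepts a config path as its first argument, and that the model path inside gptj_config.ini is edited to point at a CodeGen checkpoint already converted by FauxPilot's setup.sh; paths are placeholders):

  # build FasterTransformer on the host for SM 6.1
  git clone https://github.com/NVIDIA/FasterTransformer.git
  cd FasterTransformer && mkdir build && cd build
  cmake -DSM=61 -DCMAKE_BUILD_TYPE=Release .. && make -j"$(nproc)"

  # grab the CodeGen-flavoured config, point it at your converted model, then run the example
  wget https://raw.githubusercontent.com/moyix/FasterTransformer/main/examples/cpp/gptj/gptj_config.ini
  ./bin/gptj_example gptj_config.ini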


lucataco commented on April 26, 2024

Nice, I can confirm that moyix/triton_with_ft:22.09 works for both my 1080 Ti (SM 61) and 3080 Ti (SM 86) graphics cards.
Tested both the codegen-350M-multi and codegen-2B-multi models. (Any hints on how to fit codegen-6B-multi or larger into 12 GB of VRAM? bitsandbytes? Gradient accumulation?)


leemgs commented on April 26, 2024

FYI,

Thank you very much, @lucataco and @moyix. With moyix/triton_with_ft:22.09 incorporated into the mainline, I confirmed that we can run FauxPilot on older NVIDIA GPUs (e.g., the Titan Xp).

On my Ubuntu 18.04 + NVIDIA Titan Xp system, I used the moyix/triton_with_ft:22.09 Docker image that was merged into the mainline.

I tested the 2B model (codegen-2B-multi) and it works without issue. The test results are as follows:

invain@mymate:/work/leemgs/toyroom$ nvidia-smi
Fri Sep  9 11:59:10 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA TITAN Xp     On   | 00000000:01:00.0 Off |                  N/A |
| 28%   41C    P8    11W / 250W |   5933MiB / 12288MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2561      G   /usr/lib/xorg/Xorg                 41MiB |
|    0   N/A  N/A     30883      C   ...onserver/bin/tritonserver     5887MiB |
+-----------------------------------------------------------------------------+
invain@mymate:/work/leemgs/toyroom$
invain@mymate:/work/leemgs/toyroom$ curl -s -H "Accept: application/json" -H "Content-type: application/json" -X POST -d '{"prompt":"def hello","max_tokens":100,"temperature":0.1,"stop":["\n\n"]}' http://localhost:5000/v1/engines/codegen/completions


{"id": "cmpl-7TbowW6B96Itl1UodVvOGk47ROg6a", "model": "codegen", "object": "text_completion", "created": 1662692426, "choices": [{"text": "() {\n        System.out.println(\"Hello World!\");\n    }\n}\n", "index": 0, "finish_reason": "stop", "logprobs": null}], "usage": {"completion_tokens": 21, "prompt_tokens": 2, "total_tokens": 23}}invain@mymate:/work/leemgs/toyroom$
invain@mymate:/work/leemgs/toyroom$
invain@mymate:/work/leemgs/toyroom$
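
As an extra sanity check (assuming the default setup, where Triton's HTTP port 8000 is exposed on the host), Triton's readiness endpoint can also be queried directly:

  # prints 200 once the model repository has loaded successfully
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/v2/health/ready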


leemgs commented on April 26, 2024

Thank you very much. This is the information I really need. :)


leemgs commented on April 26, 2024

I tried to build FasterTransformer.git in order to get /opt/tritonserver/bin/tritonserver and /opt/tritonserver/lib/libtritonserver.so with the -DSM=61 option to support the NVIDIA Titan Xp.

  1. git clone https://github.com/NVIDIA/FasterTransformer.git
  2. cd FasterTransformer && mkdir build && cd build
  3. time cmake -DSM=61 -DCMAKE_BUILD_TYPE=Release .. && make -j12
  4. ./bin/gptj_example
            .......... Omission .................
After loading model : free:  0.24 GB, total: 11.91 GB, used: 11.67 GB
After forward       : free:  0.09 GB, total: 11.91 GB, used: 11.82 GB
Writing 320 elements
  818   262   938  3155   286  1528    11   257     0 39254
zeroCount = 8
[INFO] request_batch_size 8 beam_width 1 head_num 16 size_per_head 256 total_output_len 40 decoder_layers 28 vocab_size 50400 FT-CPP-decoding-beamsearch-time 2294.78 ms

The listing below shows the ELF binaries and shared libraries generated by the cmake/make commands.
However, I could not get the libtritonserver.so file. How can I get a libtritonserver.so that supports the NVIDIA Titan Xp (-DSM=61)? Maybe ./lib/libGptJTritonBackend.so can be used in place of the libtritonserver.so file? Any comments are welcome. :)

(base) invain@mymate:/work/qtlab/FasterTransformer/build$ ls -alh ./bin/gptj*
-rwxr-xr-x 1 invain invain  37M Aug 28 19:21 ./bin/gptj_example
-rwxr-xr-x 1 invain invain 235K Aug 28 19:21 ./bin/gptj_triton_example

(base) invain@mymate:/work/qtlab/FasterTransformer/build$ ls -alh ./lib/*.so
-rwxr-xr-x 1 invain invain 15M Aug 28 19:21 ./lib/libBertTritonBackend.so
-rwxr-xr-x 1 invain invain 37M Aug 28 19:21 ./lib/libGptJTritonBackend.so
-rwxr-xr-x 1 invain invain 37M Aug 28 19:21 ./lib/libGptNeoXTritonBackend.so
-rwxr-xr-x 1 invain invain 39M Aug 28 19:21 ./lib/libParallelGptTritonBackend.so
-rwxr-xr-x 1 invain invain 38M Aug 28 19:21 ./lib/libT5TritonBackend.so
-rwxr-xr-x 1 invain invain 35K Aug 28 19:20 ./lib/libTransformerTritonBackend.so
-rwxr-xr-x 1 invain invain 52M Aug 28 19:21 ./lib/libtransformer-shared.so


lucataco commented on April 26, 2024

I am also interested in running fauxpilot with Compute Capability 6.1 / -DSM=61 (for a 1080 Ti).
Haven't tried this yet, but I thought it might be useful to someone else:
https://github.com/triton-inference-server/fastertransformer_backend/blob/dev/t5_gptj_blog/notebooks/GPT-J_and_T5_inference.ipynb


moyix commented on April 26, 2024

I tried to build FasterTransformer.git in order to get /opt/tritonserver/bin/tritonserver and /opt/tritonserver/lib/libtritonserver.so with the -DSM=61 option to support the NVIDIA Titan Xp.

  1. git clone https://github.com/NVIDIA/FasterTransformer.git
  2. cd FasterTransformer && mkdir build && cd build
  3. time cmake -DSM=61 -DCMAKE_BUILD_TYPE=Release .. && make -j12
  4. ./bin/gptj_example
            .......... Omission .................
After loading model : free:  0.24 GB, total: 11.91 GB, used: 11.67 GB
After forward       : free:  0.09 GB, total: 11.91 GB, used: 11.82 GB
Writing 320 elements
  818   262   938  3155   286  1528    11   257     0 39254
zeroCount = 8
[INFO] request_batch_size 8 beam_width 1 head_num 16 size_per_head 256 total_output_len 40 decoder_layers 28 vocab_size 50400 FT-CPP-decoding-beamsearch-time 2294.78 ms

The listing below shows the ELF binaries and shared libraries generated by the cmake/make commands. However, I could not get the libtritonserver.so file. How can I get a libtritonserver.so that supports the NVIDIA Titan Xp (-DSM=61)? Maybe ./lib/libGptJTritonBackend.so can be used in place of the libtritonserver.so file? Any comments are welcome. :)

I believe you should be trying to build this repo, which automatically downloads and builds FasterTransformer along with the Triton backend:

https://github.com/triton-inference-server/fastertransformer_backend/

I have my own fork of it here, which I used to add a couple of bugfixes and patches that hadn't yet made it into the main branch of the official repository:

https://github.com/moyix/fastertransformer_backend


moyix commented on April 26, 2024

I pushed up moyix/triton_with_ft:22.09 to Docker Hub! Could someone give it a try by changing moyix/triton_with_ft:22.06 to moyix/triton_with_ft:22.09 in docker-compose.yaml?
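
For example, assuming the image tag appears exactly once in docker-compose.yaml, a one-liner like this should do it (it leaves a backup in docker-compose.yaml.bak):

  sed -i.bak 's|moyix/triton_with_ft:22.06|moyix/triton_with_ft:22.09|' docker-compose.yaml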


moyix commented on April 26, 2024

6B-multi would work in 12 GB of VRAM with bitsandbytes, I believe, yes (with bitsandbytes it takes about 1 byte per parameter, so 6B parameters is roughly 6 GB). However, I think right now bitsandbytes is only available for Hugging Face Transformers, so we'd need to use Triton's Python backend. This seems doable, but I'm not sure when I'll have time to implement it (PRs are of course welcome :)).

There's some discussion of using HF models with Triton here, for reference:

triton-inference-server/server#2747

