Comments (6)
Thanks for the study! I also checked with a few users: they don't care much about the Python version, but they do use popular Docker images provided by Torch, NVIDIA, etc., so I think most are on Python 3.6. Given that the difference above is not large, I suggest we just use our existing Docker image, which ships Python 3.6.9.
The expected score was set based on the result without Docker. Let me re-run it using Docker.
@JiushengChen I got the following results by running inside Docker on gpu4. They are still different from yours. Which version of Python are you using?
- With Python 3.8.3:
Util | Model | Task | Split | BatchSize | Samples | Tokens | Bleu | Rouge | Loss | Perplexity | Runtime(seconds) | Throughput(samples/s) | Throughput(tokens/s) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
transformers_v3.0.2 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.44 | NA\|NA\|NA | NA | NA | 378 | 5.3 | NA |
transformers_v3.0.2 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.44 | NA\|NA\|NA | NA | NA | 367 | 5.4 | NA |
transformers_v3.0.2 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.44 | NA\|NA\|NA | NA | NA | 369 | 5.4 | NA |

Util | Model | Task | Split | BatchSize | Samples | Tokens | Bleu | Rouge | Loss | Perplexity | Runtime(seconds) | Throughput(samples/s) | Throughput(tokens/s) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.43 | NA\|NA\|NA | NA | NA | 275 | 7.3 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 128 | 1999 | NA | 27.42 | NA\|NA\|NA | NA | NA | 246 | 8.1 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.43 | NA\|NA\|NA | NA | NA | 279 | 7.2 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 128 | 1999 | NA | 27.42 | NA\|NA\|NA | NA | NA | 247 | 8.1 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.43 | NA\|NA\|NA | NA | NA | 285 | 7.0 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 128 | 1999 | NA | 27.42 | NA\|NA\|NA | NA | NA | 259 | 7.7 | NA |
- With the default Python (3.6.9) in the Docker image:
Util | Model | Task | Split | BatchSize | Samples | Tokens | Bleu | Rouge | Loss | Perplexity | Runtime(seconds) | Throughput(samples/s) | Throughput(tokens/s) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
transformers_v3.0.2 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.44 | NA\|NA\|NA | NA | NA | 407 | 4.9 | NA |
transformers_v3.0.2 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.44 | NA\|NA\|NA | NA | NA | 407 | 4.9 | NA |
transformers_v3.0.2 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.44 | NA\|NA\|NA | NA | NA | 406 | 4.9 | NA |

Util | Model | Task | Split | BatchSize | Samples | Tokens | Bleu | Rouge | Loss | Perplexity | Runtime(seconds) | Throughput(samples/s) | Throughput(tokens/s) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.43 | NA\|NA\|NA | NA | NA | 318 | 6.3 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 128 | 1999 | NA | 27.42 | NA\|NA\|NA | NA | NA | 296 | 6.8 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.43 | NA\|NA\|NA | NA | NA | 317 | 6.3 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 128 | 1999 | NA | 27.42 | NA\|NA\|NA | NA | NA | 298 | 6.7 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.43 | NA\|NA\|NA | NA | NA | 330 | 6.1 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 128 | 1999 | NA | 27.42 | NA\|NA\|NA | NA | NA | 294 | 6.8 | NA |
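(A quick sanity check on these tables: the Throughput(samples/s) column is consistent with Samples / Runtime(seconds). A minimal sketch re-deriving a few of the values reported above:)

```python
# Re-derive Throughput(samples/s) = Samples / Runtime(seconds)
# for a few of the runtimes in the tables above.
samples = 1999
for runtime_s in (378, 275, 407, 318):
    print(f"{runtime_s} s -> {samples / runtime_s:.1f} samples/s")
# 378 s -> 5.3 samples/s   (baseline, Python 3.8.3)
# 275 s -> 7.3 samples/s   (fastseq, Python 3.8.3)
# 407 s -> 4.9 samples/s   (baseline, Python 3.6.9)
# 318 s -> 6.3 samples/s   (fastseq, Python 3.6.9)
```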
I guess the cause is that `@replace(...)` is not fully compatible with Python 3.6.9 yet, so some of the optimizations did not take effect. This issue is on my list; I will fix it once I finish the logging issue reported by the user.
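For context, fastseq injects its optimizations by patching classes and functions in the underlying libraries through this `@replace` decorator. A minimal sketch of how such a decorator can work (an illustration of the general technique, not fastseq's actual implementation):

```python
import sys

def replace(target):
    """Register the decorated object as a drop-in replacement for `target`
    by monkey-patching the module that defines `target`."""
    def decorator(optimized):
        module = sys.modules[target.__module__]
        setattr(module, target.__name__, optimized)
        return optimized
    return decorator

class BeamSearch:                    # stand-in for an original library class
    def step(self):
        return "original"

@replace(BeamSearch)
class FastBeamSearch(BeamSearch):    # optimized drop-in replacement
    def step(self):
        return "optimized"

print(BeamSearch().step())  # prints "optimized": the module-level name
                            # now resolves to FastBeamSearch
```

A known pitfall of this style of patching, which would match the symptom above: code that grabbed the original object with `from module import BeamSearch` before the patch runs keeps a reference to the unpatched class, so the optimization silently never takes effect on that path.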
I am using the Docker image built from docker/Dockerfile. It uses Python 3.6.9 :: Anaconda, Inc. Good to know the root cause has been identified.
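For anyone reproducing this, a quick generic way to confirm which interpreter a container is actually running (not specific to fastseq's docker/Dockerfile):

```python
# Print the interpreter version seen at runtime; for a conda-based image
# the full version string typically mentions the distributor, e.g. "Anaconda, Inc.".
import platform
import sys

print(platform.python_version())  # e.g. 3.6.9
print(sys.version)                # full version/build string
```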
Benchmark results with Docker
- Python 3.6.9:
Util | Model | Task | Split | BatchSize | Samples | Tokens | Bleu | Rouge | Loss | Perplexity | Runtime(seconds) | Throughput(samples/s) | Throughput(tokens/s) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
transformers_v3.0.2 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.44 | NA\|NA\|NA | NA | NA | 402 | 5.0 | NA |
transformers_v3.0.2 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.44 | NA\|NA\|NA | NA | NA | 403 | 5.0 | NA |
transformers_v3.0.2 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.44 | NA\|NA\|NA | NA | NA | 399 | 5.0 | NA |

Util | Model | Task | Split | BatchSize | Samples | Tokens | Bleu | Rouge | Loss | Perplexity | Runtime(seconds) | Throughput(samples/s) | Throughput(tokens/s) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.43 | NA\|NA\|NA | NA | NA | 326 | 6.1 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 128 | 1999 | NA | 27.42 | NA\|NA\|NA | NA | NA | 278 | 7.2 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.43 | NA\|NA\|NA | NA | NA | 314 | 6.4 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 128 | 1999 | NA | 27.42 | NA\|NA\|NA | NA | NA | 280 | 7.1 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.43 | NA\|NA\|NA | NA | NA | 312 | 6.4 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 128 | 1999 | NA | 27.42 | NA\|NA\|NA | NA | NA | 278 | 7.2 | NA |
- Python 3.8.3:
Util | Model | Task | Split | BatchSize | Samples | Tokens | Bleu | Rouge | Loss | Perplexity | Runtime(seconds) | Throughput(samples/s) | Throughput(tokens/s) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
transformers_v3.0.2 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.44 | NA\|NA\|NA | NA | NA | 388 | 5.2 | NA |
transformers_v3.0.2 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.44 | NA\|NA\|NA | NA | NA | 385 | 5.2 | NA |
transformers_v3.0.2 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.44 | NA\|NA\|NA | NA | NA | 379 | 5.3 | NA |

Util | Model | Task | Split | BatchSize | Samples | Tokens | Bleu | Rouge | Loss | Perplexity | Runtime(seconds) | Throughput(samples/s) | Throughput(tokens/s) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.43 | NA\|NA\|NA | NA | NA | 297 | 6.7 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 128 | 1999 | NA | 27.42 | NA\|NA\|NA | NA | NA | 268 | 7.5 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.43 | NA\|NA\|NA | NA | NA | 286 | 7.0 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 128 | 1999 | NA | 27.42 | NA\|NA\|NA | NA | NA | 252 | 7.9 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 64 | 1999 | NA | 27.43 | NA\|NA\|NA | NA | NA | 282 | 7.1 | NA |
transformers_v3.0.2+fastseq_v0.0.3 | t5-base | wmt_en_ro/raw | val | 128 | 1999 | NA | 27.42 | NA\|NA\|NA | NA | NA | 260 | 7.7 | NA |
@JiushengChen Sounds good! Let's use Python 3.6.9 for our benchmarks and tests.