u39kun / deep-learning-benchmark
Deep Learning Benchmark for comparing the performance of DL frameworks, GPUs, and single vs half precision
Hi, @u39kun. Thank you for your work!
When I checked the way you implement the TensorFlow benchmark here, I found a note in the function get_variable(), like the following:
def get_variable(self, name, shape, dtype, cast_dtype, *args, **kwargs):
# TODO(reedwm): Currently variables and gradients are transferred to other
# devices and machines as type `dtype`, not `cast_dtype`. In particular,
# this means in fp16 mode, variables are transferred as fp32 values, not
# fp16 values, which uses extra bandwidth.
Do you mean that currently, even in fp16 mode, the variables are loaded and transferred as float32 but the computation runs in float16? So only GPU bandwidth is affected, while the compute speed stays the same?
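For what it's worth, my reading of that TODO is the usual fp32-master-weights pattern; below is a minimal sketch of it (hypothetical names, not the actual benchmark code). Under that pattern the math itself runs in fp16, so compute speed benefits; the TODO only notes that variable transfers still move fp32 values and therefore cost extra bandwidth.

import tensorflow as tf

def get_fp16_variable(name, shape):
    # The master copy of the weights is created and stored in float32 ...
    var = tf.get_variable(name, shape, dtype=tf.float32)
    # ... but it is cast to float16 before being used, so the convolutions and
    # matmuls in the forward/backward pass run in half precision.
    return tf.cast(var, tf.float16)

x = tf.placeholder(tf.float16, [None, 224, 224, 3])
w = get_fp16_variable('conv1_w', [3, 3, 3, 64])
y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')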
Thank you for the excellent data points!
Can you estimate potential increase in minibatch size when going to mixed precision?
Nvidia claims memory usage should go down, but isn't specific.
In my experiments with a Titan V (using TensorFlow and a home-grown implementation of the Transformer model), I can only increase the batch size by about 10%, which is much less than I expected.
Thanks!
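One way to set expectations is a rough back-of-the-envelope memory estimate. A minimal sketch with made-up layer sizes (purely illustrative, not measurements): activations halve in fp16, but fp32 master weights, optimizer state, and framework workspaces do not, which is one plausible reason the achievable batch size grows far less than 2x.

def activation_bytes(seq_len, d_model, n_layers, bytes_per_elem):
    # Very crude: assume ~8 seq_len x d_model activation tensors kept per layer.
    return seq_len * d_model * 8 * n_layers * bytes_per_elem

fp32_act = activation_bytes(512, 1024, 12, 4)
fp16_act = activation_bytes(512, 1024, 12, 2)
print(fp32_act / 2**20, fp16_act / 2**20)  # per-sample activation MiB: halves in fp16

# fp32 master weights and optimizer moments (e.g. Adam) stay full precision,
# so total memory does not halve, and the extra batch-size headroom is modest.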
Actually, to use fp16 Tensor Cores you need CUDA 9 and TensorFlow 1.5, which was released today.
From the release notes:
Add support for CUBLAS_TENSOR_OP_MATH in fp16 GEMM
Any chance we could see a retest with the new TF soon?
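Before a retest, it may be worth confirming that the installed build really is 1.5 and was built with CUDA; a minimal check using standard TensorFlow calls:

import tensorflow as tf

print(tf.__version__)                # expect 1.5.0 or newer
print(tf.test.is_built_with_cuda())  # expect True for the GPU build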
Results from a V100 on AWS for TF 1.5 and the newest PyTorch.
I was unable to run the Caffe2 experiments in fp16 :(
I'm attaching these files here, as I thought you might like to update the tables and charts :)
Hi,
I have been using your benchmark to run various tests and comparisons between 10-series cards. I have now received the RTX 2080 Ti, and when trying to run the benchmark I am getting this:
running benchmark for frameworks ['pytorch', 'tensorflow', 'caffe2']
cuda version= None
cudnn version= 7201
/home/bizon/benchmark/deep-learning-benchmark-master/frameworks/pytorch/models.py:17: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad():
instead.
self.eval_input = torch.autograd.Variable(x, volatile=True).cuda() if precision == 'fp32'
Segmentation fault
The benchmark has been running fine with all the other cards, and the MNIST benchmark also runs perfectly. I would like to test all the new RTX cards to see their performance.
Your help here would be really appreciated.
Thanks in advance.
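As a side note, the deprecation warning in that log points at the replacement API. A minimal sketch of the torch.no_grad() form (the model choice is illustrative, not the benchmark's exact setup, and this alone does not explain the segfault):

import torch
import torchvision.models as models

model = models.vgg16().cuda().eval()     # illustrative model only
x = torch.randn(1, 3, 224, 224).cuda()

# Old: eval_input = torch.autograd.Variable(x, volatile=True).cuda()
# New: volatile is gone; wrap inference in torch.no_grad() instead.
with torch.no_grad():
    output = model(x)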
Hi! Thanks for your wonderful work. I would like to cite the training speed results in my research on GPU devices. Can you let me know if there is a paper or report of this work that I can cite?
Hi, @u39kun!
Nvidia claims a 6x performance improvement with the recent cuDNN 7.2 (https://developer.nvidia.com/cudnn).
Could you please try it on Titan V?
Thank you!
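If it helps, the cuDNN version that PyTorch actually picks up can be confirmed before rerunning (the benchmark's own log prints the same number):

import torch

# e.g. 7201 corresponds to cuDNN 7.2.1
print(torch.backends.cudnn.version())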
This is an open thread for requests.