ebay / autoopt
Automatic and Simultaneous Adjustment of Learning Rate and Momentum for Stochastic Gradient Descent
License: Apache License 2.0
Reported by @lessw2020:
Hi Selçuk,
Possibly related, but I can't train with it on the CPU, with either Auto_Adam or regular Adam.
It's just spinning its wheels and always sits at 10% or 9% accuracy (i.e. completely random). Not sure if that is related to the params being stored elsewhere, but they don't seem to be getting updated.
Test set: Average loss: 0.3721, Accuracy: 980/10000 (9%)
Train Epoch: 11 [0/60000 (0%)] Loss: 8.757422
Train Epoch: 11 [9000/60000 (15%)] Loss: 7.236280
Train Epoch: 11 [18000/60000 (30%)] Loss: 7.866436
Train Epoch: 11 [27000/60000 (45%)] Loss: 10.472101
Train Epoch: 11 [36000/60000 (60%)] Loss: 14.052526
Train Epoch: 11 [45000/60000 (75%)] Loss: 12.578988
Train Epoch: 11 [54000/60000 (90%)] Loss: 12.001129
Test set: Average loss: 0.3295, Accuracy: 982/10000 (9%)
Train Epoch: 12 [0/60000 (0%)] Loss: 9.479150
Train Epoch: 12 [9000/60000 (15%)] Loss: 12.103229
Train Epoch: 12 [18000/60000 (30%)] Loss: 5.913055
Train Epoch: 12 [27000/60000 (45%)] Loss: 11.299622
Train Epoch: 12 [36000/60000 (60%)] Loss: 14.419320
Train Epoch: 12 [45000/60000 (75%)] Loss: 16.773289
Train Epoch: 12 [54000/60000 (90%)] Loss: 15.650626
Test set: Average loss: 0.4353, Accuracy: 974/10000 (9%)
Originally posted by @lessw2020 in #2 (comment)
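Before digging into the optimizer itself, it's worth confirming whether step() is writing updates at all. Below is a minimal check, assuming the model, optimizer, and a single (data, target) batch from the MNIST example referenced above; the helper name and the use of nll_loss are illustrative assumptions, not code from this report:

import torch
import torch.nn.functional as F

def step_changes_params(model, optimizer, data, target):
    """Return True if a single optimizer.step() modified any parameter."""
    before = [p.detach().clone() for p in model.parameters()]
    optimizer.zero_grad()
    # The standard PyTorch MNIST example returns log-probabilities, hence nll_loss.
    loss = F.nll_loss(model(data), target)
    loss.backward()
    optimizer.step()
    return any(not torch.equal(b, p.detach())
               for b, p in zip(before, model.parameters()))

If this returns False, the parameters the optimizer holds are not the ones being trained (or the update is a no-op), which would match the "params stored elsewhere" suspicion above.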
Working on running AutoOpt, but I keep getting a FloatTensor vs cuda.FloatTensor mismatch when the gradient computation starts.
This seems to be because the model and data are on CUDA but the optimizer's state is not? Not sure how to resolve it. Regular Adam, etc. works.
Here's the error:
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
      1 for epoch in range(1, epochs + 1):
----> 2     train(epoch, model, train_loader, optimizer)
      3     test(model, test_loader)

<ipython-input> in train(epoch, model, train_loader, optimizer)
     18
     19         loss.backward()
---> 20         optimizer.step()
     21
     22         if batch_idx % 1000 == 0:

~/AutoOpt/auto_adam.py in step(self, closure, verbose)
    132                 hessian = denom / sqrt_bias_correction2
    133                 # print(torch.norm(hessian))
--> 134                 self.auto_tune(parameter=param, hessian=hessian, verbose=verbose)
    135                 group['lr'] = 1 - param.gamma[0]
    136                 adaptive_beta1 = param.gamma[1] / (1 - param.gamma[0])

~/AutoOpt/auto_optimizer.py in auto_tune(self, parameter, hessian, with_momentum, verbose)
    158         :param verbose: Be verbose and print computed values.
    159         """
--> 160         G = torch.stack((parameter.grad, parameter.grad - parameter.gradient_est))
    161         if hessian is None:
    162             B = G

RuntimeError: expected type torch.cuda.FloatTensor but got torch.FloatTensor
Thanks for any assistance!
Less
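The traceback points at torch.stack receiving parameter.grad on CUDA and parameter.gradient_est on the CPU, which suggests the optimizer's per-parameter buffers are allocated without regard to the model's device. A sketch of one possible fix, assuming AutoOpt stores gradient_est and gamma on the parameter objects as the traceback indicates; the init_state helper and the zero initialization are assumptions, not the project's actual code:

import torch

def init_state(param: torch.nn.Parameter) -> None:
    # zeros_like inherits device and dtype from `param`, so these buffers
    # land on cuda whenever the model does.
    param.gradient_est = torch.zeros_like(param)
    # `gamma` is indexed as gamma[0] and gamma[1] in auto_adam.py above.
    param.gamma = torch.zeros(2, device=param.device, dtype=param.dtype)

From the user side, a workaround with the same effect is to move the model to the target device before constructing the optimizer (e.g. model.to("cuda") first), so that any state captured at construction time is already on the GPU.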