Comments (4)
I am afraid there is a misunderstanding about register_buffer(). Once you register a variable with register_buffer(), you cannot obtain it via model.named_parameters(), but you can still update it with an optimizer as long as you can index that tensor and pass it to the optimizer.
In architect.py, we use model.arch_parameters() to index the alphas and pass them to the optimizer.
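For example, here is a minimal sketch (my own illustration, not the actual EAutoDet code) of how a tensor registered as a buffer is absent from named_parameters() yet can still be handed to an optimizer and updated:

import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 1)
        # Register alphas as a buffer: excluded from named_parameters(),
        # but still a tensor we can index and optimize.
        self.register_buffer('alphas', 1e-3 * torch.randn(4))

    def arch_parameters(self):
        # Expose the buffer explicitly so it can be passed to an optimizer.
        return [self.alphas]

model = TinyModel()
print([name for name, _ in model.named_parameters()])   # only 'conv.weight' and 'conv.bias'

model.alphas.requires_grad_(True)                        # buffers default to requires_grad=False
arch_optimizer = torch.optim.Adam(model.arch_parameters(), lr=5e-4)

loss = (torch.softmax(model.alphas, dim=0) ** 2).sum()   # dummy loss touching alphas
loss.backward()
arch_optimizer.step()                                    # alphas are updated despite being a buffer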
So the alphas are updated only with the validation data, by the code below!
# architect
if epoch >= opt.search_warmup:
    # input_valid = imgs
    # target_valid = targets
    input_valid, target_valid, _, _ = next(valid_gen)
    input_valid = input_valid.to(device, non_blocking=True).float() / 255.0  # uint8 to float32, 0-255 to 0.0-1.0
    # Multi-scale
    if opt.multi_scale:
        sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs  # size
        sf = sz / max(input_valid.shape[2:])  # scale factor
        if sf != 1:
            ns = [math.ceil(x * sf / gs) * gs for x in input_valid.shape[2:]]  # new shape (stretched to gs-multiple)
            input_valid = F.interpolate(input_valid, size=ns, mode='bilinear', align_corners=False)
    architect.step(input_valid, target_valid)
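For context, in a first-order DARTS-style setup architect.step is essentially just an optimizer step over the alphas using the validation batch. The sketch below is only that idea (the loss_fn argument is a hypothetical stand-in for the detection loss), not the repo's actual architect.py, which may also include a second-order term:

import torch

class SimpleArchitect:
    """First-order DARTS-style sketch: the alphas get their own optimizer,
    stepped only with validation batches."""
    def __init__(self, model, arch_lr=5e-4, arch_weight_decay=1e-3):
        self.model = model
        self.optimizer = torch.optim.Adam(model.arch_parameters(),
                                          lr=arch_lr, betas=(0.5, 0.999),
                                          weight_decay=arch_weight_decay)

    def step(self, input_valid, target_valid, loss_fn):
        # loss_fn stands in for whatever detection loss the repo uses.
        self.optimizer.zero_grad()
        loss = loss_fn(self.model(input_valid), target_valid)
        loss.backward()
        self.optimizer.step()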
Now I'm curious whether the arch_parameters are updated enough.
As shown in the picture below, the alpha values didn't change much during the search epochs (=50).
Did you also see only a slight change in the alphas in your experiments?
Thanks for the quick answers, as always!
Since we normalize alpha with a softmax, even if the absolute values differ only slightly, softmax(alpha) can be quite different.
BTW, our code is based on DARTS (DARTS: Differentiable Architecture Search), where the alphas are initialized at a scale of 1e-3 and the learning rate for alpha is set to 5e-4, so the absolute values of alpha change only slightly, but softmax(alpha) can change quite drastically.
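For a concrete sense of the scales involved, here is a quick check (just an illustration, not code from the repo): softmax(alpha) stays nearly uniform while the differences between alphas are around 1e-3, and only separates the candidates clearly once those differences reach roughly the order of 1.

import torch
import torch.nn.functional as F

small = torch.tensor([-3.5e-4, 7.5e-4, -3.4e-4, 4.3e-4])  # ~1e-3 scale, like the initialization
large = torch.tensor([-0.35, 0.75, -0.34, 0.43])           # same pattern, 1000x larger

print(F.softmax(small, dim=0))  # ~[0.2499, 0.2502, 0.2499, 0.2501] -> nearly uniform
print(F.softmax(large, dim=0))  # ~[0.14, 0.42, 0.14, 0.30]        -> clearly separated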
The softmax value for model.1.alphas at epoch 0 is
F.softmax(torch.tensor([-0.00035720536834560335, 0.0007502037915401161, -0.00034166971454396844, 0.0004349834634922445]))
tensor([0.2499, 0.2502, 0.2499, 0.2501])
and the softmax value for model.1.alphas at epoch 49 is
F.softmax(torch.tensor([-0.0003571510314941406, 0.0007500648498535156, -0.00034165382385253906, 0.00043487548828125]))
tensor([0.2499, 0.2502, 0.2499, 0.2501])
softmax(alpha) also seems to show only a slight change. Am I missing something?
Maybe we could increase the lr for alpha?
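For example, something along these lines (just a guess at the relevant knob, not the repo's actual config):

# Hypothetical tweak (not from the repo): raise the lr used for the alphas.
# 'arch_optimizer' stands for whatever optimizer architect.py builds over model.arch_parameters().
new_arch_lr = 3e-3  # e.g. ~6x the DARTS default of 5e-4
for group in arch_optimizer.param_groups:
    group['lr'] = new_arch_lr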