graal-research / poutyne
A simplified framework and utilities for PyTorch
Home Page: https://poutyne.org
License: GNU Lesser General Public License v3.0
TL;DR: some LR schedulers must be able to update the learning rate (and other parameters) during batches and not just at the end of an epoch. Should we implement this? What is the best way?
Long version:
I would love to see PyToune offer advanced learning rate schedules like the 1cycle policy!
PyTorch's _LRScheduler has a step() method which sets the learning rate. The user is then responsible for calling the step() method somewhere in the training loop. This normally happens at the end of each epoch, but it could also happen after each batch.
PyToune wraps PyTorch's _LRSchedulers via the Callback interface and calls the step() method in on_epoch_end. This is great for convenience, but some LR schedulers have to update the lr after each batch.
Here are a few options I see:
We could update the PyTorchLRSchedulerWrapper to make it configurable when the step() call is triggered, and then use it like this:
CosineAnnealingLR(step_on_batch_end=True, ...)
We could introduce batch versions of all LRSchedulers.
I use a simple callback, LRSetter, that gets a list of learning rates (and momentum values) and sets the lr (and momentum) at the end of each batch, as in the sketch below.
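A minimal sketch of such a callback, assuming only that the callback API exposes a per-batch hook (on_batch_end at the time; newer versions call it on_train_batch_end) and access to the optimizer through self.model; the name LRSetter and its argument handling are illustrative:

from poutyne.framework import Callback  # pytoune.framework in older versions

class LRSetter(Callback):
    # Hypothetical helper: applies a precomputed schedule of learning rates
    # (one value per batch) to every parameter group of the optimizer.
    def __init__(self, learning_rates):
        super().__init__()
        self.learning_rates = list(learning_rates)
        self.index = 0

    def on_batch_end(self, batch, logs):
        if self.index < len(self.learning_rates):
            for param_group in self.model.optimizer.param_groups:
                param_group['lr'] = self.learning_rates[self.index]
            self.index += 1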
This should include wrappers for PyTorch modules, which do not support initializers in their constructors.
Furthermore, there should be some way to take a given PyTorch module and initialize it with default and user-chosen initializers depending on the types of the modules inside it. For instance, say I have some module A which contains other modules, and some of them are of type Linear. The functionality should take module A as a parameter and initialize the Linear modules in some default or non-default way, as in the sketch below.
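A rough sketch of what such a utility could look like (the function name and the dictionary-based mapping are my own; only torch.nn APIs are assumed):

import torch.nn as nn

def initialize_by_type(module, initializers):
    # Walk the module and all of its submodules and apply the initializer
    # registered for each submodule's type, leaving other modules untouched.
    for submodule in module.modules():
        init_fn = initializers.get(type(submodule))
        if init_fn is not None:
            init_fn(submodule)

# Usage: initialize every Linear layer of a network with Xavier uniform weights.
# initialize_by_type(net, {nn.Linear: lambda m: nn.init.xavier_uniform_(m.weight)})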
First of all I have used Poutyne on several projects very successfully and am appreciative of the ease of use and how much it simplifies my Pytorch training code.
I wish to incorporate adversarial loss during my training, along the lines of the usage example in https://github.com/lyakaap/VAT-pytorch. In order to compute the adversarial loss, I need access to the model and the per-batch training data before the forward pass. How would one do this in Poutyne or in a Poutyne callback? Thanks, Lars
Hi, thanks for the package. Does the package support multi-GPU mode?
Something like this when initializing in Model.__init__():
def __init__(self, model, optimizer, loss_function, metrics=[]):
    self.model = model
    self.optimizer = optimizer
    self.loss_function = loss_function
    self.metrics = list(map(get_metric, metrics))
    self.metrics_names = [metric.__name__ for metric in self.metrics]
    self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f'Using device {self.device}')

    # Transfer the model to the GPU(s)
    nGPUs = torch.cuda.device_count()
    if nGPUs > 1:
        print(f"============== Using {nGPUs} GPUs ==============")
        self.model = torch.nn.DataParallel(self.model)
    self.model = self.model.to(self.device)
I noticed that most of (if not all) the steps of the Contributing wiki can be automated via pre-commit. Having pre-commit would make that process much easier and enforce that the developers follow coding conventions.
I'd be more than happy to take this up.
A possible suggestion, inspired by SciPy:
@Misc{frederickParadisPyToune,
  author = {Frédérick Paradis},
  title  = {{PyToune}: Keras-like framework for {PyTorch}},
  year   = {2018--},
  url    = {https://pytoune.org/},
  note   = {[Online; accessed <today>]}
}
Due to the lack of predict examples, I am unable to use model.predict correctly. There also seems to be a bug:
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [256, 1, 1, 3], but got 3-dimensional input of size [31, 2, 132] instead
I used: output = model.predict_on_batch(Singledata)
and my Singledata is an array of size (31, 2, 128), float type.
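For what it's worth, a 4-dimensional Conv2d weight implies the network expects batched 4-D input of shape (N, C, H, W). A minimal sketch of one possible fix, assuming the first convolution takes a single channel (whether axis 1 is the right place for the channel depends on how the network was trained):

import numpy as np

# Hypothetical: insert a channel axis so the batch has shape (31, 1, 2, 128)
# before handing it to predict_on_batch.
batch = np.expand_dims(Singledata, axis=1).astype(np.float32)
output = model.predict_on_batch(batch)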
It would be nice to have a similar progress bar for validation and testing.
Describe the bug
Installation error.
To Reproduce
pip3 install -U poutyne
or
pip3 install -U git+https://github.com/GRAAL-Research/poutyne.git
Desktop (please complete the following information):
OS: Ubuntu 18.04
Screenshots
Collecting git+https://github.com/GRAAL-Research/poutyne.git
Cloning https://github.com/GRAAL-Research/poutyne.git to /tmp/pip-oo9s7g9u-build
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
File "/tmp/pip-oo9s7g9u-build/setup.py", line 8, in
readme = f.read()
File "/usr/lib/python3.6/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 4320: ordinal not in range(128)
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-oo9s7g9u-build/
Bug description
The EarlyStop callback seems to execute one more epoch before the training stops.
The same thing happens with ReduceLROnPlateau.
To Reproduce
model = Model(net, "adam", "cross_entropy", batch_metrics=['accuracy'])
callbacks = [pt.EarlyStopping(patience=10, monitor='val_acc', mode='max')]
model.fit_generator(train_dataloader, val_dataloader, epochs=100, callbacks=callbacks)
Expected behavior
The training should stop once the patience is exhausted.
Screenshots
This one was executed with patience = 5. It stopped after the last displayed epoch.
This one was executed with EarlyStop patience = 5 and ReduceLROnPlateau patience = 2.
Desktop (please complete the following information):
Google colab
So that it works like the accuracy metric and cross entropy loss while still keeping the mask.
The method should probably verify that the inputs are not already a DataLoader before wrapping them.
I know framework.Model is:
The Model class encapsulates a PyTorch module/network, a PyTorch optimizer,
a loss function and metric functions. It allows the user to train a neural
network without hand-coding the epoch/step logic.
Also, nn.Module is not nn.Model, but many people implement their nn.Module with the name Model and create an instance with the name model. It's a little confusing when writing a callback class (when accessing self.model), since framework.Model.__init__ has the parameter model, which is in PyTorch's sense of the word.
Can we change the name of Model to Engine (just like ignite or tnt) or some other name?
I know it is shameless self-advertisement, but I created https://github.com/stared/livelossplot to make it easy to have inline training plots in Jupyter Notebook.
Right now I support Keras out of the box. There is an API to add it anywhere, see https://github.com/stared/livelossplot/blob/master/minimal_example.ipynb. I think it would be nice if it worked with PyToune. Any ideas how to make it plug&play for the end user?
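One possible shape for the bridge, as a rough sketch: a Callback that forwards the epoch logs to livelossplot (the PlotLosses update/draw calls follow the minimal example above; which log keys to filter out is my guess):

from poutyne.framework import Callback
from livelossplot import PlotLosses

class LiveLossPlotCallback(Callback):
    # Hypothetical adapter: push the framework's epoch logs into livelossplot.
    def __init__(self):
        super().__init__()
        self.liveplot = PlotLosses()

    def on_epoch_end(self, epoch, logs):
        # logs contains entries such as 'loss', 'val_loss' and the metrics.
        metrics = {k: v for k, v in logs.items() if k not in ('epoch', 'time')}
        self.liveplot.update(metrics)
        self.liveplot.draw()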
The tracker fails because of a missing .cpu() call:
File "train.py", line 183, in <module>
main()
File "/home/fredy/anaconda3/lib/python3.7/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/home/fredy/anaconda3/lib/python3.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/home/fredy/anaconda3/lib/python3.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/fredy/anaconda3/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "train.py", line 176, in main
callbacks=[tb_tracker])
File "train.py", line 39, in train
seed=seed)
File "/home/fredy/git/poutyne/poutyne/framework/experiment.py", line 549, in train
callbacks=callbacks)
File "/home/fredy/git/poutyne/poutyne/framework/model.py", line 393, in fit_generator
self._fit_generator_one_batch_per_step(epoch_iterator, callback_list)
File "/home/fredy/git/poutyne/poutyne/framework/model.py", line 462, in _fit_generator_one_batch_per_step
for step, (x, y) in train_step_iterator:
File "/home/fredy/git/poutyne/poutyne/framework/iterators.py", line 84, in __iter__
self.on_batch_end(step, batch_logs)
File "/home/fredy/git/poutyne/poutyne/framework/callbacks/callbacks.py", line 234, in on_train_batch_end
callback.on_train_batch_end(batch_number, logs)
File "/home/fredy/git/poutyne/poutyne/framework/callbacks/tracker.py", line 134, in on_train_batch_end
self.tracker.batch_statistic_upgrade(named_parameters)
File "/home/fredy/git/poutyne/poutyne/framework/callbacks/tracker.py", line 44, in batch_statistic_upgrade
batch_layer_abs_means.append(abs_value_layer_gradient.mean().numpy())
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
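A minimal sketch of the likely fix in tracker.py's batch_statistic_upgrade, assuming the statistic is only needed on the host (the variable names are taken from the traceback above):

# Move the gradient statistic to host memory before converting to numpy;
# detach() also guards against keeping the autograd graph alive.
batch_layer_abs_means.append(abs_value_layer_gradient.mean().detach().cpu().numpy())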
Describe the bug
EpochMetrics are designed to accumulate statistics over a given epoch.
Given their interface, there is a method to accumulate and a method to retrieve the said metric.
If one does not explicitly reset the accumulators within the get_metric method, the statistics are wrong.
To Reproduce
import numpy as np
from poutyne.framework.metrics import EpochMetric  # import path may vary

class CorrelationMetric(EpochMetric):
    def __init__(self) -> None:
        super().__init__()
        self.scores = list()
        self.distances = list()

    def forward(self, x, y):
        # Accumulate metrics here
        e3 = x['e3']
        scores = x['scores']
        distances = x['distances']
        for i, (s, d) in enumerate(zip(scores, distances)):
            self.scores.append(1 - float(s[e3[i]]))
            self.distances.append(float(d))  # We append the distance

    def get_metric(self):
        val = np.corrcoef(self.scores, self.distances)[0][1]
        self.reset()  # Forgetting this line is problematic, and people will!
        return val

    def reset(self) -> None:
        self.scores = list()
        self.distances = list()
Expected behavior
The interface should force implementers to provide the reset method, and IMO the training loop should call it at the end of an epoch, rather than relying on the get_metric method to do it.
Hi,
I'm using the steps_per_epoch argument of the Model's fit_generator function and the functionality I observe doesn't seem to be correct.
steps_per_epoch (int, optional): Number of batches used during one
    epoch. Obviously, using this argument may cause one epoch not to
    see the entire training dataset or to see it multiple times.
    (Defaults to the number of steps needed to see the entire
    training dataset.)
Is my understanding correct that if we use the standard MNIST train-test split, set n_epochs=1 and steps_per_epoch to 120000, it is the equivalent of training 2 epochs? However, it doesn't seem to work when the size of the dataset object is less than steps_per_epoch. The following code reproduces the scenario. It has been taken from the MNIST example provided.
import math
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data.dataset import Subset
from torchvision import transforms
from torchvision.datasets.mnist import MNIST

from poutyne.framework import Model, ModelCheckpoint, CSVLogger
from poutyne import torch_to_numpy
from poutyne.layers import Flatten

torch.manual_seed(42)
np.random.seed(42)

cuda_device = 0
device = torch.device("cuda:%d" % cuda_device if torch.cuda.is_available() else "cpu")

train_split_percent = 0.8
batch_size = 32
learning_rate = 0.01
n_epoch = 5
num_classes = 10

train_dataset = MNIST('./mnist/', train=True, download=True, transform=transforms.ToTensor())
valid_dataset = MNIST('./mnist/', train=True, download=True, transform=transforms.ToTensor())
test_dataset = MNIST('./mnist/', train=False, download=True, transform=transforms.ToTensor())

num_data = len(train_dataset)
indices = list(range(num_data))
np.random.shuffle(indices)
split = math.floor(train_split_percent * num_data)
train_indices = indices[:split]
train_dataset = Subset(train_dataset, train_indices)
valid_indices = indices[split:]
valid_dataset = Subset(valid_dataset, valid_indices)

train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=batch_size)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)

def train(name, pytorch_module):
    # Finally, we start the training and output its final test
    # loss and accuracy.

    # Optimizer and loss function
    optimizer = optim.SGD(pytorch_module.parameters(), lr=learning_rate, weight_decay=0.001)
    loss_function = nn.CrossEntropyLoss()

    # Poutyne Model
    model = Model(pytorch_module, optimizer, loss_function, metrics=['accuracy'])

    # Send model on GPU
    model.to(device)

    # Train
    model.fit_generator(train_loader, valid_loader, epochs=1, steps_per_epoch=120000)

    # Test
    test_loss, test_acc = model.evaluate_generator(test_loader)
    print('Test:\n\tLoss: {}\n\tAccuracy: {}'.format(test_loss, test_acc))

def create_convolutional_network():
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1),
        nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Dropout(0.25),
        Flatten(),
        nn.Linear(64 * 14 * 14, 128),
        nn.ReLU(),
        nn.Dropout(0.5),
        nn.Linear(128, num_classes)
    )

conv_net = create_convolutional_network()
print(conv_net)

# Start training
train('conv', conv_net)
In this case, the train generator has 48000 samples. When I set steps_per_epoch=120000 and batch_size=32 (as above), the number of batches/steps should be 120000/32 = 3750. However, the code stops after 48000/32 = 1500 batches/steps.
This behavior seems to stem from https://github.com/GRAAL-Research/poutyne/blob/master/poutyne/framework/iterators.py#L17. Specifically, zip(range(steps), generator) truncates to the shorter of the two sequences. Simple example:
for x, y in zip(range(3), range(5)):
    print(x, y)
prints
0 0
1 1
2 2
So if len(generator) < num_steps, then the zip only iterates over the generator once, and not multiple times (as the documentation states).
Is this expected behavior?
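If restarting the generator is the intended behavior, here is a rough sketch of one way the iterator could achieve it, assuming the generator can be re-iterated the way DataLoaders can (itertools.cycle is avoided on purpose, since it would cache the first pass instead of reshuffling):

def cycle(iterable):
    # Re-iterate the underlying loader indefinitely; each new pass over a
    # DataLoader reshuffles when shuffle=True.
    while True:
        for item in iterable:
            yield item

for step, (x, y) in zip(range(steps), cycle(generator)):
    pass  # the training step for (x, y) would go here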
model.fit_generator(train_loader, valid_loader, epochs=args.EPOCH, callbacks=callbacks)
model.predict_generator(train_loader)
Epoch 1/1 8.06s Step 11/11: loss: 2.622029, acc: 22.405432, val_loss: 2.358737, val_acc: 35.961538
Epoch 1: val_acc improved from -inf to 35.96154, saving model to best_epoch_1.ckpt
Restoring model from best_epoch_1.ckpt
Traceback (most recent call last):
File "c:/Users/Administrator/excise.py", line 138, in
model.predict_generator(train_loader)
File "C:\Users\Administrator\Anaconda3\lib\site-packages\pytoune\framework\model.py", line 397, in predict_generator
pred_y.append(torch_to_numpy(self.model(x)))
File "C:\Users\Administrator\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "c:\Users\Administrator\d.py", line 260, in forward
out = self.conv1(x)
File "C:\Users\Administrator\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "C:\Users\Administrator\Anaconda3\lib\site-packages\torch\nn\modules\conv.py", line 421, in forward
self.padding, self.dilation, self.groups)
TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not list
Use a similar approach as Hydra to color the terminal log output.
Please change the name so it is not related to a warmongering Russian oligarch. It is offensive.
We are looking for ideas to improve Poutyne. So, if you have any, you can provide them by replying to this issue.
Hi there :)
What if I have to change the training loop, e.g. to train GANs?
Thank you!
Cheers,
Francesco
I think it would be nice to have a progress bar indicating the status of training. There are tons of examples of how to implement this, but I think the one used by FastAI would be nice. Here is the repo for it and here is the actual Python file that implements it. I'd be happy to submit a PR for it if someone can point me in the right direction as to where this would go.
Fasttext is not installed in the Tips and tricks notebook. The following line is missing in the header:
%pip install --upgrade fasttext
The logo should be inspired by PyTorch's logo and have a log (like a wood log, y'know) on fire. The description of the project in the README should start with something like "PyToune is a Keras-like interface for the PyTorch framework..." and should have SEO (search engine optimization) in mind. I don't know what is possible for that in GitHub/Markdown. We should also include lots of examples of what's possible with our framework.
Describe the bug
The Experiment class does not recognize "val_accuracy" as a valid value for monitor_metric, while the doc implies that both acc and accuracy should be supported.
To Reproduce
Instantiate an Experiment, passing "val_accuracy" as value for the monitor_metric parameter
Expected behavior
The Experiment should accept the value and track validation accuracy to determine the best model, as it does when the value "val_acc" is passed.
Add the notebook examples to Colab.
First of all, I like the library!
The Experiment class, which looks really interesting, is not part of the docs but it should be.
It would be nice to have some way to add regularization, like it is possible in Keras with the add_loss function. Maybe add an optional list of losses in the module that would be summed up by the Module class? A rough sketch of the idea follows.
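A rough sketch under those assumptions (the add_loss method, the losses attribute, and the wrapper are all hypothetical, not Poutyne API):

import torch.nn as nn

class ModuleWithLosses(nn.Module):
    # Hypothetical base class: submodules register extra regularization
    # terms (zero-argument callables), in the spirit of Keras' add_loss.
    def __init__(self):
        super().__init__()
        self.losses = []

    def add_loss(self, loss_fn):
        self.losses.append(loss_fn)

def with_regularization(base_loss, network):
    # Wrap an ordinary loss function so the registered terms are added in.
    def loss_function(y_pred, y_true):
        total = base_loss(y_pred, y_true)
        for term in getattr(network, 'losses', []):
            total = total + term()
        return total
    return loss_function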
Is your feature request related to a problem? Please describe.
The Experiment class code is currently untested. We will have to introduce unit tests for it.
Describe the solution you'd like
Unit-testing this class does not appear particularly straightforward to me. For example, have a look at the load_checkpoint method: if we wish to test that the correct loading method is used according to the checkpoint arg, we will only end up reproducing the method's code in the tests and end up testing Python instead of our model's logic. Also, since the callbacks and model are all tested on their own and Experiment is mainly a Model + Callbacks wrapper, what is really the Experiment-only logic we wish to test?
I am open to writing the tests to the class, but would enjoy some inputs regarding the tests' orientation.
Is there any way to implement the functionality described in this post? https://stackoverflow.com/questions/46464549/keras-custom-loss-function-accessing-current-input-pattern
Gist:
implementing something like the code below
def custom_loss_wrapper(input_tensor):
    def custom_loss(y_true, y_pred):
        return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)
    return custom_loss
This can be called with loss=custom_loss_wrapper(model.input)
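Since Poutyne's loss functions receive (y_pred, y_true) rather than the input, one hedged way to get the same effect is to have the network return its input alongside the prediction and unpack it in a custom loss (all names here are illustrative):

import torch.nn.functional as F

# Hypothetical: the network's forward returns (prediction, original_input).
def input_aware_loss(y_pred, y_true):
    prediction, input_tensor = y_pred
    return F.binary_cross_entropy(prediction, y_true) + input_tensor.mean()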
I came across this while trying to fine-tune BERT using Poutyne. One of the features of BERT is an input mask, so the input to the model consists of three parts (input, mask, label), which doesn't seem to be supported by Poutyne.
I'm thinking that enabling keyword arguments beyond (x, y) in model.fit(...) could make this work?
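For what it's worth, a sketch of one possible workaround, under the assumption that x may itself be a tuple that the framework unpacks into the network's forward (the dataset below is illustrative):

from torch.utils.data import Dataset

class BertDataset(Dataset):
    # Hypothetical: yield ((input_ids, attention_mask), label) so that the
    # first element plays the role of x and the second the role of y.
    def __init__(self, input_ids, masks, labels):
        self.input_ids, self.masks, self.labels = input_ids, masks, labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        return (self.input_ids[i], self.masks[i]), self.labels[i]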
Hello everyone,
Due to some connotation of the name Pytoune (pitoune) in Quebec French, we have decided to change the name of the library. We are thus seeking suggestions from our community for a new name. Here are some of our current suggestions:
Right now, the suggestions are all related to fire/torch, but they don't have to be. The new name can really be anything; it just has to sound nice. Of course, it would be nice if it is somewhat related to what the library does. Also, it could be a name in another language, such as the French names suggested above.
Some of the suggestions above are already names for other things, but nothing really major or related. We have to take care to choose a name that is not already the name of something else.
So, if you have any suggestions or comments on the already suggested names, please post below!
Thank you.
Frédérik
Are you interested in a learning rate finder as described in https://arxiv.org/abs/1506.01186, section 3.3?
I ported my LR finder to PyToune. It still needs some work but it's quite handy!
At line 311 of model.py, should it be
self.optimizer.zero_grad()
instead of
self.model.zero_grad()?
Hi,
First of all, thank you for providing such a wonderful tool.
The issue is as follows. I'm using MultiStepLR:
from poutyne.framework.callbacks.lr_scheduler import MultiStepLR

optimizer = torch.optim.Adam(resnet.parameters(), lr=0.01)
model = Model(resnet, optimizer, criterion, metrics=['accuracy'])
scheduler = MultiStepLR(optimizer, milestones=[18, 36, 70], gamma=0.1)
history = model.fit(
    x_train, y_train,
    validation_data=(x_test, y_test),
    epochs=epochs,
    batch_size=128,
    verbose=True,
    callbacks=[scheduler]
)
The error message is "TypeError: __init__() got multiple values for argument 'milestones'".
What should I do next?
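If I read Poutyne's scheduler callbacks correctly, they attach the optimizer themselves from the Model at fit time, so the optimizer should not be passed to the callback. A hedged sketch of the fix:

# Only the scheduler's own arguments are given; the callback picks up the
# optimizer from the Model when training starts.
scheduler = MultiStepLR(milestones=[18, 36, 70], gamma=0.1)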
Could you please provide more detailed documentation? For instance, how to use the available callbacks. It would allow beginners to get started quickly and make your project get more attention. Thank you very much!
Dear sir:
I came across an error when I installed poutyne with "pip install -U git+https://github.com/GRAAL-Research/poutyne.git@dev".
I think the problem is with torch. As the error information said: "ERROR: Failed building wheel for torch". But I have no idea how to solve this, so I hope you can provide a solution. The overall error feedback is below.
Thanks!
ERROR: Failed cleaning build dir for torch
Successfully built Poutyne
Failed to build torch
Installing collected packages: torch, Poutyne
Running setup.py install for torch ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\彭张智computational ai\appdata\local\programs\python\python38-32\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\彭张智computational AI\AppData\Local\Temp\pip-install-fgpcp5ez\torch\setup.py'"'"'; __file__='"'"'C:\Users\彭张智computational AI\AppData\Local\Temp\pip-install-fgpcp5ez\torch\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\彭张智computational AI\AppData\Local\Temp\pip-record-e8il4qu8\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\彭张智computational ai\appdata\local\programs\python\python38-32\Include\torch'
cwd: C:\Users\彭张智computational AI\AppData\Local\Temp\pip-install-fgpcp5ez\torch
Complete output (23 lines):
running install
running build_deps
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\彭张智computational AI\AppData\Local\Temp\pip-install-fgpcp5ez\torch\setup.py", line 225, in
setup(name="torch", version="0.1.2.post2",
File "c:\users\彭张智computational ai\appdata\local\programs\python\python38-32\lib\site-packages\setuptools_init_.py", line 144, in setup
return distutils.core.setup(**attrs)
File "c:\users\彭张智computational ai\appdata\local\programs\python\python38-32\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "c:\users\彭张智computational ai\appdata\local\programs\python\python38-32\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "c:\users\彭张智computational ai\appdata\local\programs\python\python38-32\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\彭张智computational AI\AppData\Local\Temp\pip-install-fgpcp5ez\torch\setup.py", line 99, in run
self.run_command('build_deps')
File "c:\users\彭张智computational ai\appdata\local\programs\python\python38-32\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\users\彭张智computational ai\appdata\local\programs\python\python38-32\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\彭张智computational AI\AppData\Local\Temp\pip-install-fgpcp5ez\torch\setup.py", line 51, in run
from tools.nnwrap import generate_wrappers as generate_nn_wrappers
ModuleNotFoundError: No module named 'tools.nnwrap'
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\彭张智computational ai\appdata\local\programs\python\python38-32\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\彭张智computational AI\AppData\Local\Temp\pip-install-fgpcp5ez\torch\setup.py'"'"'; __file__='"'"'C:\Users\彭张智computational AI\AppData\Local\Temp\pip-install-fgpcp5ez\torch\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\彭张智computational AI\AppData\Local\Temp\pip-record-e8il4qu8\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\彭张智computational ai\appdata\local\programs\python\python38-32\Include\torch' Check the logs for full command output.
WARNING: You are using pip version 20.0.2; however, version 20.2.3 is available.
You should consider upgrading via the 'c:\users\彭张智computational ai\appdata\local\programs\python\python38-32\python.exe -m pip install --upgrade pip' command.
Would it be wise to support multiple outputs? At least a pass-through? I have a model that returns two outputs (a prediction and attention weights). I had to write a small wrapper around the loss function, as such:
def loss_function(y_pred, y_true):
    output, word_attention = y_pred
    return criterion(output, y_true)

def acc(y_pred, y_true, *args):
    y_pred, _ = y_pred
    return metrics.bin_acc(y_pred, y_true, *args)
Which is totally normal considering that this is an uncommon output and there is no way to know how to calculate the loss or the metric in a generic way. Training was working just fine with this. But when it came to predict(), it couldn't get through because of
https://github.com/GRAAL-Research/poutyne/blob/master/poutyne/framework/model.py#L406
So I experimented a little and came up with this function in utils.py:
def _concat(obj):
    if isinstance(obj[0], tuple):
        return tuple([_concat(ele) for ele in zip(*obj)])
    else:
        return np.concatenate(obj)
Then, in predict(), I call it instead of np.concatenate directly:
pred_y = self.predict_generator(generator)
return utils._concat(pred_y)
This will work as long as the model returns either a tuple or multiple values at once, i.e. return (output1, output2) or return output1, output2.
I don't know if it's a reasonable expectation. All the tests are passing. Since it requires custom loss functions (and metrics), there doesn't seem to be any new tests to add.
Just wanted your thoughts on this.
Thanks
Hi,
I added a gist with a couple of functions to convert a numpy array (or a Python object such as a list) into a PyTorch tensor. I couldn't find anything like this in the library except a function going the other way around, poutyne.torch_to_numpy.
These functions are copied from the FastAI library (core module) and modified a bit. If this is useful, I could submit a PR.
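As a rough illustration of the shape such a function could take (this is my own sketch, not the gist's code):

import numpy as np
import torch

def numpy_to_torch(obj):
    # Recursively convert numpy arrays (and lists/tuples of them) to tensors,
    # mirroring poutyne.torch_to_numpy in the other direction.
    if isinstance(obj, np.ndarray):
        return torch.from_numpy(obj)
    if isinstance(obj, (list, tuple)):
        return type(obj)(numpy_to_torch(o) for o in obj)
    return obj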
Add the possibility to give a personal name to a metric.
It would be nice to add more examples of usage. For instance, how to use the available callbacks, etc. The examples could be in both the doc and in an "examples" folder.
Describe the bug
When using the Experiment module with multiple learning rates, the test function crashes when it tries to load the metrics. This must occur because the learning rates are logged as a list ([0.001, 0.0001]) in log.tsv instead of a single float value.
To Reproduce
Use an optimizer where different parameters have different learning rates to train a network using the Experiment module, then call the test function.
optimizer = torch.optim.SGD([{'params': model.word_embedding.parameters(), 'lr': 0.001},
                             {'params': model.classifier.parameters(), 'lr': 0.01}])
Expected behavior
Calling the test function on an Experiment should not result in a crash.