prlz77 / ResNeXt.pytorch
Reproduces ResNet-V3 (ResNeXt) with PyTorch.
License: MIT License
When I run the code following your instructions, "TypeError: tensor(0, device='cuda:0') is not JSON serializable" occurs at line 167 of train.py, which is the line "log.write('%s\n' % json.dumps(state))".
I wonder whether it is caused by the version of PyTorch or the version of Python? I am using PyTorch 0.4.0 with Python 2.7.
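For context, PyTorch 0.4 started returning zero-dimensional tensors in places where 0.3 returned Python numbers, and json cannot serialize those. A minimal sketch of a workaround (not the repo's code; it assumes `state` and `log` are the objects from train.py and that `state` may hold tensor values):

    import json
    import torch

    # convert any zero-dim tensors (e.g. a loss or accuracy) to plain
    # Python numbers before serializing
    state = {k: (v.item() if torch.is_tensor(v) else v) for k, v in state.items()}
    log.write('%s\n' % json.dumps(state))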
If the input to the net has dimensions 64*3*224*224, where 64 is the batch size, 3 is the number of channels, and 224 is the size of the original image, I find when I run the code that the output of the net has dimensions 4802*10, where 10 is the number of classes to predict.
Is the output correct? Shouldn't the output dimensions be 64*10?
Maybe I got something wrong?
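This looks like a consequence of the model being built for 32x32 CIFAR inputs: the forward pass ends with a fixed-size average pool followed by a view(-1, channels), so with 224x224 inputs the leftover spatial positions get folded into the first dimension instead of staying with each image. A hedged sketch of a fix (assuming the forward pass ends roughly as described; adaptive pooling keeps one row per image):

    import torch.nn.functional as F

    # pool whatever spatial size remains down to 1x1, then flatten per image
    x = F.adaptive_avg_pool2d(x, 1)   # (batch, channels, 1, 1)
    x = x.view(x.size(0), -1)         # (batch, channels)
    return self.classifier(x)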
ResNeXt.pytorch/models/model.py
Line 39 in 48c19fb
Hi, this may be a stupid question. I did not read the original paper, but I think the number of channels of the 3x3 conv layer should be smaller than that of the 1x1 conv layers, to reduce the computational complexity.
I printed the channels after line 39:
print(widen_factor, in_channels, D, out_channels)
and the output:
4 64 512 256
4 256 512 256
4 256 512 256
4 256 1024 512
4 512 1024 512
4 512 1024 512
4 512 2048 1024
4 1024 2048 1024
4 1024 2048 1024
Is that right? Thanks for answering.
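For reference, the printed values are consistent with the width rule this repo appears to use (reconstructed from the output above, so treat the exact expression as an assumption rather than a verbatim copy of models/model.py):

    # repo defaults: cardinality=8, base_width=64, widen_factor=4
    cardinality, base_width, widen_factor = 8, 64, 4
    for out_channels in (256, 512, 1024):
        width_ratio = out_channels / (widen_factor * 64.0)
        D = cardinality * int(base_width * width_ratio)
        print(widen_factor, D, out_channels)  # matches the D column above

The 3x3 conv_conv runs with groups=cardinality, so each of the 8 paths is only D/cardinality = 64, 128, or 256 channels wide; D looks large only because the parallel paths are concatenated.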
Hello,
I am getting a "Cannot allocate memory" error. I understand this is something related to my GPU, but it is quite surprising that I should get this error because I am training on 3 1080 Ti GPUs with a batch size of 64.
Traceback (most recent call last):
File "train.py", line 162, in <module>
train()
File "train.py", line 113, in train
for batch_idx, (data, target) in enumerate(train_loader):
File "/usr/local/torch3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 310, in __iter__
return DataLoaderIter(self)
File "/usr/local/torch3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 167, in __init__
w.start()
File "/usr/lib/python3.5/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/usr/lib/python3.5/multiprocessing/context.py", line 212, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/usr/lib/python3.5/multiprocessing/context.py", line 267, in _Popen
return Popen(process_obj)
File "/usr/lib/python3.5/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/usr/lib/python3.5/multiprocessing/popen_fork.py", line 67, in _launch
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
CUDA_VISIBLE_DEVICES=0,1,2 python train.py ~/DATASETS/cifar.python cifar10 -s ./snapshots --log ./logs --ngpu 3 --learning_rate 0.05 -b 64
Please suggest what I could do to avoid this issue.
Thank You!
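Note that Errno 12 is raised by os.fork() on the host, so it is CPU RAM rather than GPU memory that runs out when the DataLoader spawns worker processes. A minimal sketch of the usual workaround, with hypothetical values (the repo's loader construction may differ):

    import torch.utils.data

    # fewer workers fork fewer copies of the parent process' address space;
    # num_workers=0 loads batches in the main process and needs no fork at all
    # (train_data is a hypothetical dataset object)
    train_loader = torch.utils.data.DataLoader(
        train_data, batch_size=64, shuffle=True,
        num_workers=2, pin_memory=True)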
With default arguments apart from cardinality (set to 16), I get:
On one 1080 Ti with minibatch size 20: ~9 minutes per epoch.
Using DataParallel across four 1080 Tis with minibatch size 128: ~4.5 minutes per epoch.
Perfect linear scaling would give 2.25 minutes per epoch.
Any idea what's going on here, or how to get better scaling?
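One way to narrow this down is to time data loading separately from GPU compute; with small per-GPU batches, DataParallel's scatter/gather and the input pipeline often dominate. A minimal sketch (train_loader, net, and optimizer are hypothetical names for the objects in train.py):

    import time
    import torch
    import torch.nn.functional as F

    load_t = compute_t = 0.0
    t0 = time.time()
    for data, target in train_loader:
        t1 = time.time()                      # data-loading time ends here
        data, target = data.cuda(), target.cuda()
        loss = F.cross_entropy(net(data), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        torch.cuda.synchronize()              # wait for all GPUs to finish
        t2 = time.time()
        load_t += t1 - t0
        compute_t += t2 - t1
        t0 = t2
    print('load %.1fs, compute %.1fs per epoch' % (load_t, compute_t))

If load_t dominates, adding GPUs cannot help; more loader workers or a larger per-GPU batch would be the first things to try.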
Hey! I am confused about the input size of the training data. I am training on a dataset whose input size is (3, 112, 112); do I have to make some changes to the model?
self.stage_1 = self.block('stage_1', self.stages[0], self.stages[1], 1)
self.stage_2 = self.block('stage_2', self.stages[1], self.stages[2], 2)
self.stage_3 = self.block('stage_3', self.stages[2], self.stages[3], 2)
self.classifier = nn.Linear(self.stages[3], nlabels)
...
for bottleneck in range(self.block_depth):
    name_ = '%s_bottleneck_%d' % (name, bottleneck)
    if bottleneck == 0:
        block.add_module(name_, ResNeXtBottleneck(in_channels, out_channels, pool_stride, self.cardinality,
                                                  self.base_width, self.widen_factor))
    else:
        block.add_module(name_,
                         ResNeXtBottleneck(out_channels, out_channels, 1, self.cardinality, self.base_width,
                                           self.widen_factor))
The structure of the net looks strange to me.
Looking at the Lua source code, it seems there should be a max-pooling layer before stage_1. The source code also builds ResNeXt on the ResNet structure, with four layers and a different block count in each layer, but your code uses the same block count in every stage, and there are only three stages. Is this a mistake?
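For context, the CIFAR variant of ResNeXt in the paper deliberately has no max-pool (inputs are only 32x32) and uses three equal-depth stages; the per-stage depth falls out of the total depth. A sketch of that arithmetic (the exact formula in models/model.py is an assumption):

    # 29 total layers = 1 stem conv + 3 stages * block_depth bottlenecks
    # * 3 conv layers each + 1 classifier -> block_depth = (29 - 2) // 9
    depth = 29
    block_depth = (depth - 2) // 9
    print(block_depth)  # 3 bottlenecks per stage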
Could you provide a Caffe version of ResNeXt?
It seems to me that each image uses ~5 GB of GPU memory (ResNeXt-152), making it possible to train with only 2 images per GPU (TITAN X). Is that normal? I would appreciate it if someone could point out where I should start debugging this.
Can I directly convert TensorFlow ResNeXt weights to PyTorch weights and use them as-is?
Or do you have a model pre-trained on a large dataset?
When I try to run test.py, I get this:
Traceback (most recent call last):
File "/home/ubuntu/bigdisk/part1/resnext.pytorch/test.py", line 114, in <module>
test()
File "/home/ubuntu/bigdisk/part1/resnext.pytorch/test.py", line 79, in test
net.load_state_dict(loaded_state_dict)
File "/home/ubuntu/anaconda3/envs/resnext/lib/python2.7/site-packages/torch/nn/modules/module.py", line 845, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for CifarResNeXt:
Missing key(s) in state_dict: "conv_1_3x3.weight", "bn_1.running_var", "bn_1.bias", "bn_1.weight", "bn_1.running_mean", "stage_1.stage_1_bottleneck_0.conv_reduce.weight", "stage_1.stage_1_bottleneck_0.bn_reduce.running_var", "stage_1.stage_1_bottleneck_0.bn_reduce.bias", "stage_1.stage_1_bottleneck_0.bn_reduce.weight", "stage_1.stage_1_bottleneck_0.bn_reduce.running_mean", "stage_1.stage_1_bottleneck_0.conv_conv.weight", "stage_1.stage_1_bottleneck_0.bn.running_var", "stage_1.stage_1_bottleneck_0.bn.bias", "stage_1.stage_1_bottleneck_0.bn.weight", "stage_1.stage_1_bottleneck_0.bn.running_mean", "stage_1.stage_1_bottleneck_0.conv_expand.weight", "stage_1.stage_1_bottleneck_0.bn_expand.running_var", "stage_1.stage_1_bottleneck_0.bn_expand.bias", "stage_1.stage_1_bottleneck_0.bn_expand.weight", "stage_1.stage_1_bottleneck_0.bn_expand.running_mean", "stage_1.stage_1_bottleneck_0.shortcut.shortcut_conv.weight", "stage_1.stage_1_bottleneck_0.shortcut.shortcut_bn.running_var", "stage_1.stage_1_bottleneck_0.shortcut.shortcut_bn.bias", "stage_1.stage_1_bottleneck_0.shortcut.shortcut_bn.weight", "stage_1.stage_1_bottleneck_0.shortcut.shortcut_bn.running_mean", "stage_1.stage_1_bottleneck_1.conv_reduce.weight", "stage_1.stage_1_bottleneck_1.bn_reduce.running_var", "stage_1.stage_1_bottleneck_1.bn_reduce.bias", "stage_1.stage_1_bottleneck_1.bn_reduce.weight", "stage_1.stage_1_bottleneck_1.bn_reduce.running_mean", "stage_1.stage_1_bottleneck_1.conv_conv.weight", "stage_1.stage_1_bottleneck_1.bn.running_var", "stage_1.stage_1_bottleneck_1.bn.bias", "stage_1.stage_1_bottleneck_1.bn.weight", "stage_1.stage_1_bottleneck_1.bn.running_mean", "stage_1.stage_1_bottleneck_1.conv_expand.weight", "stage_1.stage_1_bottleneck_1.bn_expand.running_var", "stage_1.stage_1_bottleneck_1.bn_expand.bias", "stage_1.stage_1_bottleneck_1.bn_expand.weight", "stage_1.stage_1_bottleneck_1.bn_expand.running_mean", "stage_1.stage_1_bottleneck_2.conv_reduce.weight", "stage_1.stage_1_bottleneck_2.bn_reduce.running_var", "stage_1.stage_1_bottleneck_2.bn_reduce.bias", "stage_1.stage_1_bottleneck_2.bn_reduce.weight", "stage_1.stage_1_bottleneck_2.bn_reduce.running_mean", "stage_1.stage_1_bottleneck_2.conv_conv.weight", "stage_1.stage_1_bottleneck_2.bn.running_var", "stage_1.stage_1_bottleneck_2.bn.bias", "stage_1.stage_1_bottleneck_2.bn.weight", "stage_1.stage_1_bottleneck_2.bn.running_mean", "stage_1.stage_1_bottleneck_2.conv_expand.weight", "stage_1.stage_1_bottleneck_2.bn_expand.running_var", "stage_1.stage_1_bottleneck_2.bn_expand.bias", "stage_1.stage_1_bottleneck_2.bn_expand.weight", "stage_1.stage_1_bottleneck_2.bn_expand.running_mean", "stage_2.stage_2_bottleneck_0.conv_reduce.weight", "stage_2.stage_2_bottleneck_0.bn_reduce.running_var", "stage_2.stage_2_bottleneck_0.bn_reduce.bias", "stage_2.stage_2_bottleneck_0.bn_reduce.weight", "stage_2.stage_2_bottleneck_0.bn_reduce.running_mean", "stage_2.stage_2_bottleneck_0.conv_conv.weight", "stage_2.stage_2_bottleneck_0.bn.running_var", "stage_2.stage_2_bottleneck_0.bn.bias", "stage_2.stage_2_bottleneck_0.bn.weight", "stage_2.stage_2_bottleneck_0.bn.running_mean", "stage_2.stage_2_bottleneck_0.conv_expand.weight", "stage_2.stage_2_bottleneck_0.bn_expand.running_var", "stage_2.stage_2_bottleneck_0.bn_expand.bias", "stage_2.stage_2_bottleneck_0.bn_expand.weight", "stage_2.stage_2_bottleneck_0.bn_expand.running_mean", "stage_2.stage_2_bottleneck_0.shortcut.shortcut_conv.weight", "stage_2.stage_2_bottleneck_0.shortcut.shortcut_bn.running_var", 
"stage_2.stage_2_bottleneck_0.shortcut.shortcut_bn.bias", "stage_2.stage_2_bottleneck_0.shortcut.shortcut_bn.weight", "stage_2.stage_2_bottleneck_0.shortcut.shortcut_bn.running_mean", "stage_2.stage_2_bottleneck_1.conv_reduce.weight", "stage_2.stage_2_bottleneck_1.bn_reduce.running_var", "stage_2.stage_2_bottleneck_1.bn_reduce.bias", "stage_2.stage_2_bottleneck_1.bn_reduce.weight", "stage_2.stage_2_bottleneck_1.bn_reduce.running_mean", "stage_2.stage_2_bottleneck_1.conv_conv.weight", "stage_2.stage_2_bottleneck_1.bn.running_var", "stage_2.stage_2_bottleneck_1.bn.bias", "stage_2.stage_2_bottleneck_1.bn.weight", "stage_2.stage_2_bottleneck_1.bn.running_mean", "stage_2.stage_2_bottleneck_1.conv_expand.weight", "stage_2.stage_2_bottleneck_1.bn_expand.running_var", "stage_2.stage_2_bottleneck_1.bn_expand.bias", "stage_2.stage_2_bottleneck_1.bn_expand.weight", "stage_2.stage_2_bottleneck_1.bn_expand.running_mean", "stage_2.stage_2_bottleneck_2.conv_reduce.weight", "stage_2.stage_2_bottleneck_2.bn_reduce.running_var", "stage_2.stage_2_bottleneck_2.bn_reduce.bias", "stage_2.stage_2_bottleneck_2.bn_reduce.weight", "stage_2.stage_2_bottleneck_2.bn_reduce.running_mean", "stage_2.stage_2_bottleneck_2.conv_conv.weight", "stage_2.stage_2_bottleneck_2.bn.running_var", "stage_2.stage_2_bottleneck_2.bn.bias", "stage_2.stage_2_bottleneck_2.bn.weight", "stage_2.stage_2_bottleneck_2.bn.running_mean", "stage_2.stage_2_bottleneck_2.conv_expand.weight", "stage_2.stage_2_bottleneck_2.bn_expand.running_var", "stage_2.stage_2_bottleneck_2.bn_expand.bias", "stage_2.stage_2_bottleneck_2.bn_expand.weight", "stage_2.stage_2_bottleneck_2.bn_expand.running_mean", "stage_3.stage_3_bottleneck_0.conv_reduce.weight", "stage_3.stage_3_bottleneck_0.bn_reduce.running_var", "stage_3.stage_3_bottleneck_0.bn_reduce.bias", "stage_3.stage_3_bottleneck_0.bn_reduce.weight", "stage_3.stage_3_bottleneck_0.bn_reduce.running_mean", "stage_3.stage_3_bottleneck_0.conv_conv.weight", "stage_3.stage_3_bottleneck_0.bn.running_var", "stage_3.stage_3_bottleneck_0.bn.bias", "stage_3.stage_3_bottleneck_0.bn.weight", "stage_3.stage_3_bottleneck_0.bn.running_mean", "stage_3.stage_3_bottleneck_0.conv_expand.weight", "stage_3.stage_3_bottleneck_0.bn_expand.running_var", "stage_3.stage_3_bottleneck_0.bn_expand.bias", "stage_3.stage_3_bottleneck_0.bn_expand.weight", "stage_3.stage_3_bottleneck_0.bn_expand.running_mean", "stage_3.stage_3_bottleneck_0.shortcut.shortcut_conv.weight", "stage_3.stage_3_bottleneck_0.shortcut.shortcut_bn.running_var", "stage_3.stage_3_bottleneck_0.shortcut.shortcut_bn.bias", "stage_3.stage_3_bottleneck_0.shortcut.shortcut_bn.weight", "stage_3.stage_3_bottleneck_0.shortcut.shortcut_bn.running_mean", "stage_3.stage_3_bottleneck_1.conv_reduce.weight", "stage_3.stage_3_bottleneck_1.bn_reduce.running_var", "stage_3.stage_3_bottleneck_1.bn_reduce.bias", "stage_3.stage_3_bottleneck_1.bn_reduce.weight", "stage_3.stage_3_bottleneck_1.bn_reduce.running_mean", "stage_3.stage_3_bottleneck_1.conv_conv.weight", "stage_3.stage_3_bottleneck_1.bn.running_var", "stage_3.stage_3_bottleneck_1.bn.bias", "stage_3.stage_3_bottleneck_1.bn.weight", "stage_3.stage_3_bottleneck_1.bn.running_mean", "stage_3.stage_3_bottleneck_1.conv_expand.weight", "stage_3.stage_3_bottleneck_1.bn_expand.running_var", "stage_3.stage_3_bottleneck_1.bn_expand.bias", "stage_3.stage_3_bottleneck_1.bn_expand.weight", "stage_3.stage_3_bottleneck_1.bn_expand.running_mean", "stage_3.stage_3_bottleneck_2.conv_reduce.weight", "stage_3.stage_3_bottleneck_2.bn_reduce.running_var", 
"stage_3.stage_3_bottleneck_2.bn_reduce.bias", "stage_3.stage_3_bottleneck_2.bn_reduce.weight", "stage_3.stage_3_bottleneck_2.bn_reduce.running_mean", "stage_3.stage_3_bottleneck_2.conv_conv.weight", "stage_3.stage_3_bottleneck_2.bn.running_var", "stage_3.stage_3_bottleneck_2.bn.bias", "stage_3.stage_3_bottleneck_2.bn.weight", "stage_3.stage_3_bottleneck_2.bn.running_mean", "stage_3.stage_3_bottleneck_2.conv_expand.weight", "stage_3.stage_3_bottleneck_2.bn_expand.running_var", "stage_3.stage_3_bottleneck_2.bn_expand.bias", "stage_3.stage_3_bottleneck_2.bn_expand.weight", "stage_3.stage_3_bottleneck_2.bn_expand.running_mean", "classifier.bias", "classifier.weight".
Unexpected key(s) in state_dict: ".stage_1_bottleneck_0.bn.num_batches_tracked", ".stage_1_bottleneck_2.bn.bias", ".stage_1_bottleneck_1.bn_expand.bias", ".stage_2_bottleneck_0.shortcut.shortcut_bn.num_batches_tracked", ".stage_2_bottleneck_0.bn_expand.running_var", ".stage_2_bottleneck_1.bn_expand.bias", ".stage_3_bottleneck_0.bn_expand.running_mean", ".stage_3_bottleneck_2.bn.bias", ".stage_3_bottleneck_0.bn_reduce.weight", ".stage_2_bottleneck_0.bn.weight", ".stage_2_bottleneck_0.bn.running_mean", ".stage_2_bottleneck_0.shortcut.shortcut_bn.running_mean", ".stage_3_bottleneck_0.bn_reduce.num_batches_tracked", ".stage_2_bottleneck_1.bn_expand.running_mean", ".stage_2_bottleneck_0.bn.num_batches_tracked", ".stage_2_bottleneck_2.conv_expand.weight", ".stage_1_bottleneck_2.bn_expand.weight", ".stage_2_bottleneck_1.bn_expand.weight", ".stage_2_bottleneck_0.bn_reduce.running_var", ".stage_1_bottleneck_2.bn_expand.running_var", ".stage_1_bottleneck_0.bn.running_mean", ".stage_1_bottleneck_0.bn_reduce.running_var", ".stage_1_bottleneck_0.bn_reduce.weight", ".stage_2_bottleneck_1.bn.running_var", "ight", ".stage_2_bottleneck_2.bn_reduce.running_var", ".stage_2_bottleneck_0.bn_reduce.num_batches_tracked", ".stage_3_bottleneck_0.bn.running_mean", ".stage_2_bottleneck_2.bn_expand.running_var", ".stage_1_bottleneck_0.conv_reduce.weight", ".stage_2_bottleneck_1.bn_reduce.weight", ".stage_1_bottleneck_1.bn_expand.num_batches_tracked", ".stage_2_bottleneck_2.bn_reduce.weight", ".stage_3_bottleneck_0.shortcut.shortcut_bn.bias", ".stage_3_bottleneck_2.bn.weight", ".stage_1_bottleneck_1.bn.running_var", ".stage_1_bottleneck_1.bn_reduce.weight", ".stage_1_bottleneck_0.bn_expand.weight", ".stage_2_bottleneck_2.conv_conv.weight", ".stage_1_bottleneck_1.bn_expand.running_mean", ".stage_2_bottleneck_0.bn_expand.bias", ".stage_2_bottleneck_1.bn.bias", ".stage_3_bottleneck_1.bn_expand.num_batches_tracked", ".stage_2_bottleneck_2.bn.num_batches_tracked", ".stage_1_bottleneck_2.conv_conv.weight", ".stage_3_bottleneck_0.conv_conv.weight", ".stage_2_bottleneck_1.bn_reduce.running_var", ".stage_1_bottleneck_1.bn_expand.weight", ".stage_3_bottleneck_0.bn_expand.weight", ".stage_1_bottleneck_1.bn.weight", ".stage_3_bottleneck_0.bn.weight", ".stage_3_bottleneck_2.bn_reduce.weight", ".stage_1_bottleneck_2.bn.weight", ".stage_2_bottleneck_0.bn_expand.weight", ".stage_2_bottleneck_0.shortcut.shortcut_bn.weight", ".stage_1_bottleneck_2.bn.running_mean", ".stage_1_bottleneck_0.bn.weight", "nning_mean", ".stage_1_bottleneck_0.shortcut.shortcut_bn.running_var", ".stage_3_bottleneck_1.conv_reduce.weight", ".stage_2_bottleneck_2.bn_expand.num_batches_tracked", ".stage_2_bottleneck_2.bn_expand.weight", ".stage_1_bottleneck_2.bn_reduce.bias", ".stage_3_bottleneck_2.bn_reduce.num_batches_tracked", ".stage_1_bottleneck_1.conv_expand.weight", ".stage_3_bottleneck_1.bn_expand.bias", ".stage_3_bottleneck_1.conv_conv.weight", ".stage_1_bottleneck_2.bn.num_batches_tracked", ".stage_3_bottleneck_0.shortcut.shortcut_conv.weight", ".stage_3_bottleneck_0.shortcut.shortcut_bn.num_batches_tracked", ".stage_3_bottleneck_1.bn.running_var", ".stage_2_bottleneck_2.bn.running_mean", ".stage_2_bottleneck_0.bn_expand.num_batches_tracked", ".stage_3_bottleneck_1.bn_reduce.num_batches_tracked", ".stage_3_bottleneck_0.bn.running_var", ".stage_2_bottleneck_1.bn_reduce.running_mean", ".stage_3_bottleneck_0.shortcut.shortcut_bn.weight", ".stage_1_bottleneck_0.bn.bias", ".stage_1_bottleneck_2.bn_reduce.weight", ".stage_3_bottleneck_0.conv_expand.weight", 
".stage_1_bottleneck_0.bn_reduce.num_batches_tracked", ".stage_3_bottleneck_2.bn.running_var", ".stage_3_bottleneck_2.conv_conv.weight", ".stage_3_bottleneck_2.bn_expand.running_var", ".stage_1_bottleneck_1.bn.num_batches_tracked", ".stage_3_bottleneck_0.bn.bias", ".stage_3_bottleneck_0.bn_reduce.running_mean", ".stage_2_bottleneck_0.bn_reduce.bias", ".stage_1_bottleneck_0.shortcut.shortcut_conv.weight", ".stage_2_bottleneck_2.bn.weight", ".stage_1_bottleneck_0.shortcut.shortcut_bn.running_mean", ".stage_3_bottleneck_1.bn_reduce.running_var", ".stage_2_bottleneck_0.bn_expand.running_mean", ".stage_2_bottleneck_1.bn_reduce.num_batches_tracked", ".stage_2_bottleneck_1.conv_reduce.weight", ".stage_2_bottleneck_0.bn_reduce.running_mean", ".stage_1_bottleneck_1.bn_expand.running_var", ".stage_1_bottleneck_1.bn_reduce.running_var", ".stage_3_bottleneck_1.bn_reduce.running_mean", ".stage_2_bottleneck_0.shortcut.shortcut_bn.bias", ".stage_2_bottleneck_2.bn_expand.running_mean", "ier.bias", ".stage_3_bottleneck_0.bn_expand.num_batches_tracked", ".stage_2_bottleneck_1.bn_expand.running_var", ".stage_3_bottleneck_0.bn_expand.bias", "3x3.weight", ".stage_3_bottleneck_1.bn.weight", ".stage_2_bottleneck_0.bn.bias", ".stage_1_bottleneck_0.shortcut.shortcut_bn.weight", ".stage_1_bottleneck_2.bn.running_var", ".stage_2_bottleneck_2.bn.bias", ".stage_2_bottleneck_2.conv_reduce.weight", ".stage_1_bottleneck_0.bn.running_var", ".stage_3_bottleneck_2.bn_expand.num_batches_tracked", ".stage_3_bottleneck_1.bn.num_batches_tracked", ".stage_1_bottleneck_0.bn_expand.running_mean", ".stage_3_bottleneck_1.bn_reduce.bias", ".stage_2_bottleneck_2.bn_expand.bias", ".stage_3_bottleneck_1.bn.bias", ".stage_2_bottleneck_2.bn_reduce.bias", ".stage_2_bottleneck_0.conv_conv.weight", ".stage_1_bottleneck_2.bn_expand.num_batches_tracked", ".stage_1_bottleneck_1.bn.bias", ".stage_2_bottleneck_1.bn.weight", ".stage_2_bottleneck_2.bn.running_var", ".stage_3_bottleneck_0.bn.num_batches_tracked", ".stage_1_bottleneck_0.conv_expand.weight", ".stage_1_bottleneck_1.conv_reduce.weight", ".stage_3_bottleneck_2.bn_expand.weight", ".stage_2_bottleneck_1.conv_conv.weight", ".stage_1_bottleneck_1.bn_reduce.num_batches_tracked", ".stage_1_bottleneck_2.bn_expand.bias", ".stage_2_bottleneck_1.conv_expand.weight", ".stage_3_bottleneck_0.conv_reduce.weight", ".stage_1_bottleneck_0.bn_expand.num_batches_tracked", ".stage_2_bottleneck_1.bn_expand.num_batches_tracked", ".stage_3_bottleneck_2.conv_expand.weight", ".stage_2_bottleneck_1.bn.num_batches_tracked", "ier.weight", ".stage_3_bottleneck_2.bn_expand.bias", ".stage_3_bottleneck_2.bn_reduce.bias", ".stage_3_bottleneck_2.bn.num_batches_tracked", ".stage_1_bottleneck_2.conv_expand.weight", "as", ".stage_2_bottleneck_2.bn_reduce.num_batches_tracked", ".stage_1_bottleneck_2.conv_reduce.weight", ".stage_3_bottleneck_1.conv_expand.weight", ".stage_3_bottleneck_2.conv_reduce.weight", ".stage_2_bottleneck_0.bn_reduce.weight", ".stage_3_bottleneck_0.shortcut.shortcut_bn.running_mean", ".stage_1_bottleneck_0.bn_reduce.bias", ".stage_1_bottleneck_2.bn_reduce.running_mean", ".stage_2_bottleneck_1.bn.running_mean", ".stage_1_bottleneck_0.shortcut.shortcut_bn.bias", ".stage_3_bottleneck_0.bn_reduce.running_var", "m_batches_tracked", ".stage_1_bottleneck_0.bn_expand.bias", ".stage_1_bottleneck_2.bn_expand.running_mean", ".stage_3_bottleneck_0.bn_expand.running_var", ".stage_2_bottleneck_0.conv_expand.weight", ".stage_2_bottleneck_0.bn.running_var", ".stage_3_bottleneck_1.bn_expand.weight", 
".stage_1_bottleneck_1.bn.running_mean", ".stage_3_bottleneck_2.bn.running_mean", ".stage_3_bottleneck_2.bn_expand.running_mean", ".stage_1_bottleneck_0.conv_conv.weight", ".stage_3_bottleneck_1.bn_expand.running_mean", ".stage_2_bottleneck_0.conv_reduce.weight", ".stage_2_bottleneck_1.bn_reduce.bias", ".stage_1_bottleneck_0.bn_reduce.running_mean", ".stage_1_bottleneck_2.bn_reduce.running_var", ".stage_1_bottleneck_0.shortcut.shortcut_bn.num_batches_tracked", ".stage_1_bottleneck_2.bn_reduce.num_batches_tracked", ".stage_1_bottleneck_1.bn_reduce.running_mean", ".stage_3_bottleneck_1.bn_expand.running_var", ".stage_2_bottleneck_2.bn_reduce.running_mean", ".stage_3_bottleneck_0.shortcut.shortcut_bn.running_var", ".stage_3_bottleneck_2.bn_reduce.running_mean", "nning_var", ".stage_3_bottleneck_1.bn_reduce.weight", ".stage_3_bottleneck_2.bn_reduce.running_var", ".stage_2_bottleneck_0.shortcut.shortcut_conv.weight", ".stage_1_bottleneck_0.bn_expand.running_var", ".stage_2_bottleneck_0.shortcut.shortcut_bn.running_var", ".stage_3_bottleneck_0.bn_reduce.bias", ".stage_1_bottleneck_1.conv_conv.weight", ".stage_1_bottleneck_1.bn_reduce.bias", ".stage_3_bottleneck_1.bn.running_mean".
Process finished with exit code 1
Hi, I was trying to run inference on the trained model using the test.py script, but first there's an error with the ordered dict's iteritems() method, which should be changed to items(), and then there are a lot of mismatches when loading the weights into the model.
Any ideas how to resolve those?
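The pattern in the unexpected keys above (leading dots, fragments such as "3x3.weight", "ier.bias", "nning_mean") indicates that a fixed seven characters were cut from every key name, which is what happens if a "module." prefix from nn.DataParallel is stripped unconditionally from a checkpoint that was saved without it. A minimal sketch of a safer load (variable names are assumptions, not the repo's exact test.py):

    import torch

    state = torch.load(args.load)
    # strip the nn.DataParallel prefix only when it is actually present,
    # and use items() instead of the Python-2-only iteritems()
    state = {(k[len('module.'):] if k.startswith('module.') else k): v
             for k, v in state.items()}
    net.load_state_dict(state)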
Hi,
May I ask you a question? Why are the output channels of conv_reduce four times the number of input channels and how it can play the role of reducing dimensions before 3*3 convolution?
CifarResNeXt (
  (conv_1_3x3): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  (bn_1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
  (stage_1): Sequential (
    (stage_1_bottleneck_0): ResNeXtBottleneck (
      (conv_reduce): Conv2d(64, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_reduce): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
      (conv_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=8, bias=False)
      (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
      (conv_expand): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_expand): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
      (shortcut): Sequential (
        (shortcut_conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (shortcut_bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
      )
    )
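Regarding the question above: conv_reduce does widen 64 to 512 here, but conv_conv runs with groups=8, so each of the 8 paths only convolves 512/8 = 64 channels, and the grouped layer costs an eighth of a dense 512-to-512 convolution. A small check of the parameter counts (plain PyTorch, not repo code):

    import torch.nn as nn

    dense   = nn.Conv2d(512, 512, kernel_size=3, padding=1, bias=False)
    grouped = nn.Conv2d(512, 512, kernel_size=3, padding=1, groups=8, bias=False)
    print(sum(p.numel() for p in dense.parameters()))    # 2359296 = 512*512*3*3
    print(sum(p.numel() for p in grouped.parameters()))  # 294912  = 512*(512/8)*3*3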
Hi,
May I know the initial learning rate used in the CIFAR-10 and CIFAR-100 experiments (-b 128 on 2 GPU cards)? The default value of 0.1, or the sample value of 0.05? Many thanks in advance!
In the README, the link to the CIFAR trained models (https://mega.nz/#F!wbJXDS6b!YN3hCDi1tT3SdNFrLPm7mA) is broken. Could you check and share the pretrained models?
Thanks :)
Hi,
May I ask about your final performance? The curves are a little confusing.
I also implemented a different version (https://github.com/D-X-Y/ResNeXt); my results are a little lower than the official code, about 0.2 for CIFAR-10 and 1.0 for CIFAR-100.
I really want to know what causes the differences.
I also tried training resnet20/32/44/56; I'm pretty sure the model architecture is the same as the official code, but I obtain much lower accuracy.
Would you mind giving me some suggestions?
Hi, compared to ResNet, what is the GPU memory usage of ResNeXt? Will it take more GPU memory? Thanks.
~/ResNeXt.pytorch0$ python test.py ~/DATASETS/cifar.python cifar10 --ngpu 1 --load ./snapshots/model.pytorch --test_bs 128
Files already downloaded and verified
Files already downloaded and verified
While copying the parameter named stage_1.stage_1_bottleneck_0.conv_reduce.weight, whose dimensions in the model are torch.Size([32, 64, 1, 1]) and whose dimensions in the checkpoint are torch.Size([512, 64, 1, 1]), ...
Traceback (most recent call last):
File "test.py", line 114, in
test()
File "test.py", line 79, in test
net.load_state_dict(loaded_state_dict)
File "/home/engineer/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 360, in load_state_dict
own_state[name].copy_(param)
RuntimeError: invalid argument 2: sizes do not match at /opt/conda/conda-bld/pytorch_1503970438496/work/torch/lib/THC/generic/THCTensorCopy.c:95
Please look into this issue; I am using PyTorch 0.2.0. Thanks.
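A 32-vs-512 mismatch in conv_reduce suggests the network was rebuilt at test time with different width hyperparameters than the ones used for training. A hypothetical invocation, assuming test.py accepts the same architecture flags as train.py (check its argparse options) and matching them to the training values:

    python test.py ~/DATASETS/cifar.python cifar10 --ngpu 1 \
        --load ./snapshots/model.pytorch --test_bs 128 \
        --cardinality 8 --base_width 64 --widen_factor 4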