deep-learning-random-explore's People

Contributors

pppw

deep-learning-random-explore's Issues

PNAS-5 Large cannot be split

There is a weird StopIteration error that happens only with this model.

Example:

from fastai.vision import *
from fastai.vision.learner import model_meta

def identity(x): return x

def pnasnet5large(pretrained=False):
    pretrained = 'imagenet' if pretrained else None
    model = models.cadene_models.pretrainedmodels.pnasnet5large(pretrained=pretrained, num_classes=1000)
    model.logits = identity
    return nn.Sequential(model)

model_meta[pnasnet5large] = {'cut': None,
                             'split': lambda m: (list(m[0][0].children())[8], m[1])}

learn = cnn_learner(data, base_arch=pnasnet5large, pretrained=False)

Error:


---------------------------------------------------------------------------
StopIteration                             Traceback (most recent call last)
<ipython-input-79-2126a555d117> in <module>
----> 1 learn = cnn_learner(data, base_arch=pnasnet5large, pretrained=False)

~/anaconda3/envs/fastai-v1/lib/python3.7/site-packages/fastai/vision/learner.py in cnn_learner(data, base_arch, cut, pretrained, lin_ftrs, ps, custom_head, split_on, bn_final, init, concat_pool, **kwargs)
     95     meta = cnn_config(base_arch)
     96     model = create_cnn_model(base_arch, data.c, cut, pretrained, lin_ftrs, ps=ps, custom_head=custom_head,
---> 97         split_on=split_on, bn_final=bn_final, concat_pool=concat_pool)
     98     learn = Learner(data, model, **kwargs)
     99     learn.split(split_on or meta['split'])

~/anaconda3/envs/fastai-v1/lib/python3.7/site-packages/fastai/vision/learner.py in create_cnn_model(base_arch, nc, cut, pretrained, lin_ftrs, ps, custom_head, split_on, bn_final, concat_pool)
     81         split_on:Optional[SplitFuncOrIdxList]=None, bn_final:bool=False, concat_pool:bool=True):
     82     "Create custom convnet architecture"
---> 83     body = create_body(base_arch, pretrained, cut)
     84     if custom_head is None:
     85         nf = num_features_model(nn.Sequential(*body.children())) * (2 if concat_pool else 1)

~/anaconda3/envs/fastai-v1/lib/python3.7/site-packages/fastai/vision/learner.py in create_body(arch, pretrained, cut)
     57     if cut is None:
     58         ll = list(enumerate(model.children()))
---> 59         cut = next(i for i,o in reversed(ll) if has_pool_type(o))
     60     if   isinstance(cut, int):      return nn.Sequential(*list(model.children())[:cut])
     61     elif isinstance(cut, Callable): return cut(model)

StopIteration: 
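
A possible workaround (an assumption, not a verified fix): the traceback shows that create_body only runs the pool-layer search that raises StopIteration when cut is None, and that it accepts a callable cut. Since pnasnet5large() above already replaces the classifier with an identity, a callable cut that simply returns the wrapped model should skip the failing search. A minimal sketch:

from fastai.vision import *
from fastai.vision.learner import model_meta

# Sketch of a workaround: a callable cut bypasses create_body's pool search,
# which is what raises StopIteration for this architecture. The whole wrapped
# model serves as the body because its classifier is already an identity.
model_meta[pnasnet5large] = {'cut': lambda m: m,
                             'split': lambda m: (list(m[0][0].children())[8], m[1])}

learn = cnn_learner(data, base_arch=pnasnet5large, pretrained=False)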
 

Object detection models

PyTorch recently added pretrained object detection models such as Faster R-CNN
(torchvision.models.detection.faster_rcnn).
Do you know how to integrate them with fastai?

Doubt regarding inception v3 monkey patching

In the cnn_archs notebook you mentioned we need to monkey patch the forward pass (excluding the final layers). But it seems like you've also excluded the initial transform_input part as well.

if self.transform_input:
    x = x.clone()
    x[:, 0] = x[:, 0] * (0.229 / 0.5) + (0.485 - 0.5) / 0.5
    x[:, 1] = x[:, 1] * (0.224 / 0.5) + (0.456 - 0.5) / 0.5
    x[:, 2] = x[:, 2] * (0.225 / 0.5) + (0.406 - 0.5) / 0.5

I tried adding these lines to your notebook and it ran without any issue, but is there any specific reason to skip this part?
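
For what it's worth, one way to keep that step while still monkey patching the rest is to re-apply the same formula as a standalone first module. A minimal sketch (the module name is made up here):

import torch.nn as nn

class InceptionInputTransform(nn.Module):
    "Re-applies Inception v3's transform_input step (same formula as above)."
    def forward(self, x):
        x = x.clone()
        x[:, 0] = x[:, 0] * (0.229 / 0.5) + (0.485 - 0.5) / 0.5
        x[:, 1] = x[:, 1] * (0.224 / 0.5) + (0.456 - 0.5) / 0.5
        x[:, 2] = x[:, 2] * (0.225 / 0.5) + (0.406 - 0.5) / 0.5
        return x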

How can we cut the last few layers of nasnet?

Hi,

I appreciate this GitHub repo.
To cut the last few layers of nasnet,
you recommend replacing them with an identity function.
I looked into it, but I can't get this to work.
Could you tell me how to do it?

Thanks in advance for your help.
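
For reference, here is a sketch in the spirit of the pnasnet5large example above. It assumes the cadene nasnetalarge exposes a logits method that applies the classifier head, so replacing it with an identity effectively cuts the last few layers:

import torch.nn as nn
import pretrainedmodels

def identity(x): return x

def nasnetalarge(pretrained=True):
    pretrained = 'imagenet' if pretrained else None
    # num_classes=1000 matches the 'imagenet' pretrained settings
    model = pretrainedmodels.__dict__['nasnetalarge'](num_classes=1000, pretrained=pretrained)
    model.logits = identity  # the final layers now pass features through unchanged
    return nn.Sequential(model)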

Why is my learn.summary() wrong?

Hi, thanks for creating these examples!

I used the cadene alexnet and dog/cat breed dataset to create a learner. learn.model prints the right model, but learn.summary() prints the wrong model head. Any idea what's going wrong?

pretrained='imagenet'

def alexnet_cadene(*args):
    model = pretrainedmodels.__dict__['alexnet'](pretrained=pretrained)
    sz = pretrainedmodels.pretrained_settings['alexnet']['imagenet']['input_size'][-1]
    data.sz = data.one_batch()[0].size()[-1]
    if data.sz != sz:
        raise ValueError(f'data size should be {sz} but is instead {data.sz}')    
    model.last_linear.out_features = data.c
    all_layers = list(model.children())
    model = nn.Sequential(all_layers[0], nn.Sequential(Flatten(), *all_layers[1:]))    
    return model

arch_summary(lambda _: alexnet_cadene()) # overall
arch_summary(lambda _: next(alexnet_cadene().children())) # body
arch_summary(lambda _: list(alexnet_cadene().children())[1]) # head

learn = create_cnn(data, alexnet_cadene, custom_head=children(alexnet_cadene())[1], metrics=error_rate,
                   split_on=lambda m: (m[0][0][6], m[1], m[1][7]))

get_groups(nn.Sequential(*learn.model[0][0], *learn.model[1]), learn.layer_groups)

print(learn.layer_groups)

print(learn.model) 
print(learn.summary()) # why is the model summary wrong?
# last linear layer says 1000 classes when my dataset has 37
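
One possible cause (an assumption, not verified against this notebook): assigning to out_features only changes the attribute, not the underlying weight matrix, so the linear layer still produces 1000 outputs and the summary reports them. Replacing the layer resizes it:

# Inside alexnet_cadene, instead of mutating the attribute:
model.last_linear = nn.Linear(model.last_linear.in_features, data.c)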

Inception v3

Hi, were you able to get a Learner to work for either torchvision inception_v3 or cadene inceptionv3? I'm able to use Learner directly rather than create_cnn, but learn.summary() fails with AttributeError: 'NoneType' object has no attribute 'shape'

unet_learner giving error with seresnext101

Hi,
When I try to use fastai's unet_learner with se_resnext101_32x4d, I get the error below:

---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
in
1 # Fit one cycle of 6 epochs with max lr of 1e-3
----> 2 learn.fit_one_cycle(6)

/opt/conda/lib/python3.6/site-packages/fastai/train.py in fit_one_cycle(learn, cyc_len, max_lr, moms, div_factor, pct_start, final_div, wd, callbacks, tot_epochs, start_epoch)
20 callbacks.append(OneCycleScheduler(learn, max_lr, moms=moms, div_factor=div_factor, pct_start=pct_start,
21 final_div=final_div, tot_epochs=tot_epochs, start_epoch=start_epoch))
---> 22 learn.fit(cyc_len, max_lr, wd=wd, callbacks=callbacks)
23
24 def lr_find(learn:Learner, start_lr:Floats=1e-7, end_lr:Floats=10, num_it:int=100, stop_div:bool=True, wd:float=None):

/opt/conda/lib/python3.6/site-packages/fastai/basic_train.py in fit(self, epochs, lr, wd, callbacks)
198 callbacks = [cb(self) for cb in self.callback_fns + listify(defaults.extra_callback_fns)] + listify(callbacks)
199 if defaults.extra_callbacks is not None: callbacks += defaults.extra_callbacks
--> 200 fit(epochs, self, metrics=self.metrics, callbacks=self.callbacks+callbacks)
201
202 def create_opt(self, lr:Floats, wd:Floats=0.)->None:

/opt/conda/lib/python3.6/site-packages/fastai/basic_train.py in fit(epochs, learn, callbacks, metrics)
99 for xb,yb in progress_bar(learn.data.train_dl, parent=pbar):
100 xb, yb = cb_handler.on_batch_begin(xb, yb)
--> 101 loss = loss_batch(learn.model, xb, yb, learn.loss_func, learn.opt, cb_handler)
102 if cb_handler.on_batch_end(loss): break
103

/opt/conda/lib/python3.6/site-packages/fastai/basic_train.py in loss_batch(model, xb, yb, loss_func, opt, cb_handler)
24 if not is_listy(xb): xb = [xb]
25 if not is_listy(yb): yb = [yb]
---> 26 out = model(*xb)
27 out = cb_handler.on_loss_begin(out)
28

/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)

/opt/conda/lib/python3.6/site-packages/fastai/layers.py in forward(self, x)
134 for l in self.layers:
135 res.orig = x
--> 136 nres = l(res)
137 # We have to remove res.orig to avoid hanging refs and therefore memory leaks
138 res.orig = None

/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)

/opt/conda/lib/python3.6/site-packages/fastai/layers.py in forward(self, x)
148 "Merge a shortcut with the result of the module by adding them or concatenating thme if dense=True."
149 def __init__(self, dense:bool=False): self.dense=dense
--> 150 def forward(self, x): return torch.cat([x,x.orig], dim=1) if self.dense else (x+x.orig)
151
152 def res_block(nf, dense:bool=False, norm_type:Optional[NormType]=NormType.Batch, bottle:bool=False, **conv_kwargs):

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 256 and 128 in dimension 2 at /opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/generic/THCTensorMath.cu:71
Below is the code I am using:

SZ = 256
BS = 16

MODEL_PATH = "/kaggle/working/"

data = (SegmentationItemList.from_folder(path=path/'train')
        .split_by_rand_pct(0.2)
        .label_from_func(lambda x: str(x).replace('train', 'masks'), classes=[0, 1])
        .add_test((path/'test').ls(), label=None)
        .transform(get_transforms(), size=SZ, tfm_y=True)
        .databunch(path=Path('.'), bs=BS)
        .normalize(imagenet_stats))

def se_resnext101(pretrained=True):
    pretrained = 'imagenet' if pretrained else None
    model = pretrainedmodels.__dict__['se_resnext101_32x4d'](pretrained=pretrained)
    model.load_state_dict(torch.load("../input/preptrainedmode-weight/se_resnext101_32x4d-3b2fe3d8.pth"))
    return model

learn = unet_learner(data, se_resnext101, pretrained=True, path=MODEL_PATH,
                     cut=-2, split_on=lambda m: (m[0][4], m[1]), metrics=[dice])

learn.fit_one_cycle(6)  # this line raises the error above

Thanks in advance for your help!

name 'arch_summary' is not defined

My code in a Jupyter notebook:

from torchvision.models import *
import pretrainedmodels

from fastai.vision import *
from fastai.vision.models import *
from fastai.vision.learner import model_meta

from utils import *
import sys

def se_resnet50(pretrained=False):
    pretrained = 'imagenet' if pretrained else None
    model = pretrainedmodels.se_resnet50(pretrained=pretrained)
    return model

arch_summary(se_resnet50)

got the error below:

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-72-da671789676a> in <module>
     14     return model
     15 
---> 16 arch_summary(se_resnet50)

NameError: name 'arch_summary' is not defined
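
Note that arch_summary is defined in this repo's notebooks, not in fastai, so it has to be defined (or the repo's helper file placed next to the notebook) before use. A minimal stand-in, assuming fastai v1's flatten_model is available via the star import:

from fastai.vision import *  # provides flatten_model

def arch_summary(arch):
    # Hypothetical re-implementation: list each top-level child and how many
    # leaf layers it contains, to help decide where to cut/split the model.
    model = arch(False)
    total = 0
    for i, layer in enumerate(model.children()):
        n = len(flatten_model(layer))
        total += n
        print(f'({i}) {layer.__class__.__name__:<12} {n:<4} layers (total: {total})')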

AssertionError when using nasnetalarge + fastai

Hi,

When I try to use nasnetalarge I get an AssertionError. Below are the code and error details.

Code

def nasnetalarge(pretrained=True):
    pretrained = 'imagenet' if pretrained else None
    model = pretrainedmodels.__dict__['nasnetalarge'](pretrained=pretrained)
    return model

This line causes the error:

learn = create_cnn(data, nasnetalarge, pretrained=True, path=MODEL_PATH, cut=-2, split_on=lambda m: (m[0][4], m[1]))

Error:
AssertionError Traceback (most recent call last)
in ()
1 learn = create_cnn(data, nasnetalarge, pretrained=True,path=MODEL_PATH,
----> 2 cut=-2, split_on=lambda m: (m[0][4], m[1]))

/opt/conda/lib/python3.6/site-packages/fastai/vision/learner.py in create_cnn(data, arch, cut, pretrained, lin_ftrs, ps, custom_head, split_on, bn_final, **kwargs)
53 "Build convnet style learners."
54 meta = cnn_config(arch)
---> 55 body = create_body(arch, pretrained, cut)
56 nf = num_features_model(body) * 2
57 head = custom_head or create_head(nf, data.c, lin_ftrs, ps=ps, bn_final=bn_final)

/opt/conda/lib/python3.6/site-packages/fastai/vision/learner.py in create_body(arch, pretrained, cut, body_fn)
30 def create_body(arch:Callable, pretrained:bool=True, cut:Optional[int]=None, body_fn:Callable[[nn.Module],nn.Module]=None):
31 "Cut off the body of a typically pretrained model at cut or as specified by body_fn."
---> 32 model = arch(pretrained)
33 if not cut and not body_fn: cut = cnn_config(arch)['cut']
34 return (nn.Sequential(*list(model.children())[:cut]) if cut

in nasnetalarge(pretrained)
1 def nasnetalarge(pretrained=True):
2 pretrained = 'imagenet' if pretrained else None
----> 3 model = pretrainedmodels.__dict__['nasnetalarge'](pretrained=pretrained)
4 return model

/opt/conda/lib/python3.6/site-packages/pretrainedmodels/models/nasnet.py in nasnetalarge(num_classes, pretrained)
613 settings = pretrained_settings['nasnetalarge'][pretrained]
614 assert num_classes == settings['num_classes'],
--> 615 "num_classes should be {}, but is {}".format(settings['num_classes'], num_classes)
616
617 # both 'imagenet'&'imagenet+background' are loaded from same parameters

AssertionError: num_classes should be 1000, but is 1001
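
A likely fix (assuming the pretrainedmodels nasnetalarge constructor accepts num_classes): its default is 1001 ('imagenet+background'), while the 'imagenet' weights expect 1000, which is exactly what the assertion reports. Passing num_classes=1000 explicitly should satisfy it:

def nasnetalarge(pretrained=True):
    pretrained = 'imagenet' if pretrained else None
    # default num_classes is 1001 ('imagenet+background'); 'imagenet' needs 1000
    model = pretrainedmodels.__dict__['nasnetalarge'](num_classes=1000, pretrained=pretrained)
    return model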

On a separate note, from where can I call arch_summary?
arch_summary(resnet34)
I am getting this error:
NameError Traceback (most recent call last)
in ()
----> 1 arch_summary(resnet34)

NameError: name 'arch_summary' is not defined

Use pretrainedmodels + fastai in Kaggle kernels

Hi,

When I tried using resnext101 in a Kaggle kernel:

learn = create_cnn(data, resnext101_32x4d, pretrained=False,
                  cut=-2, split_on=lambda m: (m[0][6], m[1]))

I got the error below:

---------------------------------------------------------------------------
OSError Traceback (most recent call last)
in ()
1 learn = create_cnn(data, resnext101_32x4d, pretrained=False,
----> 2 cut=-2, split_on=lambda m: (m[0][6], m[1]))

/opt/conda/lib/python3.6/site-packages/fastai/vision/learner.py in create_cnn(data, arch, cut, pretrained, lin_ftrs, ps, custom_head, split_on, bn_final, **kwargs)
57 head = custom_head or create_head(nf, data.c, lin_ftrs, ps=ps, bn_final=bn_final)
58 model = nn.Sequential(body, head)
---> 59 learn = Learner(data, model, **kwargs)
60 learn.split(ifnone(split_on,meta['split']))
61 if pretrained: learn.freeze()

in __init__(self, data, model, opt_func, loss_func, metrics, true_wd, bn_wd, wd, train_bn, path, model_dir, callback_fns, callbacks, layer_groups)

/opt/conda/lib/python3.6/site-packages/fastai/basic_train.py in __post_init__(self)
144 "Setup path,metrics, callbacks and ensure model directory exists."
145 self.path = Path(ifnone(self.path, self.data.path))
--> 146 (self.path/self.model_dir).mkdir(parents=True, exist_ok=True)
147 self.model = self.model.to(self.data.device)
148 self.loss_func = ifnone(self.loss_func, self.data.loss_func)

/opt/conda/lib/python3.6/pathlib.py in mkdir(self, mode, parents, exist_ok)
1244 self._raise_closed()
1245 try:
-> 1246 self._accessor.mkdir(self, mode)
1247 except FileNotFoundError:
1248 if not parents or self.parent == self:

/opt/conda/lib/python3.6/pathlib.py in wrapped(pathobj, *args)
385 @functools.wraps(strfunc)
386 def wrapped(pathobj, *args):
--> 387 return strfunc(str(pathobj), *args)
388 return staticmethod(wrapped)
389

OSError: [Errno 30] Read-only file system: '../input/models'

What is the way around this?
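
The traceback shows the Learner trying to create its model directory under the data path, which lives on the read-only ../input mount. One possible workaround (assuming create_cnn forwards path to Learner via **kwargs, as the traceback suggests) is to point it at a writable location such as /kaggle/working:

learn = create_cnn(data, resnext101_32x4d, pretrained=False,
                   cut=-2, split_on=lambda m: (m[0][6], m[1]),
                   path='/kaggle/working')  # writable, unlike ../input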

EfficientNet implementation in pretrainedmodels

Hi,
Can we implement the new EfficientNet in this package?
Details: https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html

I tried implementing it with fastai, but I am getting an error.
Below is the code:
!pip install efficientnet_pytorch
from efficientnet_pytorch import EfficientNet

def efficientnet(model_name='efficientnet-b0', **kwargs):
    return EfficientNet.from_pretrained(model_name).to(device)

# getting error here
learn = cnn_learner(data_large, efficientnet, metrics=accuracy, model_dir=MODEL_PATH)

Error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in ()
----> 1 learn = cnn_learner(data_large, efficientnet, metrics=accuracy, model_dir=MODEL_PATH)

/opt/conda/lib/python3.6/site-packages/fastai/vision/learner.py in cnn_learner(data, base_arch, cut, pretrained, lin_ftrs, ps, custom_head, split_on, bn_final, init, concat_pool, **kwargs)
95 meta = cnn_config(base_arch)
96 model = create_cnn_model(base_arch, data.c, cut, pretrained, lin_ftrs, ps=ps, custom_head=custom_head,
---> 97 split_on=split_on, bn_final=bn_final, concat_pool=concat_pool)
98 learn = Learner(data, model, **kwargs)
99 learn.split(split_on or meta['split'])

/opt/conda/lib/python3.6/site-packages/fastai/vision/learner.py in create_cnn_model(base_arch, nc, cut, pretrained, lin_ftrs, ps, custom_head, split_on, bn_final, concat_pool)
81 split_on:Optional[SplitFuncOrIdxList]=None, bn_final:bool=False, concat_pool:bool=True):
82 "Create custom convnet architecture"
---> 83 body = create_body(base_arch, pretrained, cut)
84 if custom_head is None:
85 nf = num_features_model(nn.Sequential(*body.children())) * (2 if concat_pool else 1)

/opt/conda/lib/python3.6/site-packages/fastai/vision/learner.py in create_body(arch, pretrained, cut)
53 def create_body(arch:Callable, pretrained:bool=True, cut:Optional[Union[int, Callable]]=None):
54 "Cut off the body of a typically pretrained model at cut (int) or cut the model as specified by cut(model) (function)."
---> 55 model = arch(pretrained)
56 cut = ifnone(cut, cnn_config(arch)['cut'])
57 if cut is None:

in efficientnet(model_name, **kwargs)
1 def efficientnet(model_name='efficientnet-b0',**kwargs):
----> 2 return EfficientNet.from_pretrained(model_name).to(device)

/opt/conda/lib/python3.6/site-packages/efficientnet_pytorch/model.py in from_pretrained(cls, model_name)
185 @classmethod
186 def from_pretrained(cls, model_name):
--> 187 model = EfficientNet.from_name(model_name)
188 load_pretrained_weights(model, model_name)
189 return model

/opt/conda/lib/python3.6/site-packages/efficientnet_pytorch/model.py in from_name(cls, model_name, override_params)
179 @classmethod
180 def from_name(cls, model_name, override_params=None):
--> 181 cls._check_model_name_is_valid(model_name)
182 blocks_args, global_params = get_model_params(model_name, override_params)
183 return EfficientNet(blocks_args, global_params)

/opt/conda/lib/python3.6/site-packages/efficientnet_pytorch/model.py in _check_model_name_is_valid(cls, model_name, also_need_pretrained_weights)
201 num_models = 4 if also_need_pretrained_weights else 8
202 valid_models = ['efficientnet_b'+str(i) for i in range(num_models)]
--> 203 if model_name.replace('-','_') not in valid_models:
204 raise ValueError('model_name should be one of: ' + ', '.join(valid_models))

AttributeError: 'bool' object has no attribute 'replace'
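
The traceback suggests a likely cause: cnn_learner calls the architecture as arch(pretrained), so the wrapper's first positional argument receives a bool, which is then used as the model name. A sketch of a wrapper whose first argument is the pretrained flag (an assumption, not a verified fix):

def efficientnet(pretrained=True, model_name='efficientnet-b0', **kwargs):
    # cnn_learner calls arch(pretrained), so accept the flag first and keep
    # the model name as a keyword argument.
    if pretrained:
        return EfficientNet.from_pretrained(model_name)
    return EfficientNet.from_name(model_name)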
