deepvoltaire / autoaugment

License: MIT License

AutoAugment - Learning Augmentation Policies from Data

Unofficial implementation of the ImageNet, CIFAR10 and SVHN augmentation policies learned by AutoAugment, described in this Google AI blog post.

Update (2018-07-13): I wrote a blog post about AutoAugment and Double Transfer Learning.

Tested with Python 3.6. Requires pillow>=5.0.0.

Examples of the best ImageNet Policy


Example

from PIL import Image
from autoaugment import ImageNetPolicy

image = Image.open(path)
policy = ImageNetPolicy()
transformed = policy(image)

To see examples of all operations and magnitudes applied to images, take a look at AutoAugment_Exploration.ipynb.

Example as a PyTorch Transform - ImageNet

from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder
from autoaugment import ImageNetPolicy

data = ImageFolder(rootdir, transform=transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    ImageNetPolicy(),
    transforms.ToTensor(),
    transforms.Normalize(...)]))
loader = DataLoader(data, ...)

Example as a PyTorch Transform - CIFAR10

from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder
from autoaugment import CIFAR10Policy

data = ImageFolder(rootdir, transform=transforms.Compose([
    transforms.RandomCrop(32, padding=4, fill=128),  # fill parameter needs torchvision installed from source
    transforms.RandomHorizontalFlip(),
    CIFAR10Policy(),
    transforms.ToTensor(),
    Cutout(n_holes=1, length=16),  # https://github.com/uoguelph-mlrg/Cutout/blob/master/util/cutout.py
    transforms.Normalize(...)]))
loader = DataLoader(data, ...)

Example as a PyTorch Transform - SVHN

from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder
from autoaugment import SVHNPolicy

data = ImageFolder(rootdir, transform=transforms.Compose([
    SVHNPolicy(),
    transforms.ToTensor(),
    Cutout(n_holes=1, length=20),  # https://github.com/uoguelph-mlrg/Cutout/blob/master/util/cutout.py
    transforms.Normalize(...)]))
loader = DataLoader(data, ...)

Results with AutoAugment

Generalizable Data Augmentations

From the paper: "Finally, we show that policies found on one task can generalize well across different models and datasets. For example, the policy found on ImageNet leads to significant improvements on a variety of FGVC datasets. Even on datasets for which fine-tuning weights pre-trained on ImageNet does not help significantly [26], e.g. Stanford Cars [27] and FGVC Aircraft [28], training with the ImageNet policy reduces test set error by 1.16% and 1.76%, respectively. This result suggests that transferring data augmentation policies offers an alternative method for transfer learning."

CIFAR 10

CIFAR10 Results

CIFAR 100

CIFAR100 Results

ImageNet

ImageNet Results

SVHN

SVHN Results

Fine Grained Visual Classification Datasets

Fine Grained Visual Classification Results

Contributors

alihassanijr, deepvoltaire

Issues

How to train a model?

How should the policy given in the article be used to train a model, and what should the training process look like? The test accuracy of the model I trained following my own approach is very low. Please advise.

Pretrained imagenet models?

Hi,

Would it be possible to release the pretrained ImageNet AutoAugment models? In particular, the ResNet-50 one would be very helpful. I am looking to use it for further downstream work.

Thanks!

Why do you divide the magnitude of the `translate` operation by 331?

Hi,

Thanks a lot for publishing this code base!

"translateY": np.linspace(0, 150 / 331, 10),

I noticed that in this line the magnitude of the translate operation is divided by 331. Could you explain why you implemented it this way?

By the way, do the magnitudes and their explanations in the paper on learning augmentations for object detection mean the same thing as in this paper? Could I simply modify the combination of policies in this code base to implement the augmentation method proposed in that paper?

Looking forward to your guidance :)
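One plausible reading (an assumption on my part, not a maintainer answer): the paper specifies translate magnitudes in pixels, up to 150 px, at the 331x331 resolution used in its ImageNet experiments, so dividing by 331 expresses the magnitude as a fraction of image width and makes the policy resolution-independent. A minimal sketch of that interpretation:

```python
import numpy as np

# Assumption: 150 px of translation at 331x331 becomes the fraction
# 150/331, which can then be rescaled to any input resolution.
magnitudes = np.linspace(0, 150 / 331, 10)

def translate_pixels(fraction, image_width):
    """Convert a fractional magnitude back into pixels for a given size."""
    return fraction * image_width

max_shift_331 = translate_pixels(magnitudes[-1], 331)  # back to ~150 px
max_shift_224 = translate_pixels(magnitudes[-1], 224)  # ~101.5 px at 224
```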

Can't pickle local object 'SubPolicy.__init__.<locals>.<lambda>'

Hi, I ran into a problem when running AutoAugment and couldn't find any solution via Google.

My environment differs from the tested one; I hope it is not a complex compatibility issue.

My environment is as below:
python: 3.7
pytorch: 1.10
Pillow: 5.4.1

File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 469, in __init__
    w.start()
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/process.py", line 112, in start
    self._popen = self._Popen(self)
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
    self._launch(process_obj)
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'SubPolicy.__init__.<locals>.<lambda>'
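The error occurs because DataLoader workers pickle the dataset, and therefore the policy, which fails on the lambdas that SubPolicy stores. Common workarounds are setting num_workers=0 or replacing the lambdas with picklable module-level callables. A minimal sketch of the latter (Operation is a hypothetical stand-in, not the repo's class):

```python
import pickle

# Hypothetical stand-in for the lambdas SubPolicy stores: a module-level
# callable class is picklable, so DataLoader workers can be spawned.
class Operation:
    def __init__(self, name, magnitude):
        self.name = name
        self.magnitude = magnitude

    def __call__(self, img):
        # Dispatch on self.name to module-level functions here instead of
        # capturing closures; shown as a pass-through placeholder.
        return img

op = Operation("rotate", 0.3)
restored = pickle.loads(pickle.dumps(op))  # succeeds: no lambdas involved
```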

Why is there no `cutoff` argument for autocontrast at this line?

"autocontrast": lambda img, magnitude: ImageOps.autocontrast(img),

Hi,

I notice that for autocontrast the magnitude is not used, while ImageOps.autocontrast has an argument named cutoff that controls the degree of the contrast stretch. With cutoff=0, an image that already spans the full intensity range is returned unchanged. Does this mean that ImageOps.autocontrast is effectively unused in both the AutoML search phase and the verification phase (the ImageNet experiments)?
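If one did want the magnitude to have an effect here, a sketch (an assumption about intent, not the repo's behavior) would pass it through as the cutoff percentage:

```python
from PIL import Image, ImageOps

# Sketch (an assumption, not the repo's code): wire the magnitude into
# autocontrast's `cutoff` argument, the percent of darkest/lightest
# pixels ignored before the histogram is stretched to the full range.
def autocontrast_op(img, magnitude):
    return ImageOps.autocontrast(img, cutoff=int(magnitude))

img = Image.new("RGB", (32, 32), (120, 130, 140))
out = autocontrast_op(img, 5)  # a uniform image passes through unchanged
```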

Best sub-policies?

In autoaugment.py, how are the parameters of the best sub-policies determined? Should these parameters also be tuned on my own data?

CIFAR10

Why does a model trained directly with the CIFAR10 policy given in the article reach very low test accuracy?

Underfitting problem

When I used AutoAugment with my custom network architecture on the CIFAR-10 dataset, I got higher test accuracy than training accuracy. I tried many things, and this underfitting problem appears only when I enable AutoAugment. Can you explain why this happens?
Just looking for answers...

Make dates in readme locale agnostic

There are one or more calendar dates in the readme. They're written in a locale-specific style, which can lead to needless confusion. For example, July 11 can be confused with November 7. Please rewrite the dates in a locale-neutral format, e.g. as in this comment.

An error

TypeError: transform() got an unexpected keyword argument 'fillcolor'

Hello, how can I solve this problem?

Installation?

This is a great tool, but how do I go about installing it to run with my PyTorch models?

ImageNet performance?

Hi, has anyone matched the reported ImageNet performance with the provided AutoAugment?
Here are my results with AutoAugment using the official implementation, compared to the official results; no notable improvements:
Results for ResNet-50/101/152 in terms of top-1/top-5 accuracy:
official, without AutoAugment: 76.15/92.87, 77.37/93.56, 78.31/94.06.
mine, with AutoAugment: 75.33/92.45, 77.57/93.78, 78.51/94.07.
Update: all the above results use 90 training epochs; a longer schedule, such as the 270 epochs used in the paper, may be needed to reach the reported results.

[Bug] Missing random.choice in rotate

Hi,
Thank you for the wonderful work!
Just wanted to share a tiny concern. It seems to me that here the magnitude should be magnitude * random.choice([-1, 1]), otherwise the rotation will always be in one direction.

"rotate": lambda img, magnitude: rotate_with_fill(img, magnitude),

Thank you.
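The issue's suggestion mirrors the pattern the other signed operations (e.g. the shear and translate lambdas) already use: multiply the magnitude by a random sign. A minimal sketch of the proposed fix:

```python
import random

# Sketch of the suggested fix: randomize the rotation direction by
# multiplying the magnitude with random.choice([-1, 1]).
def signed_rotation(magnitude, rng=random):
    return magnitude * rng.choice([-1, 1])

random.seed(0)
angles = [signed_rotation(30) for _ in range(100)]
# both directions now occur; the magnitude itself is unchanged
```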
