
torchmetal's People

Contributors

akreal, egrefen, eugenelet, marccoru, marcociccone, msiam, sevro, tristandeleu


Forkers

iron-beagle, sevro

torchmetal's Issues

Add references/citations

The citation for the original paper and repository is already maintained. Since we are reusing code from other projects (Meta-Dataset, #5), we should also credit that work in the citation file.

Add Documentation

Torchmetal currently has almost no documentation of its inner workings and is hard to understand. This needs to be fixed across the board.

Add Meta-Dataset and VTAB

Is your feature request related to a problem? Please describe.

The currently included datasets are too constrained, limited in diversity, or only weakly related to representation quality. Meta-Dataset (MD) and the Visual Task Adaptation Benchmark (VTAB) define good representations as those that adapt to diverse, unseen tasks with few examples. By comparison, every dataset this library currently provides is effectively a toy benchmark.

Implementing this feature will allow models to be rapidly tested and compared, and will support confident claims about which will perform better in real-world situations.

Describe the solution you'd like

The google-research/meta-dataset repository is licensed under the Apache License, Version 2.0, which is compatible with our MIT License, so the quickest path to adding this feature is likely to integrate the code in a way that fits this library's abstractions.

The biggest problem is that the code (being from Google) uses TensorFlow and is concerned with Python 2 compatibility, which we do not care about supporting (only Python 3.6+). From their installation instructions:

Meta-Dataset is generally compatible with Python 2 and Python 3, but some parts of the code may require Python 3. The code works with TensorFlow 2, although it makes extensive use of tf.compat.v1 internally.

It looks like the code makes use of TFRecords, which will need to be removed.

Describe alternatives you've considered

If migrating that code is not worth the effort, the datasets required to build Meta-Dataset can be drawn from existing PyTorch implementations already in this library, from TorchVision Datasets, or from other PyTorch libraries. The data loader and/or dataset-combining logic will likely still be of interest, at least as a reference implementation.
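As a rough illustration of the combining logic (this is not Meta-Dataset's actual sampler; the source names, uniform source weighting, and function names here are all placeholders), episode construction over multiple dataset sources might look like:

```python
import random

def sample_episode(sources, ways, shots, rng=random):
    """Sample one few-shot episode from one of several dataset sources.

    `sources` maps a source name to {class_name: [examples]}.  Picking the
    source uniformly at random (Meta-Dataset weights sources differently;
    this is a simplification), then sampling `ways` classes with `shots`
    examples each, mirrors the dataset-combining logic described above.
    """
    source_name = rng.choice(sorted(sources))
    classes = rng.sample(sorted(sources[source_name]), ways)
    return source_name, {c: rng.sample(sources[source_name][c], shots)
                         for c in classes}

# Two toy "datasets" standing in for e.g. Omniglot and CUB.
sources = {
    "omniglot": {f"char_{i}": list(range(10)) for i in range(20)},
    "cub": {f"bird_{i}": list(range(10)) for i in range(15)},
}
name, episode = sample_episode(sources, ways=5, shots=3,
                               rng=random.Random(0))
```

In the real integration the inner lists would be image datasets rather than integers, but the source-then-class-then-example sampling order is the part worth preserving.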

I have also considered developing a novel dataset, but adding this existing benchmark will deliver significant value in a much shorter timeframe, so that idea should be revisited later.

Behavior Changes

This feature is purely additive to the library and should not change behavior of any existing code or make any breaking API changes.

Additional context

Add any other context or screenshots about the feature request here.

Review meta-dataset reader

Currently the implementation of meta-dataset in iron-beagle/torchmetal on the datasets branch does not take the meta-dataset reader code into account. That reader appears to have functionality we will need, specifically in episode_representation_generator: both the run_counter and the logic around the remaining variable indicate behavior that needs to be mimicked outside the already-implemented sampler.

Move gradient update helper out of utils

Is your enhancement request related to a problem? Please describe.

This function will either be used in nearly every project that depends on this library or be read as documentation of how the inner-loop update is done. The utils module should not even exist; burying such a prominent function in a totally generic package makes the functionality far harder to discover than it should be, and also suggests it is unimportant and/or for internal use. See the function linked below.

https://github.com/sevro/torchmetal/blob/2508a5bfc2bbdd0e398c76b3bd35874a13aa9004/torchmetal/utils/gradient_based.py#L8

Describe the solution you'd like

This function should be moved out of utils into a module that is clearly dedicated to gradient updates.

Behavior Changes

This will be an API breaking change.

Padding Speed / Pad Size

Currently, padding to the batch size is done using a torch collate function in iron-beagle/torchmetal, BatchMetaCollatePad. Its __call__ hard-codes the largest batch size (currently 500). Also, the only tested image shape has been made concrete through transforms in the helpers.

The desired behavior is to pad efficiently according to the largest batch and largest image. Finding these across all tasks in an episode needs to be implemented and used in BatchMetaCollatePad.
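The desired behavior can be sketched in miniature. Plain nested lists stand in for tensors here, and the function names are illustrative rather than the BatchMetaCollatePad API; in the real collate the discovered maximum would feed torch.nn.functional.pad instead of a hard-coded 500.

```python
def episode_max_shape(episode):
    """Largest (height, width) across all images in all tasks of an episode.

    Each image is a nested list [H][W].  The key point is that the pad
    target is discovered from the data rather than hard-coded.
    """
    h = max(len(img) for task in episode for img in task)
    w = max(len(row) for task in episode for img in task for row in img)
    return h, w

def pad_image(img, h, w, value=0):
    """Pad a [H][W] image on the bottom/right edges to exactly (h, w)."""
    padded = [row + [value] * (w - len(row)) for row in img]
    padded += [[value] * w for _ in range(h - len(padded))]
    return padded

episode = [
    [[[1, 1], [1, 1]]],                    # task 0: one 2x2 image
    [[[2, 2, 2], [2, 2, 2], [2, 2, 2]]],   # task 1: one 3x3 image
]
h, w = episode_max_shape(episode)
batch = [[pad_image(img, h, w) for img in task] for task in episode]
```

Here the 2x2 image is padded up to the 3x3 maximum found across the episode; no fixed cap is involved.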

Torchvision InterpolationMode warnings

When updating PyTorch to the current latest 1.8.1 and torchvision to the corresponding 0.9.1, calling torchmeta.transforms.Rotation causes the warning:

~/.cache/pypoetry/virtualenvs/retrainer-TPV0pAOM-py3.8/lib/python3.8/site-packages/torchvision/transforms/functional.py:942: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.

Complete steps to reproduce the bug

Use transforms with Torchvision 0.9.1 that interpolate an image.

Expected behavior

No warnings, preferably by providing the ability to pass **kwargs through.

Environment

  • OS: Ubuntu 20.04
  • Library Version: 0.1.0

Additional context

Pulling a full stack trace of the warning pinpointed the exact method:

  File "~/.cache/pypoetry/virtualenvs/retrainer-TPV0pAOM-py3.8/lib/python3.8/site-packages/torchmeta/transforms/augmentations.py", line 34, in __call__
    return F.rotate(image, self.angle % 360, self.resample,

It seems this is the new API. The easy fix is to pass a generic **kwargs through instead of specific arguments, which I have already tested, although this may be needed for other transforms as well. That change would be more robust to future API changes and more flexible for users. I have been using this library a lot for some time now, so let me know if you would be interested in a PR for this.
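The **kwargs passthrough could look roughly like this. fake_rotate is a stand-in for torchvision.transforms.functional.rotate, and the Rotation class is simplified relative to the real torchmeta.transforms.Rotation; only the forwarding pattern is the point.

```python
class Rotation:
    """Forward unrecognized keyword arguments straight through to the
    underlying rotate call instead of naming each one explicitly."""

    def __init__(self, angle, **kwargs):
        self.angle = angle
        # e.g. interpolation=InterpolationMode.NEAREST under torchvision 0.9
        self.kwargs = kwargs

    def __call__(self, image, rotate):
        # Backend-specific options ride along untouched, so a rename like
        # resample -> interpolation no longer requires changes here.
        return rotate(image, self.angle % 360, **self.kwargs)

def fake_rotate(image, angle, interpolation=None):
    """Stand-in recording what the backend would receive."""
    return (image, angle, interpolation)

t = Rotation(450, interpolation="nearest")
result = t("img", fake_rotate)
```

The trade-off is that typos in option names are no longer caught by the transform itself, only by the backend call.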


Copied from: tristandeleu/pytorch-meta#131

Refactor `utils` away

Refactor utils package away.

Behavior Changes

This will be a major breaking change to the API.

Test failures

A number of dataset, dataloader, and wrapper tests fail.

Complete steps to reproduce the bug

`poetry run pytest`

Expected behavior

All tests pass.

Environment

  • OS: Ubuntu 20.04
  • Library Version: fork torchmeta 1.7 before changes, and dev branch
  • Using the poetry environment
  • The lock file has not been changed on dev

Additional context

Test output:

============================= test session starts ==============================
platform linux -- Python 3.8.3, pytest-5.4.3, py-1.10.0, pluggy-0.13.1
rootdir: /home/datenstrom/workspace/src/github.com/sevro/torchmetal
collected 275 items

tests/test_torchmetal.py .                                               [  0%]
tests/datasets/test_datasets_helpers.py ..F.......F.......F.......F..... [ 12%]
..F.......F.....                                                         [ 17%]
tests/modules/test_activation.py ........................                [ 26%]
tests/modules/test_container.py ..                                       [ 27%]
tests/modules/test_conv.py ............                                  [ 31%]
tests/modules/test_linear.py ........                                    [ 34%]
tests/modules/test_module.py ..                                          [ 35%]
tests/modules/test_parallel.py ..........                                [ 38%]
tests/modules/test_sparse.py ........                                    [ 41%]
tests/toy/test_toy.py ...........                                        [ 45%]
tests/transforms/test_splitters.py ..                                    [ 46%]
tests/utils/test_dataloaders.py ......F.......F.......F.......F.......F. [ 61%]
......F......                                                            [ 65%]
tests/utils/test_gradient_based.py ...                                   [ 66%]
tests/utils/test_matching.py ....                                        [ 68%]
tests/utils/test_prototype.py ...                                        [ 69%]
tests/utils/test_r2d2.py ............................................... [ 86%]
.............                                                            [ 91%]
tests/utils/test_wrappers.py ..F.......F.......F.....                    [100%]

=================================== FAILURES ===================================
________________ test_datasets_helpers[train-1-tieredimagenet] _________________

name = 'tieredimagenet', shots = 1, split = 'train'

    @pytest.mark.skipif(not is_local, reason='Requires datasets downloaded locally')
    @pytest.mark.parametrize('name', helpers.__all__)
    @pytest.mark.parametrize('shots', [1, 5])
    @pytest.mark.parametrize('split', ['train', 'val', 'test'])
    def test_datasets_helpers(name, shots, split):
        function = getattr(helpers, name)
        folder = os.getenv('TORCHMETAL_DATA_FOLDER')
        download = bool(os.getenv('TORCHMETAL_DOWNLOAD', False))
    
>       dataset = function(folder,
                           ways=5,
                           shots=shots,
                           test_shots=15,
                           meta_split=split,
                           download=download)

tests/datasets/test_datasets_helpers.py:21: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
torchmetal/datasets/helpers.py:168: in tieredimagenet
    return helper_with_default(TieredImagenet, folder, shots, ways,
torchmetal/datasets/helpers.py:35: in helper_with_default
    dataset = klass(folder, num_classes_per_task=ways, **kwargs)
torchmetal/datasets/tieredimagenet.py:89: in __init__
    dataset = TieredImagenetClassDataset(root, meta_train=meta_train,
torchmetal/datasets/tieredimagenet.py:128: in __init__
    self.download()
torchmetal/datasets/tieredimagenet.py:181: in download
    download_file_from_google_drive(self.gdrive_id, self.root,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

file_id = '1g1aIDy2Ar_MViF2gDXFYDBTR-HYecV07'
root = '/home/datenstrom/workspace_ssd/datasets/torchmetal/tieredimagenet'
filename = 'tiered-imagenet.tar', md5 = 'e07e811b9f29362d159a9edd0d838c62'

    def download_file_from_google_drive(file_id, root, filename=None, md5=None):
        """Download a Google Drive file from  and place it in root.
    
        Args:
            file_id (str): id of file to be downloaded
            root (str): Directory to place downloaded file in
            filename (str, optional): Name to save the file under. If None, use the id of the file.
            md5 (str, optional): MD5 checksum of the download. If None, do not check
        """
        # Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url
        import requests
        url = "https://docs.google.com/uc?export=download"
    
        root = os.path.expanduser(root)
        if not filename:
            filename = file_id
        fpath = os.path.join(root, filename)
    
        os.makedirs(root, exist_ok=True)
    
>       if os.path.isfile(fpath) and check_integrity(fpath, md5):
E       NameError: name 'check_integrity' is not defined

torchmetal/datasets/utils.py:67: NameError
(The remaining test_datasets_helpers and test_datasets_helpers_dataloader failures repeat this identical traceback for the other shots/split combinations, each ending in the same `NameError: name 'check_integrity' is not defined` at torchmetal/datasets/utils.py:67.)
    
>       if os.path.isfile(fpath) and check_integrity(fpath, md5):
E       NameError: name 'check_integrity' is not defined

torchmetal/datasets/utils.py:67: NameError
(The identical NameError: name 'check_integrity' is not defined traceback, ending at torchmetal/datasets/utils.py:67, repeats verbatim for the remaining tieredimagenet parametrizations: test_datasets_helpers_dataloader[val-5], [test-1], [test-5], and test_datasets_helpers_wrapper[train], [val], [test].)
=============================== warnings summary ===============================
tests/modules/test_parallel.py::test_dataparallel_params_maml[model0]
  /home/datenstrom/.cache/pypoetry/virtualenvs/torchmetal-hZltVvxe-py3.8/lib/python3.8/site-packages/torch/cuda/nccl.py:48: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
    if not isinstance(inputs, collections.Container) or isinstance(inputs, torch.Tensor):

tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[train-1-omniglot]
tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[train-5-omniglot]
tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[val-1-omniglot]
tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[val-5-omniglot]
tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[test-1-omniglot]
tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[test-5-omniglot]
tests/utils/test_dataloaders.py::test_overflow_length_dataloader
  /home/datenstrom/.cache/pypoetry/virtualenvs/torchmetal-hZltVvxe-py3.8/lib/python3.8/site-packages/torchvision/transforms/functional.py:942: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
    warnings.warn(

-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================== short test summary info ============================
FAILED tests/datasets/test_datasets_helpers.py::test_datasets_helpers[train-1-tieredimagenet]
FAILED tests/datasets/test_datasets_helpers.py::test_datasets_helpers[train-5-tieredimagenet]
FAILED tests/datasets/test_datasets_helpers.py::test_datasets_helpers[val-1-tieredimagenet]
FAILED tests/datasets/test_datasets_helpers.py::test_datasets_helpers[val-5-tieredimagenet]
FAILED tests/datasets/test_datasets_helpers.py::test_datasets_helpers[test-1-tieredimagenet]
FAILED tests/datasets/test_datasets_helpers.py::test_datasets_helpers[test-5-tieredimagenet]
FAILED tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[train-1-tieredimagenet]
FAILED tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[train-5-tieredimagenet]
FAILED tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[val-1-tieredimagenet]
FAILED tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[val-5-tieredimagenet]
FAILED tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[test-1-tieredimagenet]
FAILED tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[test-5-tieredimagenet]
FAILED tests/utils/test_wrappers.py::test_datasets_helpers_wrapper[train-tieredimagenet]
FAILED tests/utils/test_wrappers.py::test_datasets_helpers_wrapper[val-tieredimagenet]
FAILED tests/utils/test_wrappers.py::test_datasets_helpers_wrapper[test-tieredimagenet]
================= 15 failed, 260 passed, 8 warnings in 15.24s ==================
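Every failure above is the same bug: download_file_from_google_drive in torchmetal/datasets/utils.py calls check_integrity, but that helper is neither defined in the module nor imported. Importing it from torchvision.datasets.utils is the direct fix; for illustration, a minimal self-contained implementation with the same semantics might look like the sketch below (an assumption-laden stand-in, not the library's actual code):

```python
import hashlib
import os


def calculate_md5(fpath, chunk_size=1024 * 1024):
    # Stream the file in chunks so large dataset archives are not
    # loaded into memory all at once.
    md5 = hashlib.md5()
    with open(fpath, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()


def check_integrity(fpath, md5=None):
    """Return True if fpath exists and, when an md5 is given, matches it."""
    if not os.path.isfile(fpath):
        return False
    if md5 is None:
        return True
    return calculate_md5(fpath) == md5
```

With this (or the torchvision import) in place, the os.path.isfile(fpath) and check_integrity(fpath, md5) guard in download_file_from_google_drive can skip re-downloading an archive that is already present and intact.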

Document how to create a new dataset

Problem

There is currently no documentation on how to create a new dataset.

Solution

Create a tutorial that walks through creating a new dataset end to end.

Behavior Changes

This will not change the behavior of the library.

Additional context

This issue is blocked on the initial CI/CD setup, which includes documentation generation and deployment.
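Such a tutorial could start from a small, library-independent example of episode sampling before mapping it onto torchmetal's actual base classes. The sketch below uses hypothetical names (ToyClassDataset, sample_task) chosen for illustration only; it does not reflect torchmetal's real API:

```python
import random


class ToyClassDataset:
    """Toy stand-in for a per-class few-shot dataset.

    Holds a pool of labeled examples per class and samples N-way K-shot
    episodes, the task structure a new torchmetal dataset must produce.
    """

    def __init__(self, num_classes=10, samples_per_class=20):
        # Each "example" here is just a (class, index) tuple; a real
        # dataset would load images or other inputs instead.
        self.data = {c: [(c, i) for i in range(samples_per_class)]
                     for c in range(num_classes)}

    def sample_task(self, ways=5, shots=1, test_shots=15, rng=None):
        rng = rng or random.Random(0)
        classes = rng.sample(sorted(self.data), ways)
        support, query = [], []
        for label, c in enumerate(classes):
            # Draw disjoint support and query examples for each class
            # and relabel them 0..ways-1 within the episode.
            examples = rng.sample(self.data[c], shots + test_shots)
            support += [(x, label) for x in examples[:shots]]
            query += [(x, label) for x in examples[shots:]]
        return support, query
```

The tutorial would then show where this logic lives in torchmetal's abstractions (per-class dataset, task combination, download/integrity checks) and how a helper function exposes it with ways/shots/test_shots defaults.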
