emerald-ai / torchmetal

A library for Meta-Learning and Few-Shot Learning with PyTorch.

License: MIT License
The citation of the original paper and repository is maintained. Since we are reusing code from other datasets (Meta Dataset #5), we should also credit that work in the citation file.
Torchmetal is almost entirely undocumented in its inner workings and hard to understand. This needs to be fixed across the board.
This repository is a hard fork of tristandeleu/pytorch-meta, and many of the issues, pull requests, and feature requests on that repository are still valid here. At least the most relevant issues should be migrated and linked back to their originals on the pytorch-meta issue tracker.
Migrate the CI/CD from Travis to GitLab, since we have GPU/CUDA-capable GitLab runners.
See the GitLab documentation on using their CI/CD pipelines with GitHub repositories for detailed information on setting this up.
https://docs.gitlab.com/ee/ci/ci_cd_for_external_repos/github_integration.html
The current set of included datasets is too constrained, limited in diversity, or only weakly related to representation quality. Meta-Dataset (MD) and the Visual Task Adaptation Benchmark define good representations as those that adapt to diverse, unseen tasks with few examples. By that standard, these benchmarks push every dataset currently provided by this library into the toy domain.
Implementing this feature will allow models to be tested and compared rapidly, and let us say with confidence which will perform better in real-world situations.
The google-research/meta-dataset repository is licensed under the Apache License, Version 2.0, which is compatible with our MIT License, so the quickest path to adding this feature is likely to integrate that code in a way that is compatible with this library's abstractions.
The biggest problem is that the code (being from Google) uses TensorFlow and retains Python 2 compatibility, which we do not care about supporting (Python 3.6+ only). From their installation instructions:
Meta-Dataset is generally compatible with Python 2 and Python 3, but some parts of the code may require Python 3. The code works with TensorFlow 2, although it makes extensive use of tf.compat.v1 internally.
It looks like the code makes use of TFRecords which will need to be removed.
If migrating that code is not worth the effort, the datasets required to build Meta-Dataset can be drawn from existing PyTorch implementations already in this library, TorchVision Datasets, or other PyTorch libraries. The dataloader and/or dataset-combining logic will likely still be of interest, at least as a reference implementation.
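As a sketch of the dataset-combining side, existing PyTorch datasets can be pooled and sampled per episode. The two sources below are synthetic placeholders for real datasets (Omniglot, CUB, etc.), and the uniform source sampling is a simplification of Meta-Dataset's weighted scheme:

```python
import torch
from torch.utils.data import ConcatDataset, TensorDataset

# Synthetic stand-ins for the per-source datasets; real code would use
# the existing torchmetal / TorchVision dataset classes instead.
source_a = TensorDataset(torch.randn(100, 3, 28, 28),
                         torch.zeros(100, dtype=torch.long))
source_b = TensorDataset(torch.randn(50, 3, 28, 28),
                         torch.ones(50, dtype=torch.long))

# Meta-Dataset draws each episode from a single source; keeping the list
# of sources inside a ConcatDataset covers both per-episode routing and
# flat iteration over all examples.
pool = ConcatDataset([source_a, source_b])

def sample_source(generator=None):
    """Pick a source uniformly at random (a stand-in for Meta-Dataset's
    weighted source sampling)."""
    idx = torch.randint(len(pool.datasets), (1,), generator=generator).item()
    return pool.datasets[idx]
```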
I have also considered developing a novel dataset, but adding Meta-Dataset will deliver significant value in a much shorter timeframe, so a novel dataset should be revisited later.
This feature is purely additive to the library and should not change behavior of any existing code or make any breaking API changes.
Pages need to be updated to the emerald-ai group.
Lines 6 to 8 in 2508a5b
Currently the implementation of metadataset in Iron-beagle/torchmetal on the datasets branch does not account for the code of the reader in metadataset. This reader appears to have functionality that will be needed, specifically in episode_representation_generator. Both the run_counter and the logic around the remaining variable seem to indicate behavior that needs to be mimicked outside the already implemented sampler.
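The kind of bookkeeping involved can be sketched as a per-class index generator. The names run_counter and remaining are taken from the Meta-Dataset reader, but their exact semantics there may differ; this only illustrates pass-counting and leftover tracking:

```python
import random

def episode_index_generator(num_examples, seed=0):
    """Yield (index, run_counter) pairs for one class, reshuffling once a
    pass over the class's examples is exhausted.

    run_counter and remaining are named after variables in the Meta-Dataset
    reader; this is a sketch of the bookkeeping, not a port of that code.
    """
    rng = random.Random(seed)
    order = list(range(num_examples))
    rng.shuffle(order)
    run_counter = 0           # completed passes over this class's examples
    remaining = num_examples  # examples left before the next reshuffle
    while True:
        if remaining == 0:
            rng.shuffle(order)
            run_counter += 1
            remaining = num_examples
        remaining -= 1
        yield order[num_examples - 1 - remaining], run_counter
```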
This is a function that will either be used in nearly every project that depends on this library, or be read as documentation for how updating in the inner loop is done. The utils module should not even exist; keeping a prominent function in a totally generic package makes its functionality far harder to discover than it should be, and also suggests it is less important and/or intended for internal use. See the function linked below.
This function should be moved out of utils into a module that is clearly for gradient updates.
This will be an API-breaking change.
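For context, the kind of inner-loop update function meant here can be sketched functionally. The signature and names below are illustrative only, not the library's actual API (the real function also handles extracting named parameters from a MetaModule):

```python
from collections import OrderedDict
import torch

def gradient_update_parameters(params, loss, step_size=0.5, first_order=False):
    """One functional SGD step for the inner loop: returns updated copies
    of the parameters instead of mutating the model, so the outer loop
    can differentiate through the update when first_order is False."""
    grads = torch.autograd.grad(loss, tuple(params.values()),
                                create_graph=not first_order)
    return OrderedDict((name, param - step_size * grad)
                       for (name, param), grad in zip(params.items(), grads))

# Tiny demo: one step on loss = w**2 from w = 2.0 with step size 0.1,
# giving w' = 2.0 - 0.1 * 4.0 = 1.6.
w = torch.tensor(2.0, requires_grad=True)
updated = gradient_update_parameters(OrderedDict(w=w), (w ** 2).sum(),
                                     step_size=0.1)
```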
Currently, padding to the batch size is done using a torch collate function in ironbeagle-torchmetal, BatchMetaCollatePad. Its call currently hard-codes the largest batch size, 500. Also, the only image shape that has been tested is one made concrete through transforms in the helpers.
The desired behavior is to pad efficiently according to the largest batch and the largest image; finding these across all tasks in an episode needs to be implemented and used in BatchMetaCollatePad.
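A sketch of the requested behavior, assuming each task in an episode yields an (N, C, H, W) tensor; BatchMetaCollatePad's real interface may differ:

```python
import torch
import torch.nn.functional as F

def pad_collate(task_batches):
    """Pad a list of (N_i, C, H_i, W_i) tensors to the largest batch size
    and image size found in the episode, instead of a hard-coded maximum.
    Sketch of the behavior requested above, not the library's API."""
    max_n = max(t.shape[0] for t in task_batches)
    max_h = max(t.shape[2] for t in task_batches)
    max_w = max(t.shape[3] for t in task_batches)
    padded = []
    for t in task_batches:
        # F.pad's 4-tuple pads the last two dims: (left, right, top, bottom).
        t = F.pad(t, (0, max_w - t.shape[3], 0, max_h - t.shape[2]))
        if t.shape[0] < max_n:
            # Grow the batch dimension with zero-filled samples.
            filler = t.new_zeros((max_n - t.shape[0],) + t.shape[1:])
            t = torch.cat([t, filler], dim=0)
        padded.append(t)
    return torch.stack(padded)
```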
When updating PyTorch to the current latest, 1.8.1, and torchvision to the corresponding 0.9.1, calling torchmeta.transforms.Rotation causes the warning:
~/.cache/pypoetry/virtualenvs/retrainer-TPV0pAOM-py3.8/lib/python3.8/site-packages/torchvision/transforms/functional.py:942: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
This occurs when using transforms with torchvision 0.9.1 that interpolate an image. The expected behavior is no warnings, preferably through the ability to pass **kwargs through.
Pulling a full stack trace of the warning got me the exact method:
File "~/.cache/pypoetry/virtualenvs/retrainer-TPV0pAOM-py3.8/lib/python3.8/site-packages/torchmeta/transforms/augmentations.py", line 34, in __call__
return F.rotate(image, self.angle % 360, self.resample,
It seems this is the new API. An easy fix is to pass generic **kwargs instead of the specific arguments, which I have already tested, although this may be needed for other transforms as well. This would also be more robust to future API changes and more flexible for users. I have been using this library heavily for some time now, so let me know if you would be interested in a PR for this.
Copied from: tristandeleu/pytorch-meta#131
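A minimal sketch of the proposed **kwargs pass-through. This is a hypothetical class body, not the library's actual code:

```python
class Rotation:
    """Sketch of the proposed fix: store generic keyword arguments and
    forward them to F.rotate, instead of hard-coding the deprecated
    positional `resample` argument."""

    def __init__(self, angle, **kwargs):
        self.angle = angle
        # e.g. interpolation=InterpolationMode.BILINEAR on torchvision 0.9+
        self.kwargs = kwargs

    def __call__(self, image):
        import torchvision.transforms.functional as F
        # Version-specific arguments pass straight through, so future
        # torchvision API changes need no edits here.
        return F.rotate(image, self.angle % 360, **self.kwargs)
```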
Refactor the utils package away.
This will be a major breaking change to the API.
As the title says, see: Line 10 in 2508a5b
A number of dataset, dataloader, and wrapper tests fail.
poetry run pytest
All tests pass.
dev branch
Test output:
============================= test session starts ==============================
platform linux -- Python 3.8.3, pytest-5.4.3, py-1.10.0, pluggy-0.13.1
rootdir: /home/datenstrom/workspace/src/github.com/sevro/torchmetal
collected 275 items
tests/test_torchmetal.py . [ 0%]
tests/datasets/test_datasets_helpers.py ..F.......F.......F.......F..... [ 12%]
..F.......F..... [ 17%]
tests/modules/test_activation.py ........................ [ 26%]
tests/modules/test_container.py .. [ 27%]
tests/modules/test_conv.py ............ [ 31%]
tests/modules/test_linear.py ........ [ 34%]
tests/modules/test_module.py .. [ 35%]
tests/modules/test_parallel.py .......... [ 38%]
tests/modules/test_sparse.py ........ [ 41%]
tests/toy/test_toy.py ........... [ 45%]
tests/transforms/test_splitters.py .. [ 46%]
tests/utils/test_dataloaders.py ......F.......F.......F.......F.......F. [ 61%]
......F...... [ 65%]
tests/utils/test_gradient_based.py ... [ 66%]
tests/utils/test_matching.py .... [ 68%]
tests/utils/test_prototype.py ... [ 69%]
tests/utils/test_r2d2.py ............................................... [ 86%]
............. [ 91%]
tests/utils/test_wrappers.py ..F.......F.......F..... [100%]
=================================== FAILURES ===================================
________________ test_datasets_helpers[train-1-tieredimagenet] _________________
name = 'tieredimagenet', shots = 1, split = 'train'
@pytest.mark.skipif(not is_local, reason='Requires datasets downloaded locally')
@pytest.mark.parametrize('name', helpers.__all__)
@pytest.mark.parametrize('shots', [1, 5])
@pytest.mark.parametrize('split', ['train', 'val', 'test'])
def test_datasets_helpers(name, shots, split):
function = getattr(helpers, name)
folder = os.getenv('TORCHMETAL_DATA_FOLDER')
download = bool(os.getenv('TORCHMETAL_DOWNLOAD', False))
> dataset = function(folder,
ways=5,
shots=shots,
test_shots=15,
meta_split=split,
download=download)
tests/datasets/test_datasets_helpers.py:21:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
torchmetal/datasets/helpers.py:168: in tieredimagenet
return helper_with_default(TieredImagenet, folder, shots, ways,
torchmetal/datasets/helpers.py:35: in helper_with_default
dataset = klass(folder, num_classes_per_task=ways, **kwargs)
torchmetal/datasets/tieredimagenet.py:89: in __init__
dataset = TieredImagenetClassDataset(root, meta_train=meta_train,
torchmetal/datasets/tieredimagenet.py:128: in __init__
self.download()
torchmetal/datasets/tieredimagenet.py:181: in download
download_file_from_google_drive(self.gdrive_id, self.root,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
file_id = '1g1aIDy2Ar_MViF2gDXFYDBTR-HYecV07'
root = '/home/datenstrom/workspace_ssd/datasets/torchmetal/tieredimagenet'
filename = 'tiered-imagenet.tar', md5 = 'e07e811b9f29362d159a9edd0d838c62'
def download_file_from_google_drive(file_id, root, filename=None, md5=None):
"""Download a Google Drive file from and place it in root.
Args:
file_id (str): id of file to be downloaded
root (str): Directory to place downloaded file in
filename (str, optional): Name to save the file under. If None, use the id of the file.
md5 (str, optional): MD5 checksum of the download. If None, do not check
"""
# Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url
import requests
url = "https://docs.google.com/uc?export=download"
root = os.path.expanduser(root)
if not filename:
filename = file_id
fpath = os.path.join(root, filename)
os.makedirs(root, exist_ok=True)
> if os.path.isfile(fpath) and check_integrity(fpath, md5):
E NameError: name 'check_integrity' is not defined
torchmetal/datasets/utils.py:67: NameError
________________ test_datasets_helpers[train-5-tieredimagenet] _________________
name = 'tieredimagenet', shots = 5, split = 'train'
@pytest.mark.skipif(not is_local, reason='Requires datasets downloaded locally')
@pytest.mark.parametrize('name', helpers.__all__)
@pytest.mark.parametrize('shots', [1, 5])
@pytest.mark.parametrize('split', ['train', 'val', 'test'])
def test_datasets_helpers(name, shots, split):
function = getattr(helpers, name)
folder = os.getenv('TORCHMETAL_DATA_FOLDER')
download = bool(os.getenv('TORCHMETAL_DOWNLOAD', False))
> dataset = function(folder,
ways=5,
shots=shots,
test_shots=15,
meta_split=split,
download=download)
tests/datasets/test_datasets_helpers.py:21:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
torchmetal/datasets/helpers.py:168: in tieredimagenet
return helper_with_default(TieredImagenet, folder, shots, ways,
torchmetal/datasets/helpers.py:35: in helper_with_default
dataset = klass(folder, num_classes_per_task=ways, **kwargs)
torchmetal/datasets/tieredimagenet.py:89: in __init__
dataset = TieredImagenetClassDataset(root, meta_train=meta_train,
torchmetal/datasets/tieredimagenet.py:128: in __init__
self.download()
torchmetal/datasets/tieredimagenet.py:181: in download
download_file_from_google_drive(self.gdrive_id, self.root,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
file_id = '1g1aIDy2Ar_MViF2gDXFYDBTR-HYecV07'
root = '/home/datenstrom/workspace_ssd/datasets/torchmetal/tieredimagenet'
filename = 'tiered-imagenet.tar', md5 = 'e07e811b9f29362d159a9edd0d838c62'
def download_file_from_google_drive(file_id, root, filename=None, md5=None):
"""Download a Google Drive file from and place it in root.
Args:
file_id (str): id of file to be downloaded
root (str): Directory to place downloaded file in
filename (str, optional): Name to save the file under. If None, use the id of the file.
md5 (str, optional): MD5 checksum of the download. If None, do not check
"""
# Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url
import requests
url = "https://docs.google.com/uc?export=download"
root = os.path.expanduser(root)
if not filename:
filename = file_id
fpath = os.path.join(root, filename)
os.makedirs(root, exist_ok=True)
> if os.path.isfile(fpath) and check_integrity(fpath, md5):
E NameError: name 'check_integrity' is not defined
torchmetal/datasets/utils.py:67: NameError
_________________ test_datasets_helpers[val-1-tieredimagenet] __________________
name = 'tieredimagenet', shots = 1, split = 'val'
@pytest.mark.skipif(not is_local, reason='Requires datasets downloaded locally')
@pytest.mark.parametrize('name', helpers.__all__)
@pytest.mark.parametrize('shots', [1, 5])
@pytest.mark.parametrize('split', ['train', 'val', 'test'])
def test_datasets_helpers(name, shots, split):
function = getattr(helpers, name)
folder = os.getenv('TORCHMETAL_DATA_FOLDER')
download = bool(os.getenv('TORCHMETAL_DOWNLOAD', False))
> dataset = function(folder,
ways=5,
shots=shots,
test_shots=15,
meta_split=split,
download=download)
tests/datasets/test_datasets_helpers.py:21:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
torchmetal/datasets/helpers.py:168: in tieredimagenet
return helper_with_default(TieredImagenet, folder, shots, ways,
torchmetal/datasets/helpers.py:35: in helper_with_default
dataset = klass(folder, num_classes_per_task=ways, **kwargs)
torchmetal/datasets/tieredimagenet.py:89: in __init__
dataset = TieredImagenetClassDataset(root, meta_train=meta_train,
torchmetal/datasets/tieredimagenet.py:128: in __init__
self.download()
torchmetal/datasets/tieredimagenet.py:181: in download
download_file_from_google_drive(self.gdrive_id, self.root,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
file_id = '1g1aIDy2Ar_MViF2gDXFYDBTR-HYecV07'
root = '/home/datenstrom/workspace_ssd/datasets/torchmetal/tieredimagenet'
filename = 'tiered-imagenet.tar', md5 = 'e07e811b9f29362d159a9edd0d838c62'
def download_file_from_google_drive(file_id, root, filename=None, md5=None):
"""Download a Google Drive file from and place it in root.
Args:
file_id (str): id of file to be downloaded
root (str): Directory to place downloaded file in
filename (str, optional): Name to save the file under. If None, use the id of the file.
md5 (str, optional): MD5 checksum of the download. If None, do not check
"""
# Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url
import requests
url = "https://docs.google.com/uc?export=download"
root = os.path.expanduser(root)
if not filename:
filename = file_id
fpath = os.path.join(root, filename)
os.makedirs(root, exist_ok=True)
> if os.path.isfile(fpath) and check_integrity(fpath, md5):
E NameError: name 'check_integrity' is not defined
torchmetal/datasets/utils.py:67: NameError
_________________ test_datasets_helpers[val-5-tieredimagenet] __________________
name = 'tieredimagenet', shots = 5, split = 'val'
@pytest.mark.skipif(not is_local, reason='Requires datasets downloaded locally')
@pytest.mark.parametrize('name', helpers.__all__)
@pytest.mark.parametrize('shots', [1, 5])
@pytest.mark.parametrize('split', ['train', 'val', 'test'])
def test_datasets_helpers(name, shots, split):
function = getattr(helpers, name)
folder = os.getenv('TORCHMETAL_DATA_FOLDER')
download = bool(os.getenv('TORCHMETAL_DOWNLOAD', False))
> dataset = function(folder,
ways=5,
shots=shots,
test_shots=15,
meta_split=split,
download=download)
tests/datasets/test_datasets_helpers.py:21:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
torchmetal/datasets/helpers.py:168: in tieredimagenet
return helper_with_default(TieredImagenet, folder, shots, ways,
torchmetal/datasets/helpers.py:35: in helper_with_default
dataset = klass(folder, num_classes_per_task=ways, **kwargs)
torchmetal/datasets/tieredimagenet.py:89: in __init__
dataset = TieredImagenetClassDataset(root, meta_train=meta_train,
torchmetal/datasets/tieredimagenet.py:128: in __init__
self.download()
torchmetal/datasets/tieredimagenet.py:181: in download
download_file_from_google_drive(self.gdrive_id, self.root,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
file_id = '1g1aIDy2Ar_MViF2gDXFYDBTR-HYecV07'
root = '/home/datenstrom/workspace_ssd/datasets/torchmetal/tieredimagenet'
filename = 'tiered-imagenet.tar', md5 = 'e07e811b9f29362d159a9edd0d838c62'
def download_file_from_google_drive(file_id, root, filename=None, md5=None):
"""Download a Google Drive file from and place it in root.
Args:
file_id (str): id of file to be downloaded
root (str): Directory to place downloaded file in
filename (str, optional): Name to save the file under. If None, use the id of the file.
md5 (str, optional): MD5 checksum of the download. If None, do not check
"""
# Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url
import requests
url = "https://docs.google.com/uc?export=download"
root = os.path.expanduser(root)
if not filename:
filename = file_id
fpath = os.path.join(root, filename)
os.makedirs(root, exist_ok=True)
> if os.path.isfile(fpath) and check_integrity(fpath, md5):
E NameError: name 'check_integrity' is not defined
torchmetal/datasets/utils.py:67: NameError
_________________ test_datasets_helpers[test-1-tieredimagenet] _________________
name = 'tieredimagenet', shots = 1, split = 'test'
@pytest.mark.skipif(not is_local, reason='Requires datasets downloaded locally')
@pytest.mark.parametrize('name', helpers.__all__)
@pytest.mark.parametrize('shots', [1, 5])
@pytest.mark.parametrize('split', ['train', 'val', 'test'])
def test_datasets_helpers(name, shots, split):
function = getattr(helpers, name)
folder = os.getenv('TORCHMETAL_DATA_FOLDER')
download = bool(os.getenv('TORCHMETAL_DOWNLOAD', False))
> dataset = function(folder,
ways=5,
shots=shots,
test_shots=15,
meta_split=split,
download=download)
tests/datasets/test_datasets_helpers.py:21:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
torchmetal/datasets/helpers.py:168: in tieredimagenet
return helper_with_default(TieredImagenet, folder, shots, ways,
torchmetal/datasets/helpers.py:35: in helper_with_default
dataset = klass(folder, num_classes_per_task=ways, **kwargs)
torchmetal/datasets/tieredimagenet.py:89: in __init__
dataset = TieredImagenetClassDataset(root, meta_train=meta_train,
torchmetal/datasets/tieredimagenet.py:128: in __init__
self.download()
torchmetal/datasets/tieredimagenet.py:181: in download
download_file_from_google_drive(self.gdrive_id, self.root,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
file_id = '1g1aIDy2Ar_MViF2gDXFYDBTR-HYecV07'
root = '/home/datenstrom/workspace_ssd/datasets/torchmetal/tieredimagenet'
filename = 'tiered-imagenet.tar', md5 = 'e07e811b9f29362d159a9edd0d838c62'
def download_file_from_google_drive(file_id, root, filename=None, md5=None):
"""Download a Google Drive file from and place it in root.
Args:
file_id (str): id of file to be downloaded
root (str): Directory to place downloaded file in
filename (str, optional): Name to save the file under. If None, use the id of the file.
md5 (str, optional): MD5 checksum of the download. If None, do not check
"""
# Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url
import requests
url = "https://docs.google.com/uc?export=download"
root = os.path.expanduser(root)
if not filename:
filename = file_id
fpath = os.path.join(root, filename)
os.makedirs(root, exist_ok=True)
> if os.path.isfile(fpath) and check_integrity(fpath, md5):
E NameError: name 'check_integrity' is not defined
torchmetal/datasets/utils.py:67: NameError
_________________ test_datasets_helpers[test-5-tieredimagenet] _________________
name = 'tieredimagenet', shots = 5, split = 'test'
@pytest.mark.skipif(not is_local, reason='Requires datasets downloaded locally')
@pytest.mark.parametrize('name', helpers.__all__)
@pytest.mark.parametrize('shots', [1, 5])
@pytest.mark.parametrize('split', ['train', 'val', 'test'])
def test_datasets_helpers(name, shots, split):
function = getattr(helpers, name)
folder = os.getenv('TORCHMETAL_DATA_FOLDER')
download = bool(os.getenv('TORCHMETAL_DOWNLOAD', False))
> dataset = function(folder,
ways=5,
shots=shots,
test_shots=15,
meta_split=split,
download=download)
tests/datasets/test_datasets_helpers.py:21:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
torchmetal/datasets/helpers.py:168: in tieredimagenet
return helper_with_default(TieredImagenet, folder, shots, ways,
torchmetal/datasets/helpers.py:35: in helper_with_default
dataset = klass(folder, num_classes_per_task=ways, **kwargs)
torchmetal/datasets/tieredimagenet.py:89: in __init__
dataset = TieredImagenetClassDataset(root, meta_train=meta_train,
torchmetal/datasets/tieredimagenet.py:128: in __init__
self.download()
torchmetal/datasets/tieredimagenet.py:181: in download
download_file_from_google_drive(self.gdrive_id, self.root,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
file_id = '1g1aIDy2Ar_MViF2gDXFYDBTR-HYecV07'
root = '/home/datenstrom/workspace_ssd/datasets/torchmetal/tieredimagenet'
filename = 'tiered-imagenet.tar', md5 = 'e07e811b9f29362d159a9edd0d838c62'
def download_file_from_google_drive(file_id, root, filename=None, md5=None):
"""Download a Google Drive file from and place it in root.
Args:
file_id (str): id of file to be downloaded
root (str): Directory to place downloaded file in
filename (str, optional): Name to save the file under. If None, use the id of the file.
md5 (str, optional): MD5 checksum of the download. If None, do not check
"""
# Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url
import requests
url = "https://docs.google.com/uc?export=download"
root = os.path.expanduser(root)
if not filename:
filename = file_id
fpath = os.path.join(root, filename)
os.makedirs(root, exist_ok=True)
> if os.path.isfile(fpath) and check_integrity(fpath, md5):
E NameError: name 'check_integrity' is not defined
torchmetal/datasets/utils.py:67: NameError
___________ test_datasets_helpers_dataloader[train-1-tieredimagenet] ___________
name = 'tieredimagenet', shots = 1, split = 'train'
@pytest.mark.skipif(not is_local, reason='Requires datasets downloaded locally')
@pytest.mark.parametrize('name', helpers.__all__)
@pytest.mark.parametrize('shots', [1, 5])
@pytest.mark.parametrize('split', ['train', 'val', 'test'])
def test_datasets_helpers_dataloader(name, shots, split):
function = getattr(helpers, name)
folder = os.getenv('TORCHMETAL_DATA_FOLDER')
download = bool(os.getenv('TORCHMETAL_DOWNLOAD', False))
> dataset = function(folder,
ways=5,
shots=shots,
test_shots=15,
meta_split=split,
download=download)
tests/utils/test_dataloaders.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
torchmetal/datasets/helpers.py:168: in tieredimagenet
return helper_with_default(TieredImagenet, folder, shots, ways,
torchmetal/datasets/helpers.py:35: in helper_with_default
dataset = klass(folder, num_classes_per_task=ways, **kwargs)
torchmetal/datasets/tieredimagenet.py:89: in __init__
dataset = TieredImagenetClassDataset(root, meta_train=meta_train,
torchmetal/datasets/tieredimagenet.py:128: in __init__
self.download()
torchmetal/datasets/tieredimagenet.py:181: in download
download_file_from_google_drive(self.gdrive_id, self.root,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
file_id = '1g1aIDy2Ar_MViF2gDXFYDBTR-HYecV07'
root = '/home/datenstrom/workspace_ssd/datasets/torchmetal/tieredimagenet'
filename = 'tiered-imagenet.tar', md5 = 'e07e811b9f29362d159a9edd0d838c62'
def download_file_from_google_drive(file_id, root, filename=None, md5=None):
"""Download a Google Drive file from and place it in root.
Args:
file_id (str): id of file to be downloaded
root (str): Directory to place downloaded file in
filename (str, optional): Name to save the file under. If None, use the id of the file.
md5 (str, optional): MD5 checksum of the download. If None, do not check
"""
# Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url
import requests
url = "https://docs.google.com/uc?export=download"
root = os.path.expanduser(root)
if not filename:
filename = file_id
fpath = os.path.join(root, filename)
os.makedirs(root, exist_ok=True)
> if os.path.isfile(fpath) and check_integrity(fpath, md5):
E NameError: name 'check_integrity' is not defined
torchmetal/datasets/utils.py:67: NameError
___________ test_datasets_helpers_dataloader[train-5-tieredimagenet] ___________
name = 'tieredimagenet', shots = 5, split = 'train'
@pytest.mark.skipif(not is_local, reason='Requires datasets downloaded locally')
@pytest.mark.parametrize('name', helpers.__all__)
@pytest.mark.parametrize('shots', [1, 5])
@pytest.mark.parametrize('split', ['train', 'val', 'test'])
def test_datasets_helpers_dataloader(name, shots, split):
function = getattr(helpers, name)
folder = os.getenv('TORCHMETAL_DATA_FOLDER')
download = bool(os.getenv('TORCHMETAL_DOWNLOAD', False))
> dataset = function(folder,
ways=5,
shots=shots,
test_shots=15,
meta_split=split,
download=download)
tests/utils/test_dataloaders.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
torchmetal/datasets/helpers.py:168: in tieredimagenet
return helper_with_default(TieredImagenet, folder, shots, ways,
torchmetal/datasets/helpers.py:35: in helper_with_default
dataset = klass(folder, num_classes_per_task=ways, **kwargs)
torchmetal/datasets/tieredimagenet.py:89: in __init__
dataset = TieredImagenetClassDataset(root, meta_train=meta_train,
torchmetal/datasets/tieredimagenet.py:128: in __init__
self.download()
torchmetal/datasets/tieredimagenet.py:181: in download
download_file_from_google_drive(self.gdrive_id, self.root,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
file_id = '1g1aIDy2Ar_MViF2gDXFYDBTR-HYecV07'
root = '/home/datenstrom/workspace_ssd/datasets/torchmetal/tieredimagenet'
filename = 'tiered-imagenet.tar', md5 = 'e07e811b9f29362d159a9edd0d838c62'
def download_file_from_google_drive(file_id, root, filename=None, md5=None):
"""Download a Google Drive file from and place it in root.
Args:
file_id (str): id of file to be downloaded
root (str): Directory to place downloaded file in
filename (str, optional): Name to save the file under. If None, use the id of the file.
md5 (str, optional): MD5 checksum of the download. If None, do not check
"""
# Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url
import requests
url = "https://docs.google.com/uc?export=download"
root = os.path.expanduser(root)
if not filename:
filename = file_id
fpath = os.path.join(root, filename)
os.makedirs(root, exist_ok=True)
> if os.path.isfile(fpath) and check_integrity(fpath, md5):
E NameError: name 'check_integrity' is not defined
torchmetal/datasets/utils.py:67: NameError
____________ test_datasets_helpers_dataloader[val-1-tieredimagenet] ____________
name = 'tieredimagenet', shots = 1, split = 'val'
@pytest.mark.skipif(not is_local, reason='Requires datasets downloaded locally')
@pytest.mark.parametrize('name', helpers.__all__)
@pytest.mark.parametrize('shots', [1, 5])
@pytest.mark.parametrize('split', ['train', 'val', 'test'])
def test_datasets_helpers_dataloader(name, shots, split):
function = getattr(helpers, name)
folder = os.getenv('TORCHMETAL_DATA_FOLDER')
download = bool(os.getenv('TORCHMETAL_DOWNLOAD', False))
> dataset = function(folder,
ways=5,
shots=shots,
test_shots=15,
meta_split=split,
download=download)
tests/utils/test_dataloaders.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
torchmetal/datasets/helpers.py:168: in tieredimagenet
return helper_with_default(TieredImagenet, folder, shots, ways,
torchmetal/datasets/helpers.py:35: in helper_with_default
dataset = klass(folder, num_classes_per_task=ways, **kwargs)
torchmetal/datasets/tieredimagenet.py:89: in __init__
dataset = TieredImagenetClassDataset(root, meta_train=meta_train,
torchmetal/datasets/tieredimagenet.py:128: in __init__
self.download()
torchmetal/datasets/tieredimagenet.py:181: in download
download_file_from_google_drive(self.gdrive_id, self.root,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
file_id = '1g1aIDy2Ar_MViF2gDXFYDBTR-HYecV07'
root = '/home/datenstrom/workspace_ssd/datasets/torchmetal/tieredimagenet'
filename = 'tiered-imagenet.tar', md5 = 'e07e811b9f29362d159a9edd0d838c62'
def download_file_from_google_drive(file_id, root, filename=None, md5=None):
"""Download a Google Drive file from and place it in root.
Args:
file_id (str): id of file to be downloaded
root (str): Directory to place downloaded file in
filename (str, optional): Name to save the file under. If None, use the id of the file.
md5 (str, optional): MD5 checksum of the download. If None, do not check
"""
# Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url
import requests
url = "https://docs.google.com/uc?export=download"
root = os.path.expanduser(root)
if not filename:
filename = file_id
fpath = os.path.join(root, filename)
os.makedirs(root, exist_ok=True)
> if os.path.isfile(fpath) and check_integrity(fpath, md5):
E NameError: name 'check_integrity' is not defined
torchmetal/datasets/utils.py:67: NameError
____________ test_datasets_helpers_dataloader[val-5-tieredimagenet] ____________
name = 'tieredimagenet', shots = 5, split = 'val'
@pytest.mark.skipif(not is_local, reason='Requires datasets downloaded locally')
@pytest.mark.parametrize('name', helpers.__all__)
@pytest.mark.parametrize('shots', [1, 5])
@pytest.mark.parametrize('split', ['train', 'val', 'test'])
def test_datasets_helpers_dataloader(name, shots, split):
function = getattr(helpers, name)
folder = os.getenv('TORCHMETAL_DATA_FOLDER')
download = bool(os.getenv('TORCHMETAL_DOWNLOAD', False))
> dataset = function(folder,
ways=5,
shots=shots,
test_shots=15,
meta_split=split,
download=download)
tests/utils/test_dataloaders.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
torchmetal/datasets/helpers.py:168: in tieredimagenet
return helper_with_default(TieredImagenet, folder, shots, ways,
torchmetal/datasets/helpers.py:35: in helper_with_default
dataset = klass(folder, num_classes_per_task=ways, **kwargs)
torchmetal/datasets/tieredimagenet.py:89: in __init__
dataset = TieredImagenetClassDataset(root, meta_train=meta_train,
torchmetal/datasets/tieredimagenet.py:128: in __init__
self.download()
torchmetal/datasets/tieredimagenet.py:181: in download
download_file_from_google_drive(self.gdrive_id, self.root,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
file_id = '1g1aIDy2Ar_MViF2gDXFYDBTR-HYecV07'
root = '/home/datenstrom/workspace_ssd/datasets/torchmetal/tieredimagenet'
filename = 'tiered-imagenet.tar', md5 = 'e07e811b9f29362d159a9edd0d838c62'
def download_file_from_google_drive(file_id, root, filename=None, md5=None):
"""Download a Google Drive file from and place it in root.
Args:
file_id (str): id of file to be downloaded
root (str): Directory to place downloaded file in
filename (str, optional): Name to save the file under. If None, use the id of the file.
md5 (str, optional): MD5 checksum of the download. If None, do not check
"""
# Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url
import requests
url = "https://docs.google.com/uc?export=download"
root = os.path.expanduser(root)
if not filename:
filename = file_id
fpath = os.path.join(root, filename)
os.makedirs(root, exist_ok=True)
> if os.path.isfile(fpath) and check_integrity(fpath, md5):
E NameError: name 'check_integrity' is not defined
torchmetal/datasets/utils.py:67: NameError
=============================== warnings summary ===============================
tests/modules/test_parallel.py::test_dataparallel_params_maml[model0]
/home/datenstrom/.cache/pypoetry/virtualenvs/torchmetal-hZltVvxe-py3.8/lib/python3.8/site-packages/torch/cuda/nccl.py:48: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
if not isinstance(inputs, collections.Container) or isinstance(inputs, torch.Tensor):
tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[train-1-omniglot]
tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[train-5-omniglot]
tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[val-1-omniglot]
tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[val-5-omniglot]
tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[test-1-omniglot]
tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[test-5-omniglot]
tests/utils/test_dataloaders.py::test_overflow_length_dataloader
/home/datenstrom/.cache/pypoetry/virtualenvs/torchmetal-hZltVvxe-py3.8/lib/python3.8/site-packages/torchvision/transforms/functional.py:942: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
warnings.warn(
-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================== short test summary info ============================
FAILED tests/datasets/test_datasets_helpers.py::test_datasets_helpers[train-1-tieredimagenet]
FAILED tests/datasets/test_datasets_helpers.py::test_datasets_helpers[train-5-tieredimagenet]
FAILED tests/datasets/test_datasets_helpers.py::test_datasets_helpers[val-1-tieredimagenet]
FAILED tests/datasets/test_datasets_helpers.py::test_datasets_helpers[val-5-tieredimagenet]
FAILED tests/datasets/test_datasets_helpers.py::test_datasets_helpers[test-1-tieredimagenet]
FAILED tests/datasets/test_datasets_helpers.py::test_datasets_helpers[test-5-tieredimagenet]
FAILED tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[train-1-tieredimagenet]
FAILED tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[train-5-tieredimagenet]
FAILED tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[val-1-tieredimagenet]
FAILED tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[val-5-tieredimagenet]
FAILED tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[test-1-tieredimagenet]
FAILED tests/utils/test_dataloaders.py::test_datasets_helpers_dataloader[test-5-tieredimagenet]
FAILED tests/utils/test_wrappers.py::test_datasets_helpers_wrapper[train-tieredimagenet]
FAILED tests/utils/test_wrappers.py::test_datasets_helpers_wrapper[val-tieredimagenet]
FAILED tests/utils/test_wrappers.py::test_datasets_helpers_wrapper[test-tieredimagenet]
================= 15 failed, 260 passed, 8 warnings in 15.24s ==================
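Every tieredimagenet failure above is the same root cause: `download_file_from_google_drive` in `torchmetal/datasets/utils.py` calls `check_integrity` without importing or defining it. torchvision ships an equivalent helper (`torchvision.datasets.utils.check_integrity`), so importing that at the top of `utils.py` would likely resolve the NameError directly. For illustration, a standalone sketch of what such a helper does (this re-implementation is an assumption about the intended behavior, not torchmetal's actual code):

```python
import hashlib
import os


def check_integrity(fpath, md5=None):
    """Return True if fpath exists and, when md5 is given, matches the checksum.

    Illustrative stand-in for torchvision.datasets.utils.check_integrity.
    """
    if not os.path.isfile(fpath):
        return False
    if md5 is None:
        # File exists and no checksum was requested.
        return True
    digest = hashlib.md5()
    with open(fpath, 'rb') as f:
        # Hash in 1 MiB chunks so large archives do not need to fit in memory.
        for chunk in iter(lambda: f.read(1024 * 1024), b''):
            digest.update(chunk)
    return digest.hexdigest() == md5
```

Separately, note that the test helper `bool(os.getenv('TORCHMETAL_DOWNLOAD', False))` is truthy for any non-empty value, including `TORCHMETAL_DOWNLOAD=0`; parsing the string explicitly would be more robust.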
There is currently no documentation on how to create a new dataset. Create a tutorial walking through the creation of a new dataset. This will not change the behavior of the library.
This issue is blocked on the initial CI/CD setup, which includes documentation generation and deployment.
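The core concept such a tutorial would need to explain is the episode structure the helpers construct (e.g. `ways=5`, `shots=1`, `test_shots=15` in the tests above). As a minimal, self-contained illustration, independent of torchmetal's actual class hierarchy (all names here are hypothetical), an N-way K-shot episode can be sampled from a label-to-images mapping like this:

```python
import random


def sample_episode(labels_to_images, ways=5, shots=1, test_shots=15, seed=None):
    """Sample one N-way K-shot episode from a {label: [images]} mapping.

    Illustrative sketch only -- torchmetal's real datasets wrap this logic
    in ClassDataset/MetaDataset abstractions with transforms and splits.
    """
    rng = random.Random(seed)
    # Pick `ways` classes, then relabel them 0..ways-1 within the task.
    classes = rng.sample(sorted(labels_to_images), ways)
    support, query = [], []
    for task_label, cls in enumerate(classes):
        images = rng.sample(labels_to_images[cls], shots + test_shots)
        support += [(img, task_label) for img in images[:shots]]
        query += [(img, task_label) for img in images[shots:]]
    return support, query
```

A walkthrough built around this shape (class discovery, per-split class lists, then episode sampling) would map naturally onto the existing helper functions.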