
fastai / docker-containers


Docker images for fastai

Home Page: https://hub.docker.com/u/fastai

License: Apache License 2.0

Dockerfile 4.22% Python 5.08% Shell 85.14% Ruby 5.55%
docker machine-learning mlops conda data-science fastai

docker-containers's Introduction

Build CI Containers

Docker Containers For fast.ai

This repository builds various docker images used for continuous integration for fastai on a recurring schedule defined in this repo's workflow files. You must install Docker before using this project.

These Docker containers are useful for testing scenarios that require reproducibility. Some familiarity with Docker is assumed. For a gentle introduction to Docker, see this blog post.


Miscellaneous Resources & Tips

  • Save the state of a running container by first finding its container ID with docker ps. Once you have located the relevant ID, you can use docker commit to save the state of the container for later use.

  • Mount a local directory into your Docker container with the -v flag so that you can still access files that are generated inside it after you exit.

  • Read this blog post.

  • Read this book to dive deeper into Docker.
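The first two tips above can be sketched as shell commands. This is an untested sketch: the container ID, image tag, and paths are placeholders, not values from this repo, and the commands require a local Docker daemon.

```shell
# Find the ID of the running container you want to snapshot.
docker ps

# Save the container's current state as a reusable image
# (the ID and tag here are illustrative).
docker commit 3f4e5d6c7b8a my-fastai-snapshot:v1

# Mount a local directory with -v so files written under
# /workspace/output survive after the container exits.
docker run -it -v "$PWD/output:/workspace/output" fastdotai/fastai /bin/bash
```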

docker-containers's People

Contributors

alisondavey, amy-tabb, chusc123, feynmanliang, hamelsmu, jph00, keeran, rtmlp


docker-containers's Issues

course-v4 and fastbook folders inside fastcore

Hi,

In the fastdotai/fastai-course docker image, the course-v4 and fastbook folders are located inside the fastcore directory instead of the /workspace directory.

If the decision is to keep the folders inside fastcore, then please close this issue.

If not, the cause is this line in the Dockerfile located in fastai-build:

RUN /bin/bash -c "if [[ $BUILD == 'course' ]] ; then echo \"Course Build\" && cd fastai && pip install . && cd ../fastcore && pip install . && git clone https://github.com/fastai/fastbook --depth 1 && git clone https://github.com/fastai/course-v4 --depth 1; fi"
RUN echo '#!/bin/bash\njupyter notebook --ip=0.0.0.0 --port=8888 --allow-root --no-browser' >> run_jupyter.sh
COPY download_testdata.py ./
COPY extract.sh ./
RUN chmod u+x extract.sh
RUN chmod u+x run_jupyter.sh

After the pip install of the fastcore library, the script doesn't navigate back to the /workspace directory. I can make the change and raise a PR if you think this is an issue.
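A minimal sketch of one possible fix, assuming the clones should land alongside the fastai and fastcore checkouts (untested):

```dockerfile
# Hypothetical fix: cd back up before cloning, so the course repos are
# created next to fastai and fastcore rather than inside fastcore.
RUN /bin/bash -c "if [[ $BUILD == 'course' ]] ; then echo \"Course Build\" \
    && cd fastai && pip install . \
    && cd ../fastcore && pip install . \
    && cd .. \
    && git clone https://github.com/fastai/fastbook --depth 1 \
    && git clone https://github.com/fastai/course-v4 --depth 1 ; fi"
```

The only change from the quoted line is the extra cd .. before the clones.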

Thanks
Ravi

fastai/fastai:latest, CUDA & cuDNN Unavailable

Hello (again). I'm using the fastai:latest container for some neural network inferencing. (I see it was updated yesterday.) I seem to be unable to access my GPU through the container. I'm on a laptop with a GTX 3070 Max-Q and a Ryzen 9 5900HS, with Docker running on WSL2 Debian. Here is a sample Dockerfile:

FROM fastai/fastai:latest

RUN pip install --no-cache-dir --upgrade pip \
 && pip install --no-cache-dir onnxruntime-gpu opencv-python-headless

ENTRYPOINT ["/bin/bash", "-c"]

I test this using nvidia-smi and python -c "import torch; print(torch.cuda.is_available(), torch.backends.cudnn.is_available())". The Nvidia tool properly detects my hardware and the correct versions of the drivers as they are on Windows 11, but the Python statements both return False. (The pip installs may be omitted, producing the same result.)

The following containers work without modification:

docker run --rm --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
docker run -it --rm --gpus all --entrypoint /bin/bash pytorch/torchserve:latest-gpu

These show the same nvidia-smi output with the Python statements returning True.

After some searching yesterday, I figure that my fastai container has duplicate versions of some Nvidia driver(s). I will update this if I find a solution. Any suggestions or tips are appreciated.
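A quick way to isolate whether the problem is the image or the invocation is to run the same check directly against fastai/fastai:latest with GPU access requested. This is a diagnostic sketch, not a fix, and it requires a working Docker daemon:

```shell
# If this prints False while the pytorch/torchserve image prints True
# with the same flags, the fastai image's bundled torch build is the
# likely culprit rather than the host setup.
docker run --rm --gpus all fastai/fastai:latest \
  python -c "import torch; print(torch.cuda.is_available())"
```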

Jekyll image uses Ruby 3.0, which is not compatible with Jekyll

I was getting the following errors when using https://github.com/fastai/nbdev/blob/master/docker-compose.yml

jekyll_1    | Thank you for installing html-pipeline!
jekyll_1    | You must bundle Filter gem dependencies.
jekyll_1    | See html-pipeline README.md for more details.
jekyll_1    | https://github.com/jch/html-pipeline#dependencies
jekyll_1    | -------------------------------------------------
jekyll_1    | Configuration file: /data/docs/_config.yml
jekyll_1    |             Source: /data/docs
jekyll_1    |        Destination: /data/docs/_site
jekyll_1    |  Incremental build: disabled. Enable with --incremental
jekyll_1    |       Generating... 
jekyll_1    |       Remote Theme: Using theme fastai/nbdev-jekyll-theme
jekyll_1    |    GitHub Metadata: No GitHub API authentication could be found. Some fields may be missing or have incorrect data.
jekyll_1    |                     done in 1.239 seconds.
jekyll_1    | jekyll 3.9.0 | Error:  no implicit conversion of Hash into Integer
jekyll_1    | /var/lib/gems/3.0.0/gems/pathutil-0.16.2/lib/pathutil.rb:502:in `read': no implicit conversion of Hash into Integer (TypeError)
jekyll_1    |   from /var/lib/gems/3.0.0/gems/pathutil-0.16.2/lib/pathutil.rb:502:in `read'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/utils/platforms.rb:75:in `proc_version'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/utils/platforms.rb:40:in `bash_on_windows?'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/commands/build.rb:77:in `watch'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/commands/build.rb:43:in `process'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/commands/serve.rb:93:in `block in start'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/commands/serve.rb:93:in `each'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/commands/serve.rb:93:in `start'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/commands/serve.rb:75:in `block (2 levels) in init_with_program'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/mercenary-0.3.6/lib/mercenary/command.rb:220:in `block in execute'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/mercenary-0.3.6/lib/mercenary/command.rb:220:in `each'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/mercenary-0.3.6/lib/mercenary/command.rb:220:in `execute'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/mercenary-0.3.6/lib/mercenary/program.rb:42:in `go'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/mercenary-0.3.6/lib/mercenary.rb:19:in `program'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/exe/jekyll:15:in `<top (required)>'
jekyll_1    |   from /usr/local/bin/jekyll:25:in `load'
jekyll_1    |   from /usr/local/bin/jekyll:25:in `<main>'

Based on this post, Ruby 3 is not compatible with Jekyll.

https://talk.jekyllrb.com/t/error-no-implicit-conversion-of-hash-into-integer/5890

Exact version of Ruby that fastai/jekyll was using:

 docker run -it --rm  fastai/jekyll:latest
root@fddbcff7d3ad:~# ruby --version
ruby 3.0.2p107 (2021-07-07 revision 0db68f0233) [x86_64-linux-gnu]
root@fddbcff7d3ad:~#

fastai/fastai-course and RuntimeError: DataLoader worker (pid 47) is killed by signal

Hello, I am trying to set up a fastai learning environment on my Linux computer, which has an Nvidia 1080 Ti GPU, using Docker containers as pointed to on the fast.ai site. When I brought up fastai/fastai-course and ran the cell that downloads data files in 01_intro in the fastbook folder, it errored with the following message: "RuntimeError: DataLoader worker (pid 47) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit." Any help would be appreciated. I suspect it may be a limitation of the virtual storage size in the image. Should I use the fastdotai/fastai image and -v to a folder on the host computer to circumvent this error?
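The error text points at shared memory rather than storage: Docker caps /dev/shm at 64 MB by default, which PyTorch DataLoader workers easily exhaust. A hedged workaround is to raise that cap when starting the container (the 2g value is illustrative):

```shell
# Raise the shared-memory limit; --ipc=host is a common alternative.
docker run --gpus all --shm-size=2g -p 8888:8888 \
  fastdotai/fastai-course ./run_jupyter.sh
```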

How to use the pre-trained optimal model to predict my own dataset?

Hello, thank you for sharing your code and results. I would like to ask you a question about how to use your optimal model to predict my own dataset. I have converted my dataset from csv file to pt file, but when I try to use it for prediction, I encounter an error that says the dataset cannot be input to the model. Could you please give me some guidance on how to solve this problem? Thank you very much!

Please Review

@muellerzr / @jph00 can you please review this README and let me know if you have any thoughts or comments?

@giacomov please take a look and let me know your thoughts. Many thanks for your prior art which I built on top of. I put a note with a link to your name in the README.

How to get PyTorch 1.8 supported fastai docker image

Hi,

I tried pulling fastdotai/fastai:2021-02-11, which gave torch 1.7.1, while the latest tag gave torch 1.10.0.

Are there any tags that can be used to get version 1.8 specifically?

Command:
docker run --gpus 1 fastdotai/fastai:2021-02-11 python -c "import torch;print(torch.__version__)"

Thank you!

Graphviz and Course v4 Notebooks

I'm using these containers to follow along with the v4 course and had the following issues:

  1. Graphviz is not included. If you use the v4 course notebooks, you get an error telling you to install graphviz. Please consider adding it to the image; as it stands, I have to open a terminal in Jupyter and type conda install graphviz every time to get the notebooks to work.
  2. The previous course's notebooks are included instead of the current ones. This container should include the commented notebooks here: https://github.com/fastai/fastbook and the uncommented notebooks here: https://github.com/fastai/course-v4 instead of the course v3 notebooks.
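Until graphviz ships in the image, a small derived image saves re-installing it every session. This is a sketch; it assumes the image uses conda, as the terminal workaround above suggests:

```dockerfile
FROM fastdotai/fastai-course:latest

# Bake graphviz into the image instead of conda-installing it each run.
RUN conda install -y graphviz
```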

https://hub.docker.com/r/fastdotai/fastai-course:latest not pulling fastbook

When I pull the docker image from:

https://hub.docker.com/r/fastdotai/fastai-course:latest

The fastbook directory and all its contents are missing. Am I misunderstanding that it should have that directory?

Edit: Please disregard. I used sudo docker pull fastdotai/fastai-course to pull the image, but then to start the container I copy-pasted docker run --gpus all -p 8888:8888 fastdotai/fastai ./run_jupyter.sh, which pulled the non-course image and started that instead, which explains why there was no course material. Apologies!

Containers cannot access network resources, causing example code to fail

When going through the fastbook I cannot execute commands because of connection issues.

The command I used to launch the docker container:

docker run --gpus all -p 8888:8888 fastdotai/fastai-course ./run_jupyter.sh


ERRORS:

# 01_intro.ipynb
#hide
!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()

causes this error:

WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f52023b2130>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/fastbook/
ERROR: Could not find a version that satisfies the requirement fastbook (from versions: none)
ERROR: No matching distribution found for fastbook

further down

#id first_training
#caption Results from the first training
from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)

causes

ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /fast-ai-imageclas/oxford-iiit-pet.tgz (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fd3306788b0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
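Both tracebacks end in [Errno -3] Temporary failure in name resolution, i.e. DNS lookups fail inside the container; nothing here is fastai-specific. One common workaround, assuming the Docker daemon's DNS configuration is the culprit, is to pass an explicit resolver (8.8.8.8 is illustrative):

```shell
# Force a known-good DNS server for this container only.
docker run --gpus all --dns 8.8.8.8 -p 8888:8888 \
  fastdotai/fastai-course ./run_jupyter.sh
```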

Could not find kaggle.json. Make sure it's located in /root/.kaggle. Or use the environment method.

I'm running the following Docker container:

REPOSITORY                                               TAG                             IMAGE ID
fastdotai/fastai-course                                  latest                          020ece1e8fca

The notebook notebooks/fastbook/09_tabular.ipynb produces an error when running the cell

#hide
from fastbook import *
from kaggle import api
from pandas.api.types import is_string_dtype, is_numeric_dtype, is_categorical_dtype
from fastai.tabular.all import *
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from dtreeviz.trees import *
from IPython.display import Image, display_svg, SVG

pd.options.display.max_rows = 20
pd.options.display.max_columns = 8

... the error:

---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
/tmp/ipykernel_126/1277492647.py in <module>
      1 #hide
      2 from fastbook import *
----> 3 from kaggle import api
      4 from pandas.api.types import is_string_dtype, is_numeric_dtype, is_categorical_dtype
      5 from fastai.tabular.all import *

/opt/conda/lib/python3.8/site-packages/kaggle/__init__.py in <module>
     21 
     22 api = KaggleApi(ApiClient())
---> 23 api.authenticate()

/opt/conda/lib/python3.8/site-packages/kaggle/api/kaggle_api_extended.py in authenticate(self)
    162                 config_data = self.read_config_file(config_data)
    163             else:
--> 164                 raise IOError('Could not find {}. Make sure it\'s located in'
    165                               ' {}. Or use the environment method.'.format(
    166                                   self.config_file, self.config_dir))

OSError: Could not find kaggle.json. Make sure it's located in /root/.kaggle. Or use the environment method.
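The Kaggle client looks for credentials at /root/.kaggle/kaggle.json inside the container, which the image cannot ship. Two hedged ways to supply them at run time (paths and values are placeholders):

```shell
# Option 1: mount your host credentials directory read-only.
docker run --gpus all -p 8888:8888 \
  -v "$HOME/.kaggle:/root/.kaggle:ro" \
  fastdotai/fastai-course ./run_jupyter.sh

# Option 2: the "environment method" the error message mentions.
docker run --gpus all -p 8888:8888 \
  -e KAGGLE_USERNAME=your_username -e KAGGLE_KEY=your_key \
  fastdotai/fastai-course ./run_jupyter.sh
```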

CUDA broken?

Hi

I recently updated my docker image to use the latest (March 2021) build, and it seems to have broken CUDA (11.3). I am able to run fastai inside docker, but it complains that PyTorch hasn't been compiled for CUDA.

I checked whether CUDA is available, and it no longer is with the (March 2021) build.

My previous docker image works; I checked the previous version (Dec 2020) and it is working fine.

There are a few changes in the new version

  • replacing the pytorch/pytorch with fastai/fastai (imports).
  • apt-get and pip removal.

I am using Amy-Tabb's Dockerfile, which builds on the fastdotai Dockerfile.

I was wondering if you could provide some advice on what may be happening.

Thanks

fastai/fastai image no longer comes with fastai installed

Ironically, the current fastai/fastai docker image doesn't come with the fastai library. The same is probably true for the fastai/codespaces image, but I haven't checked it.

This is due to lines introduced in commits ec9a4cb and a6ac6ff, where script.sh attempts to install a package called neptune using pip:

pip install ipywidgets fastai neptune wandb pydicom tensorboard captum

PyPI doesn't contain a package called neptune (maybe the intention was neptune-cli?), so none of the packages are installed.
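As the report notes, the single unknown name causes the entire pip command to fail. A hedged correction is simply to drop it; whether neptune-cli (or some other package) was intended is a guess best left to the maintainers:

```shell
# Same line as in script.sh, minus the nonexistent "neptune" package.
pip install ipywidgets fastai wandb pydicom tensorboard captum
```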

05_pet_breeds.ipynb: RuntimeError: stack expects each tensor to be equal size

I'm running the following Docker container:

REPOSITORY                                               TAG                             IMAGE ID
fastdotai/fastai-course                                  latest                          020ece1e8fca

The notebook notebooks/fastbook/05_pet_breeds.ipynb produces an error when running the following cell:

#hide_output
pets1 = DataBlock(blocks = (ImageBlock, CategoryBlock),
                 get_items=get_image_files, 
                 splitter=RandomSplitter(seed=42),
                 get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'))
pets1.summary(path/"images")

The error is:

Setting-up type transforms pipelines
Collecting items from /root/.fastai/data/oxford-iiit-pet/images
Found 7390 items
2 datasets of sizes 5912,1478
Setting up Pipeline: PILBase.create
Setting up Pipeline: partial -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}

Building one sample
  Pipeline: PILBase.create
    starting from
      /root/.fastai/data/oxford-iiit-pet/images/english_cocker_spaniel_145.jpg
    applying PILBase.create gives
      PILImage mode=RGB size=391x500
  Pipeline: partial -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}
    starting from
      /root/.fastai/data/oxford-iiit-pet/images/english_cocker_spaniel_145.jpg
    applying partial gives
      english_cocker_spaniel
    applying Categorize -- {'vocab': None, 'sort': True, 'add_na': False} gives
      TensorCategory(18)

Final sample: (PILImage mode=RGB size=391x500, TensorCategory(18))


Collecting items from /root/.fastai/data/oxford-iiit-pet/images
Found 7390 items
2 datasets of sizes 5912,1478
Setting up Pipeline: PILBase.create
Setting up Pipeline: partial -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}
Setting up after_item: Pipeline: ToTensor
Setting up before_batch: Pipeline: 
Setting up after_batch: Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1}

Building one batch
Applying item_tfms to the first sample:
  Pipeline: ToTensor
    starting from
      (PILImage mode=RGB size=391x500, TensorCategory(18))
    applying ToTensor gives
      (TensorImage of size 3x500x391, TensorCategory(18))

Adding the next 3 samples

No before_batch transform to apply

Collating items in a batch
Error! It's not possible to collate your items in a batch
Could not collate the 0-th members of your tuples because got the following shapes
torch.Size([3, 500, 391]),torch.Size([3, 374, 500]),torch.Size([3, 375, 500]),torch.Size([3, 279, 300])
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_45/2530811607.py in <module>
      4                  splitter=RandomSplitter(seed=42),
      5                  get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'))
----> 6 pets1.summary(path/"images")

/opt/conda/lib/python3.8/site-packages/fastai/data/block.py in summary(self, source, bs, show_batch, **kwargs)
    188         why = _find_fail_collate(s)
    189         print("Make sure all parts of your samples are tensors of the same size" if why is None else why)
--> 190         raise e
    191 
    192     if len([f for f in dls.train.after_batch.fs if f.name != 'noop'])!=0:

/opt/conda/lib/python3.8/site-packages/fastai/data/block.py in summary(self, source, bs, show_batch, **kwargs)
    182     print("\nCollating items in a batch")
    183     try:
--> 184         b = dls.train.create_batch(s)
    185         b = retain_types(b, s[0] if is_listy(s) else s)
    186     except Exception as e:

/opt/conda/lib/python3.8/site-packages/fastai/data/load.py in create_batch(self, b)
    141         elif s is None:  return next(self.it)
    142         else: raise IndexError("Cannot index an iterable dataset numerically - must use `None`.")
--> 143     def create_batch(self, b): return (fa_collate,fa_convert)[self.prebatched](b)
    144     def do_batch(self, b): return self.retain(self.create_batch(self.before_batch(b)), b)
    145     def to(self, device): self.device = device

/opt/conda/lib/python3.8/site-packages/fastai/data/load.py in fa_collate(t)
     48     b = t[0]
     49     return (default_collate(t) if isinstance(b, _collate_types)
---> 50             else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)
     51             else default_collate(t))
     52 

/opt/conda/lib/python3.8/site-packages/fastai/data/load.py in <listcomp>(.0)
     48     b = t[0]
     49     return (default_collate(t) if isinstance(b, _collate_types)
---> 50             else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)
     51             else default_collate(t))
     52 

/opt/conda/lib/python3.8/site-packages/fastai/data/load.py in fa_collate(t)
     47     "A replacement for PyTorch `default_collate` which maintains types and handles `Sequence`s"
     48     b = t[0]
---> 49     return (default_collate(t) if isinstance(b, _collate_types)
     50             else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)
     51             else default_collate(t))

/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py in default_collate(batch)
     53             storage = elem.storage()._new_shared(numel)
     54             out = elem.new(storage)
---> 55         return torch.stack(batch, 0, out=out)
     56     elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
     57             and elem_type.__name__ != 'string_':

/opt/conda/lib/python3.8/site-packages/fastai/torch_core.py in __torch_function__(self, func, types, args, kwargs)
    338         convert=False
    339         if _torch_handled(args, self._opt, func): convert,types = type(self),(torch.Tensor,)
--> 340         res = super().__torch_function__(func, types, args=args, kwargs=kwargs)
    341         if convert: res = convert(res)
    342         if isinstance(res, TensorBase): res.set_meta(self, as_copy=True)

/opt/conda/lib/python3.8/site-packages/torch/tensor.py in __torch_function__(cls, func, types, args, kwargs)
    960 
    961         with _C.DisableTorchFunction():
--> 962             ret = func(*args, **kwargs)
    963             return _convert(ret, cls)
    964 

RuntimeError: stack expects each tensor to be equal size, but got [3, 500, 391] at entry 0 and [3, 374, 500] at entry 1
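For context, this failure is expected at that point in the notebook: pets1 deliberately sets no item_tfms, so differently sized images reach the collate step and torch.stack fails. The later cells fix it by adding a resize; a sketch of the repaired block (assumes path points at the pets dataset as in the notebook, and is not runnable without it):

```python
from fastai.vision.all import *

# Resize every item to a common size before batching so collation succeeds;
# Resize(460) matches the presizing value the notebook uses later.
pets2 = DataBlock(blocks=(ImageBlock, CategoryBlock),
                  get_items=get_image_files,
                  splitter=RandomSplitter(seed=42),
                  get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
                  item_tfms=Resize(460))
pets2.summary(path/"images")
```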

fastdotai versus fastai username at Docker Hub

I'm playing around with these Docker images; by mistake I pulled fastai/fastai instead of fastdotai/fastai.

There are links to both from this repo: to fastdotai in the README and, in the sidebar, to fastai, with more occurrences of fastdotai in the README.

Which is the preferred username to pull from? Thanks.

fastai/fastai:latest, apt-get update, GitHub PPA missing Release file for Ubuntu Jammy

Hello!

I'm using the Docker container fastai/fastai:latest for a project I'm working on. I need to install the onnxruntime-gpu and opencv-python-headless packages using pip. I thought I needed some extra dependencies from apt, so I attempted to update the package list before installing, but this threw an error.

Dockerfile:

FROM fastai/fastai:latest

RUN apt-get update

ENTRYPOINT ["/bin/bash", "-c"]

This fails as so:

Terminal Output
user@computer:/mnt/c/Users/user/Desktop/test$ docker build -t test .
[+] Building 5.8s (6/6) FINISHED
 => [internal] load build definition from Dockerfile                                                              0.0s
 => => transferring dockerfile: 204B                                                                              0.0s
 => [internal] load .dockerignore                                                                                 0.0s
 => => transferring context: 2B                                                                                   0.0s
 => [internal] load metadata for docker.io/fastai/fastai:latest                                                   0.7s
 => CACHED [1/3] FROM docker.io/fastai/fastai:latest@sha256:2cf8a1564a65c14dd90670e4a5796cb24352f9d27676932dc2c3  0.0s
 => [2/3] WORKDIR /app                                                                                            0.0s
 => ERROR [3/3] RUN apt-get update                                                                                5.1s
------
 > [3/3] RUN apt-get update:
#5 0.445 Ign:1 https://cli.github.com/packages jammy InRelease
#5 0.464 Err:2 https://cli.github.com/packages jammy Release
#5 0.464   404  Not Found [IP: 185.199.111.153 443]
#5 0.481 Get:3 http://archive.ubuntu.com/ubuntu jammy InRelease [270 kB]
#5 0.603 Get:4 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
#5 0.750 Get:5 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [114 kB]
#5 0.814 Get:6 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [99.8 kB]
#5 0.877 Get:7 http://archive.ubuntu.com/ubuntu jammy/multiverse amd64 Packages [266 kB]
#5 0.936 Get:8 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages [1792 kB]
#5 1.147 Get:9 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [322 kB]
#5 1.151 Get:10 http://archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [17.5 MB]
#5 2.293 Get:11 http://security.ubuntu.com/ubuntu jammy-security/multiverse amd64 Packages [4644 B]
#5 2.303 Get:12 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [132 kB]
#5 2.628 Get:13 http://archive.ubuntu.com/ubuntu jammy/restricted amd64 Packages [164 kB]
#5 2.640 Get:14 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [650 kB]
#5 2.695 Get:15 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [265 kB]
#5 2.717 Get:16 http://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [363 kB]
#5 2.748 Get:17 http://archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 Packages [7791 B]
#5 2.749 Get:18 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [6909 B]
#5 3.658 Get:19 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [340 kB]
#5 4.567 Reading package lists...
#5 5.019 E: The repository 'https://cli.github.com/packages jammy Release' does not have a Release file.
------
executor failed running [/bin/sh -c apt-get update]: exit code: 100

The build fails because the GitHub CLI PPA has no release file for Ubuntu 22.04 LTS ‘Jammy Jellyfish.’ In the Dockerfile, I remove this repository before pulling updates, which fixes the build error.

Dockerfile:

FROM fastai/fastai:latest

RUN add-apt-repository -r https://cli.github.com/packages
RUN apt-get update

ENTRYPOINT ["/bin/bash", "-c"]

I had one of those moments where the original problem was fixed while I worked on this nested problem. I thought I should bring it to your attention nonetheless.

I'd also like to say: I've been using fastai for over a year now. Many thanks for developing such a great module along with fastbook and nbdev. They've all been great assets as I've learned about neural networks and modern software development.
