Comments (10)
In my experience, the restriction working directory = build context suggests itself for the early stages of development, because:
- Dockerfiles refer to resources (e.g. via copy/add) with paths relative to the build context
- R scripts may refer to other files with paths relative to the working directory
- R users often supply file paths relative to the R working directory
- For sessionInfo reproduction, we execute R scripts locally and create the Dockerfile in one step.
from containerit.
I'm not sure about the parameter names for 2, 3 and 5. We could also have a copy parameter that takes a list of files. I would suggest to use one copy parameter with the following options:
- "script" (the default): copies only the script file passed to the function; same as copy_script
- "script_dir": copies all files in the script's directory, including the script itself; same as copy_parent
- a list, which is treated as a list of file paths; same as copy_files

This way we do not have to handle potential conflicts between these options. And in the end, a user can always add COPY statements manually anyway.
We also need a parameter for the copy destination within the container. My suggestion is /payload as a default.
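A rough sketch of how the proposed interface could look. The parameter names (copy, copy_destination) and values are the suggestions from above, not an existing containerit API:

```r
# Hypothetical sketch of the proposed 'copy' parameter -- names and
# behavior are suggestions under discussion, not the actual API.

# Default: copy only the script itself
df <- dockerfile(scriptfile = "myscript.R", copy = "script")

# Copy everything in the script's directory, including the script
df <- dockerfile(scriptfile = "myscript.R", copy = "script_dir")

# Copy an explicit list of file paths
df <- dockerfile(scriptfile = "myscript.R",
                 copy = c("myscript.R", "data/input.csv"))

# Destination inside the container, with a suggested default
df <- dockerfile(scriptfile = "myscript.R", copy_destination = "/payload")
```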
Regarding 4: What about a helper function that creates a command instruction from the filename?
dockerfile(scriptfile = "myscript.R", ... , copy_destination= "/payload", cmd = create_cmd_runscript("myscript.R"))
[I am not sure this even works, my memory might fail me here.]
from containerit.
@nuest I updated the checklist
For the copy destination I would use either "payload/" or "/payload/".
I did the first for now, so that the files land in the R working directory instead of the file system root. Docker requires a 'folder' destination to end with a trailing slash; otherwise the data will be written to a file named "payload" and file names are not preserved.
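To illustrate the trailing-slash rule from the Dockerfile reference (file names here are made up):

```dockerfile
# Destination ends with "/": treated as a directory, file name preserved
COPY ["myscript.R", "payload/"]

# Destination without "/": treated as a file name, so the content of
# myscript.R ends up in a file literally called "payload"
COPY ["myscript.R", "payload"]
```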
from containerit.
The following R commands now reproduce the Dockerfile below. All files and folders that are in the same directory as the R script are copied to the image. The directory structure of the workspace is reproduced in the directory 'payload' (default workdir). The cmd parameter makes the same script execute when running the container with default parameters. Any suggestions?
R Code
df=dockerfile("simple_test_script_resources/simple_test.R", copy = "script_dir", cmd = CMD_Rscript("simple_test_script_resources/simple_test.R"))
Dockerfile:
FROM rocker/r-ver:3.3.2
MAINTAINER "matthiashinz"
COPY ["simple_test_script_resources", "payload/simple_test_script_resources/"]
COPY ["simple_test_script_resources/simple_test.R", "simple_test_script_resources/test_table.csv", "payload/simple_test_script_resources/"]
COPY ["simple_test_script_resources/test_subfolder", "payload/simple_test_script_resources/test_subfolder/"]
COPY ["simple_test_script_resources/test_subfolder/testresource", "payload/simple_test_script_resources/test_subfolder/"]
WORKDIR payload/
CMD ["Rscript", "--save", "simple_test_script_resources/simple_test.R"]
from containerit.
Why the --save option?
When you switch to the WORKDIR before all the COPY statements, you only have to write "payload" once. Can you make that simplification?
I do not understand the issue with default parameters. Can you give examples?
from containerit.
- The --save option saves all created R objects to an .RData file... But well, since the container normally stops after execution, I can remove it either way...
- The simplification should be no problem.
- What I mean with default parameters (it's not an issue, it's a description): if I call docker run "myImageName" (or harbor::docker_run(...)), the container will execute the script according to the CMD statement and then terminate. But the docker run command can also override that statement, e.g. using docker run "myImageName" R or docker run "myImageName" /bin/bash
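For example (the image name is made up):

```shell
# Run with the default CMD: the script executes, then the container stops
docker run myImageName

# Override the CMD to get an interactive R session instead
docker run -it myImageName R

# Or drop into a shell inside the container
docker run -it myImageName /bin/bash
```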
from containerit.
Well, I figured out by testing that the following Dockerfile does exactly the same job (it also copies the subfolder and resources). Kind of strange that this simple solution did not occur to me in the first place, even though I read the Dockerfile documentation...
FROM rocker/r-ver:3.3.2
MAINTAINER "matthiashinz"
WORKDIR payload/
COPY ["simple_test_script_resources", "simple_test_script_resources/"]
CMD ["Rscript", "simple_test_script_resources/simple_test.R"]
from containerit.
https://sysreqs.r-hub.io/
Seems to be down at the moment; I get a 502 Bad Gateway.
Edit: looks like they are changing something at the moment. I will check again tomorrow.
from containerit.
Shall we throw an error if the supplied script does not have an extension ".R" or just assume it is an R script if the extension is unknown?
What shall we do if the 'copy' parameter has a list of files and directories, but the R script is not included in this list? Do nothing?
from containerit.
Regarding extension: assume it is an R script, whatever the extension.
Regarding copy: then copy these files and directories. If the user thinks he knows better, he probably does.
from containerit.