Comments (10)
`dir2:` would be one less option; OTOH that would force users who keep producers and consumers in sync to care. So, `--dir-output-version` seems more appropriate, adding that cognitive burden only on those (probably rarer) users who need compatibility across long time horizons.
For the third option - would we have to commit to perpetually support the old versions, or could we just offer a ~12 month migration period and remove support for the old version after that point?
A request was to target old stable releases (which wouldn’t be updated with a newer consumer). I’m not sure how strong that commitment should be. @achilleas-k ?
If we start versioning now, old clients would still break, right?
I feel/think we will find solutions to add new features without breaking backwards compat.
From our side, our guarantee is that Image Builder (osbuild) can produce images for any version of RHEL, CentOS, and Fedora currently in support. In the worst case, the producer of the `dir:` (the build host) can be running the most up-to-date version of the container tools (skopeo, in our case) while the consumer (the build root for the target OS) is the oldest supported version of RHEL.
Given the lifetime of the .10 releases (5 years), that's a pretty long support lifecycle that we haven't had to deal with yet.
To answer the question directly, "perpetual" support would be nice for us, but I understand that might not be convenient or desirable for you. A shorter support lifetime on the order of a major RHEL release would be workable if we had a long enough deprecation time to work around it.
> If we start versioning now, old clients would still break, right?
> I feel/think we will find solutions to add new features without breaking backwards compat.

If the producer of a `dir` can create the old format (and drop any features of a newer format in the process), I think that would be enough for us.
For everyone's benefit here, and for posterity, a brief description of our use case:
We produce images by pulling a container into a cache using the tools on the host; the cache is then copied into the build root, and the container is pulled into the target OS image's tree using tools from the target image's repo sources. When the OS is live, the container is run using tools from the OS itself.
In other words:
1. On the host (running potentially up-to-date software) we run `skopeo copy <remote> <cache>`.
2. In the build root (running potentially old software) we run `skopeo copy <cache> <containers store>`.
3. On the live OS, the user uses podman to run the container.

The requirement here is for `skopeo` in step 1 (new) to be able to produce a `dir` that can be read by `skopeo` in step 2 (old). The compatibility between steps 2 and 3 is assumed, since they use the same versions of software.
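The steps above can be sketched as shell commands. The transport syntax (`docker://`, `dir:`, `containers-storage:`) is real skopeo syntax, but the registry name and paths are hypothetical, and the sketch dry-runs by default: `fake_skopeo` just prints the command it would execute, so swap it for the real `skopeo` binary to run the actual flow.

```shell
set -eu
# Dry run by default: a stand-in that prints the command it would execute.
# Swap fake_skopeo for the real `skopeo` binary to run the actual flow.
fake_skopeo() { echo "skopeo $*"; }

# Step 1, on the host (new tools): pull the remote image into a dir: cache.
step1=$(fake_skopeo copy docker://registry.example.com/app:latest dir:/var/cache/app)

# Step 2, in the build root (old tools): copy the cache into containers-storage,
# from which podman on the live OS runs the container (step 3).
step2=$(fake_skopeo copy dir:/var/cache/app containers-storage:registry.example.com/app:latest)

printf '%s\n%s\n' "$step1" "$step2"
```

The compatibility question in this issue is entirely about whether the `dir:` directory written in step 1 is readable by the older skopeo in step 2.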
> If we start versioning now, old clients would still break, right?

`dir` had a version number file added the first time we made a breaking change, so that would not be new. (But, I’m afraid I only noticed this just now: currently that version file is write-only, and existing clients don’t reject unknown versions. #1876 .)
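A strict consumer-side check could be as simple as comparing that version file against a known-supported value before reading anything else. A minimal sketch, with the exact version string assumed for illustration:

```shell
set -eu
# Simulate a dir: image directory as a producer might write it
# (the exact version string here is an assumption for illustration).
imgdir=$(mktemp -d)
echo "Directory Transport Version: 1.1" > "$imgdir/version"

# Sketch of a strict consumer: read the version file and refuse anything
# unknown, instead of silently proceeding as clients do today (#1876).
supported="Directory Transport Version: 1.1"
actual=$(cat "$imgdir/version")
if [ "$actual" = "$supported" ]; then
  check="ok"
else
  check="unsupported dir format: $actual"
fi
echo "$check"
```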
Fundamentally, the structure of `dir:`, producing an ~immutable artifact without any direct communication between producers and consumers, means that it must be the producer’s responsibility to create data that the consumer can support (i.e. the producer must somehow know, and determine, the version to produce).
> I feel/think we will find solutions to add new features without breaking backwards compat.

Is that basically the "Do nothing and hope that `dir:` is already good enough" approach? Or is the proposal to never alter the format of existing files, but to possibly add more files to the generated directory?
Suppose a future producer version adds new files to the generated directory. Old consumer versions don’t look for those files (and don’t fail if unexpected files are found), so they would not break, but they would also not consume the new files.
Did we, or did we not, break compatibility, by creating an image that old consumers can’t consume the way the producer understood it?
Pragmatically, that depends both on the nature of the files (are they nice icons for a GUI console, or mandatory firewall settings?) and on the nature of the operation (is the old client running the container, where losing icons is OK, or is the old client a part of the official product publishing pipeline, where losing icons is a failure to publish the intended artifact?).
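To make the lossy round-trip concrete, here is a toy simulation (the file names and contents are hypothetical, not the actual `dir:` layout): a newer producer writes an extra file, and an "old consumer" that copies only the files it knows about silently drops it.

```shell
set -eu
src=$(mktemp -d); dst=$(mktemp -d)

# Newer producer: the usual payload plus a hypothetical new file.
echo '{"schemaVersion": 2}' > "$src/manifest.json"
echo '{"seccomp": "strict"}' > "$src/sandbox-config.json"

# "Old consumer": copies only the files its version knows about; it does not
# fail on the unexpected file, it simply ignores it.
cp "$src/manifest.json" "$dst/"

# The new file is silently absent from the copy, with no error anywhere.
if [ -e "$dst/sandbox-config.json" ]; then lost="no"; else lost="yes"; fi
echo "data lost: $lost"
```

Whether that silent loss is acceptable is exactly the icons-vs-firewall-settings question above.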
It seems to me `skopeo-new copy something: dir:dir1 && skopeo-old copy dir:dir1 dir:dir2 && skopeo-new copy dir:dir2 something:` will basically always break if we add new files. Now, why anyone would do that is certainly a valid question to ask. But given the request that motivates this issue, to support `skopeo-new copy docker: dir:` followed by `skopeo-old copy dir: containers-storage:`, and the increasing prevalence of containers in multi-stage build pipelines, I’m not sure we can rule any version combination out.
> Or is the proposal to never alter the format of existing files, but to possibly add more files to the generated directory?

Yes, that’s what I had in mind. I’m not at all set on it, but I thought I’d throw the idea out to discuss.
It’s definitely a good point.

Arguably the `skopeo-old copy dir: dir:` case is not a compatibility failure at all, because `skopeo-old` not copying the new files does "exactly, and correctly, what the `skopeo-old` version was designed to do".

OTOH if the net effect is losing data, something in that sequence should at least recognize that it might happen, if not prevent it from happening. Currently we require an explicit `copy --remove-signatures` when copying signed images to transports that don’t support signatures, and dropping signatures may have less severe consequences than dropping some new sandboxing configuration or the like.
One option would be two producer options, like `--dir-consumable-by=1.1` (create data that version 1.1 can consume, possibly losing features) vs. `--dir-fully-supported-by=1.2` (fail creating the data if any part would not be recognized by version 1.2). Ugh. That feels to me like extreme overkill.
I think it's possible to make newer clients smart enough to be able to detect such potential data loss.
> OTOH if the net effect is losing data, something in that sequence should at least recognize that it might happen

Totally agree. Assuming we find a way to have some persistent metadata that doesn’t alter the digest/identity of an image, new clients should be able to detect such data loss; for instance, when the metadata claims the existence of an absent file.
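One way the detection could work, sketched with a hypothetical `contents.list` metadata file (not an existing feature) that enumerates what the producer wrote; a new client can then notice that a listed file is absent:

```shell
set -eu
imgdir=$(mktemp -d)

# Hypothetical digest-neutral producer metadata: a list of the files written.
printf '%s\n' manifest.json sandbox-config.json > "$imgdir/contents.list"
echo '{}' > "$imgdir/manifest.json"
# sandbox-config.json was dropped by an old intermediary somewhere in the chain.

# A new client cross-checks the metadata against what is actually present.
missing=""
while IFS= read -r f; do
  [ -e "$imgdir/$f" ] || missing="$missing $f"
done < "$imgdir/contents.list"
echo "missing:$missing"
```

The hard part, as noted, is keeping such metadata out of the image's digest/identity so that adding it is not itself a breaking change.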