This was on my mind today alongside the thoughts I recorded in #517. Since this issue was written we've moved away from requiring that resources and secrets live in `crossplane-system`, but many do by convention in our examples.
It does seem like concrete resources should be cluster-scoped, but given that many resource types require the API caller to specify credentials at creation time and (per #64) offer no way to read back those credentials later, I can't think of any way to avoid writing out a secret at resource creation time.
If my proposal in #518 were accepted, resources could be cluster-scoped, but they'd still need to write their credentials to the specified namespaced secret, which feels a little awkward.
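For illustration, the write-out at creation time looks roughly like this, assuming a hypothetical managed resource kind and a connection-secret reference field (the group, kind, and names are made up, and the exact field has varied across Crossplane versions):

```yaml
# Sketch: a cluster-scoped managed resource that must still write its
# credentials to a namespaced Secret at creation time (illustrative only).
apiVersion: database.example.org/v1alpha1
kind: MySQLInstance
metadata:
  name: example-db              # cluster-scoped, so no namespace
spec:
  writeConnectionSecretToRef:   # where the controller writes credentials
    name: example-db-conn
    namespace: crossplane-system
```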
In the Kubernetes PVC model, PV and StorageClass are cluster-scoped, but the PVC, the one the consumer actually interacts with, is namespaced. In this picture there are no credentials, because the volume binding operation doesn't require any. But we do require credentials for cloud resources to be used. A `Secret` resource is the first option that comes to mind for this, but that's namespaced, and it doesn't make sense for credentials to live in a namespace while the actual resource is cluster-scoped.
At the lowest level, we have a `Managed Resource` and its `Secret` in `crossplane-system`. These two are tightly coupled, so it makes sense for both to be cluster-scoped. However, `Secret` is a Kubernetes native resource that is defined as namespaced.

That said, a `Secret` is just base64-encoded data; there is nothing really secret about it. I think we can come up with a CRD that is cluster-scoped and shares the exact same fields/properties as a normal `Secret` kind. This way we'd have `Managed Resource` (the PV analogue) and `ClusterSecret` as cluster-scoped and tightly coupled. Whenever a `Claim` in namespace `user1` is bound to a `Managed Resource`, we mirror that `ClusterSecret` to a `Secret` object in namespace `user1`.
Basically, the `Managed Resource` and `ClusterSecret` will share the same lifecycle. Likewise, the `Claim` and its `Secret` will share the same lifecycle.
One could go even further and keep the credentials in a `status` field of the `Managed Resource`, mirroring the `Secret` in `user1` from those fields, but I can see a scenario where admins are able to manage every `Managed Resource` yet cannot see the content of a `ClusterSecret` due to fine-grained RBAC definitions. So keeping them separate makes more sense to me.
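As a sketch of what that mirroring could look like, assume a hypothetical cluster-scoped `ClusterSecret` kind that copies the `Secret` schema (the group, kind, and names below are illustrative, not an actual Crossplane API):

```yaml
# Hypothetical cluster-scoped ClusterSecret, written alongside the
# managed resource (illustrative only).
apiVersion: example.crossplane.io/v1alpha1
kind: ClusterSecret
metadata:
  name: mysql-instance-creds   # cluster-scoped, so no namespace
type: Opaque
data:
  username: YWRtaW4=           # base64("admin")
  password: c3VwZXJzZWNyZXQ=   # base64("supersecret")
---
# Namespaced Secret mirrored into user1 when a Claim there binds to
# the managed resource.
apiVersion: v1
kind: Secret
metadata:
  name: mysql-instance-creds
  namespace: user1
type: Opaque
data:
  username: YWRtaW4=
  password: c3VwZXJzZWNyZXQ=
```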
> However, Secret is just base64 encoded data, there is nothing really secret. I think we can come up with a CRD that is cluster-scoped and shares the exact same fields/properties with a normal Secret kind.
I had the same thought as part of my draft design document for "resource usages", which builds on Secrets in Kubernetes. I recommend taking a read of that document in #564. I decided to back away from that direction after a discussion with @bassam, the notes from which are captured in #564 (comment).
> One could go with even having the credentials in a status field of Managed Resource and mirror the Secret in user1 from these fields but I see a scenario where all admins are able to manage Managed Resource but not see the content of ClusterSecret due to fine-grained RBAC definitions. So, keeping them separate makes more sense to me.
I agree that we should not put sensitive data in the status field of managed resources. Kubernetes Secrets exist in part simply to ensure sensitive data is stored separately from the resources that consume them in order to slightly reduce the chances of you checking them into revision control or similar.
That said, today we store non-sensitive connection data like endpoints, ports, etc in both the managed resource status and its connection secret. I'd like to see us only store sensitive data in secrets.
I wasn't aware of the existing patterns around using namespaced secrets. I agree that adhering to them would be more intuitive for users. However, the linked pattern doesn't exactly match our use case.
In the Ceph RBD example, there are two secrets defined at the `StorageClass` level, meaning they encapsulate all the `Managed` resources that are produced:

```yaml
adminId: kube
adminSecretName: ceph-secret
adminSecretNamespace: kube-system
userId: kube
userSecretName: ceph-secret-user
userSecretNamespace: default
```
The admin is the user that the operator uses to authenticate in order to provision a volume. As far as I understand, the user is the one that allows mounting any volume provisioned via this `StorageClass`. In our case, the admin is provided via the `Provider` kind; the user, however, is different for each individual resource. So we can't statically point to one secret in the `ResourceClass` and be done with it. We have one secret for each `Managed` resource, and they are supposed to be copied by Crossplane to the namespace where the `Claim` lives.
If we simply have to use the `Secret` type, then `crossplane-system` makes sense. We can suggest that users set up their RBAC so that only Crossplane can access the `crossplane-system` namespace, where only `Managed` resource secrets live. Of the other namespaces that appear in the existing patterns, `kube-system` is reserved for Kubernetes itself and `default` might be too open to users.
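That RBAC suggestion could be sketched roughly as follows, assuming Crossplane runs under a `crossplane` service account (the names here are illustrative, not from the Crossplane docs):

```yaml
# Illustrative Role granting full Secret access within crossplane-system.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: connection-secrets
  namespace: crossplane-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# Bind the Role only to the Crossplane service account; ordinary users
# get no binding here, so they cannot read the managed resource secrets.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: crossplane-connection-secrets
  namespace: crossplane-system
subjects:
- kind: ServiceAccount
  name: crossplane
  namespace: crossplane-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: connection-secrets
```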
All `Managed` resources would be cluster-scoped, but their credentials would live in `crossplane-system`. When a `Claim` is made, the secret is copied from `crossplane-system` to `user1` by Crossplane. This assumes that we make the `Managed` and `ResourceClass` CRDs cluster-scoped.
I'll update this issue as I experiment with this setup.
I wanted to capture some of the recent thoughts we've had around this topic here, because our confidence that cluster-scoped managed resources are the right direction is wavering a little.
It stands to reason that we'll want to group Crossplane managed resources into logical containers. We probably want to allow some flexibility in how this is done; for example, an infrastructure administrator might want to group resources by cloud provider, by cloud region, or by cloud project. Some example scenarios:
- A "logical VPC". Any managed resources created in this logical container can communicate with each other through some as-yet undefined connectivity mechanism, even if they're all in different cloud providers and regions.
- The typical 'dev', 'staging', and 'prod' environments. All dev managed resources are created in one logical container, which has its own set of resource classes that specify what shape development managed resources should take on. An app operator could schedule a `KubernetesApplication` to the "dev environment", resulting in the application being scheduled to one of the `KubernetesCluster` managed resources in the dev logical container. Ditto for staging and production.
Kubernetes namespaces seem like a natural way to group (aka namespace) managed resources logically, so perhaps it would make sense to scrap the `crossplane-system` namespace but have managed resources continue to be namespaced for logical grouping.
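Concretely, the grouping could look like one namespace per environment, each holding its own classes and managed resources (a sketch with made-up kinds and fields, not a committed API):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
# Hypothetical resource class scoped to the dev environment; any claim
# scheduled to dev would be satisfied by resources of this shape.
apiVersion: example.crossplane.io/v1alpha1
kind: ResourceClass
metadata:
  name: standard-cluster
  namespace: dev
parameters:
  region: us-west-2
  nodeCount: "1"
```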
CC @bassam to ensure this approximates your thinking around this.
Our documentation and examples now all reflect our thinking from #92 (comment). I'm going to close this out.