NOTE: Some of our documentation is on the website kubestellar.io; some is here.
Imagine KubeStellar as a post office for your Kubernetes resources. When you drop packages at the post office, it doesn't open them; it delivers them to the right recipients. KubeStellar treats your Kubernetes resources the same way. Instead of running resources right away, KubeStellar safely stores them and delivers them to selected clusters across the globe—whether those clusters are in public clouds, private clouds, or on the edge of your network. It's a convenient way to spread your Kubernetes resources wherever you need them without disrupting your existing tools and workflows.
How does KubeStellar resist the temptation to run your Kubernetes resources right away? KubeStellar accepts your applied resources into a special staging area (a virtual cluster) where pods can't be created. Then, at your direction, KubeStellar transfers those resources to remote clusters, where they can create pods and any other resource dependencies they need. KubeStellar can use any of several lightweight virtual-cluster providers (Kind, KubeFlex, KCP, etc.) to create this staging area.
KubeStellar is an innovative way to stage inactive Kubernetes resources and then apply them to any cluster to run. KubeStellar introduces a native way to expand and optimize your Kubernetes deployments while protecting them from individual-cluster misconfiguration, overutilization, and failure.
Don't change anything, just add KubeStellar!
- Centrally apply Kubernetes resources for selective deployment across multiple clusters
- Use standard Kubernetes native deployment tools (kubectl, Helm, Kustomize, ArgoCD, Flux); no resource bundling required
- Discover objects dynamically created on remote clusters
- Make disconnected cluster operation possible
- Scale with 1:many and many:1 scenarios
- Remain compatible with cloud-native solutions
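As a concrete illustration of the "centrally apply for selective deployment" workflow above, delivery is driven by a `BindingPolicy` object that matches workload objects to clusters by label. The following is a minimal sketch only; the field layout follows the `control.kubestellar.io/v1alpha1` API, but the names, labels, and selectors here are hypothetical—consult the KubeStellar documentation for the authoritative schema.

```yaml
# Hypothetical sketch: deliver objects labeled app.kubernetes.io/part-of=my-app
# to every remote cluster labeled location-group=edge.
apiVersion: control.kubestellar.io/v1alpha1
kind: BindingPolicy
metadata:
  name: my-app-to-edge
spec:
  clusterSelectors:          # which remote clusters receive the workload
  - matchLabels:
      location-group: edge
  downsync:                  # which staged objects get delivered
  - objectSelectors:
    - matchLabels:
        app.kubernetes.io/part-of: my-app
```

You apply a policy like this to the staging (virtual) cluster with plain kubectl, Helm, or your GitOps tool of choice; KubeStellar then delivers the matching resources to the selected clusters.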
Much of our documentation is designed to be viewed on GitHub; see it there. Release X.Y.Z is the commit with Git tag vX.Y.Z.
TBD
We ❤️ our contributors! If you're interested in helping us out, please head over to our Contributing guide and be sure to look at main or the release of interest to you.
This community has a Code of Conduct. Please make sure to follow it.
There are several ways to communicate with us:
Instantly get access to our documents and meeting invites at http://kubestellar.io/joinus
- The #kubestellar-dev channel in the Kubernetes Slack workspace
- Our mailing lists:
  - kubestellar-dev for development discussions
  - kubestellar-users for discussions among users and potential users
- Subscribe to the community calendar for community meetings and events
  - The kubestellar-dev mailing list is subscribed to this calendar
- See recordings of past KubeStellar community meetings on YouTube
- See upcoming and past community meeting agendas and notes
- Browse the shared Google Drive to share design docs, notes, etc.
  - Members of the kubestellar-dev mailing list can view this drive
- There is an outdated documentation website; nothing there is currently valid, though some of its ideas are being re-used.
- Follow us on:
  - LinkedIn - #kubestellar
  - Medium - kubestellar.medium.com
Thanks go to these wonderful people:
We are a Cloud Native Computing Foundation sandbox project.
Kubernetes and the Kubernetes logo are registered trademarks of The Linux Foundation® (TLF).
© 2022-2024. The KubeStellar Authors.