soda-cdm / kahu
Kahu is part of SODA Container Data Management (CDM). Kahu provides seamless backup/restore for Kubernetes resources and data.
License: Apache License 2.0
Please add MetadataLocation as a required field.
Originally posted by @PravinRanjan10 in #49 (comment)
Issue/Feature Description:
Kahu supports a restore API for the backup data.
The API should support resource restore.
Why this issue to fixed / feature is needed(give scenarios or use cases):
The feature is needed to restore backed-up resources.
How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
To back up resources of a specific Kind in a specific order, it is better to provide an option --ordered-resources to specify a mapping from Kinds to an ordered list of specific resources of that Kind.
Resource names are separated by commas and are in the format 'namespace/resourcename'.
For a cluster-scoped resource, simply use the resource name. Key-value pairs in the mapping are separated by semicolons.
Kind names are in plural form.
Why this issue to fixed / feature is needed(give scenarios or use cases):
The user may want to back up pod-1 before pod-2, or to respect an order based on the user's application dependencies.
How to reproduce, in case of a bug: NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
We can add the below field in the BackupSpec struct and handle it accordingly:
// OrderedResources specifies the backup order of resources of a specific Kind.
// The map key is the plural resource Kind and the value is a list of object names separated by commas.
// Each object name has the format "namespace/objectname". For cluster-scoped resources, simply use "objectname".
// +optional
// +nullable
OrderedResources map[string]string `json:"orderedResources,omitempty"`
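As a sketch of how the flag value described above could be parsed (semicolon-separated key-value pairs, comma-separated resource names), here is a minimal Go helper; parseOrderedResources is a hypothetical name for illustration, not existing Kahu code.

```go
package main

import (
	"fmt"
	"strings"
)

// parseOrderedResources parses a flag value such as
//   "pods=ns1/pod-1,ns1/pod-2;persistentvolumes=pv-1"
// into a map from plural Kind name to an ordered list of resource names.
// Namespaced resources keep the "namespace/name" form; cluster-scoped
// resources are plain names.
func parseOrderedResources(value string) (map[string][]string, error) {
	result := make(map[string][]string)
	for _, pair := range strings.Split(value, ";") {
		if pair == "" {
			continue
		}
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 {
			return nil, fmt.Errorf("invalid key-value pair %q", pair)
		}
		kind := strings.TrimSpace(kv[0])
		var names []string
		for _, name := range strings.Split(kv[1], ",") {
			if name = strings.TrimSpace(name); name != "" {
				names = append(names, name)
			}
		}
		result[kind] = names
	}
	return result, nil
}

func main() {
	m, err := parseOrderedResources("pods=ns1/pod-1,ns1/pod-2;persistentvolumes=pv-1")
	if err != nil {
		panic(err)
	}
	fmt.Println(m["pods"])              // [ns1/pod-1 ns1/pod-2]
	fmt.Println(m["persistentvolumes"]) // [pv-1]
}
```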
Issue/Feature Description:
Expose gRPC interfaces for the meta service.
Why this issue to fixed / feature is needed(give scenarios or use cases):
The meta service manages backup and restore of the metadata of resources being backed up. The exposed interfaces are used by the backup and restore controllers for the backup and restore of resource metadata.
How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
NA
Issue/Feature Description:
Add missing deepcopy autogenerated code for restore list CRD
Why this issue to fixed / feature is needed(give scenarios or use cases):
A kubebuilder tag was used instead of deepcopy tags.
How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
NA
Issue/Feature Description:
Testcase Precondition:
1. In step 2, verify that
a) Backup stage is Finished and State is Completed
b) The resource list in status shows all the required resources
2. In step 3, verify that
a) A tar file is created with the name of the backup
b) After untarring the file, all deployments and pods are backed up; note that a pod which has an owner reference will not be backed up
3. In step 4, verify that
a) No pods of test-ns1 and test-ns2 are restored
Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase which is to be automated to make sure a pod is properly backed up and restored.
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
@kalaiselvikks76
Issue/Feature Description:
Why this issue to fixed / feature is needed(give scenarios or use cases):
Controller with business logic which includes:
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
When a backup is performed in Kahu for CSI volumes, CSI volume snapshots are created through the CSI drivers.
Kahu also creates its own volume snapshot CRDs to keep track of the backup.
When the backup is deleted, these snapshot objects are expected to be deleted, but in the current flow they still exist after backup delete.
Expectation: these snapshot objects should be deleted during backup delete.
Why this issue to fixed / feature is needed(give scenarios or use cases):
This issue will result in a lot of stale objects being present in the system, which will consume unnecessary system resources.
How to reproduce, in case of a bug:
1) Kahu is deployed with the NFS provider and the OpenEBS ZFS provider
2) A backup is created for an application having a CSI volume provisioned from OpenEBS ZFS
3) The backup is deleted, but the snapshot objects are still not deleted
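The expected cleanup described above amounts to deleting every Kahu snapshot object that belongs to the deleted backup. Below is a minimal Go sketch of that selection logic over plain structs; the label key kahu.io/backup-name and all names here are assumptions for illustration, not the actual Kahu implementation.

```go
package main

import "fmt"

// Snapshot is a minimal stand-in for Kahu's volume snapshot CRD object.
type Snapshot struct {
	Name   string
	Labels map[string]string
}

// hypothetical label key linking a snapshot to its backup
const backupNameLabel = "kahu.io/backup-name"

// cleanupSnapshots returns the snapshots that survive deletion of the
// given backup: every snapshot labelled with that backup name is dropped
// (in the real controller, a Delete call against the API server).
func cleanupSnapshots(snaps []Snapshot, backup string) []Snapshot {
	var kept []Snapshot
	for _, s := range snaps {
		if s.Labels[backupNameLabel] == backup {
			continue // would delete this snapshot object here
		}
		kept = append(kept, s)
	}
	return kept
}

func main() {
	snaps := []Snapshot{
		{Name: "snap-1", Labels: map[string]string{backupNameLabel: "backup-a"}},
		{Name: "snap-2", Labels: map[string]string{backupNameLabel: "backup-b"}},
	}
	fmt.Println(len(cleanupSnapshots(snaps, "backup-a"))) // 1
}
```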
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
The current make in the Kahu build runs goimports but does not check the install part of goimports.
Go tool versions can be maintained in the go.mod file, through which a specific version of goimports can be used.
The goimports version update can then be done on a need basis and after analysis.
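One common way to pin tool versions in go.mod is the "tools.go" pattern; the sketch below assumes goimports is the tool being pinned and is illustrative only.

```go
//go:build tools
// +build tools

// Package tools pins build-time tool dependencies in go.mod.
// The file is never compiled into the binaries (the "tools" build
// tag is never set); it only makes go.mod track the tool's version,
// so `go install golang.org/x/tools/cmd/goimports` uses that version.
package tools

import (
	_ "golang.org/x/tools/cmd/goimports"
)
```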
Why this issue to fixed / feature is needed(give scenarios or use cases):
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
Why this issue to fixed / feature is needed(give scenarios or use cases):
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
Kahu has a storage provider framework to dynamically plug in any provider to perform either metadata or volume backup.
By default, Kahu supports the NFS provider for metadata backup.
This issue is opened to discuss integrating an S3-based storage backend with Kahu.
This involves:
Analysis of all the interfaces provided by the metadata service
Development of a provider realising all of the above interfaces to interact with S3
The NFS provider can be used as a reference for this development.
MinIO can be used as one of the options to verify this integration.
Why this issue to fixed / feature is needed(give scenarios or use cases):
This integration will help to seamlessly use any S3-based backend with Kahu.
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
Testcase Precondition:
kind: Deployment
isRegex: true
excludeResources:
Feature Description:
LINSTOR is open source software designed to manage block storage devices for large Linux server clusters.
LINSTOR has a CSI plugin which bridges between Kubernetes and LINSTOR's replicated block storage.
Reference link: https://linbit.com/blog/linstor-csi-plugin-for-kubernetes/
This issue is opened to discuss/handle the integration of the LINSTOR CSI driver with Kahu, so that volumes provisioned through a LINSTOR backend can be backed up and restored as needed.
This can be achieved through the below stages:
Why this issue to fixed / feature is needed(give scenarios or use cases):
This integration will help provide backup/restore for LINSTOR-provisioned volumes.
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
Test Case Precondition:
Test Case steps:
Expected Result:
Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase which is to be automated to make sure a deployment is properly backed up and restored.
How to reproduce, in case of a bug:
Other Notes / Environment Information: Please give the env information, log link or any useful information for this issue
Issue/Feature Description:
Currently the Kahu user (application admin) has to create backups manually.
So here I am proposing a feature/requirement, i.e. schedule-based backups, so that the Kahu system schedules backups from a user-provided backup policy or schedule policy.
The main requirements are as follows:
The Kahu system will provide schedule-based backups.
Currently hooks, resource filtering, etc. are tightly coupled with backup & restore, so the plan is to abstract such resources so they can be used across the system. In particular, hooks are related to a particular application or microservice, so it is better to separate them from backup; this makes them much easier to enhance and manage.
So the below abstractions will be added to data protection management:
a) ResourceSet: identifies the group of applications, or related applications, that the user wants to back up
b) Hooks
c) Schedule Policy
The schedule service will take backups based on the backup policy or schedule policy provided by the application admin.
Schedule-based data protection management may need job management to create the scheduled backups.
The user can use any backup copy to restore.
Please find below the link to the running document related to these requirements (WIP):
https://docs.google.com/document/d/11wOj1EGTveBMtD5IKipx8EGhc2bn99O3/edit#heading=h.gjdgxs
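A hypothetical shape for such a schedule policy and its binding to a backup might look like the following; every kind and field name here is an illustrative assumption based on the proposal above, not an agreed Kahu API.

```yaml
# Hypothetical shape only: kinds and fields are illustrative assumptions.
apiVersion: kahu.io/v1beta1
kind: SchedulePolicy
metadata:
  name: daily-backup-policy
spec:
  # cron-style schedule: every day at 01:00
  schedule: "0 1 * * *"
  # how many backup copies to retain before pruning the oldest
  retention:
    maxCopies: 7
---
apiVersion: kahu.io/v1beta1
kind: Backup
metadata:
  name: scheduled-app-backup
spec:
  metadataLocation: nfs
  includeNamespaces: [app-ns]
  # hypothetical binding of the backup to the schedule policy
  schedulePolicy: daily-backup-policy
```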
Why this issue to fixed / feature is needed(give scenarios or use cases):
With schedule-based backups, the user creates a policy once and binds it to the backup, so the user no longer needs to worry about backups.
The user can use backup copies to restore based on their requirements.
How to reproduce, in case of a bug: NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
What about validation errors, warnings, etc.?
Originally posted by @PravinRanjan10 in #28 (comment)
Issue/Feature Description:
This issue describes the addition of an identity service for providers.
The identity service will be similar to the identity service provided by CSI for CSI drivers.
Below are the identity service interfaces:
service Identity {
  rpc GetProviderInfo(GetProviderInfoRequest)
    returns (GetProviderInfoResponse) {}
  rpc GetProviderCapabilities(GetProviderCapabilitiesRequest)
    returns (GetProviderCapabilitiesResponse) {}
  rpc Probe(ProbeRequest)
    returns (ProbeResponse) {}
}
These interfaces will be called by either the metadata service or the volume service, and the respective providers will be registered.
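The gRPC surface above can be mirrored as a plain Go interface for discussion. In the sketch below the message types, the toy nfsProvider, and the registerProvider helper are simplified stand-ins for illustration, not generated stubs or actual Kahu code.

```go
package main

import (
	"errors"
	"fmt"
)

// ProviderInfo is a simplified stand-in for GetProviderInfoResponse.
type ProviderInfo struct {
	Name    string
	Version string
}

// Identity mirrors the three RPCs of the proposed identity service.
type Identity interface {
	GetProviderInfo() (ProviderInfo, error)
	GetProviderCapabilities() ([]string, error)
	Probe() (bool, error)
}

// nfsProvider is a toy implementation used only for illustration.
type nfsProvider struct{}

func (nfsProvider) GetProviderInfo() (ProviderInfo, error) {
	return ProviderInfo{Name: "nfs-provider", Version: "0.1.0"}, nil
}
func (nfsProvider) GetProviderCapabilities() ([]string, error) {
	return []string{"metadata-backup"}, nil
}
func (nfsProvider) Probe() (bool, error) { return true, nil }

// registerProvider models what the metadata/volume service would do:
// probe the provider and record it only if it reports ready.
func registerProvider(registry map[string]Identity, p Identity) error {
	ready, err := p.Probe()
	if err != nil || !ready {
		return errors.New("provider not ready")
	}
	info, err := p.GetProviderInfo()
	if err != nil {
		return err
	}
	registry[info.Name] = p
	return nil
}

func main() {
	registry := map[string]Identity{}
	if err := registerProvider(registry, nfsProvider{}); err != nil {
		panic(err)
	}
	fmt.Println(len(registry)) // 1
}
```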
Why this issue to fixed / feature is needed(give scenarios or use cases):
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
The issue captures high-level ideas for a Go framework for testing Kahu APIs. The framework, referred to as e2e-framework, provides ways to easily define test functions that can programmatically test Kahu APIs. The two overall goals of the new framework are to allow developers to quickly and easily assemble end-to-end tests and to provide a collection of support packages to help with interacting with the K8s API server.
Issue/Feature Description:
Why this issue to fixed / feature is needed(give scenarios or use cases):
There should be a proper test framework which makes it easy for developers to write testcases.
How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
NA
Issue/Feature Description:
Why this issue to fixed / feature is needed(give scenarios or use cases):
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
Integrate restic (for more information see https://restic.net/) into Kahu, so that we can move the data from the source/production storage to the target storages/providers.
Why this issue to fixed / feature is needed(give scenarios or use cases):
Currently Kahu doesn't support the data movement feature.
So we can integrate https://restic.net/ and move the data from the source storage volumes of particular applications.
How to reproduce, in case of a bug: NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
https://restic.net/
https://restic.net/blog/2022-08-25/restic-0.14.0-released/
Issue/Feature Description:
Adding support for a generic controller.
Why this issue to fixed / feature is needed(give scenarios or use cases):
Support a generic controller.
How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
NA
Testcase Precondition:
Testcase steps:
Expected Result:
In step 2, verify that
a) Backup stage is Finished and State is Completed
b) The resource list in status shows all the required resources
In step 3, verify that
a) A tar file is created with the name of the backup
b) After untarring the file, all deployments are backed up
In step 4, verify that
a) The backed-up deployment is up in namespace test-ns
Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase which is to be automated to make sure a deployment is properly backed up and restored.
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Please add a README file with all the support provided by this Makefile.
Originally posted by @sushanthakumar in #26 (comment)
Issue/Feature Description:
With the addition of Kahu CSI snapshot support, the Kahu service controller will watch CSI snapshot CRD objects to identify volume backup completion.
In the current Kahu deployment, CSI snapshot CRDs are not installed by default.
The Kahu service will fail to watch the necessary CRD objects and act upon them, which will impact backup/restore functionality.
Why this issue to fixed / feature is needed(give scenarios or use cases):
So the snapshot CRDs should be deployed as a prerequisite. This will need a deploy guide update as well.
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
Why this issue to fixed / feature is needed(give scenarios or use cases):
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
Testcase Precondition:
name: backup-Kahu-0001
spec:
includeNamespaces: [test-ns1,test-ns2]
metadataLocation: nfs
includeResources:
Issue/Feature Description:
Adding support for client, lister, informer, and deepcopy code generation for the supported CRDs.
Why this issue to fixed / feature is needed(give scenarios or use cases):
The issue can handle autogeneration of client code
How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
Why this issue to fixed / feature is needed(give scenarios or use cases):
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
This issue contains the proto file changes for the identity and metadata provider interfaces.
Two services are considered here.
Pending interface:
Volume interfaces: interfaces to handle backup/restore functionality for persistent volumes
Attachment:
providerservice.proto.txt
Why this issue to fixed / feature is needed(give scenarios or use cases):
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
The KAHU project under the soda-cdm organisation deals with the data protection aspects of container data management on Kubernetes.
Currently Kahu supports different volume backup and backup location providers with the provider framework. The framework provides a plugin approach to integrate Volume/BackupLocation drivers at runtime.
Currently Volume/BackupLocation drivers are deployed either as a "Deployment" or a "Statefulset". The solution works perfectly if the driver uses APIs to connect with the respective servers. But it would not work
Why this issue to fixed / feature is needed(give scenarios or use cases):
Data protection is a scheduled operation, so running a driver process and waiting for a backup/restore request to process is a waste of memory and CPU cycles.
An alternative approach is to accept the volume backup driver and backup location driver as a Pod template and schedule a Kubernetes Job to process each backup/restore request.
The issue is submitted to discuss this alternate approach (Job-based data protection) to handle data protection.
How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Initial draft is compiled here
Test Case Precondition:
Test Case steps:
Expected Result:
Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase which is to be automated to make sure a deployment is properly backed up and restored.
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
Why this issue to fixed / feature is needed(give scenarios or use cases):
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
When a pod, deployment, daemonset or statefulset is associated with a ConfigMap or a Secret and backed up, the ConfigMaps and Secrets are not backed up along with the backed-up resource.
Why this issue to fixed / feature is needed(give scenarios or use cases):
ConfigMaps and Secrets should be backed up along with the pod so that, when restored, the pods come up.
How to reproduce, in case of a bug:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-val-config
spec:
  containers:
    - name: test-container
      image: registry.k8s.io/busybox
      command: ['sh', '-c', 'echo The app is running! && sleep 3600']
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: SPECIAL_LEVEL
        - name: SPECIAL_TYPE_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: SPECIAL_TYPE
  restartPolicy: Never
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Bring up these yamls, then try to back up the pod above using the below yaml:
apiVersion: kahu.io/v1beta1
kind: Backup
metadata:
  name: backup-demo-resource-pod-with-config
spec:
  includeNamespaces: [default]
  metadataLocation: nfs
  includeResources:
    - name: dapi-test-pod-val-config
      kind: Pod
      isRegex: false
Now, looking at the backed-up resources, the ConfigMap will not be backed up.
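The fix essentially needs a dependency walk: given a pod spec, collect the ConfigMaps (and Secrets) it references so they can be added to the backup set. Below is a minimal sketch over a simplified pod model; all type and function names are hypothetical, not the Kahu or Kubernetes API types.

```go
package main

import "fmt"

// EnvVar is a simplified stand-in holding only the reference fields
// relevant here (configMapKeyRef / secretKeyRef sources).
type EnvVar struct {
	ConfigMapName string // set when the value comes from a configMapKeyRef
	SecretName    string // set when the value comes from a secretKeyRef
}

type Container struct {
	Env []EnvVar
}

type PodSpec struct {
	Containers []Container
}

// referencedConfigMaps collects the distinct ConfigMap names a pod spec
// depends on, so a backup controller could add them to the resource set.
func referencedConfigMaps(spec PodSpec) []string {
	seen := map[string]bool{}
	var names []string
	for _, c := range spec.Containers {
		for _, e := range c.Env {
			if e.ConfigMapName != "" && !seen[e.ConfigMapName] {
				seen[e.ConfigMapName] = true
				names = append(names, e.ConfigMapName)
			}
		}
	}
	return names
}

func main() {
	spec := PodSpec{Containers: []Container{{Env: []EnvVar{
		{ConfigMapName: "special-config"},
		{ConfigMapName: "special-config"}, // duplicate reference, counted once
	}}}}
	fmt.Println(referencedConfigMaps(spec)) // [special-config]
}
```

A full implementation would also walk volumes, envFrom sources, and Secret references the same way.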
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
Feature to handle the Kahu build system.
Why this issue to fixed / feature is needed(give scenarios or use cases):
Improve the Kahu build experience.
How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
NA
Issue/Feature Description:
Testcase Precondition:
Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase which is to be automated to make sure a deployment is properly backed up and restored.
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
Nowadays most applications are deployed based on the operator framework or use CRDs in general. So without supporting CRDs & CRs it is difficult to support such kinds of apps.
So Kahu should support CRD and CR backup & restore.
Wait for CustomResourceDefinitions to be ready before restoring CustomResources. Also refresh the resource list from the Kubernetes API server after restoring CRDs in order to properly restore CRs.
Why this issue to fixed / feature is needed(give scenarios or use cases):
This feature is very important to support operator-based apps or apps which use CRDs, etc.
How to reproduce, in case of a bug: NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue) NA
Testcase Precondition:
Testcase steps:
(use kubectl create -f <backup.yaml>)
apiVersion: kahu.io/v1
kind: Backup
metadata:
  name: backup-Kahu-0001
spec:
  includeNamespaces: [test-ns]
  metadataLocation: nfs
  excludeResources:
Expected Result:
Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase which is to be automated to make sure a deployment is properly backed up and restored.
How to reproduce, in case of a bug:
Other Notes / Environment Information: Please give the env information, log link or any useful information for this issue
Issue/Feature Description:
In the current Kahu code, all the CRDs used are at namespace scope.
They should be changed to cluster scope.
Also, the kubebuilder and genclient tags need to be updated in the specification.
Why this issue to fixed / feature is needed(give scenarios or use cases):
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
Currently data protection operations are requested with kubectl.
With the kubectl command, API requests are a little overwhelming.
The issue is opened to discuss Kahu client support.
Why this issue to fixed / feature is needed(give scenarios or use cases):
The issue is a feature request for the community to develop a user-friendly Kahu client.
How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
Currently, Kahu release processes are manual. This leads to
The issue is an initial analysis/request for automating the Kahu release process.
Why this issue to fixed / feature is needed(give scenarios or use cases):
Following are some use cases of release automation:
How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
NA
Issue/Feature Description:
The issue is to track Restore and Backup cleanup in the resource delete scenario.
With the deletion of a Backup/Restore object, the controller should trigger deletion of the respective resources.
The deletion event likely gets missed if the controller is not available to receive it and the resource gets deleted in the background.
Possible solution:
Add finalizers to wait until specific conditions are met before the resources marked for deletion are fully deleted.
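Finalizer handling can be sketched as two helpers: one that ensures the finalizer is present before processing, and one that removes it once cleanup is complete so the API server can finish the pending delete. The finalizer name kahu.io/backup-protection and the helper names below are assumptions for illustration.

```go
package main

import "fmt"

const backupFinalizer = "kahu.io/backup-protection" // hypothetical name

// addFinalizer returns the finalizer list with the Kahu finalizer added
// exactly once; while it is present, deletion only marks the object and
// the controller gets a chance to clean up dependent resources.
func addFinalizer(finalizers []string) []string {
	for _, f := range finalizers {
		if f == backupFinalizer {
			return finalizers
		}
	}
	return append(finalizers, backupFinalizer)
}

// removeFinalizer drops the Kahu finalizer after cleanup is done, which
// lets the API server complete the pending delete of the CRD object.
func removeFinalizer(finalizers []string) []string {
	var out []string
	for _, f := range finalizers {
		if f != backupFinalizer {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	f := addFinalizer(nil)
	fmt.Println(len(f))                  // 1
	fmt.Println(len(removeFinalizer(f))) // 0
}
```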
Why this issue to fixed / feature is needed(give scenarios or use cases):
For graceful cleanup of the Backup and Restore CRDs.
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
Issue/Feature Description:
This issue explains the options for deploying Kahu in a Kubernetes environment.
It explains the Helm way, some of the limitations with Helm for the Kahu scenario, and some workarounds.
It also introduces the operator way for a more effective, scalable deployment.
The community is requested to go through this analysis and provide feedback:
Kahu_deployment_tool_analaysis.pptx
Why this issue to fixed / feature is needed(give scenarios or use cases):
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)