soda-cdm / kahu

Kahu is part of SODA Container Data Management (CDM). Kahu provides seamless backup/restore for Kubernetes resources and data.

License: Apache License 2.0

Makefile 1.00% Go 95.87% Shell 2.91% Dockerfile 0.09% HTML 0.13%

kahu's People

Contributors

amitroushan, joseph-v, pravinranjan10, skdwriting, sodagitops, sri-hari, sushanthakumar, vineela1999, wisererik


kahu's Issues

Controller with business logic which includes Part-1

Issue/Feature Description:

Why this issue to fixed / feature is needed(give scenarios or use cases):
Controller with business logic which includes:

  1. IncludedNamespaces
  2. ExcludedNamespaces
  3. IncludedResources
  4. ExcludedResources
  5. Content Collections:
    • Pod resource collections

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Automate release process

Issue/Feature Description:
Currently, Kahu release processes are manual. This leads to

  • Slow release velocity
  • Releases are error prone
  • Lack of standardisation

This issue is an initial analysis/request for automating the Kahu release process.

  • Integrate goreleaser (Suggestion)
  • Document release process with goreleaser
  • Integrate release process with CI
  • Document CI trigger and release process

Why this issue to fixed / feature is needed(give scenarios or use cases):
Following are some use cases of release automation:

  • Reduction in Errors
  • Increased Release Velocity
  • Higher ROI
  • Standardized Release Strategy
  • Enhanced Data Security
  • Consistency
  • Audits and Reports Provide Useful Insights

How to reproduce, in case of a bug:
NA

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
NA

Create Backup of all pods in multiple namespaces and then restore the backup

Test Case Precondition:

  1. Kahu project installed in given name-space (test-kahu)
  2. All below pods should be up and running:
    a. backup service
    b. Meta service with nfs provider
    c. nfs-server
  3. Metadata location is created already
  4. Namespaces test-ns1 and test-ns2 are created and contain some of the kubernetes resources.

Test Case steps:

  1. Create the below backup CR using kubectl (use kubectl create -f <backup.yaml>)

     apiVersion: kahu.io/v1
     kind: Backup
     metadata:
       name: backup-Kahu-0001
     spec:
       includeNamespaces: [test-ns1,test-ns2]
       metadataLocation: nfs
       includeResources:
         - name:
           kind: Pod
           isRegex: true

  2. Use kubectl describe backup -n test-kahu
  3. Get inside the nfs server pod and check the content inside the given mount path (use kubectl exec -ti <nfs-server-pod> -n test-kahu /bin/sh)
  4. Create the below restore CR on a new namespace (restore-ns)

     apiVersion: kahu.io/v1
     kind: Restore
     metadata:
       name: restore-Kahu-0001
     spec:
       backupName: backup-Kahu-0001
       namespaceMapping:
         test-ns1: restore-ns1
         test-ns2: restore-ns2

Expected Result:

  1. In step 2,
    a) Verify that backup stage is Finished and State is Completed
    b) Verify the resource list in status shows all the required resources
  2. In step 3, verify that
    a) tar file is created with name of backup
    b) After untarring the file, all pods in the namespaces test-ns1 and test-ns2 are backed up.
  3. In step 4, verify that
    a) Backed up pods are up in the new namespaces restore-ns1 and restore-ns2 respectively.

Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase to be automated, to make sure the resources are properly backed up and restored.
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

keys of BackupControllerFlags should be read from BackupLocation

Issue/Feature Description:

  1. Currently these flag values are taken from constants; they should instead be read from BackupLocation (see the sketch below).

     BackupControllerFlags{
         MetaServicePort:    controllers.DefaultMetaServicePort,
         MetaServiceAddress: controllers.DefaultMetaServiceAddress,
     }
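Below is a minimal, hypothetical sketch of how the meta-service endpoint could be resolved from a BackupLocation object instead of compile-time constants. The BackupLocation type and its Spec field names here are assumptions, not the actual kahu API.

// Hypothetical sketch: resolve the meta-service endpoint from a BackupLocation
// object instead of compile-time constants. Type and field names are assumed.
package controllers

import "fmt"

// BackupLocation is a stand-in for the kahu BackupLocation CRD; the Spec
// fields used here are illustrative assumptions.
type BackupLocation struct {
	Spec struct {
		MetaServiceAddress string
		MetaServicePort    int
	}
}

// MetaServiceEndpoint falls back to the existing defaults when the
// BackupLocation does not carry explicit values.
func MetaServiceEndpoint(loc *BackupLocation, defaultAddr string, defaultPort int) string {
	addr, port := defaultAddr, defaultPort
	if loc != nil {
		if loc.Spec.MetaServiceAddress != "" {
			addr = loc.Spec.MetaServiceAddress
		}
		if loc.Spec.MetaServicePort != 0 {
			port = loc.Spec.MetaServicePort
		}
	}
	return fmt.Sprintf("%s:%d", addr, port)
}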

Why this issue to fixed / feature is needed(give scenarios or use cases):

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Schedule based backup as part of the Kahu Data protection Management

Issue/Feature Description:

Currently the Kahu user (application admin) has to create backups manually.
So this issue proposes a feature/requirement, i.e. schedule-based backups, so that the Kahu system schedules backups from a user-provided backup policy or schedule policy.

The main requirements are as follows:

  1. The Kahu system will provide schedule-based backups.

  2. Currently hooks, resource filtering, etc. are tightly coupled with backup & restore, so the plan is to abstract such resources so they can be used across the system. In particular, hooks are related to a specific application or microservice, so separating them from backup makes them easy to enhance and manage.

So the below abstractions will be added to the data protection management:
a) ResourceSet: identifies the group of applications or related applications the user wants to back up

b) Hooks

c) Schedule Policy

  3. The schedule service will take the backups based on the Backup Policy or Schedule Policy provided by the application admin.

  4. Schedule-based data protection management may need Job Management to create the scheduled backups.

  5. The user can use any backup copy to restore.

Please find below the link to the running (WIP) document related to these requirements:

https://docs.google.com/document/d/11wOj1EGTveBMtD5IKipx8EGhc2bn99O3/edit#heading=h.gjdgxs

Why this issue to fixed / feature is needed(give scenarios or use cases):

With schedule-based backups, once the user creates a policy and binds it to the backup, the user no longer needs to worry about backups.

The user can use backup copies to restore based on their requirements.

How to reproduce, in case of a bug: NA

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Adding support for restore api

Issue/Feature Description:

Kahu should support a restore API for the backed-up data.

The API should support the following for resource restore (see the sketch after this list):

  • Namespace filter
  • Resource filter
  • Resource prefix
  • Label selector
  • Status of restore
  • create, delete and get API support
  • CRD definitions
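As a rough illustration only (field names are assumptions, not the actual kahu type definitions), a Restore spec covering the filters listed above could look like this in Go:

// Hypothetical sketch of a Restore spec covering the filters listed above.
// Field names are illustrative, not the actual kahu API.
package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type ResourceSelector struct {
	Name    string `json:"name,omitempty"`    // resource name or prefix
	Kind    string `json:"kind,omitempty"`    // e.g. Pod, Deployment
	IsRegex bool   `json:"isRegex,omitempty"` // treat Name as a regex/prefix match
}

type RestoreSpec struct {
	BackupName        string                `json:"backupName"`
	IncludeNamespaces []string              `json:"includeNamespaces,omitempty"` // namespace filter
	ExcludeNamespaces []string              `json:"excludeNamespaces,omitempty"`
	IncludeResources  []ResourceSelector    `json:"includeResources,omitempty"` // resource filter / prefix
	ExcludeResources  []ResourceSelector    `json:"excludeResources,omitempty"`
	LabelSelector     *metav1.LabelSelector `json:"labelSelector,omitempty"` // label selector
}

type RestoreStatus struct {
	Stage string `json:"stage,omitempty"` // e.g. Initial, Finished
	State string `json:"state,omitempty"` // e.g. Processing, Completed, Failed
}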

Why this issue to fixed / feature is needed(give scenarios or use cases):
The feature is needed to restore backed-up resources.

How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Create Backup of test-ns namespace deployments and then restore the created backup, restoring only names that contain the "kahu" keyword but excluding the "kahu-restore-deployment" deployments.

Issue/Feature Description:
Testcase Precondition:

  1. Kahu project installed in given name-space (test-kahu)
  2. All below pods should be up and running:
    a. backup service
    b. Meta service with nfs provider
    c. nfs-server
  3. Metadata location is created already
  4. Namespace test-ns is created and contains some of the kubernetes resources
  5. Namespace restore-ns is created

Testcase steps:
  1. Create the below backup CR using kubectl (use kubectl create -f <backup.yaml>)

     apiVersion: kahu.io/v1
     kind: Backup
     metadata:
       name: backup-Kahu-0001
     spec:
       includeNamespaces: [test-ns]
       metadataLocation: nfs
       includeResources:
         - name:
           kind: Deployment
           isRegex: true

  2. Use kubectl describe backup -n test-kahu
  3. Get inside the nfs server pod and check the content inside the given mount path (use kubectl exec -ti <nfs-server-pod> -n test-kahu /bin/sh)
  4. Create the below restore CR on a new namespace (restore-ns)

     apiVersion: kahu.io/v1
     kind: Restore
     metadata:
       name: restore-Kahu-0001
     spec:
       backupName: backup-Kahu-0001

Expected Result:
  1. In step 2,
    a) Verify that backup stage is Finished and State is Completed
    b) Verify the resource list in status shows all the required resources
  2. In step 3, verify that
    a) tar file is created with name of backup
    b) After untarring the file, all deployments are backed up
  3. In step 4, verify that
    a) Backed up deployment is up in namespace test-ns

Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase to be automated, to make sure the deployment is properly backed up and restored.

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

@kalaiselvikks76

Support Kahu build system

Issue/Feature Description:
Feature to handle the kahu build system:

  • Basic static code checker (go fmt, go vet, goimports)
  • Build binaries
  • Build images

Why this issue to fixed / feature is needed(give scenarios or use cases):
Improve kahu build experience
How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
NA

Implement API and generate CRDs for backup

Issue/Feature Description:

Why this issue to fixed / feature is needed(give scenarios or use cases):

  1. Implement API spec
  2. Generate CRDs
  3. Generate clientset/informers etc.

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Integrate the Restic to move the data to different providers

Issue/Feature Description:
Integrate restic (for more information see https://restic.net/) into kahu, so that we can move the data from the source/production storage to the target storages/providers.

Why this issue to fixed / feature is needed(give scenarios or use cases):
Currently kahu doesn't support the data movement feature.
So we can integrate https://restic.net/ and move the data from the source storage volumes of a particular application (see the sketch below).
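As a rough illustration only (not the actual kahu integration), a data-mover component could shell out to the restic CLI; the repository URL, password handling and paths below are placeholders:

// Rough illustration of driving restic from Go via os/exec.
// Repository URL, password handling and paths are placeholders.
package main

import (
	"log"
	"os"
	"os/exec"
)

func runRestic(repo, password string, args ...string) error {
	cmd := exec.Command("restic", append([]string{"-r", repo}, args...)...)
	cmd.Env = append(os.Environ(), "RESTIC_PASSWORD="+password)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	repo := "s3:https://s3.example.com/kahu-backups" // placeholder target storage
	password := os.Getenv("RESTIC_PASSWORD")

	// Initialise the repository once, then back up the source volume path.
	if err := runRestic(repo, password, "init"); err != nil {
		log.Printf("init (repository may already exist): %v", err)
	}
	if err := runRestic(repo, password, "backup", "/data"); err != nil {
		log.Fatalf("restic backup failed: %v", err)
	}
}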

How to reproduce, in case of a bug: NA

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

https://restic.net/
https://restic.net/blog/2022-08-25/restic-0.14.0-released/

Add identity service for providers

Issue/Feature Description:
This issue describes the addition of an identity service for providers.
The identity service will be similar to the identity service provided by CSI for csi drivers.

Below are the identity service interfaces
service Identity {
  rpc GetProviderInfo(GetProviderInfoRequest)
      returns (GetProviderInfoResponse) {}

  rpc GetProviderCapabilities(GetProviderCapabilitiesRequest)
      returns (GetProviderCapabilitiesResponse) {}

  rpc Probe(ProbeRequest)
      returns (ProbeResponse) {}
}

These interfaces will be called by either the metadata service or the volume service, and the respective providers will be registered.
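A minimal provider-side sketch follows, assuming Go bindings generated from the proto above; the providerservice import path and the response message fields (Provider, Version, Ready) are assumptions, not the actual generated API.

// Minimal sketch of a provider implementing the Identity service.
// The pb package is assumed to be generated from the proto above;
// message field names are illustrative assumptions.
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	pb "example.com/kahu/providerservice" // hypothetical generated package
)

type identityServer struct {
	pb.UnimplementedIdentityServer
}

func (s *identityServer) GetProviderInfo(ctx context.Context, req *pb.GetProviderInfoRequest) (*pb.GetProviderInfoResponse, error) {
	return &pb.GetProviderInfoResponse{Provider: "nfs-provider", Version: "v0.1.0"}, nil
}

func (s *identityServer) Probe(ctx context.Context, req *pb.ProbeRequest) (*pb.ProbeResponse, error) {
	return &pb.ProbeResponse{Ready: true}, nil
}

func main() {
	// Providers typically listen on a local socket the services can dial.
	lis, err := net.Listen("unix", "/tmp/provider.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	pb.RegisterIdentityServer(srv, &identityServer{})
	log.Fatal(srv.Serve(lis))
}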

Why this issue to fixed / feature is needed(give scenarios or use cases):

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Adding missing autogenerated code for restore list

Issue/Feature Description:
Add missing deepcopy autogenerated code for restore list CRD

Why this issue to fixed / feature is needed(give scenarios or use cases):

Kubebuilder tags were used instead of deepcopy tags.

How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
NA

Integrate LINSTOR csi driver with Kahu

Feature Description:
LINSTOR is open source software designed to manage block storage devices for large Linux server clusters.
LINSTOR has a CSI plugin which bridges between Kubernetes and LINSTOR's replicated block storage.
Reference link: https://linbit.com/blog/linstor-csi-plugin-for-kubernetes/

This issue is opened to discuss/handle the integration of the LINSTOR CSI driver with Kahu so that volumes provisioned through the LINSTOR backend can be backed up and restored as needed.

This can be achieved through below stages

  1. Backup the volumes at the LINSTOR backend through csi snapshots and restore using snapshots
  2. Csi snapshots can be moved to a specified location such as S3 using Restic and can be restored from that location

Why this issue to fixed / feature is needed(give scenarios or use cases):
This integration will help to provide backup/restore for LINSTOR provisioned volumes

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

CSI snapshot CRD installation to be handled during kahu deployment

Issue/Feature Description:
With the addition of kahu csi snapshot support, the kahu service controller will watch csi snapshot CRD objects to identify volume backup completion.
In the current kahu deployment, csi snapshot CRDs are not installed by default.
The Kahu service will then fail to watch the necessary CRD objects and act upon them, which will impact backup/restore functionality.

Why this issue to fixed / feature is needed(give scenarios or use cases):
So snapshot CRDs should be deployed as a prerequisite. This will need a deploy guide update as well.

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Proto file changes for identity and metadata provider interfaces

Issue/Feature Description:
This issue contains the Proto file changes for identity and metadata provider interfaces

Two services are considered here:

  1. Identity service : This contains the interfaces to deal with identity information of the provider being registered
  2. Meta Backup service: This contains the interfaces to support metadata backup/restore/delete operation

Pending interface:
Volume interfaces : Interfaces to handle backup/restore functionality for persistent volume

Attachment:
providerservice.proto.txt

Why this issue to fixed / feature is needed(give scenarios or use cases):

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Adding support for generic controller

Issue/Feature Description:
Adding support for generic controller
Why this issue to fixed / feature is needed(give scenarios or use cases):
Support generic controller
How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
NA

Kahu deployment tool analysis

Issue/Feature Description:
This issue explains the options for deploying kahu in a kubernetes environment.
It explains the helm way, some of the limitations with helm for the kahu scenario, and some workarounds.
It also introduces the operator way for a more effective, scalable deployment.

The community is requested to go through this analysis and provide feedback.

Kahu_deployment_tool_analaysis.pptx

Why this issue to fixed / feature is needed(give scenarios or use cases):

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Create Backup of pods and then restore the backup with all Pod, which has labels, app = nginx

Test Case Precondition:

  1. Kahu project installed in given name-space (test-kahu)
  2. All below pods should be up and running:
    a. backup service
    b. Meta service with nfs provider
    c. nfs-server
  3. Metadata location is created already
  4. Namespace test-ns is created and contains some of the kubernetes resources
  5. Namespace restore-ns is created

Test Case steps:

  1. Create the below backup CR using kubectl (use kubectl create -f <backup.yaml>)

     apiVersion: kahu.io/v1
     kind: Backup
     metadata:
       name: backup-Kahu-0001
     spec:
       includeNamespaces: [test-ns]
       metadataLocation: nfs
       includeResources:
         - name:
           kind: Pod
           isRegex: true

  2. Use kubectl describe backup -n test-kahu
  3. Get inside the nfs server pod and check the content inside the given mount path (use kubectl exec -ti <nfs-server-pod> -n test-kahu /bin/sh)
  4. Create the below restore CR on a new namespace (restore-ns)

     apiVersion: kahu.io/v1
     kind: Restore
     metadata:
       name: restore-Kahu-0001
     spec:
       backupName: backup-Kahu-0001
       namespaceMapping:
         test-ns: restore-ns
       label:
         matchLabels:
           app: nginx
       includeResources:
         - name:
           kind: Pod
           isRegex: true

Expected Result:

  1. In step 2,
    a) Verify that backup stage is Finished and State is Completed
    b) Verify the resource list in status shows all the required resources
  2. In step 3, verify that
    a) tar file is created with name of backup
    b) After untarring the file, all pods are backed up.
  3. In step 4, verify that
    a) Only the backed up pods with the label app = nginx are up in the new namespace restore-ns

Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase to be automated, to make sure the pods are properly backed up and restored.

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

    @kalaiselvikks76

Develop provider to integrate kahu with S3 based storage backend

Issue/Feature Description:
Kahu has a storage provider framework to dynamically plug in any provider to perform either metadata or volume backup.
By default, Kahu supports an NFS provider for metadata backup.

This issue is opened to discuss on integrating S3 based storage backend with kahu

This involves:

  • Analysis of all the interfaces provided by the metadata service
  • Development of a provider realising all the above interfaces to interact with S3

The NFS provider can be used as a reference for this development.
MinIO can be used as one option to verify this integration (see the sketch below).
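For illustration, a hedged sketch of pushing a metadata backup archive to an S3-compatible backend with the MinIO Go SDK; the endpoint, bucket, credentials and file path are placeholders, and this is not the actual kahu provider interface realisation:

// Illustrative sketch: upload a metadata backup archive to an S3-compatible
// backend using the MinIO Go SDK. Endpoint, bucket and credentials are placeholders.
package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	client, err := minio.New("s3.example.com", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	bucket := "kahu-metadata"
	if ok, _ := client.BucketExists(ctx, bucket); !ok {
		if err := client.MakeBucket(ctx, bucket, minio.MakeBucketOptions{}); err != nil {
			log.Fatal(err)
		}
	}

	// Upload the backup archive produced by the metadata service.
	if _, err := client.FPutObject(ctx, bucket, "backup-demo.tar", "/tmp/backup-demo.tar",
		minio.PutObjectOptions{ContentType: "application/x-tar"}); err != nil {
		log.Fatal(err)
	}
	log.Println("metadata archive uploaded")
}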

Why this issue to fixed / feature is needed(give scenarios or use cases):
This integration will help to seamlessly use any s3 based backend with kahu

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Optimize by moving controller's common code like informers, lister to common.go

Issue/Feature Description:

  1. Currently the Run() method, along with lister- and informer-related code, is present in the backup controller. For optimization we can move this code to common.go so that other controllers can use it.

Why this issue to fixed / feature is needed(give scenarios or use cases):

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Adding support for client, lister, informer, deepcopy generation

Issue/Feature Description:

Adding support for client, lister, informer, deepcopy generation for supported CRDs

Why this issue to fixed / feature is needed(give scenarios or use cases):

The issue can handle autogeneration of client code

How to reproduce, in case of a bug:
NA

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Create Backup of deployments whose names contain the "kahu" keyword but exclude the name "kahu-restore-deployment", and then restore the created backup

Issue/Feature Description:
Testcase Precondition:

  1. Kahu project installed in given name-space (test-kahu)
  2. All below pods should be up and running:
    a. backup service
    b. Meta service with nfs provider
    c. nfs-server
  3. Metadata location is created already
  4. Namespaces test-ns1 and test-ns2 are created and contain some of the kubernetes resources
  5. Namespace restore-ns is created

Testcase steps:
  1. Create the below backup CR using kubectl (use kubectl create -f <backup.yaml>)

     apiVersion: kahu.io/v1
     kind: Backup
     metadata:
       name: backup-Kahu-0001
     spec:
       includeNamespaces: [test-ns]
       metadataLocation: nfs
       includeResources:
         - name: kahu
           kind: Deployment
           isRegex: true
       excludeResources:
         - name: kahu-restore-deployment
           kind: Deployment
           isRegex: false

  2. Use kubectl describe backup -n test-kahu
  3. Get inside the nfs server pod and check the content inside the given mount path (use kubectl exec -ti <nfs-server-pod> -n test-kahu /bin/sh)
  4. Create the below restore CR on a new namespace (restore-ns)

     apiVersion: kahu.io/v1
     kind: Restore
     metadata:
       name: restore-Kahu-0001
     spec:
       backupName: backup-Kahu-0001
       namespaceMapping:
         test-ns: restore-ns

Expected Result:
  1. In step 2,
    a) Verify that backup stage is Finished and State is Completed
    b) Verify the resource list in status shows all the required resources
  2. In step 3, verify that
    a) tar file is created with name of backup
    b) After untarring the file, all deployments that have the "kahu" keyword are backed up, for example:
       • kahu-123
       • 123-kahu
       • 123-kahu-456
       but the backup should not contain the "kahu-restore-deployment" deployment
  3. In step 4, verify that the deployments that were backed up are up in the restore-ns namespace

Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase to be automated, to make sure the deployments are properly backed up and restored.

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

    @kalaiselvikks76

Snapshot objects are not properly cleaned up during backup delete

Issue/Feature Description:
When backup is performed in kahu for csi volumes, csi volume snapshots are created through csi drivers.
Kahu also creates its own volume snapshot crds to keep track of the backup.

When a backup is deleted, these snapshot objects are expected to be deleted as well. But in the current flow, these objects still exist after the backup delete.

Expectation: these snapshot objects should be deleted during backup delete (a hedged cleanup sketch follows below).
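The following is a minimal sketch only, assuming snapshots are labelled with the owning backup name at creation time (the kahu.io/backup-name label key is an assumption, not the actual kahu convention):

// Hedged sketch: delete the VolumeSnapshot objects created for a backup when
// the backup itself is deleted. The backup-name label key is an assumption.
package cleanup

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

var volumeSnapshotGVR = schema.GroupVersionResource{
	Group:    "snapshot.storage.k8s.io",
	Version:  "v1",
	Resource: "volumesnapshots",
}

// CleanupSnapshots removes all VolumeSnapshot objects in the given namespace
// that carry the owning backup's name as a label.
func CleanupSnapshots(ctx context.Context, client dynamic.Interface, namespace, backupName string) error {
	selector := "kahu.io/backup-name=" + backupName // assumed label convention
	return client.Resource(volumeSnapshotGVR).Namespace(namespace).DeleteCollection(ctx,
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: selector})
}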

Why this issue to fixed / feature is needed(give scenarios or use cases):
This issue results in a lot of stale objects being present in the system, which consume system resources unnecessarily.

How to reproduce, in case of a bug:
  1. Kahu is deployed with the nfs provider and the openebs zfs provider (screenshot attached)
  2. Backup is created for an application having a csi volume provisioned from openebs zfs (screenshot attached)
  3. Backup is deleted but the snapshot objects are still not deleted (screenshot attached)

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Integrate goimports installation into the make file

Issue/Feature Description:
The current make target in the kahu build runs goimports but does not check the installation of goimports.
The Go tools version can be maintained in the go.mod file, through which a specific version of goimports can be used.

The goimports version update can be done on a need basis and after analysis.

Why this issue to fixed / feature is needed(give scenarios or use cases):

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Job based data protection framework

Issue/Feature Description:
The KAHU project under the soda-cdm organisation deals with data protection aspects of container data management on Kubernetes.
Currently Kahu supports different volume backups and backup locations with the Provider framework. The framework provides a plugin approach to integrate Volume/BackupLocation drivers at runtime.

Currently Volume/BackupLocation drivers are deployed either as a "Deployment" or a "Statefulset". This solution works perfectly if the driver uses APIs to connect with the respective servers. But it would not work:

  • If backup/restore drivers are data movers (like Restic).
  • If volumes are topologically distributed
  • Currently Kubernetes does not support hot-mount functionality, so if any volume backup driver needs a "Volume" mount for backup/restore, the current approach will fail

Why this issue to fixed / feature is needed(give scenarios or use cases):
Data protection is a scheduled operation. So running a driver process and waiting for a backup/restore request to process is a waste of memory and CPU cycles.

An alternative approach is to accept the volume backup driver and backup location driver as a Pod template and schedule a Kubernetes Job to process each backup/restore request (see the sketch below).

The issue is submitted to discuss an alternate approach (Job based data protection) to handle data protection.
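A minimal sketch of the job-based idea, assuming a driver image and a backup argument convention that are placeholders rather than the actual kahu design:

// Hedged sketch: schedule a backup/restore request as a Kubernetes Job built
// from a driver pod template. Image, names and namespace are placeholders.
package jobs

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ScheduleBackupJob(ctx context.Context, client kubernetes.Interface, backupName string) error {
	backoff := int32(2)
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "kahu-backup-" + backupName,
			Namespace: "kahu-system", // placeholder namespace
		},
		Spec: batchv1.JobSpec{
			BackoffLimit: &backoff,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "volume-backup-driver",
						Image: "example.com/kahu/volume-backup-driver:latest", // placeholder image
						Args:  []string{"--backup", backupName},
						// Volume mounts for the source PVC would be added here,
						// which is what the Deployment/Statefulset model cannot do on demand.
					}},
				},
			},
		},
	}
	_, err := client.BatchV1().Jobs(job.Namespace).Create(ctx, job, metav1.CreateOptions{})
	return err
}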

How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Initial draft is compiled here

E2E testing framework for Kahu

The issue captures high-level ideas for a Go framework for testing Kahu APIs. The framework, referred to as e2e-framework, provides ways that make it easy to define test functions that can programmatically test Kahu APIs. The two overall goals of the new framework are to allow developers to quickly and easily assemble end-to-end tests and to provide a collection of support packages to help with interacting with the K8s API server.
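A hedged sketch of what a Kahu e2e test could look like, assuming the upstream sigs.k8s.io/e2e-framework packages; the actual Backup creation and assertions are left as placeholders:

// Hedged sketch of a Kahu end-to-end test using sigs.k8s.io/e2e-framework.
// Environment setup (kubeconfig resolution etc.) and the Backup assertions
// are elided or left as placeholders.
package e2e

import (
	"context"
	"os"
	"testing"

	"sigs.k8s.io/e2e-framework/pkg/env"
	"sigs.k8s.io/e2e-framework/pkg/envconf"
	"sigs.k8s.io/e2e-framework/pkg/features"
)

var testenv env.Environment

func TestMain(m *testing.M) {
	testenv = env.New()
	os.Exit(testenv.Run(m))
}

func TestBackupRestore(t *testing.T) {
	backup := features.New("backup of a namespace").
		Assess("backup reaches Completed state", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
			// Placeholder: create a Backup CR via the config's client and poll
			// its status until Stage=Finished and State=Completed.
			return ctx
		}).Feature()

	testenv.Test(t, backup)
}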

Issue/Feature Description:

Why this issue to fixed / feature is needed(give scenarios or use cases):
There should be a proper test framework which makes it easy for developers to write testcases.

How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
NA

Create Backup of a deployment and then restore the backup to the same namespace

Testcase Precondition:

  1. Kahu project installed in given name-space (test-kahu)
  2. All below pods should be up and running:
    a. backup service
    b. Meta service with nfs provider
    c. nfs-server
  3. Metadata location is created already
  4. Namespace test-ns is created and contains some of the kubernetes resources
  5. Namespace restore-ns is created

Testcase steps:

  1. Create the below backup CR using kubectl (use kubectl create -f <backup.yaml>)

     apiVersion: kahu.io/v1
     kind: Backup
     metadata:
       name: backup-Kahu-0001
     spec:
       includeNamespaces: [test-ns]
       metadataLocation: nfs
       includeResources:
         - name:
           kind: Deployment
           isRegex: true

  2. Use kubectl describe backup -n test-kahu
  3. Get inside the nfs server pod and check the content inside the given mount path (use kubectl exec -ti <nfs-server-pod> -n test-kahu /bin/sh)
  4. Create the below restore CR on a new namespace (restore-ns)

     apiVersion: kahu.io/v1
     kind: Restore
     metadata:
       name: restore-Kahu-0001
     spec:
       backupName: backup-Kahu-0001

Expected Result:

  1. In step 2,
    a) Verify that backup stage is Finished and State is Completed
    b) Verify the resource list in status shows all the required resources

  2. In step 3, verify that
    a) tar file is created with name of backup
    b) After untarring the file, all deployments are backed up

  3. In step 4, verify that
    a) Backed up deployment is up in namespace test-ns

Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase to be automated, to make sure the deployment is properly backed up and restored.

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any
useful information for this issue)

@kalaiselvikks76

Kahu should support Custom Resource Definitions and CRs Backup & Restore

Issue/Feature Description:
Nowadays most applications are deployed based on an operator framework or use CRDs in general, so without supporting CRDs & CRs it is difficult to support such apps.
So Kahu should support CRD and CR backup & restore.

Wait for CustomResourceDefinitions to be ready before restoring CustomResources. Also refresh the resource list from the Kubernetes API server after restoring CRDs in order to properly restore CRs (see the sketch below).
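A minimal sketch of the "wait for the CRD to be ready" step, using the apiextensions clientset and polling for the Established condition; the timeout values are arbitrary assumptions:

// Hedged sketch: wait for a restored CRD to reach the Established condition
// before restoring its CustomResources.
package restore

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

func waitForCRDEstablished(ctx context.Context, client apiextclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not created yet; keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}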

Why this issue to fixed / feature is needed(give scenarios or use cases):
This feature is very important to support operator-based apps or apps which use CRDs, etc.

How to reproduce, in case of a bug: NA

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue) NA

Create Backup of a pod in namespace test-ns1,test-ns2 and then restore the backup to the namespace restore-ns from only test-ns1

Issue/Feature Description:
Testcase Precondition:

  1. Kahu project installed in given name-space (test-kahu)
  2. All below pods should be up and running:
    a. backup service
    b. Meta service with nfs provider
    c. nfs-server
  3. Metadata location is created already
  4. Namespaces test-ns1 and test-ns2 are created and contain some of the kubernetes resources
  5. Namespace restore-ns is created

Testcase steps:
  1. Create the below backup CR using kubectl (use kubectl create -f <backup.yaml>)

     apiVersion: kahu.io/v1
     kind: Backup
     metadata:
       name: backup-Kahu-0001
     spec:
       includeNamespaces: [test-ns1,test-ns2]
       metadataLocation: nfs
       includeResources:
         - name:
           kind: Pod
           isRegex: true

  2. Use kubectl describe backup -n test-kahu
  3. Get inside the nfs server pod and check the content inside the given mount path (use kubectl exec -ti <nfs-server-pod> -n test-kahu /bin/sh)
  4. Create the below restore CR on a new namespace (restore-ns)

     apiVersion: kahu.io/v1
     kind: Restore
     metadata:
       name: restore-Kahu-0001
     spec:
       backupName: backup-Kahu-0001
       includeNamespaces: [test-ns1]
       namespaceMapping:
         test-ns1: restore-ns

Expected Result:
  1. In step 2,
    a) Verify that backup stage is Finished and State is Completed
    b) Verify the resource list in status shows all the required resources
  2. In step 3, verify that
    a) tar file is created with name of backup
    b) After untarring the file, all deployments and pods are backed up; but note that a pod which has an owner reference will not be backed up
  3. In step 4, verify that
    a) The backed up pod is up in namespace test-ns, i.e. no changes to be noticed in the pods

Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase to be automated, to make sure the pod is properly backed up and restored.

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

    @kalaiselvikks76

Create Backup of a pod in namespace test-ns1,test-ns2,test-ns3 and then restore with excluding multiple namespaces test-ns1 and test-ns2

Issue/Feature Description:
Testcase Precondition:

  1. Kahu project installed in given name-space (test-kahu)
  2. All below pods should be up and running:
    a. backup service
    b. Meta service with nfs provider
    c. nfs-server
  3. Metadata location is created already
  4. Namespaces test-ns1 and test-ns2 are created and contain some of the kubernetes resources
  5. Namespace restore-ns is created

Testcase steps:
  1. Create the below backup CR using kubectl (use kubectl create -f <backup.yaml>)

     apiVersion: kahu.io/v1
     kind: Backup
     metadata:
       name: backup-Kahu-0001
     spec:
       includeNamespaces: [test-ns1,test-ns2,test-ns3]
       metadataLocation: nfs
       includeResources:
         - name:
           kind: Pod
           isRegex: true

  2. Use kubectl describe backup -n test-kahu
  3. Get inside the nfs server pod and check the content inside the given mount path (use kubectl exec -ti <nfs-server-pod> -n test-kahu /bin/sh)
  4. Create the below restore CR on a new namespace (restore-ns)

     apiVersion: kahu.io/v1
     kind: Restore
     metadata:
       name: restore-Kahu-0001
     spec:
       backupName: backup-Kahu-0001
       excludeNamespaces: [test-ns1,test-ns2]
       namespaceMapping:
         test-ns3: restore-ns

Expected Result:
  1. In step 2,
    a) Verify that backup stage is Finished and State is Completed
    b) Verify the resource list in status shows all the required resources
  2. In step 3, verify that
    a) tar file is created with name of backup
    b) After untarring the file, all deployments and pods are backed up; but note that a pod which has an owner reference will not be backed up
  3. In step 4, verify that
    a) No pods of test-ns1 and test-ns2 are restored

Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase to be automated, to make sure the pod is properly backed up and restored.

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

@kalaiselvikks76

Update CRD scope in api definitions

Issue/Feature Description:
In the current kahu code, all the CRDs used are at namespace scope.
They should be changed to Cluster scope.

Also kubebuilder and genclient tags need to be updated in the specification.

Why this issue to fixed / feature is needed(give scenarios or use cases):

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Pod restore with pvc/pv. It always shows processing state

Issue/Feature Description:

Why this issue to fixed / feature is needed(give scenarios or use cases):

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Backup handling for all the scenarios of configmap and secret usage

Issue/Feature Description:
When a pod, deployment, daemonset or statefulset that is associated with a configmap or a secret is backed up, the configmaps and secrets are not getting backed up along with the backed-up resource.
Why this issue to fixed / feature is needed(give scenarios or use cases):
Configmaps and secrets should be backed up along with the pod so that, when restored, the pods come up.
How to reproduce, in case of a bug:

apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-val-config
spec:
  containers:
    - name: test-container
      image: registry.k8s.io/busybox
      command: ['sh', '-c', 'echo The app is running! && sleep 3600']
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: SPECIAL_LEVEL
        - name: SPECIAL_TYPE_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: SPECIAL_TYPE
  restartPolicy: Never
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm

Bring up these YAMLs.
Then try to back up the pod above using the below YAML:

apiVersion: kahu.io/v1beta1
kind: Backup
metadata:
  name: backup-demo-resource-pod-with-config
spec:
  includeNamespaces: [default]
  metadataLocation: nfs
  includeResources:
    - name: dapi-test-pod-val-config
      kind: Pod
      isRegex: false

Now check the backed-up resources; the configmap will not be backed up.

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Expose meta service gRPC interfaces

Issue/Feature Description:
Expose gRPC interfaces for the meta service.

Why this issue to fixed / feature is needed(give scenarios or use cases):
The meta service manages backup and restore of the metadata of the resources getting backed up. The exposed interface is used by the backup and restore controllers for the backup and restore scenarios of resource metadata.

How to reproduce, in case of a bug:
NA

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
NA

Not able to restore service as resource

Issue/Feature Description:

Why this issue to fixed / feature is needed(give scenarios or use cases):

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Controller with business logic which includes Part-2

Issue/Feature Description:

Why this issue to fixed / feature is needed(give scenarios or use cases):
Controller with business logic which includes:

  1. Content Collections:
  • Configs (sc, config, secret etc.)

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Create Backup with all Pod, which has labels, app = nginx and then restore the backup

Issue/Feature Description:

Test Case Precondition:

  1. Kahu project installed in given name-space (test-kahu)
  2. All below pods should be up and running:
    a. backup service
    b. Meta service with nfs provider
    c. nfs-server
  3. Metadata location is created already
  4. Namespace test-ns is created and contains some of the kubernetes resources
  5. Namespace restore-ns is created

Test Case steps:

  1. Create the below backup CR using kubectl (use kubectl create -f <backup.yaml>)

     apiVersion: kahu.io/v1
     kind: Backup
     metadata:
       name: backup-Kahu-0001
     spec:
       includeNamespaces: [test-ns]
       metadataLocation: nfs
       label:
         matchLabels:
           app: nginx
       includeResources:
         - name:
           kind: Pod
           isRegex: true

  2. Use kubectl describe backup -n test-kahu
  3. Get inside the nfs server pod and check the content inside the given mount path (use kubectl exec -ti <nfs-server-pod> -n test-kahu /bin/sh)
  4. Create the below restore CR on a new namespace (restore-ns)

     apiVersion: kahu.io/v1
     kind: Restore
     metadata:
       name: restore-Kahu-0001
     spec:
       backupName: backup-Kahu-0001
       namespaceMapping:
         test-ns: restore-ns

Expected Result:

  1. In step 2,
    a) Verify that backup stage is Finished and State is Completed
    b) Verify the resource list in status shows all the required resources
  2. In step 3, verify that
    a) tar file is created with name of backup
    b) After untarring the file, only the pods with the label app = nginx are backed up.
  3. In step 4, verify that
    a) Backed up pods are up in the new namespace restore-ns

Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase to be automated, to make sure the pods are properly backed up and restored.

How to reproduce, in case of a bug:

Other Notes / Environment Information: Please give the env information, log link or any
useful information for this issue

Create Backup of deployments whose names do not contain the kahu keyword, and then restore the backup

Testcase Precondition:

  1. Kahu project installed in given name-space (test-kahu)
  2. All below pods should be up and running:
    a. backup service
    b. Meta service with nfs provider
    c. nfs-server
  3. Metadata location is created already
  4. Namespace test-ns is created and contains some of the kubernetes resources
  5. Namespace restore-ns is created

Testcase steps:

  1. Create the below backup CR using kubectl (use kubectl create -f <backup.yaml>)

     apiVersion: kahu.io/v1
     kind: Backup
     metadata:
       name: backup-Kahu-0001
     spec:
       includeNamespaces: [test-ns]
       metadataLocation: nfs
       excludeResources:
         - name: kahu
           kind: Deployment
           isRegex: true

  2. Use kubectl describe backup -n test-kahu
  3. Get inside the nfs server pod and check the content inside the given mount path (use kubectl exec -ti <nfs-server-pod> -n test-kahu /bin/sh)
  4. Create the below restore CR on a new namespace (restore-ns)

     apiVersion: kahu.io/v1
     kind: Restore
     metadata:
       name: restore-Kahu-0001
     spec:
       backupName: backup-Kahu-0001
       namespaceMapping:
         test-ns: restore-ns

Expected Result:

  1. In step 2,
    a) Verify that backup stage is Finished and State is Completed
    b) Verify the resource list in status shows all the required resources
  2. In step 3, verify that
    a) tar file is created with name of backup
    b) After untarring the file, only the deployments without kahu in their names are backed up.
    For example:
       • kahu-123
       • 123-kahu
       • 123-kahu-456
       etc. are not backed up
  3. In step 4, verify that
    a) Backed up deployments are up in the new namespace restore-ns

Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase to be automated, to make sure the deployments are properly backed up and restored.

How to reproduce, in case of a bug:

Other Notes / Environment Information: Please give the env information, log link or any
useful information for this issue

Design and Implement Controller framework with watch and List

Issue/Feature Description:

Why this issue to fixed / feature is needed(give scenarios or use cases):

  1. Implement the controller framework
  2. Watch and List support (see the sketch below)
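A minimal sketch of a list/watch-driven controller loop using client-go shared informers and a workqueue; the watched resource type and the reconcile body are placeholders, not the actual kahu controller framework:

// Minimal sketch of a list/watch-driven controller loop using client-go
// shared informers and a workqueue; the reconcile body is a placeholder.
package controller

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
)

func Run(client kubernetes.Interface, stopCh <-chan struct{}) error {
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	informer := factory.Core().V1().Pods().Informer() // watched type is illustrative

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
				queue.Add(key)
			}
		},
		UpdateFunc: func(_, newObj interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(newObj); err == nil {
				queue.Add(key)
			}
		},
	})

	factory.Start(stopCh)
	if !cache.WaitForCacheSync(stopCh, informer.HasSynced) {
		return fmt.Errorf("cache failed to sync")
	}

	for {
		key, shutdown := queue.Get()
		if shutdown {
			return nil
		}
		// Placeholder reconcile: fetch the object from the lister and act on it.
		fmt.Println("reconciling", key)
		queue.Done(key)
	}
}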

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

To backup resources of specific Kind in a specific order

Issue/Feature Description:

To back up resources of a specific Kind in a specific order, it is better to provide an option --ordered-resources to specify a mapping of Kinds to an ordered list of specific resources of that Kind.

Resource names are separated by commas and their names are in format ‘namespace/resourcename’.

For cluster scope resource, simply use resource name. Key-value pairs in the mapping are separated by semi-colon.

Kind name is in plural form.

Why this issue to fixed / feature is needed(give scenarios or use cases):
This is needed if the user wants to back up pod-1 before pod-2, or to respect an order based on the user's application dependencies.

How to reproduce, in case of a bug: NA

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

We can add the below parameter to the type BackupSpec struct { ... } and handle it accordingly (a hedged parsing sketch follows below):

// OrderedResources specifies the backup order of resources of specific Kind.
// The map key is the resource name and value is a list of object names separated by commas.
// Each resource name has format "namespace/objectname". For cluster resources, simply use "objectname".
// +optional
// +nullable
OrderedResources map[string]string `json:"orderedResources,omitempty"`
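As an illustration of how the proposed field could be consumed (not the actual kahu implementation), the mapping can be parsed into an ordered list of namespace/name pairs per Kind:

// Illustrative sketch: parse the proposed OrderedResources mapping into an
// ordered list of namespace/name pairs per Kind.
package backup

import "strings"

type ObjectRef struct {
	Namespace string // empty for cluster-scoped resources
	Name      string
}

// ParseOrderedResources takes e.g. {"pods": "ns1/pod-1,ns1/pod-2"} and returns
// the per-Kind order in which those objects should be backed up.
func ParseOrderedResources(ordered map[string]string) map[string][]ObjectRef {
	out := make(map[string][]ObjectRef, len(ordered))
	for kind, list := range ordered {
		for _, item := range strings.Split(list, ",") {
			item = strings.TrimSpace(item)
			if item == "" {
				continue
			}
			ref := ObjectRef{Name: item}
			if idx := strings.Index(item, "/"); idx >= 0 {
				ref = ObjectRef{Namespace: item[:idx], Name: item[idx+1:]}
			}
			out[kind] = append(out[kind], ref)
		}
	}
	return out
}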

Create Backup and then restore only the deployments whose names contain the kahu keyword

Issue/Feature Description:

Testcase Precondition:

  1. Kahu project installed in given name-space (test-kahu)
  2. All below pods should be up and running:
    a. backup service
    b. Meta service with nfs provider
    c. nfs-server
  3. Metadata location is created already
  4. Namespace test-ns is created and contains some of the kubernetes resources
  5. Namespace restore-ns is created

Testcase steps:

  1. Create the below backup CR using kubectl (use kubectl create -f <backup.yaml>)

     apiVersion: kahu.io/v1
     kind: Backup
     metadata:
       name: backup-Kahu-0001
     spec:
       includeNamespaces: [test-ns]
       metadataLocation: nfs

  2. Use kubectl describe backup -n test-kahu
  3. Get inside the nfs server pod and check the content inside the given mount path (use kubectl exec -ti <nfs-server-pod> -n test-kahu /bin/sh)
  4. Create the below restore CR on a new namespace (restore-ns)

     apiVersion: kahu.io/v1
     kind: Restore
     metadata:
       name: restore-Kahu-0001
     spec:
       backupName: backup-Kahu-0001
       namespaceMapping:
         test-ns: restore-ns
       includeResources:
         - name: kahu
           kind: Deployment
           isRegex: true

Expected Result:

  1. In step 2,
    a) Verify that backup stage is Finished and State is Completed
    b) Verify the resource list in status shows all the required resources
  2. In step 3, verify that
    a) tar file is created with name of backup
    b) After untarring the file, all resources are backed up
  3. In step 4, verify that
    a) Only the deployments with kahu in their names are restored, for example:
       • kahu-123
       • 123-kahu
       • 123-kahu-456

Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase to be automated, to make sure the deployments are properly backed up and restored.

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

@kalaiselvikks76

Display of namespace mapping in restore object

Issue/Feature Description:

Why this issue to fixed / feature is needed(give scenarios or use cases):

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Support Kahu client

Issue/Feature Description:
Currently data protection operations are requested with kubectl.
With the kubectl command, the API requests are a little overwhelming.

The issue is opened to discuss kahu client support:

  • Operations support for kahu client
  • Different sub operations support
  • Operations options support

Why this issue to fixed / feature is needed(give scenarios or use cases):
The issue is a feature request for the community to develop a user friendly kahu client.

How to reproduce, in case of a bug:
NA
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Create Backup of all resources and then restore the backup of only pods(include single resource)

Issue/Feature Description:
Test Case Precondition:

  1. Kahu project installed in given name-space (test-kahu)
  2. All below pods should be up and running:
    a. backup service
    b. Meta service with nfs provider
    c. nfs-server
  3. Metadata location is created already
  4. Namespace test-ns is created and contains some of the kubernetes resources

Test Case steps:
  1. Create the below backup CR using kubectl (use kubectl create -f <backup.yaml>)

     apiVersion: kahu.io/v1
     kind: Backup
     metadata:
       name: backup-Kahu-0001
     spec:
       includeNamespaces: [test-ns]
       metadataLocation: nfs

  2. Use kubectl describe backup -n test-kahu
  3. Get inside the nfs server pod and check the content inside the given mount path (use kubectl exec -ti <nfs-server-pod> -n test-kahu /bin/sh)
  4. Create the below restore CR on a new namespace (restore-ns)

     apiVersion: kahu.io/v1
     kind: Restore
     metadata:
       name: restore-Kahu-0001
     spec:
       backupName: backup-Kahu-0001
       includeResources:
         - name:
           kind: Pod
           isRegex: true
       namespaceMapping:
         test-ns: restore-ns

Expected Result:
  1. In step 2,
    a) Verify that backup stage is Finished and State is Completed
    b) Verify the resource list in status shows all the required resources
  2. In step 3, verify that
    a) tar file is created with name of backup
    b) After untarring the file, all resources in the namespace test-ns are backed up.
  3. In step 4, verify that
    a) Only pods from the backed up resources are up in the new namespace restore-ns

Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase to be automated, to make sure the resources are properly backed up and restored.

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Create Backup of all resources and then restore the backup of only pods and deployments (include multiple resources)

Issue/Feature Description:
Test Case Precondition:

  1. Kahu project installed in given name-space (test-kahu)
  2. All below pods should be up and running:
    a. backup service
    b. Meta service with nfs provider
    c. nfs-server
  3. Metadata location is created already
  4. Namespace test-ns is created and contains some of the kubernetes resources

Test Case steps:
  1. Create the below backup CR using kubectl (use kubectl create -f <backup.yaml>)

     apiVersion: kahu.io/v1
     kind: Backup
     metadata:
       name: backup-Kahu-0001
     spec:
       includeNamespaces: [test-ns]
       metadataLocation: nfs

  2. Use kubectl describe backup -n test-kahu
  3. Get inside the nfs server pod and check the content inside the given mount path (use kubectl exec -ti <nfs-server-pod> -n test-kahu /bin/sh)
  4. Create the below restore CR on a new namespace (restore-ns)

     apiVersion: kahu.io/v1
     kind: Restore
     metadata:
       name: restore-Kahu-0001
     spec:
       backupName: backup-Kahu-0001
       includeResources:
         - name:
           kind: Pod
           isRegex: true
         - name:
           kind: Deployment
           isRegex: true
       namespaceMapping:
         test-ns: restore-ns

Expected Result:
  1. In step 2,
    a) Verify that backup stage is Finished and State is Completed
    b) Verify the resource list in status shows all the required resources
  2. In step 3, verify that
    a) tar file is created with name of backup
    b) After untarring the file, all resources in the namespace test-ns are backed up.
  3. In step 4, verify that
    a) Only pods and deployments from the backed up resources are up in the new namespace restore-ns

Why this issue to fixed / feature is needed(give scenarios or use cases):
This is a testcase to be automated, to make sure the resources are properly backed up and restored.

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)

Introduce Finalizers for Backup and Restore cleanup

Issue/Feature Description:
The issue is to track Restore and Backup cleanup in resource delete scenario.

With the deletion of a Backup/Restore object, the controller should trigger the respective resource deletion.
The deletion event likely gets missed if the controller is not available to receive it, and the resource gets deleted in the background.

Possible solution:
Add finalizers to wait until specific conditions are met before the resources marked for deletion are fully deleted (see the sketch below).
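A minimal sketch of the finalizer flow, assuming controller-runtime helpers; the finalizer string and the Backup object wiring are assumptions, not the actual kahu implementation:

// Hedged sketch: guard Backup deletion with a finalizer so the controller can
// clean up dependent resources first. The finalizer string and Backup type
// wiring are assumptions; controller-runtime helpers are used for brevity.
package controllers

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

const backupFinalizer = "kahu.io/backup-cleanup" // assumed finalizer name

func reconcileDeletion(ctx context.Context, c client.Client, backup client.Object) error {
	if backup.GetDeletionTimestamp().IsZero() {
		// Object is live: make sure the finalizer is present.
		if controllerutil.AddFinalizer(backup, backupFinalizer) {
			return c.Update(ctx, backup)
		}
		return nil
	}
	// Object is being deleted: clean up, then release the finalizer.
	if err := cleanupBackupResources(ctx, c, backup); err != nil {
		return err // retry; the object stays until cleanup succeeds
	}
	controllerutil.RemoveFinalizer(backup, backupFinalizer)
	return c.Update(ctx, backup)
}

func cleanupBackupResources(ctx context.Context, c client.Client, backup client.Object) error {
	// Placeholder: delete snapshots, provider artefacts, etc. owned by this backup.
	return nil
}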

Why this issue to fixed / feature is needed(give scenarios or use cases):
For graceful cleanup for Backup and Restore CRDs

How to reproduce, in case of a bug:

Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
