
License: Apache License 2.0

El Carro: The Oracle Operator for Kubernetes

Go Report Card

Run Oracle on Kubernetes with El Carro

El Carro is a new project that offers a way to run Oracle databases in Kubernetes as a portable, open source, community driven, no vendor lock-in container orchestration system. El Carro provides a powerful declarative API for comprehensive and consistent configuration and deployment as well as for real-time operations and monitoring.

High Level Overview

El Carro helps you deploy and manage Oracle database software in Kubernetes. You must have appropriate licensing rights to the Oracle software in order to use it with El Carro (bring your own license, BYOL).

With the current release, you download the El Carro installation bundle, stage the Oracle installation software, create a containerized database image (with or without a seed database), and then create an Instance (known as CDB in Oracle parlance) and add one or more Databases (known as PDBs).

After the El Carro Instance and Database(s) are created, you can take snapshot-based or RMAN-based backups and get basic monitoring and logging information. Additional database services will be added in future releases.

License Notice

El Carro itself is open source, licensed under the Apache License 2.0. You can use El Carro to automatically provision and manage Oracle Database Express Edition (XE) or Oracle Database Enterprise Edition (12c and 19c). In each case, it is your responsibility to ensure that you have appropriate licenses to use any such Oracle software with El Carro.

Please also note that each El Carro “database” will create a pluggable database, which may require licensing of the Oracle Multitenant option.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Quickstart

We recommend starting with the quickstart. As you become more familiar with El Carro, consider trying more advanced features by following the user guides linked below.

If you have a valid license for Oracle 12c EE or 19c EE and would like to get your Oracle database up and running on Kubernetes, you can follow the quickstart guide for Oracle 12c or the quickstart guide for Oracle 19c.

As an alternative to Oracle 12c EE or 19c EE, you can use Oracle 18c XE, which is free to use, by following the quickstart guide for Oracle 18c XE instead.

If you prefer to run El Carro locally on your personal computer, you can follow the user guide for Oracle on minikube or the user guide for Oracle on kind.

Preparation

To prepare the El Carro download and deployment, follow this guide.

Provisioning

El Carro helps you easily create, scale, and delete Oracle databases.

First, you need to create a containerized database image.

You can optionally create a default Config to set namespace-wide defaults for configuring your databases, following this guide.

Then you can create Instances (known as CDBs in Oracle parlance), following this guide. Afterward, create Databases (known as PDBs) and users following this guide.
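
For orientation, here is a trimmed, hedged sketch of the two resources involved, adapted from the example manifests quoted in the issues further down this page; the image path and password are placeholders, and the linked guides remain the authoritative reference for required fields:

apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Instance
metadata:
  name: mydb
spec:
  cdbName: CDB
  databaseResources:
    requests:
      memory: 4.0Gi
  disks:
    - name: DataDisk
      size: 45Gi
      storageClass: "standard-rwo"
    - name: LogDisk
      size: 55Gi
      storageClass: "standard-rwo"
  edition: Enterprise
  images:
    service: gcr.io/...
  version: "19.3"
---
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Database
metadata:
  name: mydb
spec:
  instance: mydb
  name: MY_PDB
  admin_password: ...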

Backup and Recovery

El Carro provides both storage-snapshot-based and Oracle-native RMAN-based backup/restore features to support your database backup and recovery strategy.

After the El Carro Instance and Database(s) are created, you can create storage-snapshot-based backups, following this guide.

You can also create Oracle-native RMAN-based backups, following this guide.

To restore from a backup, follow this guide.
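
As a rough sketch only (the field names below are assumptions inferred from the CRD group used elsewhere on this page; check the linked guides for the exact spec), requesting a snapshot-based backup of an Instance looks something like:

apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Backup
metadata:
  name: mydb-snap-1
spec:
  # which El Carro Instance to back up
  instance: mydb
  # assumed field: Snapshot for storage-snapshot backups, Physical for RMAN
  type: Snapshot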

Data Import & Export

El Carro provides data import/export features based on Oracle Data Pump.

To import data to your El Carro database, follow this guide.

To export data from your El Carro database, follow this guide.
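
As a hedged sketch of the export flow (the gcsPath field is confirmed by the Import/Export plugins issue below; the remaining field names are assumptions, so consult the linked guides for the exact spec), exporting a single PDB to Cloud Storage might look like:

apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Export
metadata:
  name: export-my-pdb
spec:
  # assumed fields: the source Instance and Database (PDB) to export
  instance: mydb
  databaseName: MY_PDB
  # Data Pump dump destination in GCS (see the Import/Export plugins issue below)
  gcsPath: gs://my-bucket/exports/my_pdb.dmp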

What's More?

El Carro supports more features, with more to be added soon! For more information, check the guides on logging, monitoring, connectivity, the UI, and more.

Contributing

You're very welcome to contribute to the El Carro Project!

We've put together a set of contributing and development guidelines that you can review in this guide.

Support

To report a bug or log a feature request, please open a GitHub issue and follow the guidelines for submitting a bug.

For general questions or community support, we welcome you to join the El Carro community mailing list and ask your question there.

elcarro-oracle-operator's Issues

image_build.sh script failing on Mac OS 11.3.1 with expr: syntax error

image_build.sh script failing on Mac OS 11.3.1 (bash & zsh shells) with expr: syntax error.

This is due to the use of `expr length`, e.g.:

% echo ${DBNAME}
PSTG

% echo `expr length "${DBNAME}"`
expr: syntax error

% echo "${#DBNAME}"
4

Also, macOS does not include GNU getopt, so command-line parameter parsing does not work on macOS.

Set hidden Oracle parameters

Is your feature request related to a problem? Please describe.
Unfortunately, there are applications that require undocumented hidden Oracle parameters, whose names start with an underscore. I would like El Carro to offer the possibility of defining these in the Instance resource.

Describe the solution you'd like
It should be possible to set hidden parameters in the Instance resource, preferably but not necessarily similar to how it is done for the regular ones.

Describe alternatives you've considered
Run a custom SQL script during the database image creation.

Additional context
Since it is not possible to look up the type of hidden parameters in V$PARAMETER, there has to be another way of determining the type. One possibility is to let the user specify the type, but then hidden parameters would use a different YAML syntax than regular parameters do today. In that case it could also be possible to let the user define the scope; otherwise the scope has to default to "spfile".
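
Purely to illustrate the request (this syntax is hypothetical and not implemented in El Carro), a hidden parameter with an explicit type and scope might be declared on the Instance like this:

spec:
  # hypothetical syntax proposed in this issue, not part of the current Instance spec
  hiddenParameters:
    - name: "_some_hidden_parameter"
      value: "TRUE"
      # user-supplied because the type cannot be looked up in V$PARAMETER
      type: boolean
      scope: spfile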

Unable to build operator docker image. stat oracle/pkg/database/common: file does not exist

Describe the bug
Unable to build operator docker image locally

To Reproduce

cd $PATH_TO_EL_CARRO_REPO
{
export REPO="localhost:5000/oracle.db.anthosapis.com"
export TAG="latest"
export OPERATOR_IMG="${REPO}/operator:${TAG}"
docker build -f oracle/Dockerfile -t ${OPERATOR_IMG} .
docker push ${OPERATOR_IMG}
}
Sending build context to Docker daemon   4.71MB
Step 1/19 : FROM docker.io/golang:1.15 as builder
 ---> 40349a2425ef
Step 2/19 : WORKDIR /build
 ---> Using cache
 ---> b44c2a87f722
Step 3/19 : COPY go.mod go.mod
 ---> Using cache
 ---> c359cdfe04b9
Step 4/19 : COPY go.sum go.sum
 ---> Using cache
 ---> 6f6d2902ef22
Step 5/19 : RUN go mod download
 ---> Using cache
 ---> 8be558325755
Step 6/19 : COPY common common
 ---> Using cache
 ---> 1dd64c7bfbc5
Step 7/19 : COPY oracle/main.go oracle/main.go
 ---> Using cache
 ---> 0a79c9d91f73
Step 8/19 : COPY oracle/version.go oracle/version.go
 ---> Using cache
 ---> a9fbca9b14cf
Step 9/19 : COPY oracle/api/ oracle/api/
 ---> Using cache
 ---> 123c5e7c856e
Step 10/19 : COPY oracle/controllers/ oracle/controllers/
 ---> Using cache
 ---> 7c7a1ff96c61
Step 11/19 : COPY oracle/pkg/agents oracle/pkg/agents
 ---> Using cache
 ---> 9d5ed5ea3f52
Step 12/19 : COPY oracle/pkg/database/common oracle/pkg/database/common
COPY failed: file not found in build context or excluded by .dockerignore: stat oracle/pkg/database/common: file does not exist

Expected behavior
docker build finishes successfully

Resizing of instances (CPU, RAM, Disks)

Is your feature request related to a problem? Please describe.
Currently there is no option to resize an already deployed El Carro Instance. Changes made to databaseResources are not applied to the statefulset.

Describe the solution you'd like
When changing the resources of the instance described in:

  databaseResources:
    requests:
      memory: 64Gi
      cpu: 8

and/or the disk sizes described in:

  disks:
    - name: DataDisk
      size: 210Gi
      storageClass: "standard-rwo"
    - name: LogDisk
      size: 110Gi
      storageClass: "standard-rwo"

the Statefulset and PVCs should be resized accordingly. (+ whatever needs to happen inside Oracle to support this, apart from possibly necessary parameter changes that are under the responsibility of the user, e.g. sga_max_size)

Describe alternatives you've considered
Resizing the PVCs manually does work (Scale down statefulset -> Change PVC -> Scale up statefulset) and the disk space can be used by Oracle afterwards. Manually changing the Statefulset might work as well, but we weren't able to test it extensively for unexpected side-effects.

Thx for considering our request!

Database deletion hangs if the Instance has been deleted first (e.g. via Helm)

Describe the bug
If I first delete the Instance resource, and then one of its Database resources, the deletion of the Database never finishes.

To Reproduce
Steps to reproduce the behavior:

  1. Create an Instance with a Database as follows
---
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Instance
metadata:
  name: mydb
spec:
  cdbName: CDB
  cloudProvider: GCP
  databaseResources:
    requests:
      memory: 4.0Gi
  dbDomain: gke
  disks:
    - name: DataDisk
      size: 45Gi
      storageClass: "standard-rwo"
    - name: LogDisk
      size: 55Gi
      storageClass: "standard-rwo"
  edition: Enterprise
  images:
    service: gcr.io/...
  services:
    Backup: true
    Monitoring: true
    Logging: true
  sourceCidrRanges: [ 0.0.0.0/0 ]
  type: Oracle
  version: "19.15"
---
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Database
metadata:
  name: mydb
spec:
  instance: mydb
  name: MY_PDB
  admin_password: ...

(update image and password above)

Wait until the Database is Ready.

$ kubectl get databases.oracle.db.anthosapis.com
NAME   INSTANCE   USERS   PHASE   DATABASEREADYSTATUS   DATABASEREADYREASON   USERREADYSTATUS   USERREADYREASON
mydb   mydb               Ready   True                  CreateComplete        True              SyncComplete
  2. Delete the Instance
$ kubectl delete instances.oracle.db.anthosapis.com mydb
instance.oracle.db.anthosapis.com "mydb" deleted
  3. Delete the Database
$ kubectl delete databases.oracle.db.anthosapis.com mydb
database.oracle.db.anthosapis.com "mydb" deleted
^C

The deletion of the database hangs after the message is printed. Operator logs show repeated messages

I1020 06:56:02.378181       1 database_controller.go:129] controllers/Database "msg"="reconciling Database (PDB) deletion..." "Database"={"Namespace":"default","Name":"mydb"}
E1020 06:56:02.379075       1 controller.go:326]  "msg"="Reconciler error" "error"="Instance.oracle.db.anthosapis.com \"mydb\" not found" "controller"="database" "controllerGroup"="oracle.db.anthosapis.com" "controllerKind"="Database" "database"={"name":"mydb","namespace":"default"} "name"="mydb" "namespace"="default" "reconcileID"="ee938527-c9a0-4a3e-ac06-fef33e2869b5"

Expected behavior
My expectation is that since the Instance deletes the underlying CDB altogether, the Database deletion should be immediate. This is even more important when using Helm to install & uninstall a release containing El Carro resources, since Helm does not guarantee any specific order in this case.

Even though I constructed the above minimal case for reproducing the issue with kubectl, the original problem appeared when using helm.

Additional context
Kubernetes versions:

kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.15", GitCommit:"1d79bc3bcccfba7466c44cc2055d6e7442e140ea", GitTreeState:"clean", BuildDate:"2022-09-21T12:11:27Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.12-gke.2300", GitCommit:"e55564cf3a1384026a54920174977659c8c56a50", GitTreeState:"clean", BuildDate:"2022-08-16T09:24:51Z", GoVersion:"go1.16.15b7", Compiler:"gc", Platform:"linux/amd64"}

Workarounds are either:

  • use the "correct" deletion order: first Databases, then Instances
    or:
  • if the Database hangs and cannot be deleted, remove its metadata.finalizers

Configure Affinity and Tolerations for DB pods

Is your feature request related to a problem? Please describe.
Oracle recommends adjusting Linux kernel parameters (https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/changing-kernel-parameter-values.html#GUID-FB0CC366-61C9-4AA2-9BE7-233EB6810A31). Some of these are not namespaced, for instance, fs.aio-max-nr. The default value on GKE nodes (65536) is lower than the Oracle
recommendation. Therefore it is necessary to control scheduling of Instance DB pods only on specifically prepared nodes with affinity.

In addition, tolerations are quite helpful for achieving workload separation.

Describe the solution you'd like
Parameters related to scheduling can be set on the Instance, and will be propagated by the operator to managed StatefulSets and possibly the Deployments as well.
Example:

apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Instance
metadata:
  name: mydb
spec:
  cdbName: CDB
  cloudProvider: GCP
  databaseResources:
    requests:
      memory: 4.0Gi
  ...
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: app
            operator: In
            values:
            - oracle
  tolerations:
    - key: app
      value: oracle
      effect: NoSchedule

Alternatively only the nodeAffinity part of the affinity element may be supported.

Describe alternatives you've considered

  1. Adjust kernel parameters on all nodes in the GKE cluster, which may cause conflicts.
  2. Use a mutating admission controller to modify pods belonging to StatefulSets managed by El Carro. The downside is the increased complexity.

Specified size of additional disks on Instance is ignored

Describe the bug
When defining an additional disk in the Instance spec, e.g. like this:

  disks:
    - name: DataDisk
      size: 200Gi
      storageClass: "standard-rwo"
    - name: LogDisk
      size: 100Gi
      storageClass: "standard-rwo"
    - name: DataDisk2
      size: 64Ti
      storageClass: standard-rwo

The disk will get a default size of 100Gi instead of the desired disk size of e.g. 64Ti.

Expected behavior
The disk should be created with the specified size, the same way the size of the default disks can be specified.

Proposed solution
Because the new disk does not exist in the defaultDiskSpecs, the default size of 100Gi is used instead of the specified size. This is the change I made in our fork, and it has fixed the problem in our tests:

common/pkg/utils/utils.go

// FindDiskSize returns the size to use for a disk, preferring (in order): the size
// set on the disk spec itself, the size configured for a disk of the same name in
// the namespace Config, the default spec for that disk name, and finally the
// global default size.
func FindDiskSize(diskSpec *commonv1alpha1.DiskSpec, configSpec *commonv1alpha1.ConfigSpec, defaultDiskSpecs map[string]commonv1alpha1.DiskSpec, defaultDiskSize resource.Quantity) resource.Quantity {
	spec, exists := defaultDiskSpecs[diskSpec.Name]

	// An explicitly specified size always wins, even for disks that are not
	// present in defaultDiskSpecs (the fix described above).
	if !diskSpec.Size.IsZero() {
		return diskSpec.Size
	}

	// Fall back to a size configured for this disk name in the namespace Config.
	if configSpec != nil {
		for _, d := range configSpec.Disks {
			if d.Name == diskSpec.Name {
				if !d.Size.IsZero() {
					return d.Size
				}
				break
			}
		}
	}

	// Fall back to the default spec for this disk name, if one exists;
	// otherwise use the global default size.
	if exists {
		return spec.Size
	}
	return defaultDiskSize
}

Go report card doesn't render properly on the landing page + some of the links are broken

1/ Go report card doesn't render properly:

Site cannot be reached: https://goreportcard.com/report/github.com/GoogleCloudPlatform/elcarro-oracle-operator

2/ All three links under https://github.com/GoogleCloudPlatform/elcarro-oracle-operator#backup-and-recovery are broken:

https://github.com/GoogleCloudPlatform/elcarro-oracle-operator/blob/main/docs/content/backup-restore/snapshot-backups
https://github.com/GoogleCloudPlatform/elcarro-oracle-operator/blob/main/docs/content/backup-restore/rman-backups
https://github.com/GoogleCloudPlatform/elcarro-oracle-operator/blob/main/docs/content/backup-restore/restore-from-backups

All three are missing the .md suffix in the URL

Thanks,
Boris.

PDB initialization SQL scripts

Is your feature request related to a problem? Please describe.
The application needs the PDB to contain special changes performed by SYS, for example, permissions on dictionary objects owned by SYS with GRANT OPTION.

Describe the solution you'd like
The Database resource could contain a reference to a SQL script to be executed on PDB creation, either directly embedded as text or as a reference to a ConfigMap.

For example, with the script provided via a ConfigMap:

apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Database
metadata:
  name: mydb
spec:
  instance: mydb
  name: MY_PDB
  admin_password: ...
  initializationScript:
    configMapRef:
      name: mydb-init-scripts
      key: init.sql

alternatively, just embedded SQL:

  initializationScript:
    sql: |
      GRANT SELECT ON ALL_TABLES TO GPDB_ADMIN WITH GRANT OPTION;

If the script fails, the Database should not reach the Ready state.

Describe alternatives you've considered

  • Build a seeded image that contains PDB$SEED with the customizations (requires use of undocumented parameters).
  • Build a seeded image that contains a customized PDB for cloning; let El Carro create new PDBs as clones of the custom one.

Confusion about platform dependency

Describe the bug

The true Kubernetes way is to avoid any dependencies on the underlying platform and allow the application to run the same way on any K8s cluster installation. The nature of the app itself shouldn't matter either, whether it is stateful or stateless.

Nowadays, cloud provider code is moved out of the tree into separate external cloud provider repos. Kubernetes allows dealing with abstract things like Ingress, StorageClass, LoadBalancer services, etc. Such things hide the underlying platform from the applications and allow them to run on any Kubernetes installation, be it Minikube, GCP, or any other way to run k8s. The logic to provision such abstract things is delegated to the appropriate cloud provider binary. This binary knows better what to do to provide a service.

As I see it, the v0.1.0-alpha version contains only a small amount of platform-dependent code, and in most cases it is unnecessary, so it is a good time to redesign the El Carro operator to make it more interesting for the community.

Expected behavior
A cloud-agnostic way to run the El Carro Oracle Operator.

Additional context
Cloud Providers in Kubernetes

Allow configuring multi-zone/regional topology constraints

Is your feature request related to a problem? Please describe.
We want to be able to modify topology constraints for El Carro resources when running on a multi-zonal cluster. These are defined in the k8s well-known annotations list, available at https://kubernetes.io/docs/reference/labels-annotations-taints

Describe the solution you'd like
The most straightforward way to support this from a user perspective would be to simply apply these well-known annotations directly to the instance object (as it is the root resource determining compute and disk elements), and have the operator propagate them down to any created pods/PVCs.

The specific annotations we have in mind are:
topology.kubernetes.io/region
topology.kubernetes.io/zone

Describe alternatives you've considered
There are some even more fine-grained topology annotations, targeting nodes/hostnames, which we could consider supporting instead of or in addition to these.

We could also choose to expose these through some portion of the spec instead, but I suspect it's easiest to use if we expose them via annotations, similar to how users would interact with a Pod/Deployment.

Additional context
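A hypothetical sketch of the annotation-based approach described above (these annotations are not currently propagated by the operator):

apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Instance
metadata:
  name: mydb
  annotations:
    # hypothetical: to be copied by the operator onto created pods/PVCs
    topology.kubernetes.io/region: us-central1
    topology.kubernetes.io/zone: us-central1-a
spec:
  ...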

Podspec is only applied to the Oracle Statefulset, not to the Oracle Monitor Deployment

Describe the bug
Podspec is only applied to the Oracle Statefulset, not to the Oracle Monitor Deployment.
This causes the monitor pod to never be scheduled onto a node and forever stay in "Pending".

To Reproduce
Deploy an Instance with the following podSpec:
podSpec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: workload
            operator: In
            values:
            - app-instance
          - key: app
            operator: In
            values:
            - oracle
          {{- if .Values.machineFamily }}
          - key: cloud.google.com/machine-family
            operator: In
            values:
            - "{{ .Values.machineFamily }}"
          {{- end }}
  tolerations:
  - key: app
    value: oracle
    operator: Equal
    effect: NoSchedule
  - key: workload
    operator: Equal
    value: app-instance
    effect: NoSchedule

Only the Oracle pod created by the Statefulset will have the tolerations and affinities applied to it. The Oracle monitor pod does not, but it does have a pod-affinity to the other pod.
This results in the monitor pod not being able to be scheduled onto a node, because it has to run on the same node as the other pod, but doesn't have the required toleration to be allowed onto that node.

Expected behavior
The podspec is applied to both pods.

Support configuring runtimeClassName for instances

Is your feature request related to a problem? Please describe.
If we want to run Oracle containers in a sandboxed runtime like gVisor (https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods), we need additional configuration functionality.

Describe the solution you'd like
It seems like the preferable way to implement this would be to expose an equivalent of pod.spec.runtimeClassName on our instance and propagate it to the statefulset.spec.template.spec.runtimeClassName field.
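
A hypothetical sketch of that proposal (the runtimeClassName field does not exist on the Instance spec today):

apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Instance
metadata:
  name: mydb
spec:
  # hypothetical: to be propagated to statefulset.spec.template.spec.runtimeClassName
  runtimeClassName: gvisor
  ...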

Describe alternatives you've considered
Another possible option is to implement the manual affinity and taint rules. This is possible with #268

Additional context

Add ability to stop & start database instances

TL;DR

Ability to scale down the database instance when not in use to save compute resources and scale it back up when needed. Ideally automatically supported by helm upgrade.

Is your feature request related to a problem? Please describe.

We have an important use case: in a scalable infrastructure environment (e.g. GKE), we want to scale down all resources of an application when they don't need to run, in order to save expensive compute resources, and scale them back up when needed.

The replica count of the database instance's statefulset is not tracked in the instance CRD. A "helm upgrade" after a scale-down, which restores all statefulsets/deployments to the replicas defined in the helm chart, thus works for all parts of our application except El Carro, which remains scaled down to 0 replicas. Scaling the statefulset manually is an option, but it is also not fully supported by the operator, so it may have negative side effects.

Describe the solution you'd like
A flag to instruct the El Carro operator to cleanly stop/start the database instance (and scale the statefulset down/up). If it was previously set to "scale down", a "helm upgrade" should set it back to "scale up" and start the database instance pod again.

The resulting flow might look like:

  1. [CREATE] “helm upgrade --install” -> Instance CRD is deployed with “instance.Spec.IsStopped: false”
  2. [STOP] Set “instance.Spec.IsStopped: true” to stop the DB
  3. [START] “helm upgrade --install” -> Sets “instance.Spec.IsStopped: false” (because that’s the only difference between the CRD in k8s and in the chart), triggering startup of the DB
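
A hypothetical sketch of the proposed flag on the Instance spec (field name taken from the flow above; not implemented today):

apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Instance
metadata:
  name: mydb
spec:
  # hypothetical: true cleanly stops the DB and scales the statefulset to 0,
  # false (the chart default) starts it again on the next helm upgrade
  isStopped: true
  ...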

Thank you for considering this request!

Kind regards,
Maximilian Schiefer (for Regnology)

Additional disks prevent patching the instance

Describe the bug
If an instance spec contains additional disks, it cannot be patched by El Carro. The instance always goes into the PatchingBackupFailure state.

To Reproduce
Steps to reproduce the behavior:

  1. Create an instance with an additional disk and Patching service enabled e.g.:
    spec:
      disks:
      - name: DataDisk
        size: 500Gi
        storageClass: standard-rwo
      - name: LogDisk
        size: 150Gi
        storageClass: standard-rwo
      - name: data1
        size: 1Gi
        storageClass: standard-rwo
      services:
        Patching: true
  2. Update the spec.images.service to a new image.
  3. Watch instance state changes kubectl get instances -w
  4. See it reach the PatchingBackupFailure state

Expected behavior
The instance is patched successfully.

Operator log

E0326 20:22:37.849271       1 instance_controller.go:249] controllers/Instance "msg"="patchingStateMachine failed" "error"="VolumeSnapshot.snapshot.storage.k8s.io \"patching-backup-oracle-191520240326725970651-\" is invalid: [metadata.name: Invalid value: \"patching-backup-oracle-191520240326725970651-\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'), metadata.labels: Invalid value: \"patching-backup-oracle-191520240326725970651-\": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue',  or 'my_value',  or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')]" "Instance"={"Namespace":"test1","Name":"myinstance"}

Apparently the issue is caused by prePatchBackup where the function GetPVCNameAndMount is used but GetCustomPVCNameAndMount is not.

Adding additional disks to Instance requires manual deletion of the StatefulSet

Describe the bug
When adding additional disks to an instance and rolling out the change with helm upgrade, we have to delete the oracle instance statefulset during the helm upgrade, in order for the changes to be applied.

To Reproduce

  1. Deploy oracle instance
  2. Update the instance spec in the helm chart to add additional disk(s)
  3. Deploy the changes with helm upgrade
  4. Observe that the disks are not being created, because while the Instance is updated, the Operator can't patch the statefulset, as an immutable part of the spec has changed and recreation of the statefulset is required

Expected behavior
The operator should recreate the statefulset by itself to not require manual intervention.

Workaround
Deleting the statefulset manually after executing the helm upgrade resolves the issue, as the operator recreates the statefulset.
However, we can't automate this, as we are waiting for the helm upgrade to finish (with --wait) in our deployment pipeline. Deleting the statefulset before executing helm upgrade doesn't work, because the operator recreates it instantly, before helm can apply the changes.
We need "--wait" during the helm upgrade, because it allows us to determine exactly when the upgrade is fully completed and the app deployment is ready to use.

Security Policy violation Binary Artifacts

This issue was automatically created by Allstar.

Security Policy Violation
Project is out of compliance with Binary Artifacts policy: binaries present in source code

Rule Description
Binary Artifacts are an increased security risk in your repository. Binary artifacts cannot be reviewed, allowing the introduction of possibly obsolete or maliciously subverted executables. For more information see the Security Scorecards Documentation for Binary Artifacts.

Remediation Steps
To remediate, remove the generated executable artifacts from the repository.

Artifacts Found

  • third_party/runtime/libaio.so.1
  • third_party/runtime/libnsl.so.1

Additional Information
This policy is drawn from Security Scorecards, which is a tool that scores a project's adherence to security best practices. You may wish to run a Scorecards scan directly on this repository for more details.


Allstar has been installed on all Google managed GitHub orgs. Policies are gradually being rolled out and enforced by the GOSST and OSPO teams. Learn more at http://go/allstar

This issue will auto resolve when the policy is in compliance.

Issue created by Allstar. See https://github.com/ossf/allstar/ for more information. For questions specific to the repository, please contact the owner or maintainer.

COPY oracle/version.go oracle/version.go: file does not exist

Step 8/19 : COPY oracle/version.go oracle/version.go
COPY failed: file not found in build context or excluded by .dockerignore: stat oracle/version.go: file does not exist

Getting the above error while installing the El Carro Oracle operator on minikube.

Plugins support for Import/Export resources

Is your feature request related to a problem? Please describe.
Currently only Google Cloud Storage is supported for import and export of Data Pump dumps:

GcsPath string `json:"gcsPath,omitempty"`

As a result, such features are unusable in enterprise environments with restricted networks and strict rules for data location.

Describe the solution you'd like
I suggest changing the "generic" Import/Export resources to use something like HTTP/HTTPS URLs, and adding the ability to "load" plugins for different storage backends on different cloud providers or protocols (NFS, CIFS, or any other crazy things).
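
A hypothetical sketch of what a plugin-friendly destination might look like (none of these fields exist today; this only illustrates the request):

apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Export
metadata:
  name: export-my-pdb
spec:
  instance: mydb
  databaseName: MY_PDB
  # hypothetical: generic URL resolved by a pluggable storage backend instead of gcsPath
  destination:
    url: https://artifacts.example.com/exports/my_pdb.dmp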
