
kohlstechnology / eunomia

149 stars · 10 watchers · 37 forks · 7.49 MB

A GitOps Operator for Kubernetes

License: Apache License 2.0

Shell 23.56% Dockerfile 0.88% Go 70.95% Makefile 1.01% Smarty 0.06% Python 3.54%
kubernetes k8s gitops golang kubernetes-operator operator operator-sdk

eunomia's People

Contributors

etsauer, gitter-badger, gl4di4torrr, jboxman, jimmyfigiel, jkupferer, meganmoran, raffaelespazzoli, rajneesh-ops, rhalat, safirh, sanbornick, sbar95, seanmalloy, smiley73, vinny-sabatini, vjayraghavan, warrenvw


eunomia's Issues

Need ability to specify namespace in CR

What happens?

Currently all resources get created within the namespace the job runs in. This makes it impossible to, for example, create a new namespace, create a new SA and a CR within it, and then have the operator take it from there.

What were you expecting to happen?

We can create resources in any namespace, as long as the SA has sufficient access.
We should extend the CRD and add:

spec:
  namespace: "xxx"

It should default to the current namespace when not defined.
The job templates, as well as discoverEnvironment.sh and resourceManager.sh, need to set the proper context.

This can simply be passed as an ENV variable to the jobs. The variable already exists, but is not consumed.

{{ if .Config.Spec.namespace }}
              value: {{ .Config.Spec.namespace }}
{{ else }}
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace          
{{ end }}
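For illustration, a complete CR using the proposed field might look like the sketch below (spec.namespace is the proposed new field; the apiVersion matches the one visible in the e2e test logs):

```yaml
apiVersion: eunomia.kohls.io/v1alpha1
kind: GitOpsConfig
metadata:
  name: gitops
  namespace: team-a
spec:
  # Proposed field: resources are created here instead of metadata.namespace.
  # When omitted, it falls back to the namespace the job runs in.
  namespace: team-b
```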

Steps to reproduce:

Try creating a resource in another namespace.

Add To operatorhub.io

What happens?

This operator is not available on operatorhub.io

What were you expecting to happen?

I want to see this operator on operatorhub.io

Steps to reproduce:

Open https://operatorhub.io/

Any errors, stacktrace, logs?

n/a

Environment:

  • Runtime version(Java, Go, Python, etc): na
  • Desktop OS/version: na

Additional comments:

none

Prometheus Metrics For Operator

What happens?

After installing the operator I would like to monitor it using Prometheus metrics, but no custom Prometheus metrics are exported.

What were you expecting to happen?

I want the operator to expose custom Prometheus metrics, so that I can monitor it. At a minimum I need ...

  • basic Go runtime metrics from the Prometheus client
  • a gauge build_info metric that provides the Go version, eunomia version, git SHA, and git branch as labels
  • a counter metric that tracks the number of failed k8s Jobs
  • a counter that tracks failed HTTP POST requests for the GitHub webhook
  • end user documentation for the custom metrics that are exported
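As a sketch of what the requested metrics could look like on the wire, the snippet below hand-rolls the Prometheus text exposition format with only the standard library; a real implementation would register these through the Prometheus client library, and the metric and label names here are suggestions, not an existing API.

```go
package main

import (
	"fmt"
	"runtime"
)

// buildInfoMetric renders a build_info gauge in Prometheus text exposition
// format, carrying version metadata as labels. Metric and label names are
// illustrative, not eunomia's actual API.
func buildInfoMetric(version, sha, branch string) string {
	return fmt.Sprintf(
		"eunomia_build_info{goversion=%q,version=%q,git_sha=%q,git_branch=%q} 1",
		runtime.Version(), version, sha, branch)
}

// failedJobsMetric renders a counter tracking failed k8s Jobs.
func failedJobsMetric(n int) string {
	return fmt.Sprintf("eunomia_failed_jobs_total %d", n)
}

func main() {
	fmt.Println(buildInfoMetric("v0.0.1", "e4c5cd1", "master"))
	fmt.Println(failedJobsMetric(3))
}
```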

Steps to reproduce:

N/A - this is a feature request

Any errors, stacktrace, logs?

No

Environment:

  • Runtime version(Java, Go, Python, etc): N/A
  • Desktop OS/version: N/A

Additional comments:

I believe the operator-sdk probably provides some default Prometheus metrics, but I have not verified whether this works.

.go-version

Write a .go-version text file in the top-level directory to assist developers in selecting the right Go version when contributing to this project, for example with JetBrains GoLand or gvm.
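A minimal sketch, assuming the project standardizes on Go 1.12 (the version that appears in the environment sections of several issues):

```shell
# Record the expected Go toolchain version for contributors and tooling.
printf '1.12\n' > .go-version
```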

Push Container Images to Quay

What happens?

Container images for this project are not available on a public container registry.

What were you expecting to happen?

Container images for this project are pushed to https://quay.io/organization/kohlstechnology

Steps to reproduce:

Look at https://quay.io/organization/kohlstechnology, notice that the container images for this project do not exist.

Any errors, stacktrace, logs?

N/A

Environment:

  • Runtime version(Java, Go, Python, etc): n/a
  • Desktop OS/version: n/a

Additional comments:

none

Could we build our base images on an image that uses glibc rather than musl?

The Alpine base image we use for the template processors ships musl instead of the GNU C library (glibc). As a result, some dynamically linked binaries, such as oc, do not work without some re-linking. This is what I had to do to get oc minimally working on our base image:

  && ln -s /lib /lib64 \
  && ln -s /lib64/ld-musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2

This makes me nervous: while this appears to work, it would not be community supported should we experience issues.

I would suggest we switch to one of the Universal Base Images, as they are RHEL-based but still fully community distributable. I think something like ubi8/ubi-minimal would work nicely for us.
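A hypothetical Dockerfile sketch of that switch (the package list is an assumption about what the template-processor scripts need):

```dockerfile
# RHEL-based, freely redistributable, and glibc-backed, so dynamically linked
# binaries like oc run without re-linking.
FROM registry.access.redhat.com/ubi8/ubi-minimal:latest
RUN microdnf install -y git findutils tar \
    && microdnf clean all
```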

Test Gitter Integration

(While submitting an issue briefly describe the problem you are facing or a new feature you would want to see added)

What happens?

...

What were you expecting to happen?

...

Steps to reproduce:

  • ...
  • ...

Any errors, stacktrace, logs?

...

Environment:

  • Runtime version(Java, Go, Python, etc):
  • Desktop OS/version:

Additional comments:

...

Ability to see which commit id was last applied

I would like the ability to see which git commit id was last run. We could add this two places:

  1. In the logs of the job pod, both at the beginning and at the end of a run.
  2. In the .status section of the GitOpsConfig resource being acted on.
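For option 2, the status section might look like the sketch below (the field names are hypothetical; the SHA shown is just an example):

```yaml
status:
  lastAppliedCommit: e4c5cd1b1bd740a84ab01887a15ce9608d36ca2c
  lastAppliedTime: "2019-05-29T21:27:31Z"
```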

errcheck

Bro tip:

  • Running errcheck with the -blank flag helps identify any Go code that skips checking error return values.

Pin operator-sdk version to v0.8.1

What happens?

Currently dep is pulling the latest revision of the operator-sdk from the master branch instead of a specific tag.

What were you expecting to happen?

The Gopkg.toml file is updated to pin the operator-sdk version to v0.8.1, and the dep ensure -update command is run to update Gopkg.lock.
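The pin would be a constraint along these lines in Gopkg.toml (standard dep syntax; the = prefix pins the exact version):

```toml
[[constraint]]
  name = "github.com/operator-framework/operator-sdk"
  version = "=v0.8.1"
```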

Steps to reproduce:

Run dep ensure -update

Any errors, stacktrace, logs?

n/a

Environment:

  • Runtime version(Java, Go, Python, etc): n/a
  • Desktop OS/version: n/a

Additional comments:

none

Hashicorp Vault Integration

What happens?

We need to have the ability to pull secrets and inject them into the templating engine. We need to decide on what the best approach for this is.

What were you expecting to happen?

We determine a way to integrate Hashicorp Vault in a way that will allow adding other secrets management solutions in the future.

Create debug image of operator

(While submitting an issue briefly describe the problem you are facing or a new feature you would want to see added)

What happens?

Using distroless means we cannot debug inside the image if something is going wrong.

What were you expecting to happen?

Ability to debug when necessary by deploying a debug variant of the image that includes the required tools and runs the operator binary in debug mode.

Steps to reproduce:

N/A


Any errors, stacktrace, logs?

N/A

Environment:

  • Runtime version(Java, Go, Python, etc): N/A
  • Desktop OS/version: N/A

Additional comments:

N/A

Ability to push config without direct GitHub access

What happens?

Kubernetes clusters without internal network access can not be managed.

What were you expecting to happen?

Kubernetes clusters that do not have a connection to the internal network, need to be somehow able to receive the configuration from an internal GitHub. Maybe introduce some kind of proxy or push-model.

All scripts and Dockerfiles should use same variables

Currently we have variables defined in many different places (e.g. the kubectl version). We need to make sure we're consistent and pull them all from the same source. Maybe .travis.yml is the right place?

What happens?

If we change a version, we have to change it in many places and most likely will forget one.

What were you expecting to happen?

Changing a version is a simple task in one file.
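If .travis.yml became the single source, it could carry the versions as global env vars that the scripts and Dockerfiles consume (variable names and the kubectl version below are hypothetical; the operator-sdk version matches the pin proposed elsewhere in this tracker):

```yaml
env:
  global:
    - KUBECTL_VERSION=v1.14.0          # hypothetical value
    - OPERATOR_SDK_VERSION=v0.8.1
```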

Get Initial Travis Integration Setup

What happens?

Travis CI does not yet build and push container images to quay.io.

What were you expecting to happen?

Travis CI builds code, builds container images, and pushes to quay.io.

Steps to reproduce:

n/a

Any errors, stacktrace, logs?

n/a

Environment:

  • Runtime version(Java, Go, Python, etc): n/a
  • Desktop OS/version: n/a

Additional comments:

none

Replace template image shell scripts with golang program

We currently use a couple of shell scripts to initialize the templating engines and handle some other logic. We should check whether it makes sense to replace all of that logic with a Go program, so that we don't have such bloated images. We just have to be careful not to add a lot of additional work or distract from the main operator work.

What happens?

  • Large images due to shell scripts

What were you expecting to happen?

  • We have tiny and efficient images

s/make/mage

One way to make the build system more portable and reliable is to write development commands as Go code with mage instead of make. This reduces the likelihood of surprising build issues, for example when building on local Windows or Mac machines.
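A minimal magefile sketch (hypothetical, not in the repo): targets are ordinary Go functions guarded by the mage build tag, so the same commands run unchanged across platforms.

```go
//go:build mage
// +build mage

// Magefile sketch: `mage build` runs Build() on any OS with a Go toolchain.
package main

import (
	"os"
	"os/exec"
)

// Build compiles the operator, replacing the equivalent Makefile target.
func Build() error {
	cmd := exec.Command("go", "build", "./...")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}
```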

e2e Tests Run Through Travis CI

What happens?

e2e tests do not currently run through Travis CI.

What were you expecting to happen?

e2e tests run through Travis

Steps to reproduce:

N/A

Any errors, stacktrace, logs?

N/A

Environment:

  • Runtime version(Java, Go, Python, etc): Go 1.12
  • Desktop OS/version: N/A

Additional comments:

N/A

Add Variable Hierarchy to GitOpsConfig CR

What happens?

I would like to be able to use a hierarchy of variables for the process template step.

What were you expecting to happen?

In a GitOpsConfig CR I can specify a hierarchy of variables. The template processor will handle processing the hierarchy of variables.

Steps to reproduce:

  • N/A

Any errors, stacktrace, logs?

N/A

Environment:

  • Runtime version(Java, Go, Python, etc): N/A
  • Desktop OS/version: N/A

Additional comments:

Examples of variable hierarchies that already exist in the wild ...

Clean Up CI/CD Pipeline

What happens?

Currently the CI/CD pipeline has various things scattered between .travis.yml, a Makefile, and some shell scripts. It is a bit messy.

What were you expecting to happen?

A clean easy to follow CI/CD pipeline that is easy to modify.

Steps to reproduce:

  • n/a

Any errors, stacktrace, logs?

No.

Environment:

  • Runtime version(Java, Go, Python, etc): Go 1.12.6
  • Desktop OS/version: n/a

Additional comments:

I propose that we move everything into the Makefile and remove all of the shell scripts. The Travis CI config would then be updated to just run various make commands.

Builds are failing due to missing git.apache.org/thrift.git dependency

What happens?

Travis builds are failing due to git.apache.org/thrift.git not being available.

What were you expecting to happen?

Builds complete without problems

Steps to reproduce:

Kick off a build through travis.

Any errors, stacktrace, logs?

go: finding git.apache.org/thrift.git v0.12.0
go: git.apache.org/[email protected]: unknown revision v0.12.0
go: error loading module requirements

Environment:

Travis CI

Cleanup hardcoded v0.0.1 values in repo

In the Makefile, the version is hard-coded within the LDFLAGS.
The template processor images are also hard-coded to build on top of the v0.0.1 base image.

What happens?

  • github.com/KohlsTechnology/eunomia/version.Version is always set to v0.0.1
  • The template processor images always build on top of the v0.0.1 image

What were you expecting to happen?

  • github.com/KohlsTechnology/eunomia/version.Version is set dynamically
  • The template processor image builds on top of the correct version image
  • Correct container image tags are pushed to quay.io when GitHub release is created

GitHub WebHook Authentication

What happens?

The GitHub webhook does not support authentication.

What were you expecting to happen?

There is an option to require authentication for the webhook.
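One way the handler could do this is to verify GitHub's X-Hub-Signature header, which carries an HMAC-SHA1 of the payload keyed with the shared webhook secret (GitHub's documented scheme; the function name and how it wires into handler.go are assumptions):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// validSignature checks a GitHub X-Hub-Signature header ("sha1=<hex>")
// against the shared webhook secret, using a constant-time compare.
func validSignature(secret, body []byte, header string) bool {
	mac := hmac.New(sha1.New, secret)
	mac.Write(body)
	expected := "sha1=" + hex.EncodeToString(mac.Sum(nil))
	return hmac.Equal([]byte(expected), []byte(header))
}

func main() {
	secret := []byte("s3cr3t")
	body := []byte(`{"ref":"refs/heads/master"}`)
	// Simulate the header GitHub would send for this payload.
	mac := hmac.New(sha1.New, secret)
	mac.Write(body)
	header := "sha1=" + hex.EncodeToString(mac.Sum(nil))
	fmt.Println(validSignature(secret, body, header)) // true
}
```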

Steps to reproduce:

N/A - this is a feature request

Any errors, stacktrace, logs?

No

Environment:

  • Runtime version(Java, Go, Python, etc): n/a
  • Desktop OS/version: n/a

Additional comments:

No

Cluster demo needs better RBAC example

What happens?

The cluster example currently uses service accounts with cluster-admin access for the team seeds. This is obviously not a good idea.

What were you expecting to happen?

We should have a cluster role that has less access than cluster-admin, but can e.g. still do all tasks required by the demo (e.g. management of namespaces).
The example should be changed to use that role for the team service accounts.

Multiple yaml parameter files

What happens?

Currently values can only be provided in a single file, values.yaml. This could result in a very cluttered and rather large YAML file for some use cases.

What were you expecting to happen?

Ability to have many YAML files in the parameters directory. The YAML files get merged, either with a simple approach (just appending files to each other) or with an actual YAML deep merge using the file naming as the merge order. The latter should probably be covered under issue #5.
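The deep-merge option can be sketched in Go as below; the maps stand in for unmarshalled YAML documents, merged in file-name order with later files winning on conflicts (the function name and semantics are an illustration, not a committed design).

```go
package main

import "fmt"

// deepMerge recursively merges src into dst; on key conflicts src wins,
// except that nested maps are merged rather than replaced.
func deepMerge(dst, src map[string]interface{}) map[string]interface{} {
	for k, v := range src {
		if srcMap, ok := v.(map[string]interface{}); ok {
			if dstMap, ok := dst[k].(map[string]interface{}); ok {
				dst[k] = deepMerge(dstMap, srcMap)
				continue
			}
		}
		dst[k] = v
	}
	return dst
}

func main() {
	base := map[string]interface{}{
		"app": map[string]interface{}{"name": "hello", "replicas": 1},
	}
	override := map[string]interface{}{
		"app": map[string]interface{}{"replicas": 3},
	}
	app := deepMerge(base, override)["app"].(map[string]interface{})
	fmt.Println(app["name"], app["replicas"]) // hello 3
}
```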

GitOpsConfig Allow Specifying env vars, params, command, etc to k8s Job

What happens?

I cannot specify any params to the k8s Jobs that are created by the GitOpsConfig CR.

What were you expecting to happen?

In a GitOpsConfig CR I would like to be able to specify additional params to the k8s Job that is being run.

Steps to reproduce:

  • N/A

Any errors, stacktrace, logs?

N/A

Environment:

  • Runtime version(Java, Go, Python, etc): N/A
  • Desktop OS/version: N/A

Additional comments:

...

End to End tests have OpenShift dependencies

Running the e2e test script on minikube fails because there are OpenShift dependencies in the tests. We should remove them.

What happens?

$ ./e2e-test.sh 
namespace/test-gitops-operator created
Now using project "test-gitops-operator" on server "https://192.168.39.158:8443".
configmap/gitops-templates created
clusterrole.rbac.authorization.k8s.io/gitops-operator created
clusterrolebinding.rbac.authorization.k8s.io/gitops-operator created
service/gitops-operator created
serviceaccount/gitops-operator created
error: unable to recognize "test/deploy/route.yaml": no matches for kind "Route" in version "route.openshift.io/v1"
INFO[0000] Testing operator locally.                    
# github.com/KohlsTechnology/eunomia/test/e2e
test/e2e/ocp_template_test.go:22:2: cannot find package "gitops-operator/pkg/apis/eunomia/v1alpha1" in any of:
	/home/esauer/go/src/github.com/KohlsTechnology/eunomia/vendor/gitops-operator/pkg/apis/eunomia/v1alpha1 (vendor tree)
	/usr/local/go/src/gitops-operator/pkg/apis/eunomia/v1alpha1 (from $GOROOT)
	/home/esauer/go/src/gitops-operator/pkg/apis/eunomia/v1alpha1 (from $GOPATH)
FAIL	github.com/KohlsTechnology/eunomia/test/e2e [setup failed]
Error: failed to exec []string{"go", "test", "./test/e2e/...", "-namespacedMan", "deploy/test/empty.yaml", "-globalMan", "deploy/test/empty.yaml", "-root", "/home/esauer/go/src/github.com/KohlsTechnology/eunomia", "-singleNamespace", "-parallel=1"}: exit status 1

What were you expecting to happen?

Script succeeds and exits 0

Steps to reproduce:

minikube start
./e2e-test.sh

Any errors, stacktrace, logs?

...

Environment:

  • Runtime version(Java, Go, Python, etc): go version go1.10.8 linux/amd64
  • Desktop OS/version: Fedora 29

Additional comments:

...

Research Option To Add Kustomize Step

What happens?

Would like to see if kustomize could be used.

What were you expecting to happen?

Interested in investigating if this project can somehow leverage kustomize

Steps to reproduce:

n/a

Any errors, stacktrace, logs?

n/a

Environment:

  • Runtime version(Java, Go, Python, etc): n/a
  • Desktop OS/version: n/a

Additional comments:

none

Image tags are not managed correctly by the Travis CI/CD pipeline

What happens?

Currently all images are created with the static tag v0.0.1.

What were you expecting to happen?

When a build happens from master, images should be tagged latest.
When the build happens because of a release (a tag event in git), all images should be tagged with the git tag.
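The desired selection logic can be sketched in shell; TRAVIS_TAG and TRAVIS_BRANCH are Travis CI's standard build variables, the function name is ours, and the fallback to the branch name for non-master branches is an assumption.

```shell
# image_tag prints the container tag for a CI build.
image_tag() {
  git_tag="$1"; branch="$2"
  if [ -n "$git_tag" ]; then
    echo "$git_tag"        # release build: use the git tag
  elif [ "$branch" = "master" ]; then
    echo "latest"          # master build: move the latest tag
  else
    echo "$branch"         # assumption: branch builds use the branch name
  fi
}

image_tag "v0.1.0" "master"   # prints v0.1.0
```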

GitOpsConfig Supports ResourceHandlingMode "None"

What happens?

I would like ResourceHandlingMode to support None as an option.

What were you expecting to happen?

I do not want the handler step to do anything. The process template step will take care of applying the resources to the k8s cluster.

Steps to reproduce:

  • N/A

Any errors, stacktrace, logs?

N/A

Environment:

  • Runtime version(Java, Go, Python, etc): N/A
  • Desktop OS/version: N/A

Additional comments:

No.

GitHub WebHook (net/http server) Production Readiness

What happens?

n/a

What were you expecting to happen?

The web server for the GitHub webhook is ready for prime time.

Steps to reproduce:

Read the source code ...

https://github.com/KohlsTechnology/eunomia/blob/e4c5cd1b1bd740a84ab01887a15ce9608d36ca2c/pkg/handler/handler.go

https://github.com/KohlsTechnology/eunomia/blob/74313499f092ab09bd2586fddc5e73e4c0648e98/cmd/manager/main.go

Any errors, stacktrace, logs?

n/a

Environment:

  • Runtime version(Java, Go, Python, etc): n/a
  • Desktop OS/version: n/a

Additional comments:

Review this post to get an idea of what needs to be done ... https://blog.cloudflare.com/exposing-go-on-the-internet/

Tests should not rely on connectivity to the internet

When we write e2e tests, we should attempt to decouple the tests from having connectivity to the repo. Not all working environments will have internet access, so we should write our tests to run "disconnected". I believe we could accomplish this by running a git server (e.g. Gogs) in our local test cluster.

What happens?

Test fails, as the job is unable to pull the repository content.

What were you expecting to happen?

Test should succeed.

Steps to reproduce:

  • Turn off wifi
  • Run ./e2e-test.sh

Any errors, stacktrace, logs?

INFO[0000] Testing operator locally.                    
test &{{ } {gitops  test-gitops-operator /apis/eunomia.kohls.io/v1alpha1/namespaces/test-gitops-operator/gitopsconfigs/gitops 183ba73d-827a-11e9-be41-441d546aa67b 9415 1 2019-05-29 21:27:31 -0400 EDT <nil> <nil> map[] map[gitopsconfig.eunomia.kohls.io/initialized:true] [] nil [] } {{https://github.com/KohlsTechnology/eunomia master    example/templates } {https://github.com/KohlsTechnology/eunomia master    example/parameters } [{Change  }] gitops-operator quay.io/kohlstechnology/eunomia-ocp-templates:v0.0.1 CreateOrMerge Delete} {}}+--- FAIL: TestOCPTemplate (60.07s)
	client.go:57: resource type GitOpsConfig with namespace/name (test-gitops-operator/gitops) created
	util.go:38: Waiting for availability of hello-openshift pod
	util.go:38: Waiting for availability of hello-openshift pod
	util.go:38: Waiting for availability of hello-openshift pod
	util.go:38: Waiting for availability of hello-openshift pod
	util.go:38: Waiting for availability of hello-openshift pod
	util.go:38: Waiting for availability of hello-openshift pod
	util.go:38: Waiting for availability of hello-openshift pod
	util.go:38: Waiting for availability of hello-openshift pod
	util.go:38: Waiting for availability of hello-openshift pod
	util.go:38: Waiting for availability of hello-openshift pod
	util.go:38: Waiting for availability of hello-openshift pod
	util.go:38: Waiting for availability of hello-openshift pod
	util.go:38: Waiting for availability of hello-openshift pod
	ocp_template_test.go:48: timed out waiting for the condition
	client.go:75: resource type GitOpsConfig with namespace/name (test-gitops-operator/gitops) successfully deleted
FAIL
FAIL	github.com/KohlsTechnology/eunomia/test/e2e	60.183s
Error: failed to exec []string{"go", "test", "./test/e2e/...", "-namespacedMan", "deploy/test/empty.yaml", "-globalMan", "deploy/test/empty.yaml", "-root", "/home/esauer/go/src/github.com/KohlsTechnology/eunomia", "-singleNamespace", "-parallel=1"}: exit status 1

Environment:

  • Runtime version(Java, Go, Python, etc): go version go1.10.8 linux/amd64
  • Desktop OS/version: Fedora 29

Additional comments:

...

Eunomia-base image scripts don't handle errors well.

While running through the Hello World YAML sample I get multiple errors. First, it appears the scripts use find and cat, neither of which exists in the image. Additionally, if the script attempts to delete resources that don't exist, either because it's looking in the wrong namespace or because the resources were already deleted, the script fails, leaving the CR stuck in delete mode because the finalizer was never cleaned up.

What happens?

Run through the Hello World YAML example. Then manually delete the resources created:

kubectl delete deployments.apps/hello-world
kubectl delete svc/hello-world

Now delete the cr, and look at the jobs logs from the resulting job.

$ kubectl logs gitopsconfig-hello-world-yaml-w4tydf-nfzj4
Cloning Repositories
Cloning into '/git/templates'...
Cloning into '/git/parameters'...
Setting cluster-ralated environment variable
Context "current" created.
Switched to context "current".
error: the server doesn't have a resource type "route"
Processing Templates
Managing Resources
Context "current" modified.
Switched to context "current".
cat: can't open 'find': No such file or directory
No resources found
cat: read error: Is a directory
No resources found
cat: unrecognized option: i
BusyBox v1.30.1 (2019-06-12 17:51:55 UTC) multi-call binary.

Usage: cat [-nbvteA] [FILE]...

Print FILEs to stdout

	-n	Number output lines
	-b	Number nonempty lines
	-v	Show nonprinting characters as ^x or M-x
	-t	...and tabs as ^I
	-e	...and end lines with $
	-A	Same as -vte
No resources found
cat: can't open '.*\.yaml': No such file or directory
No resources found
Error from server (NotFound): error when deleting "/git/manifests/hello-world.yaml": deployments.apps "hello-world" not found
Error from server (NotFound): error when deleting "/git/manifests/hello-world.yaml": services "hello-world" not found

What were you expecting to happen?

...

Steps to reproduce:

  • ...
  • ...

Any errors, stacktrace, logs?

...

Environment:

  • Runtime version(Java, Go, Python, etc):
  • Desktop OS/version:

Additional comments:

...

Operator Best Practices Refactor

What happens?

We can simplify our code by leveraging the operator-utils library.

What were you expecting to happen?

Leverage https://github.com/redhat-cop/operator-utils to simplify code.

Steps to reproduce:

n/a

Any errors, stacktrace, logs?

n/a

Environment:

  • Runtime version(Java, Go, Python, etc): n/a
  • Desktop OS/version: n/a

Additional comments:

none

Make e2e Tests Work

What happens?

e2e tests are not currently working.

What were you expecting to happen?

e2e tests work and are integrated with Travis CI.

Steps to reproduce:

n/a

Any errors, stacktrace, logs?

n/a

Environment:

  • Runtime version(Java, Go, Python, etc): n/a
  • Desktop OS/version: n/a

Additional comments:

none

Multiple jobs launched on new GitOpsConfig

Eunomia launches multiple jobs for the same GitOpsConfig. It appears to be reacting to the initial job creation and then to its own updates to the GitOpsConfig, such as adding spec.parameterSource.

Role eunomia-operator only works in Eunomia-operator namespace

What happens?

When using the role eunomia-operator in the demo namespace eunomia-hello-world-demo for the eunomia-operator SA, the pod doesn't fully initialize.

Setting --zap-level=10 shows the error message.

What were you expecting to happen?

The eunomia-operator role should work properly.

Steps to reproduce:

Deploy container with default role and it will not get past the step "Starting the Cmd."

k8s ingress for GitHub WebHook

What happens?

The install for the operator does not include k8s ingress for the GitHub webhook.

What were you expecting to happen?

I would like an easy way to setup and install k8s ingress for the GitHub webhook.

Steps to reproduce:

Install the operator notice that there is no k8s ingress setup for the webhook.

Any errors, stacktrace, logs?

No

Environment:

  • Runtime version(Java, Go, Python, etc): n/a
  • Desktop OS/version: n/a

Additional comments:

No

Provide cluster roles to manage CRs

What happens?

It doesn't seem to be sufficient to be an admin of a namespace (at least on OpenShift 3.11) to view the GitOpsConfig CRs.

What were you expecting to happen?

We should provide roles that allow viewing and managing CRs. Viewing might be more important, because the CRs themselves should be managed via GitOps anyway.

Handle Delete After Push

What happens?

Removing a YAML file from my git repo does not delete the corresponding k8s object (deployment, service, etc.) from my cluster. The object just becomes unmanaged.

What were you expecting to happen?

After deleting a k8s YAML file from a git repo, I would like this operator to remove the corresponding k8s object (deployment, service, etc.) from my cluster.

Steps to reproduce:

n/a

Any errors, stacktrace, logs?

no

Environment:

  • Runtime version(Java, Go, Python, etc): n/a
  • Desktop OS/version: n/a

Additional comments:

@raffaelespazzoli has some ideas on how to implement this.

On OpenShift, eunomia namespace needs empty node selector annotation

The eunomia operator default install should allow it to schedule on any node in the cluster. This can be achieved in OpenShift by adding the annotation:

openshift.io/node-selector: ''

This should be the default on the eunomia-operator namespace when installed on OpenShift.
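A manifest sketch of the namespace with that annotation (the namespace name follows the issue title; the actual install manifests may differ):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: eunomia-operator
  annotations:
    # Empty node selector lets the operator pods schedule on any node.
    openshift.io/node-selector: ''
```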

WATCH_NAMESPACE appears to be ignored

It appears that eunomia is ONLY looking in the default namespace and nothing else. This doesn't appear to be affected by setting WATCH_NAMESPACE, either to "" or to a specific namespace.

What happens?

$ echo $WATCH_NAMESPACE
gitops-test
[esauer 🎩︎eunomia] (master)$ operator-sdk up local 
INFO[0000] Running the operator locally.                
INFO[0000] Using namespace default.                     

What were you expecting to happen?

$ echo $WATCH_NAMESPACE
gitops-test
[esauer 🎩︎eunomia] (master)$ operator-sdk up local 
INFO[0000] Running the operator locally.                
INFO[0000] Using namespace gitops-test.                     

Steps to reproduce:

Any errors, stacktrace, logs?

...

Environment:

  • Runtime version(Java, Go, Python, etc):
  • Desktop OS/version:

Additional comments:

...

Operator doesn't handle errors well

What happens?

During the testing of the new examples, a couple of problems were observed:

  • The operator sometimes gets stuck and doesn't start new jobs if the previous one didn't complete successfully. A restart is required.
  • When there are errors with the repo (no access, wrong branch, missing files, missing template image) and the config doesn't get applied, the GitOpsConfig CRs often can't be deleted afterwards. The finalizer has to be disabled to remove them.
  • The behavior also affects other namespaces.

Not sure if these are related. I'll keep documenting more behaviors as I do more testing.

What were you expecting to happen?

The operator gracefully handles any issues, doesn't get "stuck", and the CR can always be deleted without having to patch the resource.

Steps to reproduce:

  • Use the examples and modify one of the CRs so that it cannot pull the repo. Leave the trigger as Change.
  • Apply the CR and let it spin.
  • Try deleting the CR.
  • Try applying another CR in a different namespace.

build-images.sh does not bump latest tag upon GitHub release

What happens?

If you create a new release, only the version tag is created; latest is not updated.

What were you expecting to happen?

If a release is created, the version tag should be created and the latest tag should be updated.

Steps to reproduce:

Create a release in GitHub and check the quay.io repository for the images. The latest tag will not change

Any errors, stacktrace, logs?

N/A

TravisCI Build Failed For Release v0.0.1

What happens?

Container images did not get built and pushed to quay.io when the v0.0.1 release was created.

What were you expecting to happen?

I expected the container images to be built and pushed to quay.io with the tag v0.0.1.

Steps to reproduce:

Create a release in this repo.

Any errors, stacktrace, logs?

See details here ... https://travis-ci.com/KohlsTechnology/eunomia/jobs/229139174

Skipping a deployment with the script provider because this branch is not permitted: v0.0.1

1.75s$ rvm $(travis_internal_ruby) --fuzzy do ruby -S gem install dpl
Successfully installed dpl-1.10.12
1 gem installed

Installing deploy dependencies
Successfully installed dpl-script-1.10.12
1 gem installed

Preparing deploy

Deploying application
docker login -u [secure] -p [secure] quay.io
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/travis/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
./scripts/build-images.sh quay.io/kohlstechnology
/usr/bin/env: ‘bash -e’: No such file or directory
Makefile:53: recipe for target 'travis-deploy-images' failed
make: *** [travis-deploy-images] Error 127
Script failed with status 2
failed to deploy
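One likely cause, inferred from the log rather than confirmed in the issue: on Linux, env(1) treats everything after the interpreter name as a single argument, so a shebang such as #!/usr/bin/env bash -e fails with exactly this "No such file or directory" error. The sketch below shows the usual workaround.

```shell
#!/usr/bin/env bash
# Pass no flags through env; enable errexit inside the script instead.
set -e
echo "ok"
```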

Environment:

  • Runtime version(Java, Go, Python, etc): n/a
  • Desktop OS/version: n/a

Additional comments:

n/a

migrate to modules dependency management

Currently the project uses dep, but the Go standard going forward is modules for dependency management, so we should do this migration.
Documentation and the Travis CI/CD scripts should also be updated.
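The migration itself is typically go mod init followed by go mod tidy; the resulting go.mod would start roughly like this (the module path matches the import paths seen in the test logs; the Go version line is an assumption):

```
module github.com/KohlsTechnology/eunomia

go 1.12
```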
