argoproj / applicationset

The ApplicationSet controller manages multiple Argo CD Applications as a single ApplicationSet unit, supporting deployments to large numbers of clusters, deployments of large monorepos, and enabling secure Application self-service.

Home Page: https://argocd-applicationset.readthedocs.io/

License: Apache License 2.0

Dockerfile 0.34% Makefile 0.94% Go 96.48% Shell 2.24%

applicationset's Introduction

Argo CD ApplicationSet Controller

โš ๏ธ This project code has been moved to the main Argo CD repository

This repository is no longer active. ApplicationSet has been merged into Argo CD and will be released along with it. Further development happens in the Argo CD repository.

The ApplicationSet controller is a Kubernetes controller that adds support for an ApplicationSet CustomResourceDefinition (CRD). This controller and CRD enable both automation and greater flexibility when managing Argo CD Applications across a large number of clusters and within monorepos, and they make self-service usage possible on multitenant Kubernetes clusters.

The ApplicationSet controller provides the ability to:

  • Deploy Argo CD Applications to multiple Kubernetes clusters at once
  • Deploy multiple Argo CD Applications from a single monorepo
  • Allow unprivileged cluster users (those without access to the Argo CD namespace) to deploy Argo CD Applications without involving cluster administrators in enabling the destination clusters/namespaces

Best of all, these features are controlled by a single ApplicationSet custom resource, which means no more juggling multiple Argo CD Application resources to target multiple clusters/repos!

Unlike an Argo CD Application resource, which deploys resources from a single Git repository to a single destination cluster/namespace, an ApplicationSet uses templated automation to create, modify, and manage multiple Argo CD Applications at once.

If you love Argo CD and want to use ApplicationSet's automation and templating to take your usage to the next level, give the ApplicationSet controller a try!

Example Spec:

# This is an example of a typical ApplicationSet which uses the cluster generator.
# An ApplicationSet comprises two stanzas:
#  - spec.generators - producers of lists of values supplied as arguments to the app template
#  - spec.template - an Argo CD Application template, which has been parameterized
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
  - clusters: {} # This is a generator, specifically, a cluster generator.
  template: 
    # This is a template Argo CD Application, but with support for parameter substitution.
    metadata:
      name: '{{name}}-guestbook'
    spec:
      project: "default"
      source:
        repoURL: https://github.com/argoproj/argocd-example-apps/
        targetRevision: HEAD
        path: guestbook
      destination:
        server: '{{server}}'
        namespace: guestbook

The Cluster generator produces parameters, which are substituted into {{parameter name}} placeholders within the template: section of the ApplicationSet resource. In this example, the generator produces name and server parameters (containing the name and API server URL of each target cluster), which are then substituted into the template's {{name}} and {{server}} placeholders, respectively.

Parameter generation from multiple sources (clusters, lists, Git repos), combined with the use of those values within Argo CD Application templates, is a powerful combination. Learn more about generators and templates, the Cluster generator and the various other ApplicationSet generators, and more, in the ApplicationSet documentation.
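For illustration only (the cluster name and server URL below are hypothetical), a registered cluster named engineering-dev with API server https://1.2.3.4 would cause the ApplicationSet above to render roughly this Application:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: engineering-dev-guestbook   # '{{name}}-guestbook' after substitution
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps/
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://1.2.3.4         # '{{server}}' after substitution
    namespace: guestbook
```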

Documentation

Take a look at our introductory blog post, Introducing the ApplicationSet Controller for Argo CD.

Check out the complete documentation for a full introduction: how to set up and run the ApplicationSet controller, how it interacts with Argo CD, generators, templates, use cases, and more.

Community

The ApplicationSet controller is a community-driven project. You can reach the Argo CD ApplicationSet community and developers via the following channels:

We'd love to have you join us!

Development builds

Development builds can be installed by running the following command:

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/applicationset/master/manifests/install.yaml

Commits to the master branch automatically push new container images to the container registry used by this install; automatically updated documentation for these builds is also available at this link. See Development builds for more details.

Development

Learn more about how to set up a development environment, build the ApplicationSet controller, and run the unit/E2E tests.

Our end goal is to provide a formal solution to replace the app-of-apps pattern. You can learn more about the founding principles of the ApplicationSet controller from the original design doc.

This project will initially be maintained separately from Argo CD, in order to allow quick iteration of the spec and implementation without tying it to Argo CD releases. No promises of backwards compatibility are made, at least until it is merged into Argo CD proper.

applicationset's People

Contributors

aabouzaid, alexmt, chetan-rns, crenshaw-dev, dctrwatson, dependabot[bot], dewan-ahmed, dgoodwin, diegopomares, gdubya, glyphtech-chrisa, hcelaloner, ishitasequeira, jessesuen, jgwest, jnpacker, jopit, jtyr, krzysdabro, kvendingoldo, liorlieberman, maruina, mgoodness, mmerrill3, omerkahani, rishabh625, rtnpro, shmurata, xianlubird, yogeek


applicationset's Issues

Kustomize integration of the ApplicationSet controller and Argo CD

Goal:

Create a new kustomize configuration that generates a new install.yaml-style YAML file containing a combined install of Argo CD and the ApplicationSet controller.

This allows end users to quickly kubectl apply an install that contains both.

Steps:

  • In manifests, create a new folder such as namespace-install-with-argo-cd, based on the content of namespace-install
  • Update the kustomization.yaml file under this folder so that it references the Argo CD kustomize resources
  • Update hack/update-manifests.sh to generate a new autogenerated file based on the kustomize result
  • Apply the generated install file and confirm that both the ApplicationSet controller and Argo CD work as expected.
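A rough sketch of what that kustomization.yaml might contain (the folder layout and the exact Argo CD resource reference are assumptions, not a final design):

```yaml
# manifests/namespace-install-with-argo-cd/kustomization.yaml (hypothetical)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
# this repository's existing namespace-install manifests
- ../namespace-install
# the upstream Argo CD install manifests (exact ref/path is an assumption)
- https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```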

Original Issue:

EDIT: I've changed the scope of this issue so that we are now generating an install.yaml-style file for THIS repository, rather than making the change in the Argo CD repo. The content is otherwise functionally unchanged. The original issue text follows.

### Goal:
Create a new kustomize configuration in the Argo CD repo, that generates a new `install.yaml`-style yaml, containing a combined install of ArgoCD+ApplicationSet controller (see `argo-cd/hack/update-manifests.sh`). 

This allows end users to quickly kubectl apply themselves an install that contains both.

**Note**: this is a prototype, so don't submit this as a PR against Argo CD yet! Opening the PR will be one of the final integration steps, towards the end of the Argo CD migration process.

### Steps:

Fork argo-cd and create a new branch. In that branch:
- In `argo-cd/manifests`, create a new folder such as `cluster-install-applicationset-integration` (or a name like that, perhaps too long), based on the content of `cluster-install`
- Update the `kustomize.yaml` file under this folder, such that it references the application set kustomize resources (https://github.com/argoproj-labs/applicationset/tree/master/manifests). 
- Update the `hack/update-manifests.sh` to generate a new autogenerated-file based on the kustomize result, for example, see the existing ha on namespace yaml entry which outputs a full install to `namespace-install.yaml`:

echo "${AUTOGENMSG}" > "${SRCROOT}/manifests/ha/namespace-install.yaml"
$KUSTOMIZE build "${SRCROOT}/manifests/ha/namespace-install" >> "${SRCROOT}/manifests/ha/namespace-install.yaml"

- Apply the generated install file and confirm that both applicationset controller and Argo CD work as expected.

Implement additional Cluster generator E2E tests

The proposed E2E test framework allows us to test ApplicationSet features against a live Kubernetes cluster and a live Argo CD instance. This issue tracks adding additional tests that specifically target the cluster generator.

PR #66 already includes a basic test of this generator, but this basic test was not an attempt at an exhaustive test of the functionality.

Should include, at least:

  • Adding a new cluster (secret) should trigger ApplicationSet controller to create a new Application for it

'User guide' docs: the whats/whys/hows of ApplicationSet controller

Write new documentation content, similar to Argo CD's user guide, on how to use ApplicationSet controller.

To include in the docs:

The design proposal is well written, and that content could likely be used for new user-facing documentation of:

  • What ApplicationSet is
    • still experimental, feedback welcome
  • Why does it exist, what problem is ApplicationSet controller looking to solve, versus Argo CD proper
    • scenarios
      • cluster add-ons
      • monorepo
      • self-service
    • issues with the app-of-apps pattern for these use cases
  • How ApplicationSet works
    • what are generators? (general introduction)
    • what are the template fields? (general introduction, eg what are src, dest for)
    • how it interacts with Argo CD proper
    • Note: We don't need to document any specific, detailed examples of the generators as part of this issue; they will be addressed separately in another issue

Docs to be written in the same style as Argo CD:

  • Create a docs/ directory in the ApplicationSet repo; docs to be written in Markdown
  • Docs to follow the de facto 'style guide' of the existing Argo CD user guide (e.g. the same use of # and ## to denote sections and subsections, and any other desired traits/idioms of those existing docs)
  • This issue is for writing the content only; no need to set up a 'rendering Markdown to HTML' step as part of this.

ApplicationSet which generates Applications with matching names should report an error and return

As of this writing, the generateApplications method does not check for Applications with duplicate names, so multiple generated Applications may use the same name.

As discussed in the comments of the design proposal, this should be reported as an error:

Two Applications with the same name would be considered an error. In this case the error would be saved in status as an error condition.

To reproduce:

Apply the following YAML and observe that the controller logs "generated 3 applications", even though only 1 Application resource is created:

# The list generator specifies a literal list of argument values to the app spec template.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
  - list:
      elements:
      - cluster: engineering-dev
        url: https://1.2.3.4
      - cluster: engineering-prod
        url: https://2.4.6.8
      - cluster: finance-preprod
        url: https://9.8.7.6
  template:
    metadata:
      name: 'matching-guestbook'
    spec:
      project: default
      source:
        repoURL: https://github.com/jgwest/argocd-example-apps.git
        targetRevision: HEAD
        path: guestbook
      destination:
        server: https://kubernetes.default.svc
        namespace: guestbook

In this case, 0 applications should be generated, and an error should be reported in the ApplicationSet status conditions.
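A minimal sketch of the proposed validation (type and function names here are hypothetical, not the controller's actual API): reject the whole generated set when two Applications share a name, so the error can be surfaced as a status condition instead of any Applications being applied.

```go
package main

import "fmt"

// Application is a minimal stand-in for the Argo CD Application type;
// the real controller works with v1alpha1.Application.
type Application struct {
	Name string
}

// checkDuplicateNames returns an error if any two generated Applications
// share a name, so the controller can record it in status conditions.
func checkDuplicateNames(apps []Application) error {
	seen := map[string]bool{}
	for _, app := range apps {
		if seen[app.Name] {
			return fmt.Errorf("ApplicationSet generated duplicate Application name: %s", app.Name)
		}
		seen[app.Name] = true
	}
	return nil
}

func main() {
	// All three list elements above render the same 'matching-guestbook' name.
	apps := []Application{{"matching-guestbook"}, {"matching-guestbook"}, {"matching-guestbook"}}
	fmt.Println(checkDuplicateNames(apps))
	// prints: ApplicationSet generated duplicate Application name: matching-guestbook
}
```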

Implement additional List generator E2E tests

The proposed E2E test framework allows us to test ApplicationSet features against a live Kubernetes cluster and a live Argo CD instance. This issue tracks adding additional tests that specifically target the list generator.

PR #66 already includes a basic test of this generator, but this basic test was not an attempt at an exhaustive test of the functionality.

Setup release build scripts/artifacts for ApplicationSet controller releases

In order to perform an initial release of the ApplicationSet controller, we'll need build artifacts (scripts, Makefile tweaks, etc.) that, at a minimum:

  • Perform pre-release checks of the git repository (see checks in sibling projects, below) and container registry credentials
  • Run the unit/E2E tests of the current branch, exiting on failure
  • Build and push the container image
  • Regenerate the necessary Kubernetes manifests to point to that image
  • Create a git commit containing the above
  • Tag the commit with the release name, and push the release
  • Finally, create a corresponding GitHub release based on that tag

A number of the Argo/Argoproj Labs sibling projects have tackled this and have existing scripts we can likely borrow from; see for instance what argocd-image-updater or Argo CD do as part of their release processes.

Investigate gaps in existing ApplicationSet unit tests (via coverage stats), and open issues for them

Look through the coverage report for (significant) areas of the code that are not thoroughly tested. For each identified gap (or set of related gaps), open a new GitHub issue. Re: the definition of 'significant', part of this issue is developing an understanding/model of what can currently be considered 'significant' 😄.

If #64 has merged, you can run make test; otherwise run:

go test -race -count=1 -coverprofile=coverage.out `go list ./...`

to generate the coverage.out data.

To use this coverage file, generate an HTML report and open in the browser:

go tool cover -html=coverage.out

Implement E2E test framework for testing ApplicationSets against live Kubernetes/Argo CD instance

Similar to the Argo CD E2E tests that run against a live Kubernetes instance, ApplicationSet could benefit from a test framework and tests that run against live Argo CD and Kubernetes instances and verify expected behaviour.

The proposed test framework API is based on the same BDD-style used by Argo CD (and indeed much of the code is shared between the two).

The tests should run on each PR commit, via a GitHub Action (based on the similar Argo CD GitHub Action).

Proposal: ApplicationSet Progressive Rollout

Hi all,
I'm opening this issue because I'd like to discuss with the community a possible solution for what I think is a common issue when doing GitOps.

Consider the following scenario, where you have multiple production clusters across multiple regions. When you use an ApplicationSet, all the Applications are updated at the same time, and with an automated SyncPolicy you are effectively doing a global rollout.

What I would like is to introduce the concept of a progressive rollout, where you can decide how to roll out your application across those production clusters.

I can see two possible implementations of this idea:

  1. Extend the ApplicationSet specification to support progressive rollout. The ApplicationSet controller would then be responsible for updating the desired Applications in the desired order. For example, you might want to update one region first, or update 10% of all your clusters.
  2. Create a new controller with a new CRD dealing with the progressive rollout. The controller watches ApplicationSets and does something like argocd app sync <MYAPP> followed by argocd app wait <MYAPP>.

I think the second approach is much cleaner and doesn't overload the ApplicationSet controller, but I'd like to hear the community's thoughts on this.

Kustomize install fails with kubectl

Install fails when using kustomize version 2.X that is bundled with kubectl.

Kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
- github.com/argoproj-labs/applicationset/manifests/cluster-install/?ref=567ab7c

Command:

$ kubectl apply -k ./
error: AccumulateTarget: AccumulateTarget: rawResources failed to read Resources: Load from path ../namespace-install failed: '../namespace-install' must be a file (got d='/private/var/folders/pv/r1tbm8n92mx9t46kqx68bz6c0000gn/T/kustomize-667102827/manifests/namespace-install')

Changing resources to bases in namespace-install and cluster-install will fix this.
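A sketch of the suggested fix, assuming cluster-install currently references the sibling directory under resources:

```yaml
# manifests/cluster-install/kustomization.yaml (sketch of the proposed fix)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:                    # kustomize v2.x (bundled with kubectl) only resolves
- ../namespace-install    # directory references listed under `bases`
```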

'User guide' docs: how to use ApplicationSet controller (examples of generators, template fields)

Can either be addressed standalone, or as a follow-up to the parent issue.

Showcase examples of each of the generators, and an explanation of how those examples work.

To include in the docs:

The design proposal is well written, and the content could likely be used for documentation of:

  • How to use the ApplicationSet controller:
    • specific examples of the following generators:
      • list generator
      • cluster generator
      • git generator
        • files
        • directories

Fix resource name length issues with generated applications.

At present it is extremely easy to hit resource name length issues with generated Applications. We will need to come up with a strategy to effectively truncate and randomize names that exceed the 253 characters Kubernetes resource names are limited to.

Additionally, we need to watch out for labels, as these are even more restricted: both key and value are limited to 63 characters (assuming we use labels to map Applications back to the ApplicationSet they originated from).
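One possible strategy, sketched in Go (the function name and hash choice are assumptions, not the project's implementation): truncate over-long names and append a short hash of the full name, so distinct long names remain distinct after truncation.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// shortenName truncates a generated name to maxLen, appending a short
// hash of the full name so distinct inputs stay distinct. The 253-char
// limit applies to resource names; for label values, maxLen would be 63.
func shortenName(name string, maxLen int) string {
	if len(name) <= maxLen {
		return name
	}
	// "-" plus the first 8 hex chars of the SHA-256 of the full name.
	suffix := fmt.Sprintf("-%x", sha256.Sum256([]byte(name)))[:9]
	return name[:maxLen-len(suffix)] + suffix
}

func main() {
	long := make([]byte, 300)
	for i := range long {
		long[i] = 'a'
	}
	fmt.Println(len(shortenName(string(long), 253))) // prints 253
}
```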

Reconciler error - ApplicationSet is empty

I am running Argo CD v1.6.1+159674e via the Operator in OpenShift 4.5.4. When using one of the ApplicationSet examples, I get the following error in the argocd-applicationset-controller, and the Applications are not created.

I have REDACTED some server name info from the log and the configs.

Log

2020-09-24T11:10:49.333Z	INFO	setup	using argocd namespace	{"namespace": "argo"}
2020-09-24T11:10:52.940Z	INFO	controller-runtime.metrics	metrics server is starting to listen	{"addr": ":8081"}
2020-09-24T11:10:52.943Z	INFO	setup	starting manager
2020-09-24T11:10:52.943Z	INFO	controller-runtime.manager	starting metrics server	{"path": "/metrics"}
2020-09-24T11:10:53.044Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "applicationset", "source": "kind source: /, Kind="}
2020-09-24T11:10:53.145Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "applicationset", "source": "kind source: /, Kind="}
2020-09-24T11:10:53.145Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "applicationset", "source": "kind source: /, Kind="}
2020-09-24T11:10:53.245Z	INFO	controller-runtime.controller	Starting Controller	{"controller": "applicationset"}
2020-09-24T11:10:53.245Z	INFO	controller-runtime.controller	Starting workers	{"controller": "applicationset", "worker count": 1}
time="2020-09-24T11:10:53Z" level=info msg="processing event for cluster secret" name=REDACTED namespace=argo type=createSecretEventHandler
time="2020-09-24T11:10:53Z" level=info msg="listed ApplicationSets" count=1 type=createSecretEventHandler
time="2020-09-24T11:10:53Z" level=info msg="processing event for cluster secret" name=REDACTED namespace=argo type=createSecretEventHandler
time="2020-09-24T11:10:53Z" level=info msg="listed ApplicationSets" count=1 type=createSecretEventHandler
time="2020-09-24T11:10:53Z" level=info msg="generated 2 applications" generator="&{}"
time="2020-09-24T11:10:53Z" level=info msg="generated 2 applications" generator="&{0xc0003a5320}"
time="2020-09-24T11:10:53Z" level=error msg="error generating params" error="ApplicationSet is empty" generator="&{0xc0004594a0}"
2020-09-24T11:10:53.246Z	ERROR	controller-runtime.controller	Reconciler error	{"controller": "applicationset", "request": "argo/appset-poc", "error": "ApplicationSet is empty"}
github.com/go-logr/zapr.(*zapLogger).Error
	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:258
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88

Example used:

A REDACTED server name has the form https://api.my.server.has.a.big.hostname.com:6443

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: appset-poc
spec:
  generators:
  - list:
      elements:
      - cluster: clusterA
        url: REDACTED
      - cluster: clusterB
        url: REDACTED
  template:
    metadata:
      name: 'appset-poc-{{cluster}}'
    spec:
      project: "default"
      source:
        path: k8s-config
        repoURL: INTERNAL_GHE_URL
        targetRevision: HEAD
      destination:
        server: '{{url}}'
        namespace: guestbook

INTERNAL_GHE_URL mentioned above is also a redacted value which is an SSH path [email protected]:my-org/argo-poc.git

At this time I'm assuming that maybe this module isn't ready for OpenShift but that's a total guess.

Timeline for GA (or beta at least) ?

This component seems awesome. It clearly solves multi-cluster deployment challenges. Do you have a timeline for making it generally available (or at least beta)?

Add go test GitHub action to catch unit test regressions

The Argo CD project uses GitHub actions that run go test (on non-E2E tests) for every PR commit. It would be great to include this here as well, to catch unit test regressions before they make it to master.

This should only run non-E2E tests, as E2E tests are already covered by a separate GitHub Action in #66.

Expand on existing developer docs: how to setup dev env and how to run ApplicationSet controller on local machine

ApplicationSet has a short section on how to develop it, in the parent readme:
https://github.com/argoproj-labs/applicationset#development-instructions

We should expand on this:

  • How to run the ApplicationSet controller on your local machine from the CLI (rather than running it as a container image on k8s, as the docs currently suggest)
  • How to set up a local Argo CD dev environment (we just need to point to the relevant Argo CD docs; we can probably steal the content from here)
  • Expand on the "The following assumes you have:" steps that are here, adding a bit more detail on how to do a k8s deploy of the ApplicationSet controller (IMHO the 3 steps are a bit too general if you are not fully familiar with Argo CD and/or ApplicationSet)

Either on the root README.md page, or as a separate doc page linked from the README.

ApplicationSet should report error conditions/status based on the result of generator/template results

EDIT: See this comment for current design proposal.

When the user-provided generator/template produce invalid Argo CD Applications, we should note that error in the ApplicationSet CRD status field. See Argo CD's ApplicationStatus conditions field.

Likewise I'm sure there are other status fields/conditions that would be useful to expose as part of an ApplicationSet status subtype.

Once implemented, one such validation that would benefit from this is #68

Expose port 8081 in upstream ArgoCD test container, for ApplicationSet controller local development/testing purposes

The ApplicationSet E2E tests (and anyone running the ApplicationSet controller outside k8s) require access to port 8081 in the Argo CD test server container. We should thus upstream this change to the Argo CD Makefile:
https://github.com/argoproj-labs/applicationset/blob/e54fae54e494e1623f47be6c893b163da88d493f/docs/E2E-Tests.md#b-ensure-that-port-8081-is-exposed-in-the-argo-cd-test-server-container

Cluster generator cannot select local cluster

The cluster generator currently requires a Kubernetes Secret to exist for each cluster. Argo CD doesn't use a Kubernetes Secret to identify the local cluster the way it does "external" clusters, so it can't be selected.

The easy workaround is to manually create a K8s Secret with the required argocd.argoproj.io/secret-type=cluster label and populate it with:

data:
  name: in-cluster
  server: https://kubernetes.default.svc

That Secret can then be annotated and labeled like any other, and the Cluster generator can use those values for templating.
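A minimal sketch of such a Secret (using stringData so the values can be written unencoded; the exact fields follow Argo CD's cluster secret convention as I understand it, so verify against your Argo CD version):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: in-cluster
  namespace: argocd        # the namespace Argo CD is installed in
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: in-cluster
  server: https://kubernetes.default.svc
```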

However, that doesn't feel like a good solution in terms of usability.

Move presently unimplemented examples out of examples/ root directory, update proposal links

The examples/ directory of this repo contains a number of examples that are currently not implemented in the code (they are thus examples of what a developer should implement in a future release, not something current users can run). This may be confusing for new users (it was confusing to me when initially onboarding to the ApplicationSet controller), so we should move them to a subdirectory of examples/ and label them properly.

We can:

  • Create a new examples/proposed directory (other suggestions for name/location are welcome!) that contains examples which are not yet implemented, leaving the root examples/ directory for working examples.
  • Create a markdown file in the proposed directory noting that those examples are not yet implemented in the code.
  • The proposal doc contains direct links to these examples, so we should update the proposal doc to point to the new location. Note: We don't have edit access to the doc, only suggestion access, but my guess is one of the Argo CD principals has edit access (perhaps Jesse?) and would be able to approve the changes.

Examples can "graduate" out of proposed as they are implemented.

Calling repo service

For the git generator (#7), it makes sense to use the repo service from Argo CD.

But how do we want to use it?

  1. Call it directly, using the Argo CD config map to get the SSH key of the repo
  2. Call it indirectly, via the argocd repo command

We would probably need to call Argo CD to do the initial sync.

@alexmt @xianlubird

The --namespace controller param and NAMESPACE environment variable should override to produce one canonical value

At present the --namespace controller parameter in main.go, and the NAMESPACE environment variable in main.go control different values:

  • The --namespace param is passed to git generator
  • The NAMESPACE is passed to cache.MultiNamespacedCacheBuilder

So:

  • If both param and env are specified, one of them should override the other.
  • That one value should then be used for both the git generator and the cache

Also: the README.md needs to be updated as part of this PR.

Method for overriding resource deletion default behavior

Currently, the AppSet spec enforces deletion of related Applications and their deployed resources by default. This is not consistent with the default behavior of Application in Argo CD: without AppSets, deployed resources related to an Application are not deleted by default when the Application is deleted. This presents a rather confusing situation for users of Application and ApplicationSet, since their default deletion behavior has diverged.

Proposal

  • Allow for a method to set the default behavior of ApplicationSet to not delete resources when an ApplicationSet is deleted. A runtime flag would be a possible solution to set the desired default behavior.
  • Allow for the controller's default behavior to be overridden or set within the ApplicationSet spec via an attribute setting.

Justification

In some cases, critical infrastructure resources or applications may be synced to a cluster using ApplicationSet and need to be "protected" from deletion in case of human error in Argo CD configurations. If AppSets, or Applications linked to AppSets, are unintentionally deleted by whatever means, a production environment's state could be lost, causing an outage of provided services. This exact scenario occurred on my team, where an inappropriate Argo configuration caused deletion of our AppSets, resulting in all environments being wiped clean by Argo. This happened because of ApplicationSet's default deletion behavior, which we did not expect (we expected the same behavior as Application) as a result of our lack of due diligence prior to production usage. In this specific case, persistent storage that had been provisioned via Argo was also lost. For storage, as a sensitive asset type, it is reasonable to expect a way of protecting those resources from erroneous deletion because of their stateful nature.

Synchronize go.(mod/sum) with latest Argo CD release

The ApplicationSet go.mod/go.sum module metadata currently targets Argo CD v1.7.6. We should target v1.8.0, sync the other shared dependency versions to be consistent with the latest Argo CD go.mod versions, and then ensure the project builds and the tests pass.

No Kind ApplicationSet registered for argoproj.io/v1alpha1

Hello,
I am trying to evaluate ApplicationSet in my k8s cluster, and I get the following error when we deploy the ApplicationSet:

error: no kind "ApplicationSet" is registered for version "argoproj.io/v1alpha1" in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"

The applicationset.yaml that I am using is:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
I am simply trying to see if we get an app generated, even though I am not using the parameters currently.

Graduating ApplicationSets, and shipping it with ArgoCD

Greetings @OmerKahani @dgoodwin, all contributors & users of argoproj-labs/applicationset .

Thank you for the amazing work on ApplicationSets. We love it! I'm Shoubhik, an Argo CD (and ApplicationSets) user and a member of the Argo steering committee. As a member of the Argo CD community, I would like to propose graduating this project to https://github.com/argoproj and eventually shipping it with Argo CD.

I wanted to provide you a heads-up of the conversations that are happening on this subject in the Argo Contributor Meetings :)

  • We've started working on a proposal, led by @jgwest (who has recently ramped up his contributions)

  • We brainstormed different ways to initiate the integration into the Argo CD stack (same process, multiple processes, etc.)

  • TODO: Decide on the initial integration approach.

  • TODO: Finalize work items based on the integration approach chosen.

Would you be able to join the "Contributor office hours" (https://calendar.google.com/calendar/[email protected]), which happen at 1 PM US East Coast time every Thursday? If not, no worries; I will make sure the meeting notes land in this GitHub thread so that we can track the transition.

License for project unspecified

Hello! To contribute, companies normally require the project's license to be specified. I see that Argo projects normally use Apache 2.0; I assume this one will as well? If so, can the license please be filled in?

Support "any", "one" filters

I see that `one` and `any` functions are supported in the https://github.com/antonmedv/expr library. We have a use case for using them in ApplicationSet generator filters.

We have multiple Kubernetes clusters per environment. Sometimes we'd like to run certain workloads (e.g., CronJobs) on only one of the available clusters in an environment. An `any`/`one` filter would be super helpful here.

Implement additional Git Directory generator E2E tests

The E2E test framework allows us to test ApplicationSet features against a live Kubernetes cluster and a live Argo CD instance. This issue tracks adding additional tests that specifically target the git directory generator.

PR #66 already includes a basic test of this generator, but it was not intended to be exhaustive.

Add/update a GitHub action to verify that generated manifests/resources match what is in git

Add a GitHub action (or add a check to an existing GH action's build steps, if this makes more sense) to verify that generated manifests/resources match what is in git.

Essentially, the steps would be something like this:

```shell
# Clone the applicationset repo, then snapshot the pre-generation state
mkdir /tmp/pre-make
cp -r ./ /tmp/pre-make

# Run the generator
make manifests

# Compare old and new; diff exits non-zero if anything differs
diff -r manifests /tmp/pre-make/manifests
# On a non-zero exit code, fail the build and print a message telling the user
# to run 'make manifests' and check the results into their PR
```

This will catch any uncommitted changes that require regenerated artifacts to be checked in (CRDs, generated .go files, and so on).

For example, this would have caught the regression fixed by this PR.

This should also include resources generated by controller-gen.
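A sketch of such a workflow, assuming the repository's existing `make manifests` target; the file path and action versions are illustrative, and `git diff --exit-code` replaces the /tmp copy above since the checkout starts pristine:

```yaml
# .github/workflows/check-manifests.yaml (illustrative path)
name: Check generated manifests
on: [push, pull_request]
jobs:
  check-manifests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Regenerate manifests
        run: make manifests
      - name: Fail if generated files differ from git
        run: |
          if ! git diff --exit-code; then
            echo "Generated manifests are out of date."
            echo "Run 'make manifests' and commit the results."
            exit 1
          fi
```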

NamespaceSet(?)

I've got a use case; I'm not sure this is the right place to fix it, but it seems in scope.

I'd like to install a common set of resources into many namespaces in my cluster, like a DaemonSet for namespaces rather than nodes.

Some thoughts:

  • Where should my manifests come from? A vanilla ConfigMap could create a security issue (too many people could edit it).
  • Do I need a new kind of resource, or just a deployment (I think just a deployment would probably be fine)? Though that would just boil down to RBAC at the end of the day.

Write tests for clustereventhandler.go

There are currently no tests for clustereventhandler.go. We should add tests against queueRelatedAppGenerators (IMHO, testing the other methods is optional).

Panic whenever the controller tries to handle an ApplicationSet

I've done a basic install as per the README with kustomize, and when I `kubectl apply` the example applicationset.yaml, this is what I get:

2020-09-17T23:10:42.939Z	INFO	controller-runtime.metrics	metrics server is starting to listen	{"addr": ":8081"}
2020-09-17T23:10:42.941Z	INFO	setup	starting manager
2020-09-17T23:10:42.941Z	INFO	controller-runtime.manager	starting metrics server	{"path": "/metrics"}
2020-09-17T23:10:43.041Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "applicationset", "source": "kind source: /, Kind="}
2020-09-17T23:10:43.142Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "applicationset", "source": "kind source: /, Kind="}
2020-09-17T23:10:43.142Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "applicationset", "source": "kind source: /, Kind="}
time="2020-09-17T23:10:43Z" level=info msg="processing event for cluster secret" name=argocd-cluster-uat-corporate namespace=argocd type=createSecretEventHandler
2020-09-17T23:10:43.242Z	INFO	controller-runtime.controller	Starting Controller	{"controller": "applicationset"}
2020-09-17T23:10:43.242Z	INFO	controller-runtime.controller	Starting workers{"controller": "applicationset", "worker count": 1}
time="2020-09-17T23:10:43Z" level=info msg="listed ApplicationSets" count=1 type=createSecretEventHandler
time="2020-09-17T23:10:43Z" level=info msg="generated 0 applications" generator="&{}"
time="2020-09-17T23:10:43Z" level=info msg="processing event for cluster secret" name=cluster-kubernetes.default.svc-3396314289 namespace=argocd type=createSecretEventHandler
time="2020-09-17T23:10:43Z" level=info msg="listed ApplicationSets" count=1 type=createSecretEventHandler
time="2020-09-17T23:10:43Z" level=info msg="processing event for cluster secret" name=argocd-cluster-uat-faketest namespace=argocd type=createSecretEventHandler
time="2020-09-17T23:10:43Z" level=info msg="listed ApplicationSets" count=1 type=createSecretEventHandler
time="2020-09-17T23:10:43Z" level=info msg="matched cluster secret" cluster=cluster-kubernetes.default.svc-3396314289
time="2020-09-17T23:10:43Z" level=info msg="matched cluster secret" cluster=argocd-cluster-uat-faketest
time="2020-09-17T23:10:43Z" level=info msg="matched cluster secret" cluster=argocd-cluster-uat-corporate
E0917 23:10:43.243288       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 293 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x177b380, 0x28ed7d0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x82
panic(0x177b380, 0x28ed7d0)
	/usr/local/go/src/runtime/panic.go:967 +0x166
github.com/argoproj-labs/applicationset/pkg/controllers.(*ApplicationSetReconciler).generateApplications(0xc0006b0690, 0x163e1ce, 0xe, 0xc000b43000, 0x14, 0xc00047c090, 0x9, 0x0, 0x0, 0xc00047c09a, ...)
	/workspace/pkg/controllers/applicationset_controller.go:97 +0x4f9
github.com/argoproj-labs/applicationset/pkg/controllers.(*ApplicationSetReconciler).Reconcile(0xc0006b0690, 0xc00047c09a, 0x6, 0xc00047c090, 0x9, 0x0, 0xbfd1191cce79c96a, 0xc000820090, 0xc00074a1b8)
	/workspace/pkg/controllers/applicationset_controller.go:65 +0x2c3
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0007b40c0, 0x17e4180, 0xc0008d3920, 0x0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256 +0x161
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0007b40c0, 0xc000502500)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232 +0xae
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0007b40c0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00015a560)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152 +0x5f
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00015a560, 0x3b9aca00, 0x0, 0x1, 0xc0006c7980)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc00015a560, 0x3b9aca00, 0xc0006c7980)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:193 +0x305
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x15efe79]
goroutine 293 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x105
panic(0x177b380, 0x28ed7d0)
	/usr/local/go/src/runtime/panic.go:967 +0x166
github.com/argoproj-labs/applicationset/pkg/controllers.(*ApplicationSetReconciler).generateApplications(0xc0006b0690, 0x163e1ce, 0xe, 0xc000b43000, 0x14, 0xc00047c090, 0x9, 0x0, 0x0, 0xc00047c09a, ...)
	/workspace/pkg/controllers/applicationset_controller.go:97 +0x4f9
github.com/argoproj-labs/applicationset/pkg/controllers.(*ApplicationSetReconciler).Reconcile(0xc0006b0690, 0xc00047c09a, 0x6, 0xc00047c090, 0x9, 0x0, 0xbfd1191cce79c96a, 0xc000820090, 0xc00074a1b8)
	/workspace/pkg/controllers/applicationset_controller.go:65 +0x2c3
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0007b40c0, 0x17e4180, 0xc0008d3920, 0x0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256 +0x161
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0007b40c0, 0xc000502500)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232 +0xae
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0007b40c0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00015a560)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152 +0x5f
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00015a560, 0x3b9aca00, 0x0, 0x1, 0xc0006c7980)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc00015a560, 0x3b9aca00, 0xc0006c7980)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:193 +0x305

"projectset" proposal

Similar to ApplicationSet, add the ability to generate Argo CD projects based on a directory structure.

Intermittent test failure in 'TestGitGenerateParamsFromFiles/handles_error_during_getting_repo_file_contents'

The new test TestGitGenerateParamsFromFiles/handles_error_during_getting_repo_file_contents is intermittently failing locally and in GitHub actions.

See, for example, https://github.com/argoproj-labs/applicationset/actions/runs/479210059

To reproduce:

```shell
./until_fail.sh go test -race -count=1 -timeout 30s -run ^TestGitGenerateParamsFromFiles$ github.com/argoproj-labs/applicationset/pkg/generators
```

Using a script file, until_fail.sh, which runs the command over and over until it fails:

```shell
#!/bin/bash
# Re-run the given command until it exits non-zero
while "$@"; do :; done
```

Log

[jgw@localhost applicationset]$ ~/until-fail.sh /usr/local/go/bin/go test  -race -count=1 -timeout 30s -run ^TestGitGenerateParamsFromFiles$ github.com/argoproj-labs/applicationset/pkg/generators
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.329s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.328s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.324s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.316s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.318s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.303s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.300s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.307s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.348s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.341s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.306s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.307s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.279s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.307s
ok  	github.com/argoproj-labs/applicationset/pkg/generators	0.308s
WARNING: Package "github.com/golang/protobuf/protoc-gen-go/generator" is deprecated.
	A future release of golang/protobuf will delete this package,
	which has long been excluded from the compatibility promise.

repoPath:  cluster-config/production/config.json
repoPath:  cluster-config/staging/config.json
[map[cluster.address:https://kubernetes.default.svc cluster.name:production cluster.owner:[email protected] key1:val1 key2.key2_1:val2_1 key2.key2_2.key2_2_1:val2_2_1] map[cluster.address:https://kubernetes.default.svc cluster.name:staging cluster.owner:[email protected]]] <nil>
[] paths error
repoPath:  cluster-config/production/config.json
repoPath:  cluster-config/staging/config.json
[] staging config file get content error
--- FAIL: TestGitGenerateParamsFromFiles (0.00s)
    --- FAIL: TestGitGenerateParamsFromFiles/handles_error_during_getting_repo_file_contents (0.00s)
        git_test.go:277: PASS:	GetPaths(string,string,string,string)
        git_test.go:277: FAIL:	GetFileContent(string,string,string,string)
            		at: [git_test.go:247]
        git_test.go:277: PASS:	GetFileContent(string,string,string,string)
        git_test.go:277: FAIL: 2 out of 3 expectation(s) were met.
            	The code you are testing needs to make 1 more call(s).
            	at: [git_test.go:277]
FAIL
FAIL	github.com/argoproj-labs/applicationset/pkg/generators	0.299s
FAIL
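One plausible cause (an assumption, not confirmed from the logs alone): if the generator iterates over a Go map of repo paths, iteration order is randomized, so a mock expecting calls in a fixed order will fail intermittently. Sorting the keys before iterating makes the order deterministic; a minimal sketch with illustrative paths:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedPaths returns map keys in a stable order; ranging over the map
// directly would yield a different order from run to run.
func sortedPaths(files map[string]string) []string {
	keys := make([]string, 0, len(files))
	for k := range files {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	files := map[string]string{
		"cluster-config/staging/config.json":    "{}",
		"cluster-config/production/config.json": "{}",
	}
	// Always visits production before staging, regardless of map layout.
	for _, p := range sortedPaths(files) {
		fmt.Println("repoPath: ", p)
	}
}
```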
