eks-multi-cluster-gitops's Introduction

Build Multi-Cluster GitOps system using Amazon EKS, Flux CD, and Crossplane

Introduction

Many organizations are interested in adopting GitOps for the benefits it brings, including increased productivity, enhanced developer experience, improved stability, higher reliability, consistency and standardization, and stronger security guarantees (see Key Benefits of GitOps).

In many cases, organizations use GitOps with Amazon Elastic Kubernetes Service (Amazon EKS) clusters to run stateless application components, and then rely on cloud infrastructure resources sitting outside the EKS clusters to maintain state. These might include databases (such as Amazon RDS, Amazon DynamoDB), messaging systems (such as Amazon SQS), and blob storage (Amazon S3) amongst others. Organizations can benefit from extending the use of GitOps to cover these cloud infrastructure resources as well.

Additionally, organizations often need more than just one EKS cluster. For example, they may require multiple environments to support their SDLC (e.g. production, development, QA, staging, etc.), and to comply with governance rules related to division of responsibilities (e.g. providing each of the organization’s departments with a dedicated set of clusters). This raises the need for a scalable cluster management solution, which cannot be achieved by having a central platform team assuming sole ownership for provisioning, bootstrapping, and managing the organization’s cluster fleet.

A decentralized (self-service) model for cluster management, accompanied by the right governance structures, can be adopted to achieve the desired scalability. In this model, a platform team leverages GitOps to allow application teams to provision and bootstrap clusters themselves. An application team member creates a pull request with the specifications of the cluster. The pull request can either go through a list of pre-defined checks and get automatically approved and merged, or go to a platform team member who manually reviews, approves, and merges it. Once merged, the cluster is automatically provisioned and bootstrapped with the required tooling, as per the specific standards of the organization. After the cluster is provisioned and bootstrapped, application teams can use it to run their applications.

This repo contains the implementation of a multi-cluster GitOps system that addresses the application development teams use cases, as well as the platform teams use cases. It shows how to extend GitOps to cover the deployment and the management of cloud infrastructure resources (e.g. Amazon RDS, Amazon DynamoDB, and Amazon SQS), in addition to the deployment and management of the native Kubernetes resources. It also shows how to use GitOps to perform cluster management activities (e.g. provisioning and bootstrapping a new cluster, upgrading an existing cluster, deleting a cluster, etc.).

Architecture

A hub/spoke model is used to implement the multi-cluster GitOps system. As part of the initial setup, an EKS cluster — the management cluster — is manually created using eksctl, and bootstrapped. Then, the other EKS clusters — workload clusters — are created dynamically by the management cluster using GitOps. Cluster bootstrapping and the deployment of the various tools and controllers are also performed using GitOps.

This solution uses FluxCD as a GitOps tool, and uses Crossplane as an infrastructure controller. It also uses Sealed Secrets and External Secrets Operator for secrets management — more details about that exist in the following sections.

The architecture of the solution is depicted in the following diagram:

[Architecture diagram]

At a high level, the initial setup of the solution involves:

  • Setting up the Git repos required for the solution.
  • Generating the keys that will be used by Sealed Secrets for encrypting the secret information stored in Git and decrypting it before applying it to the target cluster. The generated keys are stored in AWS Secrets Manager. External Secrets Operator is used for retrieving the keys from AWS Secrets Manager and injecting them into the target cluster.
  • Creating a dedicated IAM role for the Crossplane AWS provider to be installed in the management cluster.
  • Creating the management cluster, and bootstrapping it with Flux.
  • Setting up a ConfigMap named cluster-info in the flux-system namespace, and populating it with the management cluster details. This ConfigMap allows dynamic variable substitution via the Flux Kustomization spec.postBuild.substituteFrom mechanism.
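For illustration, the cluster-info ConfigMap and a Flux Kustomization that consumes it might look like the following sketch (the account ID, region, path, and Kustomization name are placeholder values, not taken from the repo):

```yaml
# Illustrative only: a cluster-info ConfigMap in the flux-system namespace.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-info
  namespace: flux-system
data:
  ACCOUNT_ID: "111122223333"   # example value
  AWS_REGION: "eu-west-1"      # example value
---
# A Flux Kustomization that substitutes ${ACCOUNT_ID} etc. at build time.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: example                # hypothetical name
  namespace: flux-system
spec:
  interval: 10m
  path: ./some/path            # placeholder path
  prune: true
  sourceRef:
    kind: GitRepository
    name: gitops-system
  postBuild:
    substituteFrom:
      - kind: ConfigMap
        name: cluster-info
        optional: false
```

Any manifest reconciled by such a Kustomization can then contain `${ACCOUNT_ID}`-style placeholders that Flux replaces at build time.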

After the initial setup, the Flux controller in the management cluster deploys the other controllers that are required in the management cluster — this includes Crossplane, External Secrets Operator, and Sealed Secrets. The Flux controller synchronizes the workload cluster definitions that exist in Git into the management cluster. Then, Crossplane picks up these definitions and creates the workload clusters. The Flux controller on the management cluster is also responsible for bootstrapping each provisioned workload cluster with its own Flux controller, and for the pre-requisites of that.

The Flux controller on each of the workload clusters deploys other tools required on the cluster (e.g. Crossplane, AWS Load Balancer Controller, etc.), and the workloads (applications, microservices, ...) meant to be deployed on the cluster, as defined in Git. The workloads typically consist of standard Kubernetes resources (e.g. Deployment, Service, ConfigMap, Secret, etc.), and cloud infrastructure resources as well (e.g. DynamoDB table, SQS queue, RDS instance, etc.) that are required for the workload to fully function.

One of the architectural decisions made is to deploy a separate Flux and Crossplane controller on each of the workload clusters, rather than having central Flux and Crossplane controllers in the management cluster that serve all the clusters. The key reasons behind this are to reduce the dependency on the management cluster and to increase the scalability of the solution: a single, central set of controllers in the management cluster would be less scalable than a separate set of controllers per cluster.

Note: The API server endpoint public access is enabled for the management cluster and the workload clusters provisioned by this management cluster. Support for private clusters is on our roadmap.

Git Repositories

The table below lists the proposed repos:

Local Path Git Repo Name Owner Description
gitops-system gitops-system Platform team This repo contains the manifests for all workload clusters, and the manifests for the tools installed on the clusters. It also contains the directories that are synced by the Flux controller of each cluster. While this repo is owned by the platform team, application teams may raise pull requests for new clusters they want to create, or for changes they want to implement on existing clusters. The platform team reviews and merges pull requests. See the README for more detailed information about its contents.
app-manifests/product-catalog-api-manifests product-catalog-api-manifests Application Team This represents the repo for the backend API of an imaginary product catalog application. It contains a Kustomization and overlays for the API application resources and the cloud infrastructure resources.
app-manifests/product-catalog-fe-manifests product-catalog-fe-manifests Application Team This represents the repo for the frontend of an imaginary product catalog application. It contains a Kustomization and overlays for the frontend application resources.
gitops-workloads gitops-workloads Governance team This repo connects the repos above — it specifies which applications are deployed on which clusters. That is, to deploy a new application or microservice into a cluster, you go to the folder corresponding to the cluster and add the manifests required for a Flux Kustomization that syncs the application repo to the target cluster. This repo may be owned by a central governance team, where application teams raise a pull request for deploying their application to a cluster, and the central governance team reviews and merges the pull request. Alternatively, organizations may choose to reduce governance overhead and keep this repo open for application teams to directly commit the manifests required for deploying their application into a cluster. In that case, it is important to deploy and enable the cluster autoscaler on the clusters so that they automatically scale out and in based on the workloads deployed to them.
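For illustration, a per-cluster entry in gitops-workloads might contain a GitRepository/Kustomization pair like the following sketch (the repo URL, secret name, and path are hypothetical, not taken from the repo):

```yaml
# Illustrative only: sync an application repo into a target cluster.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: product-catalog-api
  namespace: flux-system
spec:
  interval: 1m
  url: ssh://git@example.com/org/product-catalog-api-manifests.git  # hypothetical URL
  ref:
    branch: main
  secretRef:
    name: product-catalog-api-git-auth  # Git credentials, delivered as a SealedSecret
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: product-catalog-api
  namespace: flux-system
spec:
  interval: 5m
  path: ./config/staging      # hypothetical overlay path
  prune: true
  sourceRef:
    kind: GitRepository
    name: product-catalog-api
```

Adding such a pair under a cluster's folder is what causes the Flux controller of that cluster to start reconciling the application.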

Note: The initial-setup directory is meant for a one-time initialization of the management cluster and does not need to be placed into its own repo.

Secrets Management

One of the key challenges in GitOps is managing secrets. GitOps entails storing the target state of the system in Git — this covers all the manifests required for describing the target state of the system, including secrets/credentials used for different purposes, e.g. the username/password used for connecting to a database or an external service, or the credentials used by the Flux controller for connecting to Git repos. Such credentials cannot be stored in Git in plain form; they have to be encrypted. For that purpose, the solution includes Sealed Secrets. The secret information that needs to be deployed into the clusters is first encrypted by the user, and a corresponding SealedSecret resource is created — that is what gets committed into Git. The Sealed Secrets controller is then responsible for decrypting the SealedSecret resources deployed via GitOps and transforming them into Secret resources that can be referenced by other resources.

Sealed Secrets itself requires a public/private key pair that is used for encrypting/decrypting secrets. This can be auto-generated by the Sealed Secrets controller at start-up time, or predefined using a Secret with a specific label. A decision was made to predefine the public/private keys used by Sealed Secrets. Otherwise, it would have been challenging to complete the creation and bootstrapping of new workload clusters using GitOps without manual intervention/customisation. The reason is that bootstrapping a workload cluster involves deploying the Flux controller, and the Flux controller requires credentials to be able to connect to Git for synchronization. If the key pair used for encrypting/decrypting the Git credentials were auto-generated at deployment time, the process would have to be split into multiple parts with manual intervention or a custom implementation — the first part being the creation of the cluster and the deployment of the Sealed Secrets controller into it. The generated public key would then need to be retrieved and used for re-encrypting the Git credentials, and the re-encrypted credentials committed into Git. Only then could the Flux controller be deployed.

The pre-defined public/private key pair for Sealed Secrets is created as part of the initial setup and stored in AWS Secrets Manager. External Secrets Operator is used for retrieving the keys from AWS Secrets Manager and creating a Secret in the cluster that contains the keys and carries the label required by Sealed Secrets.
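The injection step can be sketched as an ExternalSecret like the one below. The `sealedsecrets.bitnami.com/sealed-secrets-key: active` label is what the Sealed Secrets controller looks for; the namespace, store name, and Secrets Manager entry names are hypothetical:

```yaml
# Illustrative only: pull the Sealed Secrets key pair from AWS Secrets Manager
# and materialize it as the labeled Secret the controller expects.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: sealed-secrets-key
  namespace: sealed-secrets        # hypothetical namespace
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: aws-secrets-manager      # hypothetical store name
  target:
    name: sealed-secrets-key
    template:
      type: kubernetes.io/tls
      metadata:
        labels:
          sealedsecrets.bitnami.com/sealed-secrets-key: active
      data:
        tls.crt: "{{ .crt }}"
        tls.key: "{{ .key }}"
  data:
    - secretKey: crt
      remoteRef:
        key: sealed-secrets        # hypothetical Secrets Manager entry
        property: crt
    - secretKey: key
      remoteRef:
        key: sealed-secrets
        property: key
```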

This repo contains the configuration of the management cluster and the Kubernetes manifests representing the workload clusters, their configuration, and the applications running within them. While these are represented as directories within this single repository, the system assumes that they are split into the separate repositories listed in the table above, which allows for finer-grained permissions and version control over each part.

IAM roles for service accounts (IRSA)

IAM roles for service accounts (IRSA) configuration varies widely for different tools. It may not be feasible to individually list the configuration steps for each of the tools. This section provides the overall architecture and lists the main configuration steps for key tools like Crossplane and External Secrets Operator.

There are various parts to get IRSA working. The major ones are:

  1. IAM IDP pointing to the EKS cluster OIDC endpoint
  2. IAM policy with permissions for the controller or application
  3. IAM role with trust policy referring to the federated IDP and conditions on JWT sub claim referring to the Namespace and ServiceAccount of the pod
  4. IAM role policy attachment
  5. ServiceAccount with annotation referring to the IAM role ARN
  6. Pod manifest referring to the annotated ServiceAccount
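Item 3 above, expressed as a Crossplane managed resource, could look roughly like the following sketch. The OIDC provider ID, region, namespace, and ServiceAccount name are placeholders:

```yaml
# Illustrative only: an IAM role whose trust policy allows a specific
# ServiceAccount to assume it via the cluster's OIDC provider.
apiVersion: iam.aws.crossplane.io/v1beta1
kind: Role
metadata:
  name: example-irsa-role          # hypothetical name
spec:
  forProvider:
    assumeRolePolicyDocument: |
      {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Principal": {
            "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/oidc.eks.REGION.amazonaws.com/id/EXAMPLE"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "oidc.eks.REGION.amazonaws.com/id/EXAMPLE:sub": "system:serviceaccount:my-namespace:my-serviceaccount"
            }
          }
        }]
      }
```

The `sub` condition is what ties the role to one specific Namespace/ServiceAccount pair, so a pod using any other ServiceAccount cannot assume the role.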

The management cluster creation process sets up an IAM IDP pointing to the management cluster OIDC endpoint. The first tool to get installed is Crossplane. Once Crossplane is installed, it is responsible for setting up the IAM objects for the rest of the tools and applications in the management cluster and the workload clusters, including for Crossplane itself as installed in the workload clusters. However, the initial IAM setup to get Crossplane configured with IRSA in the management cluster requires setup outside of GitOps.

Enable IRSA for crossplane/provider-aws

The ControllerConfig object found here is annotated with the IAM role ARN. You'll notice the placeholder ${ACCOUNT_ID}, which is replaced through Flux postBuild substitution. This annotation is implicitly copied into the service account used by the AWS provider controller during init. The ControllerConfig defined earlier is referenced in the spec.controllerConfigRef.name field of the Provider object manifest for crossplane/provider-aws. The ProviderConfig object found here controls the authentication mechanism used by the AWS controllers to create the XRs. The ProviderConfig uses InjectedIdentity as the value of the field spec.credentials.source. Refer to this doc.
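Put together, the three objects described above might look like the following sketch (the role name and package version are illustrative, not taken from the repo):

```yaml
# Illustrative only: ControllerConfig carrying the IRSA annotation.
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: aws-config
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::${ACCOUNT_ID}:role/crossplane-role-mgmt  # role name illustrative
---
# Provider referencing the ControllerConfig.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane/provider-aws:v0.29.0   # version illustrative
  controllerConfigRef:
    name: aws-config
---
# ProviderConfig using the injected pod identity (IRSA) for authentication.
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: InjectedIdentity
```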

For the workload clusters the IAM objects for Crossplane IRSA are created by the management cluster. The tool itself is installed by Flux running in the workload cluster.

Enable IRSA for external-secrets

The IAM role and policies are created through Crossplane and can be found here. The ServiceAccount manifest found here uses the EKS IRSA annotation to refer to the IAM role ARN. As before, the ${ACCOUNT_ID} placeholder is replaced through Flux Kustomization postBuild substitution. The ServiceAccount is used to configure the authentication mechanism for the SecretStore found here, which refers to AWS Secrets Manager using the field spec.provider.aws.auth.jwt.serviceAccountRef.name. Refer to this doc.
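A SecretStore wired up this way might look like the following sketch (namespace, region, and ServiceAccount name are placeholders):

```yaml
# Illustrative only: a SecretStore that authenticates to AWS Secrets Manager
# via the JWT of an IRSA-annotated ServiceAccount.
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
  namespace: external-secrets        # hypothetical namespace
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-west-1              # illustrative region
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa  # the ServiceAccount annotated with the IAM role ARN
```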

For the workload clusters, the creation of the IAM objects and the installation of external-secrets are performed from the management cluster.

Deployment

Please refer to the initial setup for deploying the system. Also refer to scenarios for the instructions related to various scenarios (e.g. creating a new workload cluster, deleting a workload cluster, onboarding a new microservice/application and deploying it to one or more of the workload clusters).

Please refer to Clean-up for un-deploying the system.

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.

eks-multi-cluster-gitops's People

Contributors

amazon-auto, chris-short, iamahgoub, iamsouravin, redbackthomson, rizblie, sheetaljoshi, tsahiduek


eks-multi-cluster-gitops's Issues

unable to provision cluster in another aws account

When trying to create a staging cluster, it gets provisioned in the same account as the management cluster. I am looking to have the staging and production clusters in different accounts, and the management cluster in a separate account, to isolate the environments; however, they are all getting provisioned in the same account.
I would also like to add Karpenter; could you guide me through the steps for this?

[FEATURE] Use EKS Pod Identity for granting IAM permissions (AWS Load Balancer Controller)

So far, the implementation has been using IRSA for granting the IAM permissions needed by tools/workloads. This issue is for migrating to EKS Pod Identity (where applicable). The scope of this issue is limited to AWS LBC.

Changes:

  1. Install EKS Pod Identity Agent add-on. provider-aws-eks has MR for managing EKS add-ons; see https://marketplace.upbound.io/providers/upbound/provider-aws-eks/v1.4.0/resources/eks.aws.upbound.io/Addon/v1beta1.
  2. Change trust policy of the IAM role located at repos/gitops-system/tools-config/aws-load-balancer-controller-iam/.
  3. Remove the ServiceAccount annotation at repos/gitops-system/tools-config/sa.yaml.
  4. Add manifests for ServiceAccount and IAM role association; provider-aws-eks has MR for managing that; see https://marketplace.upbound.io/providers/upbound/provider-aws-eks/v1.4.0/resources/eks.aws.upbound.io/PodIdentityAssociation/v1beta1.

Cloud9 failing to reconnect to instance

After a Cloud9 timeout it is no longer possible to reconnect to the Cloud9 instance.

Appears to be related to a move to SSM for connecting to instances, which requires the correct IAM permissions (specifically the managed policy AmazonSSMManagedInstanceCore) to be included in the EC2 instance IAM role.

Note: Cloud9 now attaches a role with this policy by default, but we are replacing this default role with the gitops-workshop role plus a restricted set of permissions.

Add explanation for IRSA setup

Make the following changes to explain how IRSA is setup:

  • update README.md with details on IRSA setup including the out of band initial configuration
  • add tool specific manifest details as much as possible

Unnecessary use of GitHub API for management cluster Flux bootstrap

Current instructions for the GitHub Flux bootstrap require the use of a GitHub PAT. This is because the bootstrap is performed using the GitHub API, i.e. using flux bootstrap github.

This can be avoided by using the Git protocol with the existing ssh key instead of the GitHub API e.g.

flux bootstrap git \
  --url=ssh://git@github.com/rizblie/gitops-system.git \
  --private-key-file=/home/ubuntu/.ssh/gitops \
  --components-extra=image-reflector-controller,image-automation-controller \
  --namespace=flux-system \
  --branch=main \
  --path=clusters/mgmt

This is a more generic solution as it does not rely on a GitHub-specific API and can be used with other Git hosting services.

IRSA annotation not getting applied to crossplane aws provider serviceaccount in workload cluster

Problem:
The ControllerConfig annotation for Crossplane AWS Provider installed in the workload cluster is not patched and substituted by Flux Kustomization in gitops-system/clusters/cluster-name/crossplane.yaml. This is resulting in the IRSA role not getting applied to the provider serviceaccount for the AWS API calls in the workload clusters.

Fix:
The patch must be applied on the child Kustomization object for crossplane-aws-provider.

Replace

patches:
    - target:
        group: pkg.crossplane.io
        kind: ControllerConfig
        version: v1alpha1
        name: aws-config
      patch: |-
        - op: replace
          path: /metadata/annotations
          value:
            eks.amazonaws.com/role-arn: arn:aws:iam::${ACCOUNT_ID}:role/crossplane-role-cluster-name
  postBuild:
    substituteFrom:
      - kind: ConfigMap
        name: cluster-info
        optional: false

with

patches:
    - target:
        group: kustomize.toolkit.fluxcd.io
        kind: Kustomization
        version: v1beta2
        name: crossplane-aws-provider
        namespace: flux-system
      patch: |-
        - op: add
          path: /spec/patches
          value:
            - target:
                group: pkg.crossplane.io
                kind: ControllerConfig
                version: v1alpha1
                name: aws-config
              patch: |-
                - op: replace
                  path: /metadata/annotations
                  value:
                    eks.amazonaws.com/role-arn: arn:aws:iam::${ACCOUNT_ID}:role/crossplane-role-commercial-staging
        - op: add
          path: /spec/postBuild
          value:
            substituteFrom:
              - kind: ConfigMap
                name: cluster-info
                optional: false

Move EKS composition out of tools as it is not required for workload clusters

Problem:
The Flux Kustomizations in repos/gitops-system/tools/crossplane get installed in the management cluster and all workload clusters. The directory also contains the XRD and XRs for the EKS composition, which is only required for the management cluster. This is redundant for workload clusters.

Fix:
Move the composition out of tools and install only for management cluster. Following steps document the required changes.

  1. move repos/gitops-system/tools/crossplane/crossplane-composition/ to repos/gitops-system/tools-config/
  2. remove repos/gitops-system/tools/crossplane/crossplane-composition.yaml
  3. update repos/gitops-system/tools/crossplane/kustomization.yaml to remove - crossplane-composition.yaml
  4. rename repos/gitops-system/tools-config/crossplane-composition to crossplane-eks-composition
  5. add repos/gitops-system/clusters/mgmt/crossplane-eks-composition.yaml to install in the management cluster
  6. update repos/gitops-system/clusters/mgmt/clusters-config.yaml to depend on sealed-secrets
  7. update repos/gitops-system/clusters/mgmt/external-secrets-iam.yaml to depend on crossplane-eks-composition
  8. update repos/gitops-system/clusters/mgmt/kustomization.yaml to add - crossplane-eks-composition.yaml

[FEATURE] Migrate to the official Upbound AWS provider (EKS cluster)

This issue is for migrating from community AWS provider to the official Upbound AWS provider. The scope of this issue is the EKS cluster composition.

Changes:

  1. Install the official Upbound AWS provider -- this requires changes at repos/gitops-system/tools/crossplane/ i.e. adding the provider installation manifests. Note that the Upbound provider is broken down into service-level providers, and given that EKS cluster creation involves creating several cloud resources, several providers will be required -- these include: provider-aws-eks, provider-aws-vpc, provider-aws-ec2, and provider-aws-iam. A Kustomization has to be added under the flux folder of the management cluster repos/gitops-system/cluster/mgmt/ for reconciling the installation manifests.
  2. Updating EKS composition to use the MRs of the new providers -- this requires changes at repos/gitops-system/tools-config/crossplane-eks-composition/; the interface (XRD) should be kept the same.

[FEATURE] Use EKS new access management

Right now, the implementation uses aws-auth ConfigMap for managing access to EKS clusters. This issue is for migrating to EKS new access management announced recently -- see: https://aws.amazon.com/blogs/containers/a-deep-dive-into-simplified-amazon-eks-access-management-controls/.

Update the implementation at repos/gitops-system/tools-config/aws-auth. The official Upbound AWS provider has MR for managing access using the new mechanism. See: https://marketplace.upbound.io/providers/upbound/provider-aws-eks/v1.4.0/resources/eks.aws.upbound.io/ClusterAuth/v1beta1.

Trim mgmt cluster manifests

Flux on the management cluster does not need sealed-secrets to access gitops-system, as the credentials are passed as part of flux bootstrap.

This means that the manifests for the mgmt cluster can be trimmed to include just the crossplane-related and clusters-config kustomizations.

Cannot re-create workload cluster with same name

If a workload cluster is removed and then re-created with the same name, the cluster secret cluster-name-eks-connection is not created, and this causes reconciliation to fail.

While this is an improbable scenario, it does suggest that the cleanup process following the deletion of the cluster is missing something.

Add IRSA support for cluster tools and applications

Add IRSA support for cluster tools and applications. The changes should enable the following:

  • Remove long lived AWS credentials using access key and secret access key
  • Allow dynamic configuration using variable substitutions
  • Create extension point to support cross account access in the future

[FEATURE] Migrate to the official Upbound AWS provider (Product Catalog app)

This issue is for migrating from the community AWS provider to the official Upbound AWS provider. The scope of this issue is the cloud resources needed for the sample app (product catalog).

Changes:

  1. Add the Upbound AWS provider for DynamoDB (product catalog API has a dependency on DDB table) -- this requires changes at repos/gitops-system/tools/crossplane/ i.e. adding the provider installation manifests. A Kustomization has to be added under flux folder of the workload cluster repos/gitops-system/clusters/template/ for reconciling the installation manifests.
  2. Updating manifests at repos/app-manifests/product-catalog-api-manifests to refer to the MRs of the new provider.

Cannot upload ssh public key to new IAM user for AWS CodeCommit

Problem: Cannot call iam:UploadSSHPublicKey to upload new SSH public key to gitops user created for AWS CodeCommit.

Cause: Action iam:UploadSSHPublicKey is missing from initial-setup/config/cloud9-role-permission-policy-template.json

Fix: Add iam:UploadSSHPublicKey to initial-setup/config/cloud9-role-permission-policy-template.json

Move cluster OIDC IDP creation to composition and crossplane-iam to tools-config

  • Cluster OIDC IDP creation should be moved to repos/gitops-system/tools/crossplane/crossplane-composition/composition.yaml
  • crossplane-iam should be moved to tools-config
  • repos/gitops-system/clusters-config/commercial-prod should be updated to reflect the new resources structure
  • repos/gitops-system/clusters-config/template should be updated to reflect the new resources structure

Source reconciliation failing for AWS CodeCommit

Problem: Sources referring to AWS CodeCommit failing to reconcile.

Cause: GitRepositoryReconciler uses go-git client by default which does not support v2 protocol used by AWS CodeCommit.

Fix: Patch GitRepository objects to use libgit2 git client implementation.
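The fix described above could be sketched as follows; the `spec.gitImplementation` field was available on the GitRepository API at the time of this issue (it has since been deprecated in newer Flux versions), and the region and repo name below are placeholders:

```yaml
# Illustrative only: a GitRepository pointing at AWS CodeCommit,
# switched from the default go-git client to libgit2.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: gitops-system
  namespace: flux-system
spec:
  interval: 1m
  url: ssh://git-codecommit.REGION.amazonaws.com/v1/repos/REPO-NAME  # placeholders
  ref:
    branch: main
  gitImplementation: libgit2   # go-git does not support the v2 protocol used by CodeCommit
```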

Update EKS cluster Kubernetes minor version to 1.23 for cluster as well as node groups

Update EKS cluster and managed node group version to 1.23.

Make updates to
repos/gitops-system/clusters-config/template/def/eks-cluster.yaml: eks-k8s-version: '1.21'
repos/gitops-system/clusters-config/template/def/eks-cluster.yaml: mng-k8s-version: '1.21'

Also, update repos/gitops-system/tools-config/crossplane-eks-composition/compositeresourcedefinition.yaml to include all the EKS-supported versions of Kubernetes.

Make sure all the markdown files reflect the Kubernetes version change.

No REPO_PREFIX placeholder exists

Under Initial set up:

Update references to GitRepository URLs

Step 4:
sed -i "s~REPO_PREFIX~$REPO_PREFIX~g" \
  gitops-workloads/template/app-template/git-repo.yaml

No REPO_PREFIX variable exists in git-repo.yaml file

[FEATURE] Add support for multi-account

As of now, all the workload clusters provisioned by the solution land in the same account where the management cluster resides. This issue is for adding support for multi-account.

Changes:
Workload clusters can be provisioned in different accounts by following the steps below:

  1. Create an additional ProviderConfig that points to the account where you want to deploy the workload cluster. Original ProviderConfig can be found at: https://github.com/aws-samples/eks-multi-cluster-gitops/blob/main/repos/gitops-system/tools/crossplane/crossplane-aws-provider-config/aws-providerconfig.yaml. The new ProviderConfig will be a bit different from the original one; a role in the workload cluster account has to be assumed -- refer to the following sample for guidance: https://github.com/crossplane-contrib/provider-aws/blob/master/AUTHENTICATION.md#using-assumerole.
  2. Change the EKS composition to parameterise providerConfigRef.
  3. Pass the name of the new ProviderConfig created at step 1 in the claim of the new cluster at: https://github.com/aws-samples/eks-multi-cluster-gitops/blob/main/repos/gitops-system/clusters-config/template/def/eks-cluster.yaml.

NOTE: the steps above are based on the community AWS provider -- it needs to be validated for the official Upbound AWS providers.

You will have to create an IAM role in the workload cluster account with a trust policy that allows it to be assumed from the IAM role in the management account used for running the Crossplane AWS provider; it should have the IAM permissions required for creating EKS clusters and their dependencies.
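Step 1 above might look roughly like the following sketch. The ProviderConfig name, account ID, and role name are hypothetical, and the exact field used for role assumption varies between community provider-aws versions, so the linked AUTHENTICATION.md should be treated as the authority here:

```yaml
# Illustrative only: a ProviderConfig for a second (workload) account.
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: workload-account        # hypothetical name, referenced from the cluster claim
spec:
  credentials:
    source: InjectedIdentity    # same IRSA identity as the default ProviderConfig
  # The field name differs across provider-aws versions (e.g. assumeRoleARN
  # vs assumeRole.roleARN); check AUTHENTICATION.md for the version in use.
  assumeRoleARN: arn:aws:iam::444455556666:role/crossplane-workload-role  # hypothetical
```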

Fix health check for workload cluster sealed-secrets

The current manifest for workload cluster sealed-secrets incorrectly performs a health check on a local (management cluster) deployment instead of the workload cluster. This does not perform any useful function and should be removed. The existing HelmRelease healthcheck (which deploys to the remote workload cluster) should suffice.

Cannot create user and attach user policy for AWS CodeCommit from Cloud9

Problem: Unable to create user and attach user policy for AWS CodeCommit repository from Cloud9 instance.

Cause: Locked down IAM permissions attached to Cloud9 instance do not allow iam:CreateUser and iam:AttachUserPolicy actions.

Fix: Add actions "iam:AttachUserPolicy" and "iam:CreateUser" in file initial-setup/config/cloud9-role-permission-policy-template.json to allow Cloud9 instance to create new IAM user and attach permissions policy for AWS CodeCommit.
