aws-ia / terraform-aws-control_tower_account_factory

AWS Control Tower Account Factory

License: Apache License 2.0

Languages: HCL 50.14%, Smarty 12.49%, Python 35.54%, Jinja 0.84%, Shell 0.99%
Topics: control-tower-service-team-owned, repo-does-not-accept-pull-requests

AWS Control Tower Account Factory for Terraform

AWS Control Tower Account Factory for Terraform (AFT) follows a GitOps model to automate the processes of account provisioning and account updating in AWS Control Tower. You'll create an account request Terraform file, which provides the necessary input that triggers the AFT workflow for account provisioning.

For more information on AFT, see Overview of AWS Control Tower Account Factory for Terraform in the AWS Control Tower User Guide.

Getting started

This guide is intended for administrators of AWS Control Tower environments who want to set up Account Factory for Terraform (AFT). It describes how to set up an AFT environment with a new, dedicated AFT management account, and follows the deployment steps outlined in Deploy AWS Control Tower Account Factory for Terraform (AFT).

Configure and launch your AWS Control Tower Account Factory for Terraform

Five steps are required to configure and launch your AFT environment.

Step 1: Launch your AWS Control Tower landing zone

Before launching AFT, you must have a working AWS Control Tower landing zone in your AWS account. You will configure and launch AFT from the AWS Control Tower management account.

Step 2: Create a new organizational unit for AFT (recommended)

We recommend that you create a separate OU in your AWS Organization, where you will deploy the AFT management account. Create an OU through your AWS Control Tower management account. For instructions on how to create an OU, refer to Create an organization in the AWS Organizations User Guide.

Step 3: Provision the AFT management account

AFT requires a separate AWS account to manage and orchestrate its own requests. From the AWS Control Tower management account that's associated with your AWS Control Tower landing zone, you'll provision this account for AFT.

To provision the AFT management account, see Provisioning Account Factory Accounts With AWS Service Catalog. When specifying an OU, be sure to select the OU you created in Step 2. When specifying a name, use "AFT-Management".

Note: It can take up to 30 minutes for the account to be fully provisioned. Validate that you have access to the AFT management account.

Step 4: Ensure that the Terraform environment is available for deployment

This step assumes that you are experienced with Terraform, and that you have procedures in place for executing Terraform. Per the Requirements section below, AFT supports Terraform version 1.2.0 or later.

Step 5: Call the Account Factory for Terraform module to deploy AFT

The Account Factory for Terraform module must be called while you are authenticated with AdministratorAccess credentials in your AWS Control Tower management account.

AWS Control Tower, through the AWS Control Tower management account, vends a Terraform module that establishes all infrastructure necessary to orchestrate your AWS Control Tower account factory requests. You can view that module in the AFT repository.

Refer to the module’s README file for information about the input required to run the module and deploy AFT.

If you have established pipelines for managing Terraform in your environment, you can integrate this module into your existing workflow. Otherwise, run the module from any environment that is authenticated with the required credentials.
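For reference, a minimal module call might look like the following sketch. All account IDs and regions are placeholders that must be replaced with values from your own landing zone; see the Inputs table below for the full set of variables.

```hcl
module "aft" {
  source = "github.com/aws-ia/terraform-aws-control_tower_account_factory"

  # Required inputs -- all IDs below are placeholders
  ct_management_account_id  = "111111111111" # Control Tower management account
  log_archive_account_id    = "222222222222"
  audit_account_id          = "333333333333"
  aft_management_account_id = "444444444444" # account provisioned in Step 3
  ct_home_region            = "us-east-1"    # must match your Control Tower home region

  # Optional: secondary region for replicating AFT's own backend state
  tf_backend_secondary_region = "us-west-2"
}
```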

Note: The AFT Terraform module does not manage a Terraform state backend for you. Be sure to preserve the Terraform state file that is generated after applying the module, or set up a Terraform backend using Amazon S3 and DynamoDB.
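If you choose to configure a remote backend rather than keep the local state file, a standard S3/DynamoDB backend sketch looks like the following; the bucket and table names are hypothetical and must already exist before `terraform init`:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-aft-terraform-state" # hypothetical, pre-created bucket
    key            = "aft/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "my-aft-terraform-locks" # hypothetical table for state locking
    encrypt        = true
  }
}
```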

Certain input variables may contain sensitive values, such as a private SSH key or Terraform token. Depending on your deployment method, these values may be viewable as plain text in the Terraform state file. It is your responsibility to protect the state file, which may contain sensitive data. See the Terraform documentation for more information.

Note: Deploying AFT through the Terraform module takes several minutes; initial deployment may require up to 30 minutes. As a best practice, use AWS Security Token Service (STS) credentials with a timeout of at least 60 minutes, because a credential timeout causes the deployment to fail. Alternatively, you can use any IAM user that has AdministratorAccess permissions in the AWS Control Tower management account.

Next Steps:

Now that you have configured and deployed AWS Control Tower Account Factory for Terraform, follow the steps outlined in Post-deployment steps and Provision accounts with AWS Control Tower Account Factory for Terraform to begin using your environment.
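Once AFT is deployed, account provisioning is driven by account request files committed to the account request repository. A sketch of such a request, assuming the module interface of the aft-account-request repository (all emails, names, and OU values are placeholders), might look like:

```hcl
module "sandbox_account" {
  source = "./modules/aft-account-request"

  control_tower_parameters = {
    AccountEmail              = "sandbox@example.com" # placeholder
    AccountName               = "sandbox"
    ManagedOrganizationalUnit = "Sandbox"
    SSOUserEmail              = "admin@example.com"
    SSOUserFirstName          = "Jane"
    SSOUserLastName           = "Doe"
  }

  account_tags = { "Environment" = "Sandbox" }

  change_management_parameters = {
    change_requested_by = "Jane Doe"
    change_reason       = "Initial account creation"
  }

  custom_fields               = {}
  account_customizations_name = ""
}
```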

Collection of Operational Metrics

As of version 1.6.0, AFT collects anonymous operational metrics to help AWS improve the quality and features of the solution. For more information, including how to disable this capability, please see the documentation here.

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.2.0, < 2.0.0 |
| aws | >= 5.11.0, < 6.0.0 |

Providers

| Name | Version |
|------|---------|
| aws | >= 5.11.0, < 6.0.0 |
| local | n/a |

Modules

| Name | Source | Version |
|------|--------|---------|
| aft_account_provisioning_framework | ./modules/aft-account-provisioning-framework | n/a |
| aft_account_request_framework | ./modules/aft-account-request-framework | n/a |
| aft_backend | ./modules/aft-backend | n/a |
| aft_code_repositories | ./modules/aft-code-repositories | n/a |
| aft_customizations | ./modules/aft-customizations | n/a |
| aft_feature_options | ./modules/aft-feature-options | n/a |
| aft_iam_roles | ./modules/aft-iam-roles | n/a |
| aft_lambda_layer | ./modules/aft-lambda-layer | n/a |
| aft_ssm_parameters | ./modules/aft-ssm-parameters | n/a |
| packaging | ./modules/aft-archives | n/a |

Resources

| Name | Type |
|------|------|
| aws_partition.current | data source |
| aws_service.home_region_validation | data source |
| aws_ssm_parameters_by_path.servicecatalog_regional_data | data source |
| local_file.python_version | data source |
| local_file.version | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| account_customizations_repo_branch | Branch to source account customizations repo from | string | "main" | no |
| account_customizations_repo_name | Repository name for the account customizations files. For non-CodeCommit repos, name should be in the format of Org/Repo | string | "aft-account-customizations" | no |
| account_provisioning_customizations_repo_branch | Branch to source account provisioning customization files | string | "main" | no |
| account_provisioning_customizations_repo_name | Repository name for the account provisioning customizations files. For non-CodeCommit repos, name should be in the format of Org/Repo | string | "aft-account-provisioning-customizations" | no |
| account_request_repo_branch | Branch to source account request repo from | string | "main" | no |
| account_request_repo_name | Repository name for the account request files. For non-CodeCommit repos, name should be in the format of Org/Repo | string | "aft-account-request" | no |
| aft_backend_bucket_access_logs_object_expiration_days | Number of days to keep the objects stored in the access logs bucket for AFT backend buckets | number | 365 | no |
| aft_enable_vpc | Flag turning use of VPC on/off for AFT | bool | true | no |
| aft_feature_cloudtrail_data_events | Feature flag toggling CloudTrail data events on/off | bool | false | no |
| aft_feature_delete_default_vpcs_enabled | Feature flag toggling deletion of default VPCs on/off | bool | false | no |
| aft_feature_enterprise_support | Feature flag toggling Enterprise Support enrollment on/off | bool | false | no |
| aft_framework_repo_git_ref | Git branch from which the AFT framework should be sourced | string | null | no |
| aft_framework_repo_url | Git repo URL where the AFT framework should be sourced from | string | "https://github.com/aws-ia/terraform-aws-control_tower_account_factory.git" | no |
| aft_management_account_id | AFT management account ID | string | n/a | yes |
| aft_metrics_reporting | Flag toggling reporting of operational metrics | bool | true | no |
| aft_vpc_cidr | CIDR block to allocate to the AFT VPC | string | "192.168.0.0/22" | no |
| aft_vpc_endpoints | Flag turning VPC endpoints on/off for the AFT VPC | bool | true | no |
| aft_vpc_private_subnet_01_cidr | CIDR block to allocate to private subnet 01 | string | "192.168.0.0/24" | no |
| aft_vpc_private_subnet_02_cidr | CIDR block to allocate to private subnet 02 | string | "192.168.1.0/24" | no |
| aft_vpc_public_subnet_01_cidr | CIDR block to allocate to public subnet 01 | string | "192.168.2.0/25" | no |
| aft_vpc_public_subnet_02_cidr | CIDR block to allocate to public subnet 02 | string | "192.168.2.128/25" | no |
| audit_account_id | Audit account ID | string | n/a | yes |
| backup_recovery_point_retention | Number of days to keep backup recovery points in AFT DynamoDB tables. Default = never expire | number | null | no |
| cloudwatch_log_group_retention | Number of days to keep CloudWatch log groups for Lambda functions. 0 = never expire | string | "0" | no |
| concurrent_account_factory_actions | Maximum number of accounts that can be provisioned in parallel | number | 5 | no |
| ct_home_region | The region from which this module will be executed. This MUST be the same region in which Control Tower is deployed | string | n/a | yes |
| ct_management_account_id | Control Tower management account ID | string | n/a | yes |
| github_enterprise_url | GitHub Enterprise URL, if GitHub Enterprise is being used | string | "null" | no |
| global_codebuild_timeout | CodeBuild build timeout | number | 60 | no |
| global_customizations_repo_branch | Branch to source global customizations repo from | string | "main" | no |
| global_customizations_repo_name | Repository name for the global customization files. For non-CodeCommit repos, name should be in the format of Org/Repo | string | "aft-global-customizations" | no |
| log_archive_account_id | Log Archive account ID | string | n/a | yes |
| log_archive_bucket_object_expiration_days | Number of days to keep the objects stored in the AFT logging bucket | number | 365 | no |
| maximum_concurrent_customizations | Maximum number of customizations/pipelines to run at once | number | 5 | no |
| terraform_api_endpoint | API endpoint for Terraform. Must be in the format of https://xxx.xxx. | string | "https://app.terraform.io/api/v2/" | no |
| terraform_distribution | Terraform distribution being used for AFT. Valid values are oss, tfc, or tfe | string | "oss" | no |
| terraform_org_name | Organization name for Terraform Cloud or Enterprise | string | "null" | no |
| terraform_token | Terraform token for Cloud or Enterprise | string | "null" | no |
| terraform_version | Terraform version being used for AFT | string | "1.6.0" | no |
| tf_backend_secondary_region | AFT creates a backend for state tracking for its own state as well as OSS cases. The backend's primary region is the same as the AFT region, but this defines the secondary region to replicate to | string | "" | no |
| vcs_provider | Customer VCS provider. Valid inputs are codecommit, bitbucket, github, or githubenterprise | string | "codecommit" | no |

Outputs

| Name | Description |
|------|-------------|
| account_customizations_repo_branch | n/a |
| account_customizations_repo_name | n/a |
| account_provisioning_customizations_repo_branch | n/a |
| account_provisioning_customizations_repo_name | n/a |
| account_request_repo_branch | n/a |
| account_request_repo_name | n/a |
| aft_feature_cloudtrail_data_events | n/a |
| aft_feature_delete_default_vpcs_enabled | n/a |
| aft_feature_enterprise_support | n/a |
| aft_management_account_id | n/a |
| aft_vpc_cidr | n/a |
| aft_vpc_private_subnet_01_cidr | n/a |
| aft_vpc_private_subnet_02_cidr | n/a |
| aft_vpc_public_subnet_01_cidr | n/a |
| aft_vpc_public_subnet_02_cidr | n/a |
| audit_account_id | n/a |
| backup_recovery_point_retention | n/a |
| cloudwatch_log_group_retention | n/a |
| ct_home_region | n/a |
| ct_management_account_id | n/a |
| github_enterprise_url | n/a |
| global_customizations_repo_branch | n/a |
| global_customizations_repo_name | n/a |
| log_archive_account_id | n/a |
| maximum_concurrent_customizations | n/a |
| terraform_api_endpoint | n/a |
| terraform_distribution | n/a |
| terraform_org_name | n/a |
| terraform_version | n/a |
| tf_backend_secondary_region | n/a |
| vcs_provider | n/a |

terraform-aws-control_tower_account_factory's People

Contributors

andrew-glenn, hanafya, tonynv, troy-ameigh


terraform-aws-control_tower_account_factory's Issues

Account Customisations - private terraform modules

Using public Terraform modules with AFT is easy. It would be great if there were documentation or an enhancement to facilitate pulling private modules from Git or S3.
For Git modules, I think the AFT CodeBuild jobs would need changes to inject credentials into the job so they can pull from the private repo.
S3 would be easier, since a bucket can be set up outside AFT. IAM policy guidance here would also be a nice touch.
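As a sketch of the S3 approach: Terraform already supports S3 module sources natively, so the main work is granting the customization CodeBuild role read access to the bucket. The bucket name and key below are hypothetical:

```hcl
module "baseline" {
  # The CodeBuild service role that runs customizations would need
  # s3:GetObject on this (hypothetical) bucket for the fetch to succeed.
  source = "s3::https://s3.amazonaws.com/my-private-modules/baseline.zip"
}
```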

Issue creating lambda layer version due to key not present

I am facing an issue creating a resource due to a nonexistent S3 key. Could you please help identify the cause here? My guess is it's not referencing the path correctly.

module.aft.module.aft_lambda_layer.aws_lambda_layer_version.layer_version: Creating...
╷
│ Error: Error creating lambda layer: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
│ {
│   RespMetadata: {
│     StatusCode: 400,
│     RequestID: "7980129f-6f43-4ae7-9952-99df5af92aec"
│   },
│   Message_: "Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.",
│   Type: "User"
│ }
│ 
│   with module.aft.module.aft_lambda_layer.aws_lambda_layer_version.layer_version,
│   on .terraform/modules/aft/modules/aft-lambda-layer/main.tf line 24, in resource "aws_lambda_layer_version" "layer_version":
│   24: resource "aws_lambda_layer_version" "layer_version" {
│ 
╵
resource "aws_lambda_layer_version" "layer_version" {
  lifecycle {
    create_before_destroy = true
  }

  layer_name          = "${var.lambda_layer_name}-${replace(time_sleep.lambda_layer_wait.triggers["lambda_layer_version"], ".", "-")}"
  compatible_runtimes = ["python${var.lambda_layer_python_version}"]
  s3_bucket           = var.s3_bucket_name
  s3_key              = "layer.zip"
}

Document I referred to while creating resources: https://learn.hashicorp.com/tutorials/terraform/aws-control-tower-aft

Lambda aft-account-request-processor fails to create a new account

Hi!

When trying to create a new account using AFT, the lambda aft-account-request-processor fails to create the new account with the error:

{
    "time_stamp": "2022-01-18 16:11:23,200",
    "log_level": "ERROR",
    "log_message": {
        "FILE": "aft_account_request_processor.py",
        "METHOD": "lambda_handler",
        "EXCEPTION": "An error occurred (ResourceNotFoundException) when calling the ProvisionProduct operation: No launch paths found for resource: prod-emcidcalm2e2s"
    }
}
Traceback (most recent call last):
  File "/var/task/aft_account_request_processor.py", line 209, in lambda_handler
    response = create_new_account(
  File "/var/task/aft_account_request_processor.py", line 87, in create_new_account
    response = client.provision_product(
  File "/opt/python/lib/python3.8/site-packages/botocore/client.py", line 386, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/python/lib/python3.8/site-packages/botocore/client.py", line 705, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the ProvisionProduct operation: No launch paths found for resource: prod-emcidcalm2e2s

After spending some time I solved the problem by adding the role arn:aws:iam::xxxx:role/AWSAFTExecution to the AWS Control Tower Account Factory Portfolio in my Control Tower Management Account. If this is the right solution, then it would be nice to add it to the documentation.

By the way, thank you for this powerful tool!
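For anyone hitting the same error: the portfolio association described above can also be captured in Terraform rather than done by hand in the console. A sketch, in which the portfolio ID and account ID are placeholders, might be:

```hcl
resource "aws_servicecatalog_principal_portfolio_association" "aft_execution" {
  # ID of the AWS Control Tower Account Factory Portfolio (placeholder)
  portfolio_id  = "port-abcd1234example"
  # AWSAFTExecution role in the AFT management account (placeholder account ID)
  principal_arn = "arn:aws:iam::123456789012:role/AWSAFTExecution"
}
```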

Provision existing AWS account

I'm considering using AFT to apply customizations to existing AWS accounts that are not even registered in Control Tower. I know that I can enroll existing account to Control Tower but it looks like provisioning existing AWS account to AFT is not officially supported.

I made a research and few tests and it appears that it can be done quite easy:

  1. Create an account request file and put the email address of the existing account into the AccountEmail parameter.
  2. Wait until the lambda function aft-account-request-processor fails with an error that the email is not valid (because it is used by an existing account). Analyzing the code of that lambda function shows the check happens just before the creation of the new account, which is basically a ProvisionProduct API call to Service Catalog. The creation request is also not removed from the DynamoDB tables when this failure occurs. That gave me the idea to enroll the AWS account in Control Tower manually in the AWS console, because I knew that works for emails of existing AWS accounts and that enrolling an account that way uses the same API.
  3. Enroll the AWS account in Control Tower. After successful registration, it generates the lifecycle event CreateManagedAccount, which is the event AFT waits for to continue.
  4. The process of provisioning the AWS account in AFT resumes, and the account is finally marked as AFT provisioned. During provisioning, the customizations are applied to the existing AWS account. It's also possible to re-invoke the customizations as documented here: https://docs.aws.amazon.com/controltower/latest/userguide/aft-account-customization-options.html#aft-re-invoke-customizations. This is what I wanted to achieve.

Please review this approach and comment on whether it is safe to do it that way. It would also be nice to know if there are any plans to officially support provisioning of existing AWS accounts anytime soon.

lambda aft-enable-cloudtrail fails

aft version: 1.0.11
feature flag: aft_feature_cloudtrail_data_events = true

After the first new account is provisioned and enrolled in Control Tower, the step function aft-feature-options fails at Enable Cloudtrail.
The lambda it invokes (aft-enable-cloudtrail) throws an error:

[ERROR] TrailNotFoundException: An error occurred (TrailNotFoundException) when calling the GetTrail operation: Unknown trail: aws-aft-CustomizationsCloudTrail for the user: xxxxxxxxxxx
Traceback (most recent call last):
  File "/var/task/aft_enable_cloudtrail.py", line 130, in lambda_handler
    if not trail_exists(ct_session):
  File "/var/task/aft_enable_cloudtrail.py", line 16, in trail_exists
    response = client.get_trail(Name=CLOUDTRAIL_TRAIL_NAME)
  File "/opt/python/lib/python3.8/site-packages/botocore/client.py", line 386, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/python/lib/python3.8/site-packages/botocore/client.py", line 705, in _make_api_call
    raise error_class(parsed_response, operation_name)

I believe this should throw an exception that is gracefully caught in the lambda Python code. You can see in the code a conditional for this CloudTrail trail being created. As it stands in version 1.0.11, if the CloudTrail trail doesn't exist, the boto3 request error isn't caught and the lambda just fails.

You can see that in an earlier version of AFT this was actually being handled correctly:

try:
    client = session.client('cloudtrail')
    logger.info('Checking for trail ' + CLOUDTRAIL_TRAIL_NAME)
    response = client.get_trail(Name=CLOUDTRAIL_TRAIL_NAME)
    logger.info("Trail already exists")
    return True
except client.exceptions.TrailNotFoundException as e:

So currently I don't think this functionality for enabling CloudTrail data events will work; correct me if I'm wrong.

invoke-aft-account-provisioning-framework lambda fails due to lack of pagination

Hi,

I first noticed the lambda function was erroring out with the message TypeError: can only concatenate str (not "NoneType") to str due to the function get_account_email_from_id returning None.

The exception is raised by the line below, because id is NoneType.

It looks like when the function tries to get the account email by ID, it ends up returning None because only a subset of accounts is returned by the list_accounts call. It should paginate through all available accounts before trying to match the ID; otherwise you're limited to around 20 accounts.

def get_org_accounts(session):
    try:
        client = session.client("organizations")
        logger.info("Listing accounts for the org")
        response = client.list_accounts()
        logger.info(response)
        return response["Accounts"]
    except Exception as e:
        message = {
            "FILE": __file__.split("/")[-1],
            "METHOD": inspect.stack()[0][3],
            "EXCEPTION": str(e),
        }
        logger.exception(message)
        raise

def get_account_email_from_id(ct_management_session, id):
    try:
        accounts = get_org_accounts(ct_management_session)
        logger.info("Getting account email for account id " + id)
        for a in accounts:
            if a["Id"] == id:
                email = a["Email"]
                logger.info("Account email: " + email)
                return email
        return None

Customizing CT or AFT Management Accounts

Is it possible to manage the Control Tower or AFT Management accounts in AFT? Those accounts are both created manually. If imported, would account customizations apply to those accounts as well?

Account provisioning framework stepfunction fails at aft-account-provisioning-framework-account-metadata-ssm lambda

I am unable to complete the account vending process because account provisioning errors out. The account ID xxxx... is my AFT management account ID, while yyyyy... is the account ID of the partially created new account.

I'm not sure what step I might have missed, but I did grant AWSAFTExecution access to the AWS Control Tower Account Factory portfolio in AWS Service Catalog for the AFT management account, so I know it's not that.

{
  "Error": "ClientError",
  "Cause": "{\"errorMessage\":\"An error occurred (AccessDenied) when calling the AssumeRole operation: 
   User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/AWSAFTAdmin/AWSAFT-Session is not authorized to perform: 
   sts:AssumeRole on resource: arn:aws:iam::yyyyyyyyyyyy:role/AWSAFTExecution\",\"errorType\":\"ClientError\",
   \"stackTrace\":[\"  File \\\"/var/task/aft_account_provisioning_framework_account_metadata_ssm.py\\\", line 87, 
   in lambda_handler\\n    target_account_creds = utils.get_assume_role_credentials(\\n\",\"  File \\\"/opt/python/
   lib/python3.8/site-packages/aft_common/aft_utils.py\\\", line 247, in get_assume_role_credentials\\n     
   assume_role_response = client.assume_role(**assume_role_params)\\n\",\"  File \\\"/opt/python/
   lib/python3.8/site-packages/botocore/client.py\\\", line 386, in _api_call\\n    
   return self._make_api_call(operation_name, kwargs)\\n\",\"  File \\\"/opt/python/lib/python3.8/site-packages/ 
   botocore/client.py\\\", line 705, in _make_api_call\
   \n    raise error_class(parsed_response, operation_name)\\n\"]}"
}

I'm unsure how to proceed. Should I try to login to the new account even though the pipeline hasn't really been created yet? Or was it a session credentials timeout issue and I should try to rerun the step function with the same input?

plan after initial deployment detects drift

I just deployed from main, and the initial deployment was fine. But a new plan shows drift detected despite my doing nothing at all. The strange thing is that the drift is actually non-existent.
These are the 3 resources with drift detected:

  • module.aft.module.aft_lambda_layer.aws_iam_role.codebuild
  • resource "aws_kms_key" "aft_log_key"
  • resource "aws_s3_bucket_policy" "aft_logging_bucket"

All the "drifts" are policy objects where one of the principals has been replaced by another of the same name, e.g.

~ Principal = {
                      ~ Service = [
                            - "cloudtrail.amazonaws.com",
                               "delivery.logs.amazonaws.com",
                             + "cloudtrail.amazonaws.com",
                               "vpc-flow-logs.amazonaws.com",
                            ]
                        }

Running terraform apply -refresh-only doesn't fix the issue; one or more of the policies remains in a "drift" state.

Customizations multiple regions

Hi team,

sometimes it's needed to deploy resources in multiple regions, however the provider is a jinja template file that is used during the build phase.

What's the best way to deploy customizations on multiple regions?

Thanks,
Francisco
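One common pattern for the multi-region question above is to define additional aliased providers in the customization and pin individual resources to them. A generic sketch, in which the region choices and resource are purely illustrative, is:

```hcl
provider "aws" {
  region = "us-east-1" # primary region
}

provider "aws" {
  alias  = "secondary"
  region = "eu-west-1"
}

# Resource deployed to the secondary region via the provider alias
resource "aws_sns_topic" "alerts_secondary" {
  provider = aws.secondary
  name     = "alerts"
}
```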

aws_backup_vault resource causes drift

Terraform sees drift in module.aft.module.aft_account_request_framework.aws_backup_vault.aft_controltower_backup_vault and module.aft.module.aft_account_request_framework.aws_backup_selection.aft_controltower_backup_selection due to recovery points changing. This may be preventable by adding ignore_changes to the lifecycle of the resource. Below is part of the output of terraform apply.

Terraform detected the following changes made outside of Terraform since the last "terraform apply":
  # module.aft.module.aft_account_request_framework.aws_backup_vault.aft_controltower_backup_vault has been changed
  ~ resource "aws_backup_vault" "aft_controltower_backup_vault" {
        id              = "aft-controltower-backup-vault"
        name            = "aft-controltower-backup-vault"
      ~ recovery_points = 856 -> 860
        tags            = {}
        # (3 unchanged attributes hidden)
    }
  # module.aft.module.aft_account_request_framework.aws_backup_selection.aft_controltower_backup_selection has been changed
  ~ resource "aws_backup_selection" "aft_controltower_backup_selection" {
        id            = "30f9741e-fd10-47e9-b181-23c4a5ac49c0"
        name          = "aft-controltower-backup-selection"
      + not_resources = []
        # (3 unchanged attributes hidden)
      + condition {
        }
    }

Drift is not detected for manual changes

I manually changed the OU from the AWS console for an AWS account created using the aft-account-request module.
The pipeline was rerun, and no change was detected, so nothing was applied.

"control_tower_parameters.ManagedOrganizationalUnit" (in "resources.instances.attributes.item") is drifting from the value of the AWS resource.

Delete default VPCs always fails

If you have Control Tower set to govern only specific regions, the lambda that invokes the delete-default-VPC script will fail with an unauthorized error when it attempts to run against a region that is not governed.

[ERROR] ClientError: An error occurred (UnauthorizedOperation) when calling the DescribeVpcs operation: You are not authorized to perform this operation.
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 338, in lambda_handler
    vpc = get_default_vpc(client)
  File "/var/task/lambda_function.py", line 37, in get_default_vpc
    response = client.describe_vpcs(
  File "/opt/python/lib/python3.8/site-packages/botocore/client.py", line 386, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/python/lib/python3.8/site-packages/botocore/client.py", line 705, in _make_api_call
    raise error_class(parsed_response, operation_name)

module.aft_lambda_layer.aws_iam_role.codebuild causes drift

It looks like the service principals in the assume_role_policy not being in alphabetical order causes drift. I believe that changing the order would fix this.

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

  # module.aft.module.aft_lambda_layer.aws_iam_role.codebuild has been changed
  ~ resource "aws_iam_role" "codebuild" {
      ~ assume_role_policy    = jsonencode(
          ~ {
              ~ Statement = [
                  ~ {
                      ~ Principal = {
                          ~ Service = [
                              - "events.amazonaws.com",
                                "codebuild.amazonaws.com",
                              + "events.amazonaws.com",
                            ]
                        }
                        # (2 unchanged elements hidden)
                    },
                ]
                # (1 unchanged element hidden)
            }
        )
        id                    = "python-layer-builder-aft-common-4vpk5j6o"
        name                  = "python-layer-builder-aft-common-4vpk5j6o"
        tags                  = {}
        # (8 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

[ERROR] Account ID Not Available was not found in the Organization

Hi,
AFT is deployed, and I have already used it to create new AWS accounts and enroll them in a specific OU.

But as of today, when I try to create a new account in another OU, I get an error and no account is created.

Everything starts fine:

aft-account-request-processor lambda log

{
"time_stamp": "2022-01-21 12:04:12,161",
"log_level": "INFO",
"log_message": "There are messages pending processing"
}

{
"time_stamp": "2022-01-21 12:04:12,163",
"log_level": "INFO",
"log_message": "Validating new CT Account Request"
}

{
"time_stamp": "2022-01-21 12:04:32,069",
"log_level": "INFO",
"log_message": "Creating new account leveraging parameters: [{'Key': 'SSOUserEmail', 'Value': 'my-sso-email@my_company.com'}, {'Key': 'AccountEmail', 'Value': 'my-email@my_company.com'}, {'Key': 'SSOUserFirstName', 'Value': 'Bob'}, {'Key': 'SSOUserLastName', 'Value': 'Patrick'}, {'Key': 'ManagedOrganizationalUnit', 'Value': 'My_OU (ou-my_ou_id)'}, {'Key': 'AccountName', 'Value': 'My-Account-Name'}]"
}

But here is the error
aft-invoke-aft-account-provisioning-framework lambda log

{
"time_stamp": "2022-01-21 12:06:51,301",
"log_level": "INFO",
"log_message": "Getting account email for account id Not Available"
}

{
    "time_stamp": "2022-01-21 12:06:51,579",
    "log_level": "ERROR",
    "log_message": {
        "FILE": "aft_invoke_aft_account_provisioning_framework.py",
        "METHOD": "lambda_handler",
        "EXCEPTION": "Account ID Not Available was not found in the Organization"
    }
}

Traceback (most recent call last):
File "/var/task/aft_invoke_aft_account_provisioning_framework.py", line 93, in lambda_handler
invoke_event = build_invoke_event(
File "/var/task/aft_invoke_aft_account_provisioning_framework.py", line 54, in build_invoke_event
account_email = utils.get_account_email_from_id(
File "/opt/python/lib/python3.8/site-packages/aft_common/aft_utils.py", line 551, in get_account_email_from_id
raise Exception("Account ID " + id + " was not found in the Organization")
Exception: Account ID Not Available was not found in the Organization

What could be the reason?

  • I double- and triple-checked the Organizational Unit ID

Unable to deploy AFT module; terraform plan/apply results in the error below. I'm using Terraform version 1.0.7.

terraform plan

│ Error: error archiving directory: error encountered during file walk: CreateFile .terraform\modules\aft\modules\aft-account-provisioning-framework\lambda\aft-account-provisioning-framework-validate-request\aft_account_provisioning_framework_validate_request.py: The system cannot find the path specified.

│ with module.aft.module.aft_account_provisioning_framework.data.archive_file.validate_request,
│ on .terraform\modules\aft\modules\aft-account-provisioning-framework\lambda.tf line 4, in data "archive_file" "validate_request":
│ 4: data "archive_file" "validate_request" {



│ Error: error archiving directory: error encountered during file walk: CreateFile .terraform\modules\aft\modules\aft-account-provisioning-framework\lambda\aft-account-provisioning-framework-get-account-info\aft_account_provisioning_framework_get_account_info.py: The system cannot find the path specified.

│ with module.aft.module.aft_account_provisioning_framework.data.archive_file.get_account_info,
│ on .terraform\modules\aft\modules\aft-account-provisioning-framework\lambda.tf line 34, in data "archive_file" "get_account_info":
│ 34: data "archive_file" "get_account_info" {



│ Error: error archiving directory: error encountered during file walk: CreateFile .terraform\modules\aft\modules\aft-account-provisioning-framework\lambda\aft-account-provisioning-framework-persist-metadata\aft_account_provisioning_framework_persist_metadata.py: The system cannot find the path specified.

│ with module.aft.module.aft_account_provisioning_framework.data.archive_file.persist_metadata,
│ on .terraform\modules\aft\modules\aft-account-provisioning-framework\lambda.tf line 125, in data "archive_file" "persist_metadata":
│ 125: data "archive_file" "persist_metadata" {

Account name change not working

I have deployed AFT recently and created a new account.
Unfortunately, I made a typo in AccountName and account_customizations_name, which resulted in the account being created under the wrong name and pointing to a directory that didn't exist.
Strangely enough, CodePipeline didn't complain about the non-existent folder and happily reported that everything went OK even though it didn't.

So I thought: easy fix, just update the account request and then run customizations again. But I was surprised that, after merging the change to the main branch, even though Terraform updated the data in the DynamoDB table, nothing else happened.
I mean, no action was triggered on Control Tower to change the name, and the name is still incorrect.
Another weird part is that when I ran Step Functions to run customizations, instead of looking in the aft_request DynamoDB table, it parses the aft_request_metadata table.
I have checked the values in that table and they are still invalid and contain old values. Because of that, the step function (or whatever happens next) does not trigger any action.

This means some parts of that pipeline didn't work, and some actions weren't triggered.

[Feature Request] Multiple Customizations in Account Request

When deploying a new account, or updating an existing account, I would like to provide a LIST of customizations in the request.

This would:

  1. Reduce the amount of copy and paste between customizations where I want the same code deployed to different account types
  2. Allow customers to produce specific customizations that can be applied in a more distinct way to different accounts
  3. Allow customers to apply specific modules to specific accounts in a clear format

Example:

module "prod_01" {
  source = "./modules/aft-account-request"

  control_tower_parameters = {
    AccountEmail = "[email protected]"
    AccountName  = "dev-aft-test-01"
    # Syntax for top-level OU
    ManagedOrganizationalUnit = "prod"
    # Syntax for nested OU
    ....
    ....

  account_customizations_list = [
      "production_generic",
      "module_just_for_this_account1",
      "module_just_for_this_account2"
     ]
}

[Question] about account id in input

Hi all,

According to the official documentation, we only need to create an OU (step 1) and an account for AFT management (step 2). After that, this Terraform module (step 3) will handle the remaining accounts like log, audit, etc. But in the example (https://github.com/aws-ia/terraform-aws-control_tower_account_factory/blob/main/examples/github%2Btf_oss/main.tf), I see that we need to input the account IDs of these accounts. So where can I get these account IDs? This makes me really confused. I hope someone can explain. Thanks.
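Not an authoritative answer, but for context: the log archive and audit accounts are the shared accounts that AWS Control Tower created when the landing zone was launched; their IDs can be looked up in the AWS Organizations console (or via `aws organizations list-accounts`). A minimal sketch of the module inputs in question, with placeholder IDs:

```hcl
module "aft" {
  source = "github.com/aws-ia/terraform-aws-control_tower_account_factory"

  # Placeholder IDs: these accounts already exist once the Control Tower
  # landing zone is launched (management, log archive, audit), plus the
  # AFT management account created for step 2.
  ct_management_account_id  = "111111111111"
  log_archive_account_id    = "222222222222"
  audit_account_id          = "333333333333"
  aft_management_account_id = "444444444444"

  ct_home_region              = "us-east-1"
  tf_backend_secondary_region = "us-west-2"
}
```

The module does not create these core accounts; it only needs to know where they are.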

Best regards,

Fail to deploy latest version 1.1.2

Hi,

the latest version 1.1.2 that was released 2 days ago fails to deploy from scratch. I've tested the previous version 1.1.1 and it works fine.

Here is the output:

$ terraform plan                          
╷
│ Error: Unsupported block type
│ 
│   on .terraform/modules/aft/modules/aft-account-request-framework/backup.tf line 26, in resource "aws_backup_selection" "aft_controltower_backup_selection":
│   26:   condition {}
│ 
│ Blocks of type "condition" are not expected here.
╵
╷
│ Error: Unsupported argument
│ 
│   on .terraform/modules/aft/modules/aft-account-request-framework/backup.tf line 27, in resource "aws_backup_selection" "aft_controltower_backup_selection":
│   27:   not_resources = []
│ 
│ An argument named "not_resources" is not expected here.

According to the comment on line 25, it seems to be caused by hashicorp/terraform-provider-aws#22595.

Francisco.

module.aft-account-request-framework.aws_backup_selection.aft_controltower_backup_selection causes drift

It looks like drift is caused by the aws_backup_selection resource not defining a blank condition block and an empty not_resources attribute. I believe that changing the order would fix this.

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

  # module.aft.module.aft_account_request_framework.aws_backup_selection.aft_controltower_backup_selection has changed
  ~ resource "aws_backup_selection" "aft_controltower_backup_selection" {
        id            = "abcd1234-ab12-ab12-ab12-abcdef123456"
        name          = "aft-controltower-backup-selection"
      + not_resources = []
        # (3 unchanged attributes hidden)

      + condition {
        }
    }


Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes.   

───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 


Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:
# module.aft.module.aft_account_request_framework.aws_backup_selection.aft_controltower_backup_selection must be replaced
-/+ resource "aws_backup_selection" "aft_controltower_backup_selection" {
      ~ id            = "abcd1234-ab12-ab12-ab12-abcdef123456" -> (known after apply)
        name          = "aft-controltower-backup-selection"
      - not_resources = [] -> null
        # (3 unchanged attributes hidden)

      - condition { # forces replacement
        }
    }

create_customizations doesn't exist

In https://github.com/aws-ia/terraform-aws-control_tower_account_factory/blob/main/sources/aft-customizations-repos/aft-account-request/README.md it states:

create_customizations must be set to true if you intend to customize the account after provisioning. Refer to <TODO - insert link Account Customization> documentation for more information.

However when setting this value, we get the following error:

│ Error: Unsupported argument
│
│   on customization_test.tf line 4, in module "customization_test":
│    4:   create_customizations = true
│
│ An argument named "create_customizations" is not expected here.
╵
module "customization_test" {
  source = "./modules/aft-account-request"

  create_customizations = true

  control_tower_parameters = {
    AccountEmail = "[email protected]"
    AccountName  = "customizationtest"
    ManagedOrganizationalUnit = "XXXX"
    SSOUserEmail     = "XXX"
    SSOUserFirstName = "XXX"
    SSOUserLastName  = "XXX"
  }
...
}

aft-codebuild-customizations-role needs GetBucketLocation permissions.

The aft-codebuild-customizations-role policy tries to invoke GetBucketLocation when the account customizations pipelines run resulting in an AccessDenied CloudTrail entry. While this does not directly affect the success of the pipelines, it does cause an alarm if you have metric filters on AccessDenied events (as recommended by AWS CIS benchmark)

This is the statement which needs the permission added:

Below is an example of the AccessDenied entry in CloudTrail:

    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "*********************:AWSCodeBuild",
        "arn": "arn:aws:sts::************:assumed-role/aft-codebuild-customizations-role/AWSCodeBuild",
        "accountId": "************",
        "accessKeyId": "********************",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "*********************",
                "arn": "arn:aws:iam::************:role/aft-codebuild-customizations-role",
                "accountId": "************",
                "userName": "aft-codebuild-customizations-role"
            },
            "webIdFederationData": {},
            "attributes": {
                "creationDate": "2022-01-30T20:12:07Z",
                "mfaAuthenticated": "false"
            }
        }
    },
    "eventTime": "2022-01-30T20:12:07Z",
    "eventSource": "s3.amazonaws.com",
    "eventName": "GetBucketLocation",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "10.246.252.159",
    "userAgent": "[aws-internal/3 aws-sdk-java/1.12.127 Linux/5.4.156-94.273.amzn2int.x86_64 OpenJDK_64-Bit_Server_VM/25.312-b07 java/1.8.0_312 scala/2.11.8 kotlin/1.3.72 vendor/Oracle_Corporation cfg/retry-mode/standard]",
    "errorCode": "AccessDenied",
    "errorMessage": "Access Denied",
    "requestParameters": {
        "bucketName": "aft-customizations-pipeline-************",
        "location": "",
        "Host": "aft-customizations-pipeline-************.s3.us-east-1.amazonaws.com"
    },
    "responseElements": null,
    "additionalEventData": {
        "SignatureVersion": "SigV4",
        "CipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
        "bytesTransferredIn": 0,
        "AuthenticationMethod": "AuthHeader",
        "x-amz-id-2": "Yamvc2iEGNyJfcB4FdmTCJPJ5dlACfnUA5AaKNJQXpQaWQ15tNGARooUzVxliT803W8gHV6A9C4=",
        "bytesTransferredOut": 243
    },
    "requestID": "PV28G7RG98QGSMJS",
    "eventID": "ecf8988d-c706-4d05-8523-6bcce5612eea",
    "readOnly": true,
    "resources": [
        {
            "accountId": "************",
            "type": "AWS::S3::Bucket",
            "ARN": "arn:aws:s3:::aft-customizations-pipeline-************"
        }
    ],
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "recipientAccountId": "************",
    "vpcEndpointId": "vpce-00dc1369",
    "eventCategory": "Management"
}

Step Function Failure Results In Account Customizations Never Running

I have found that if there is a failure while a new account is being set up, and this failure happens before the account customizations pipeline is created, it becomes impossible to customize the account.

For instance, when the default-vpc-delete step failed on my new account setup, AFT never got to creating the customizations pipeline. This means I can't even run the manual step function "aft-invoke-customizations", as it fails to start the non-existent customizations pipeline. Below is the error you get from "aft-invoke-customizations":

{ "time_stamp": "2021-12-06 22:54:06,118", "log_level": "ERROR", "log_message": { "FILE": "lambda_function.py", "METHOD": "pipeline_is_running", "EXCEPTION": "can only concatenate str (not \"NoneType\") to str" } }

Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 42, in pipeline_is_running
    logger.info("Getting pipeline executions for " + name)
TypeError: can only concatenate str (not "NoneType") to str

On a related note, the lack of a customization pipeline also prevents you from customizing existing accounts that were not created via AFT.

Cannot use module if the Control Tower & AFT accounts are in the same region

When installing AFT for the first time, if you terraform apply with the configuration options:

ct_home_region              = "us-east-1"
tf_backend_secondary_region = "us-east-1"

the apply fails because the regions are the same, and the regions specified in resources within the module conflict with rules from the AWS provider. This module should not fail when these regions are the same, or at least there should be some way to handle this.

╷
│ Error: error creating Lambda Event Source Mapping (arn:aws:dynamodb:us-east-1:000000000000:table/aft-request/stream/2022-01-27T00:36:16.589): InvalidParameterValueException: Cannot access stream arn:aws:dynamodb:us-east-1:000000000000:table/aft-request/stream/2022-01-27T00:36:16.589. Please ensure the role can perform the GetRecords, GetShardIterator, DescribeStream, ListShards, and ListStreams Actions on your stream in IAM.
│ {
│   RespMetadata: {
│     StatusCode: 400,
│     RequestID: "26e643d1-bd4d-4f74-9302-d1c455b9f714"
│   },
│   Message_: "Cannot access stream arn:aws:dynamodb:us-east-1:000000000000:table/aft-request/stream/2022-01-27T00:36:16.589. Please ensure the role can perform the GetRecords, GetShardIterator, DescribeStream, ListShards, and ListStreams Actions on your stream in IAM.",
│   Type: "User"
│ }
│ 
│   with module.aft.module.aft_account_request_framework.aws_lambda_event_source_mapping.aft_account_request_action_trigger,
│   on .terraform/modules/aft/modules/aft-account-request-framework/lambda.tf line 75, in resource "aws_lambda_event_source_mapping" "aft_account_request_action_trigger":
│   75: resource "aws_lambda_event_source_mapping" "aft_account_request_action_trigger" {
│ 
╵
╷
│ Error: error initially creating DynamoDB Table (aft-backend-000000000000) replicas: error creating DynamoDB Table (aft-backend-000000000000) replica (us-east-1): ValidationException: Cannot add, delete, or update the local region through ReplicaUpdates. Use CreateTable, DeleteTable, or UpdateTable as required.
│       status code: 400, request id: EF57U9342DNOBCGGLVKI0JSRIJVV4KQNSO5AEMVJF66Q9ASUAAJG
│ 
│   with module.aft.module.aft_backend.aws_dynamodb_table.lock-table,
│   on .terraform/modules/aft/modules/aft-backend/main.tf line 221, in resource "aws_dynamodb_table" "lock-table":
│  221: resource "aws_dynamodb_table" "lock-table" {
│ 
╵
╷
│ Error: error creating KMS Alias (alias/aft-backend-000000000000-kms-key): AlreadyExistsException: An alias with the name arn:aws:kms:us-east-1:000000000000:alias/aft-backend-000000000000-kms-key already exists
│ 
│   with module.aft.module.aft_backend.aws_kms_alias.encrypt-alias-secondary-region,
│   on .terraform/modules/aft/modules/aft-backend/main.tf line 275, in resource "aws_kms_alias" "encrypt-alias-secondary-region":
│  275: resource "aws_kms_alias" "encrypt-alias-secondary-region" {
│ 
╵
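A workaround sketch, assuming a second region is acceptable for the Terraform backend: keep tf_backend_secondary_region different from ct_home_region, since the module creates DynamoDB table replicas and a second KMS alias in that secondary region (the region values here are examples):

```hcl
module "aft" {
  source = "github.com/aws-ia/terraform-aws-control_tower_account_factory"

  # These two must differ until the module supports a single-region setup:
  # the backend replica table and secondary KMS alias are created in
  # tf_backend_secondary_region and collide with the primary when equal.
  ct_home_region              = "us-east-1"
  tf_backend_secondary_region = "us-west-2"

  # ... remaining required inputs ...
}
```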

NEW: pass aft account request "custom_fields" metadata to customization pipeline

Background:

  • In the customization pipeline, I can use the pre and post API helpers to perform actions such as populating the tfvars.
  • An account request can contain additional metadata stored in custom_fields; this information propagates through the step function but is not available inside the CodeBuild environment

Expected outcome:

I want to access this metadata, probably in the form of environment variables, when running the API helpers.
One way to achieve this is by adding the metadata as CodeBuild environment variables and keeping it up to date as I modify the account request.

Provisioning halted after aft-account-processor is invoked

The AFT process does not appear to be functioning correctly: after updating the request repository, the aft-account-processor is invoked and runs, but no other actions or Lambdas are triggered. It appears to pull the request from SQS, but no other changes in state are made. I've tried deploying AFT to a few different sub-accounts, but none function. What is meant to trigger after aft-account-processor?

"Cannot have more than 1 builds in queue for the account" with newly deployed AFT account.

With a newly deployed AFT account, when you attempt to execute the customizations pipeline via step functions, if there is more than one account pipeline, only one pipeline will start executing and the others will fail with "Cannot have more than 1 builds in queue for the account".

I needed to run an EC2 instance for ~20 minutes and receive the "Your Request For Accessing AWS Resources Has Been Validated" email before more than one pipeline could be started concurrently without exception.

This should be called out in the documentation, or as part of the AFT account deployment an instance should be run to cause the account validation to occur.

Feature: Put full account request in vended account's SSM parameters

The 1.1.0 feature to add custom_fields to Parameter Store is great, but it would be good to take it one step further and add the full request to SSM. This would allow doing things like adding the tags, which can't be accessed using the role assumed during the account customization process (the role in the vended account). Having a path of /aft/account-request that contains all of the attributes in the aft-account-request module solves this.

Failing to reference existing S3 Bucket while creating lambda layer

We are following the tutorial

https://learn.hashicorp.com/tutorials/terraform/aws-control-tower-aft#create-aws-aft-organizational-unit-and-account

We are logged in a non-root account with Administrator Access, inside a brand new account with Control Tower and AFT OU.

We are facing the following error while provisioning the AFT account.

image

After looking inside the lambda module, we find that the bucket is already created.

image

We already destroyed and recreated AFT but the problem persists.

Would you like to take a look?

Archive File error

Hi Team,

We are continuously getting this error:
terraform plan

│ Error: error archiving directory: error encountered during file walk: CreateFile ..\aft-codecommit-solution\aft-deployment-new\modules\aing-framework\lambda\aft-account-provisioning-framework-validate-request\schema\request_schema.json: The system cannot find the path speci

│ with module.aft.module.aft_account_provisioning_framework.data.archive_file.validate_request,
│ on ..\aft-codecommit-solution\aft-deployment-new\modules\aft-account-provisioning-framework\lambda.tf line 4, in data "archive_file" "


│ Error: error archiving directory: error encountered during file walk: CreateFile ..\aft-codecommit-solution\aft-deployment-new\modules\aft-account-provisioning-framework\lambda\aft-account-provisioning-framework-create-role\iam\trust-policies\aftmanagement.tpl: The system cannot find the path specified.

│ with module.aft.module.aft_account_provisioning_framework.data.archive_file.create_role,
│ on ..\aft-codecommit-solution\aft-deployment-new\modules\aft-account-provisioning-framework\lambda.tf line 64, in data "archive_file"
"create_role":
│ 64: data "archive_file" "create_role" {

[Question] OU Creation/Update + Enable Guardrails

Hi,
is it possible, or already planned, to use this module to create and/or enroll new Organizational Units through AWS Control Tower?
The only way I see to automate this with Terraform is to create the OU independently and then enroll it via the AWS Console.
Moreover, is it also planned to enable guardrails on specific OUs via this module?

Thank you

Pipelines not running automatically after committing to any of the repositories

I'm using GitHub as the VCS with public repositories for testing. Whenever I push commits to the main branch of any of the following repositories:

  • Account requests
  • AFT account provisioning customizations
  • Global customizations
  • Account customizations

I need to go to Developer Tools > CodePipeline > Pipelines, select the pipeline (for example: ct-aft-account-request), and then click on Release change to trigger the pipeline with the most recent change.

AFAIK, a commit on the main branch should trigger a deployment, but that's not working.

The CodeStar connection is active and the user who created it has write/read permissions on the repository (but is not the owner).

How can I troubleshoot this?

Account Request "Custom Fields" should be written to aft-request-metadata table,

In the documentation it is stated that:
The parameter custom_fields captures custom keys and values. You may want to collect additional metadata that can be logged with the account request. This metadata can trigger additional processing, either during provisioning or when updating an account

However, the data is not written to the "aft-request-metadata" table; instead it is written to the "aft-request" table.

This is a twofold problem:

  1. It is inconsistent with the other metadata that is written to the "aft-request-metadata" table
  2. It makes it harder to use as intended, since the partition key of the aft-request table is NOT the account ID, and therefore cannot easily be used in the account customization pipelines, giving the customer a second-class experience compared to the built-in pipeline.

The codebuild pipelines are given the ACCOUNT ID as an environment variable, and this is already used by the buildspec to get data from the corresponding item from the metadata data table.

e.g. CUSTOMIZATION=$(aws dynamodb get-item --table-name aft-request-metadata --key "{\"id\": {\"S\": \"$VENDED_ACCOUNT_ID\"}}" --attributes-to-get "account_customizations_name" | jq --raw-output ".Item.account_customizations_name.S")

However, if I want to run a script or a Terraform local-exec to grab some useful metadata as part of my account/global customizations (as per the documentation), I first have to query the account metadata table, extract the email address, then pass that into another DynamoDB call to the other table to get the custom fields.

Please can we add the custom data to the metadata DynamoDB table, so that I can just use $VENDED_ACCOUNT_ID to get ALL the metadata I need for my customizations.

Unable to complete the first AFT terraform apply

I have not been able to deploy AFT in a brand new set of accounts and Control Tower. As I am not clear on what this module is doing, I'm also not sure how to troubleshoot this. I am consistently getting this error:

module.aft.module.aft_lambda_layer.aws_lambda_layer_version.layer_version: Creating...
╷
│ Error: Error creating lambda layer: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
│ {
│   RespMetadata: {
│     StatusCode: 400,
│     RequestID: "13388f79-ae01-497f-b0b1-a61fcff75d55"
│   },
│   Message_: "Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.",
│   Type: "User"
│ }
│ 
│   with module.aft.module.aft_lambda_layer.aws_lambda_layer_version.layer_version,
│   on .terraform/modules/aft/modules/aft-lambda-layer/main.tf line 24, in resource "aws_lambda_layer_version" "layer_version":
│   24: resource "aws_lambda_layer_version" "layer_version" {
│ 
╵

creating multiple accounts : stack named production-aft already exists

Hello, when I tried creating multiple accounts at once, only one account was created, and for the rest of the accounts I got the following error from /aws/lambda/aft-account-request-processor:

{
"time_stamp": "2021-12-27 15:27:18,181",
"log_level": "ERROR",
"log_message": {
"FILE": "lambda_function.py",
"METHOD": "create_new_account",
"EXCEPTION": "An error occurred (InvalidParametersException) when calling the ProvisionProduct operation: A stack named production-aft already exists."
}
}

I also couldn't find the stack that is causing the error so I could delete it. How can I proceed to resolve this error? Should I delete some stacks in AWS Service Catalog from the AFT management account?

Thank you in advance,

Define customizations in multiple regions

It's sometimes necessary to deploy resources in multiple regions for new accounts (e.g. enabling GuardDuty, customizing VPCs, etc.). The README says a providers.tf file can be added to define new providers, but those providers must provide a role to assume in the target account in order to deploy those resources. I tried adding new providers to aft-providers.jinja, e.g.

provider "aws" {
  alias  = "us-east-2"
  region = "us-east-2"
  assume_role {
    role_arn    = "{{ target_admin_role_arn }}"
  }
  default_tags {
    tags = {
      managed_by                  = "AFT"
    }
  }
}

and then created a module for my TF resources and invoked the module in a root-level TF file like this:

module "conf" {
  source  = "./modules/my_module"

  providers = {
    aws = "aws",
    aws.us-east-1 = "aws.us-east-1",
    aws.us-east-2 = "aws.us-east-2"
  }

  enable  = true
}

I then invoke the customization step function to apply the new global customization, and the step function finishes successfully but I am not seeing the new resources in the additional regions.

Is modifying the .jinja file the best way to manage deployment into different regions? Where should I be looking for log files to understand what is or is not happening as part of the global customization?
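One thing worth checking (a sketch, not a confirmed diagnosis): in Terraform 0.13+ the providers map must use provider references rather than quoted strings, and the child module has to declare the aliased providers via configuration_aliases; otherwise the aliased configurations are not passed through. The module path and alias names below mirror the example above and are assumptions:

```hcl
# modules/my_module/versions.tf
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Declare the aliases this module expects to receive from its caller.
      configuration_aliases = [aws.us-east-1, aws.us-east-2]
    }
  }
}
```

In the calling file, the providers map entries should then be unquoted references, e.g. `aws.us-east-2 = aws.us-east-2`.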

How to use Bitbucket as VCS

When using Bitbucket as the VCS, the AFT deployment does not create any CodeStar project. Are there any specific steps to establish a connection with Bitbucket?

Using `custom_fields` values in account customisations

I would like to use values from custom_fields to pass parameters to Terraform in account customisations. Could you please provide an example of how to access those values in account customisations?

I tried digging them out of the DynamoDB aft-request table in api_helpers/pre-api-helpers.sh, but CodeBuild assumes a role in the target account before that's executed, so it doesn't work.
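Not an official answer, but one avenue to try: since AFT 1.1.0, custom_fields values are reportedly written to SSM Parameter Store in the vended account under /aft/account-request/custom-fields/, which the customization Terraform can read directly without touching DynamoDB. The key name below is a hypothetical example:

```hcl
data "aws_ssm_parameter" "my_field" {
  # "my_key" is a hypothetical key defined in the account request's
  # custom_fields map; AFT publishes it under this path in the vended account.
  name = "/aft/account-request/custom-fields/my_key"
}

locals {
  # The data source marks the value sensitive; use it like any other string.
  my_field_value = data.aws_ssm_parameter.my_field.value
}
```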
