cloudposse / terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource

Home Page: https://cloudposse.com/accelerate

License: Apache License 2.0

Makefile 9.64% HCL 82.74% Go 7.62%
terraform terraform-module ecs fargate container-definition task docker aws hcl2

terraform-aws-ecs-container-definition's Introduction

terraform-aws-ecs-container-definition


Terraform module to generate well-formed JSON documents that are passed to the aws_ecs_task_definition Terraform resource as container definitions.

Tip

πŸ‘½ Use Atmos with Terraform

Cloud Posse uses atmos to easily orchestrate multiple environments using Terraform.
It works with GitHub Actions, Atlantis, or Spacelift.

Watch a demo of using Atmos with Terraform to manage infrastructure, from our Quick Start tutorial.

Usage

This module is output-only: it generates container-definition JSON that is consumed as a parameter by Terraform resources or other modules.

Caution: This module, unlike nearly all other Cloud Posse Terraform modules, does not use terraform-null-label. Furthermore, it has an input named environment which has a completely different meaning than the one in terraform-null-label. Do not call this module with the conventional context = module.this.context. See the documentation below for the usage of environment.

For a complete example with automated tests, see examples/complete, which is tested with bats and Terratest.

module "container_definition" {
  source = "cloudposse/ecs-container-definition/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  container_name  = "geodesic"
  container_image = "cloudposse/geodesic"
}

The output of this module can then be used with one of our other modules.

module "ecs_alb_service_task" {
  source = "cloudposse/ecs-alb-service-task/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  # ...
  container_definition_json = module.container_definition.json_map_encoded_list
  # ...
}

Important

In Cloud Posse's examples, we avoid pinning modules to specific versions to prevent discrepancies between the documentation and the latest released versions. However, for your own projects, we strongly advise pinning each module to the exact version you're using. This practice ensures the stability of your infrastructure. Additionally, we recommend implementing a systematic approach for updating versions to avoid unexpected changes.
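For example, with the module pinned to an exact version (the version number below is illustrative, not a recommendation of a specific release):

```hcl
module "container_definition" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "0.61.1" # illustrative; pin to the exact release you have tested

  container_name  = "geodesic"
  container_image = "cloudposse/geodesic"
}
```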

Makefile Targets

Available targets:

  help                                Help screen
  help/all                            Display help for all targets
  help/short                          This help short screen
  lint                                Lint terraform code

Requirements

Name Version
terraform >= 1.3.0
local >= 1.2

Providers

No providers.

Modules

No modules.

Resources

No resources.

Inputs

Name Description Type Default Required
command The command that is passed to the container list(string) null no
container_cpu The number of CPU units to reserve for the container. This is optional for tasks using the Fargate launch type; the total container_cpu of all containers in a task must be lower than the task-level cpu value number 0 no
container_definition Container definition overrides which allows for extra keys or overriding existing keys.
object({
command = optional(list(string))
cpu = optional(number)
dependsOn = optional(list(object({
condition = string
containerName = string
})))
disableNetworking = optional(bool)
dnsSearchDomains = optional(list(string))
dnsServers = optional(list(string))
dockerLabels = optional(map(string))
dockerSecurityOptions = optional(list(string))
entryPoint = optional(list(string))
environment = optional(list(object({
name = string
value = string
})))
environmentFiles = optional(list(object({
type = string
value = string
})))
essential = optional(bool)
extraHosts = optional(list(object({
hostname = string
ipAddress = string
})))
firelensConfiguration = optional(object({
options = optional(map(string))
type = string
}))
healthCheck = optional(object({
command = list(string)
interval = optional(number)
retries = optional(number)
startPeriod = optional(number)
timeout = optional(number)
}))
hostname = optional(string)
image = optional(string)
interactive = optional(bool)
links = optional(list(string))
linuxParameters = optional(object({
capabilities = optional(object({
add = optional(list(string))
drop = optional(list(string))
}))
devices = optional(list(object({
containerPath = string
hostPath = string
permissions = optional(list(string))
})))
initProcessEnabled = optional(bool)
maxSwap = optional(number)
sharedMemorySize = optional(number)
swappiness = optional(number)
tmpfs = optional(list(object({
containerPath = string
mountOptions = optional(list(string))
size = number
})))
}))
logConfiguration = optional(object({
logDriver = string
options = optional(map(string))
secretOptions = optional(list(object({
name = string
valueFrom = string
})))
}))
memory = optional(number)
memoryReservation = optional(number)
mountPoints = optional(list(object({
containerPath = optional(string)
readOnly = optional(bool)
sourceVolume = optional(string)
})))
name = optional(string)
portMappings = optional(list(object({
containerPort = number
hostPort = optional(number)
protocol = optional(string)
name = optional(string)
appProtocol = optional(string)
})))
privileged = optional(bool)
pseudoTerminal = optional(bool)
readonlyRootFilesystem = optional(bool)
repositoryCredentials = optional(object({
credentialsParameter = string
}))
resourceRequirements = optional(list(object({
type = string
value = string
})))
secrets = optional(list(object({
name = string
valueFrom = string
})))
startTimeout = optional(number)
stopTimeout = optional(number)
systemControls = optional(list(object({
namespace = string
value = string
})))
ulimits = optional(list(object({
hardLimit = number
name = string
softLimit = number
})))
user = optional(string)
volumesFrom = optional(list(object({
readOnly = optional(bool)
sourceContainer = string
})))
workingDirectory = optional(string)
})
{} no
container_depends_on The dependencies defined for container startup and shutdown. A container can have multiple dependencies. A dependency defined for container startup is reversed for container shutdown. The condition can be one of START, COMPLETE, SUCCESS or HEALTHY
list(object({
condition = string
containerName = string
}))
null no
container_image The image used to start the container. Images in the Docker Hub registry are available by default string n/a yes
container_memory The amount of memory (in MiB) to allow the container to use. This is a hard limit; if the container attempts to exceed container_memory, the container is killed. This field is optional for the Fargate launch type, and the total container_memory of all containers in a task must be lower than the task-level memory value number null no
container_memory_reservation The amount of memory (in MiB) to reserve for the container. If the container needs to exceed this threshold, it can do so up to the configured container_memory hard limit number null no
container_name The name of the container. Up to 255 characters ([a-z], [A-Z], [0-9], -, _ allowed) string n/a yes
disable_networking When this parameter is true, networking is disabled within the container. bool null no
dns_search_domains Container DNS search domains. A list of DNS search domains that are presented to the container list(string) null no
dns_servers Container DNS servers. This is a list of strings specifying the IP addresses of the DNS servers list(string) null no
docker_labels A map of Docker labels to apply to the container map(string) null no
docker_security_options A list of strings to provide custom labels for SELinux and AppArmor multi-level security systems. list(string) null no
entrypoint The entry point that is passed to the container list(string) null no
environment The environment variables to pass to the container. This is a list of maps. map_environment overrides environment
list(object({
name = string
value = string
}))
null no
environment_files One or more files containing the environment variables to pass to the container. This maps to the --env-file option to docker run. The file must be hosted in Amazon S3. This option is only available to tasks using the EC2 launch type. This is a list of maps
list(object({
type = string
value = string
}))
null no
essential Determines whether all other containers in a task are stopped if this container fails or stops for any reason. Due to how Terraform type-casts booleans in JSON, it is required to double-quote this value bool true no
extra_hosts A list of hostnames and IP address mappings to append to the /etc/hosts file on the container. This is a list of maps
list(object({
hostname = string
ipAddress = string
}))
null no
firelens_configuration The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more details, see https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_FirelensConfiguration.html
object({
options = optional(map(string))
type = string
})
null no
healthcheck A map containing command (string), timeout, interval (duration in seconds), retries (1-10, number of times to retry before marking container unhealthy), and startPeriod (0-300, optional grace period to wait, in seconds, before failed healthchecks count toward retries)
object({
command = list(string)
interval = optional(number)
retries = optional(number)
startPeriod = optional(number)
timeout = optional(number)
})
null no
hostname The hostname to use for your container. string null no
interactive When this parameter is true, this allows you to deploy containerized applications that require stdin or a tty to be allocated. bool null no
links List of container names this container can communicate with without port mappings list(string) null no
linux_parameters Linux-specific modifications that are applied to the container, such as Linux kernel capabilities. For more details, see https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_LinuxParameters.html
object({
capabilities = optional(object({
add = optional(list(string))
drop = optional(list(string))
}))
devices = optional(list(object({
containerPath = string
hostPath = string
permissions = optional(list(string))
})))
initProcessEnabled = optional(bool)
maxSwap = optional(number)
sharedMemorySize = optional(number)
swappiness = optional(number)
tmpfs = optional(list(object({
containerPath = string
mountOptions = optional(list(string))
size = number
})))
})
null no
log_configuration Log configuration options to send to a custom log driver for the container. For more details, see https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_LogConfiguration.html
object({
logDriver = string
options = optional(map(string))
secretOptions = optional(list(object({
name = string
valueFrom = string
})))
})
null no
map_environment The environment variables to pass to the container. This is a map of string: {key: value}. map_environment overrides environment map(string) null no
map_secrets The secrets variables to pass to the container. This is a map of string: {key: value}. map_secrets overrides secrets map(string) null no
mount_points Container mount points. This is a list of maps, where each map should contain containerPath, sourceVolume and readOnly
list(object({
containerPath = optional(string)
readOnly = optional(bool)
sourceVolume = optional(string)
}))
null no
port_mappings The port mappings to configure for the container. This is a list of maps. Each map should contain "containerPort", "hostPort", and "protocol", where "protocol" is one of "tcp" or "udp". If using containers in a task with the awsvpc or host network mode, the hostPort can either be left blank or set to the same value as the containerPort
list(object({
containerPort = number
hostPort = optional(number)
protocol = optional(string)
name = optional(string)
appProtocol = optional(string)
}))
null no
privileged When this variable is true, the container is given elevated privileges on the host container instance (similar to the root user). This parameter is not supported for Windows containers or tasks using the Fargate launch type. bool null no
pseudo_terminal When this parameter is true, a TTY is allocated. bool null no
readonly_root_filesystem Determines whether a container is given read-only access to its root filesystem. Due to how Terraform type-casts booleans in JSON, it is required to double-quote this value bool false no
repository_credentials Container repository credentials; required when using a private repo. This map currently supports a single key; "credentialsParameter", which should be the ARN of a Secrets Manager's secret holding the credentials
object({
credentialsParameter = string
})
null no
resource_requirements The type and amount of a resource to assign to a container. The only supported resource is a GPU.
list(object({
type = string
value = string
}))
null no
secrets The secrets to pass to the container. This is a list of maps
list(object({
name = string
valueFrom = string
}))
null no
start_timeout Time duration (in seconds) to wait before giving up on resolving dependencies for a container number null no
stop_timeout Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own number null no
system_controls A list of namespaced kernel parameters to set in the container, mapping to the --sysctl option to docker run. This is a list of maps: { namespace = "", value = ""}
list(object({
namespace = string
value = string
}))
null no
ulimits Container ulimit settings. This is a list of maps, where each map should contain "name", "hardLimit" and "softLimit"
list(object({
hardLimit = number
name = string
softLimit = number
}))
null no
user The user to run as inside the container. Can be any of these formats: user, user:group, uid, uid:gid, user:gid, uid:group. The default (null) will use the container's configured USER directive or root if not set. string null no
volumes_from A list of VolumesFrom maps which contain "sourceContainer" (name of the container that has the volumes to mount) and "readOnly" (whether the container can write to the volume)
list(object({
readOnly = optional(bool)
sourceContainer = string
}))
null no
working_directory The working directory to run commands inside the container string null no
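As the table above notes, map_environment overrides environment and map_secrets overrides secrets. A minimal sketch of the map form (the container name, image, and ARN below are hypothetical):

```hcl
module "container_definition" {
  source = "cloudposse/ecs-container-definition/aws"
  # version = "x.x.x"

  container_name  = "app"              # hypothetical
  container_image = "nginx:latest"     # hypothetical

  # Map form: keys become variable names, values become values.
  # Overrides any `environment` list also passed to the module.
  map_environment = {
    APP_ENV = "production"
    PORT    = "8080"
  }

  # Each value is the ARN of the SSM parameter or Secrets Manager
  # secret holding the value.
  map_secrets = {
    APP_KEY = "arn:aws:ssm:us-east-1:111111111111:parameter/app_key" # hypothetical ARN
  }
}
```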

Outputs

Name Description
json_map_encoded JSON string encoded container definitions for use with other terraform resources such as aws_ecs_task_definition
json_map_encoded_list JSON string encoded list of container definitions for use with other terraform resources such as aws_ecs_task_definition
json_map_object JSON map encoded container definition
sensitive_json_map_encoded JSON string encoded container definitions for use with other terraform resources such as aws_ecs_task_definition (sensitive)
sensitive_json_map_encoded_list JSON string encoded list of container definitions for use with other terraform resources such as aws_ecs_task_definition (sensitive)
sensitive_json_map_object JSON map encoded container definition (sensitive)

Related Projects

Check out these related projects.

Tip

Use Terraform Reference Architectures for AWS

Use Cloud Posse's ready-to-go terraform architecture blueprints for AWS to get up and running quickly.

βœ… We build it with you.
βœ… You own everything.
βœ… Your team wins.

Request Quote

πŸ“š Learn More

Cloud Posse is the leading DevOps Accelerator for funded startups and enterprises.

Your team can operate like a pro today.

Ensure that your team succeeds by using Cloud Posse's proven process and turnkey blueprints. Plus, we stick around until you succeed.

Day-0: Your Foundation for Success

  • Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
  • Deployment Strategy. Adopt a proven deployment strategy with GitHub Actions, enabling automated, repeatable, and reliable software releases.
  • Site Reliability Engineering. Gain total visibility into your applications and services with Datadog, ensuring high availability and performance.
  • Security Baseline. Establish a secure environment from the start, with built-in governance, accountability, and comprehensive audit logs, safeguarding your operations.
  • GitOps. Empower your team to manage infrastructure changes confidently and efficiently through Pull Requests, leveraging the full power of GitHub Actions.

Request Quote

Day-2: Your Operational Mastery

  • Training. Equip your team with the knowledge and skills to confidently manage the infrastructure, ensuring long-term success and self-sufficiency.
  • Support. Benefit from a seamless communication over Slack with our experts, ensuring you have the support you need, whenever you need it.
  • Troubleshooting. Access expert assistance to quickly resolve any operational challenges, minimizing downtime and maintaining business continuity.
  • Code Reviews. Enhance your team’s code quality with our expert feedback, fostering continuous improvement and collaboration.
  • Bug Fixes. Rely on our team to troubleshoot and resolve any issues, ensuring your systems run smoothly.
  • Migration Assistance. Accelerate your migration process with our dedicated support, minimizing disruption and speeding up time-to-value.
  • Customer Workshops. Engage with our team in weekly workshops, gaining insights and strategies to continuously improve and innovate.

Request Quote

✨ Contributing

This project is under active development, and we encourage contributions from our community.

Many thanks to our outstanding contributors:

For πŸ› bug reports & feature requests, please use the issue tracker.

In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.

  1. Review our Code of Conduct and Contributor Guidelines.
  2. Fork the repo on GitHub
  3. Clone the project to your own machine
  4. Commit changes to your own branch
  5. Push your work back up to your fork
  6. Submit a Pull Request so that we can review your changes

NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!

🌎 Slack Community

Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to roll out and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.

πŸ“° Newsletter

Sign up for our newsletter and join 3,000+ DevOps engineers, CTOs, and founders who get insider access to the latest DevOps trends, so you can always stay in the know. Dropped straight into your Inbox every week β€” and usually a 5-minute read.

πŸ“† Office Hours

Join us every Wednesday via Zoom for your weekly dose of insider DevOps trends, AWS news and Terraform insights, all sourced from our SweetOps community, plus a live Q&A that you can’t find anywhere else. It's FREE for everyone!

License

Preamble to the Apache License, Version 2.0

Complete license is available in the LICENSE file.

Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.

Trademarks

All other trademarks referenced herein are the property of their respective owners.


Copyright Β© 2017-2024 Cloud Posse, LLC

terraform-aws-ecs-container-definition's People

Contributors

aaronlake, aknysh, chris-mac, cloudpossebot, david7482, dekimsey, devonhk, dylanbannon, fernandosilvacornejo, hackerbone, jtdoepke, karvounis, korenyoni, maximmi, mickdekkers, nitrocode, nutellinoit, osterman, pguinard-public-com, ptqa, rajmaniar, renovate[bot], sarkis, sieverssj, sweeneypng, syphernl, tisted-digitalis, vadim-hleif, wittro, zahorniak


terraform-aws-ecs-container-definition's Issues

Cannot include environment variables

Not sure what is going on, and after a couple of hours I'm stumped. Here's a failing definition:

module "container_definition" {
  source           = "git::https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=0.1.3"
  container_name   = "${var.name}"
  container_image  = "nginx:latest"
  container_memory = 128
  container_port   = 80

  environment      = [
    {
      name  = "AWS_REGION",
      value = "${var.aws_region}"
    },
    {
      name  = "NODE_ENV",
      value = "${var.stage}"
    },
    {
      name  = "PORT",
      value = "80"
    },
    {
      name  = "DEBUG",
      value = "server:*"
    }
  ]

  log_options      = {
    "awslogs-region"        = "${var.aws_region}"
    "awslogs-group"         = "${aws_cloudwatch_log_group.app.name}"
    "awslogs-stream-prefix" = "${var.name}"
  }
}

healthcheck.command should accept list of strings

I have been using this module from v0.11, and now I'm trying to migrate to v0.12.0.

I hit an issue with the module during migration.
Terraform says the healthcheck attribute's values must all be strings, even though the command option should accept a list of strings (#31 ).
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_healthcheck

On the current master branch, the healthcheck variable is declared as a map, so all of its values must have the same type.

Do you have a good idea for mitigation or need module fix?

Code

module "grpc_envoy" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "0.13.0"
  healthcheck = {
    command     = ["CMD-SHELL", "grpc_health_probe -addr=localhost:80 || exit 1"]
    retries     = 3
    timeout     = 2
    interval    = 5
    startPeriod = 10
  }
  (snip)
}

Plan result (tf v0.12.0, aws provider v2.11.0)

%  terraform plan                                                       

Error: Invalid value for module argument

  on ../module/aws/foo/xxx/task_definition_grpc.tf line 20, in module "grpc_envoy":
  20:   healthcheck = {
  21:     command     = ["CMD-SHELL", "grpc_health_probe -addr=localhost:80 || exit 1"]
  22:     retries     = 3
  23:     timeout     = 2
  24:     interval    = 5
  25:     startPeriod = 10
  26:   }

The given value is not suitable for child module variable "healthcheck"
defined at
.terraform/modules/xxx.grpc_envoy/cloudposse-terraform-aws-ecs-container-definition-c6f5fc3/variables.tf:32,1-23:
all map elements must have the same type.
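On later module versions, where healthcheck is declared as an object with command = list(string) (as shown in the Inputs section above), the list-valued command is accepted as-is. A sketch, with hypothetical name and image:

```hcl
module "grpc_envoy" {
  source = "cloudposse/ecs-container-definition/aws"
  # version = "x.x.x" # a release where healthcheck is typed as an object

  container_name  = "grpc-envoy"       # hypothetical
  container_image = "envoyproxy/envoy" # hypothetical

  healthcheck = {
    command     = ["CMD-SHELL", "grpc_health_probe -addr=localhost:80 || exit 1"]
    retries     = 3
    timeout     = 2
    interval    = 5
    startPeriod = 10
  }
}
```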

Using secrets input results in "Invalid function argument" error on map_secrets

Describe the Bug

After upgrading to v0.51.0 a Terraform plan now shows the following error for each of the defined secrets in the secrets input:

Error: Invalid function argument
  on .terraform/modules/container/main.tf line 20, in locals:
  20:   secrets_values      = var.map_secrets != null ? values(var.map_secrets) : [for m in local.secrets_vars : lookup(m, "value")]
Invalid value for "inputMap" parameter: the given object has no attribute
"value".
  • No map_secrets input is defined in the module invocation
  • The secrets input contains a number of references to SSM parameters (which worked fine prior to v0.51.0).

Expected Behavior

No error.

Steps to Reproduce

Create a container definition such as:

module "demo_container" {
  source          = "git::https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=0.49.2"
  container_name  = "demo-container"
  container_image = "nginxdemos/hello:latest"

  secrets = [
    # App specific
    {
      name      = "APP_KEY",
      valueFrom = module.store_write_demo.arn_map[local.demo_app_key_parameter]
    }
  ]

  environment = [
    {
      name  = "APP_ENV",
      value = "production",
    }
  ]

  port_mappings = [
    {
      containerPort = 80
      hostPort      = 80
      protocol      = "tcp"
    }
  ]

  log_configuration = {
    logDriver = "awslogs"
    options = {
      "awslogs-region"        = var.aws_region
      "awslogs-group"         = join("", aws_cloudwatch_log_group.our_log_group.*.name)
      "awslogs-stream-prefix" = "app"
    }
  }

  healthcheck = {
    command     = ["CMD-SHELL", "curl -f http://localhost:80/health || exit 1"]
    retries     = 5
    timeout     = 5
    interval    = 30
    startPeriod = 30
  }
}

Screenshots

If applicable, add screenshots or logs to help explain your problem.

Environment:

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: Alpine Linux 3.13
  • Terraform: 0.14.7
  • AWS Provider: 3.29.1

Additional Context

PR with changes: #120 & #123

Terraform 0.15 error when specifying bool input

Describe the Bug

After upgrading to Terraform 0.15.x, I'm now getting an error while trying to plan my changes. This seems to relate to the bool flag for readOnly within the mount_points input. Switching back to 0.14.x makes the issue go away.

Expected Behavior

My terraform plan should work with no issues and changes.
No changes. Infrastructure is up-to-date.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create module with mount_points and specify readOnly bool to true or "true"
  2. terraform plan
  3. See error
  mount_points = [
    {
      containerPath = "/this/path"
      sourceVolume = "this_volume"
      readOnly = <"true" OR true>
    }
  ]

Screenshots

Error: ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal string into Go struct field MountPoint.MountPoints.ReadOnly of type bool

with aws_ecs_task_definition.this,
on task_definition.tf line ??, in resource "aws_ecs_task_definition" "this":
83:   container_definitions = jsonencode([
84:     module.application_container_def.json_map_object,
85:   ])

Environment (please complete the following information):

  • OS: OSX 11.1

Log configuration options must all be strings

I was considering using this module to replace an even worse hack that I had for container definitions, particularly around environment variables, but I'm running into an issue when setting the log configuration options, because the regex replaces all string-quoted integers/booleans with real integers/booleans.

I'm trying to pass something like this:

  log_options = {
    "fluentd-async-connect" = "true",
    "fluentd-retry-wait" = "10s",
    "fluentd-max-retries" = "60"
  }

But I get the following error on a plan:

ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal bool into Go struct field LogConfiguration.Options of type string

I'm guessing that I'd need to do something similar to the environment variables and secrets configuration to get this working?
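For comparison, the current log_configuration input (see Inputs above) types options as map(string), so string-quoted values like "true" and "60" are preserved in the rendered JSON. A hedged sketch, with hypothetical name and image:

```hcl
module "container_definition" {
  source = "cloudposse/ecs-container-definition/aws"
  # version = "x.x.x"

  container_name  = "app"          # hypothetical
  container_image = "nginx:latest" # hypothetical

  log_configuration = {
    logDriver = "fluentd"
    options = {
      # map(string) values stay strings in the rendered JSON
      "fluentd-async-connect" = "true"
      "fluentd-retry-wait"    = "10s"
      "fluentd-max-retries"   = "60"
    }
  }
}
```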

Extra Parameters

Describe the Feature

Support a new parameter named container_definition which is an object. This would allow absolutely any parameter to be passed to the module, or to override the behavior of the module on any property

Expected Behavior

Merge this container_definition map with the settings computed by this module.

Use Case

Basically, this is an escape hatch to allow users to do anything they want, but still be able to use documented named input variables for common use-cases.

Describe Ideal Solution

https://github.com/cloudposse/terraform-aws-ecs-container-definition/blob/master/main.tf#L67

variable "container_definition" {
  default = {}
  description = "Container definition overrides"
}
 json_map = jsonencode(merge(local.container_definition, var.container_definition))

Capabilities cannot be defined without defining other values

Describe the Bug

Capabilities cannot be defined without also defining shared memory and swap values, forcing users to specify options that should be optional and that have no neutral value. For instance, it is not possible to add the SYS_BOOT capability to a container without specifying shared memory, and since shared memory does not allow a value of 0, SYS_BOOT cannot be added without also granting shared memory to the container.

Expected Behavior

Ability to specify capabilities without specifying other options inside linux_parameters.

Steps to Reproduce

Steps to reproduce the behavior:
main.tf

module "container-definition" {
  source   = "cloudposse/ecs-container-definition/aws"
  version  = "~> 0.57.0"

  container_name = "hello"
  container_image = "some/image:latest"
  linux_parameters = {
    capabilities = {
      add = ["SYS_BOOT", "NET_ADMIN", "SYS_TIME", "KILL"]
      drop = []
    }
  }
}
  1. Run terraform plan
  2. Enter '....'
  3. See error
    β”‚ Error: Invalid value for module argument
    β”‚
    β”‚ on ops-steadybit-agent.tf line 135, in module "steadybit-agent":
    β”‚ 135: container_linux_parameters = {
    β”‚ 136: capabilities = {
    β”‚ 137: add = ["SYS_BOOT", "NET_ADMIN", "SYS_TIME", "KILL"]
    β”‚ 138: drop = []
    β”‚ 139: }
    β”‚ 140: }
    β”‚
    β”‚ The given value is not suitable for child module variable "container_linux_parameters" defined at modules/application/variables.tf:393,1-38: attributes "devices", "initProcessEnabled", "maxSwap", "sharedMemorySize", "swappiness", and "tmpfs" are required.
    β•΅
    make: *** [execute] Error 1
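One possible workaround on module versions whose linux_parameters type requires every attribute is to pass the unused attributes explicitly as null (attribute names taken from the variable's type in the Inputs section above); on versions that use optional() attributes, this is unnecessary:

```hcl
linux_parameters = {
  capabilities = {
    add  = ["SYS_BOOT", "NET_ADMIN", "SYS_TIME", "KILL"]
    drop = []
  }
  # Explicit nulls satisfy the object type without enabling the features.
  devices            = null
  initProcessEnabled = null
  maxSwap            = null
  sharedMemorySize   = null
  swappiness         = null
  tmpfs              = null
}
```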

Improve interface of the "environment" input variable

That's not an issue, just a suggestion.

The current way of passing environment variables to the module is by using a list of maps (list(object({ name = string, value = string }))) , e.g:

module "my_container_definition" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "v0.23.0"
  [...]

  environment= [
    {
      name  = "FIRST_ENV_VAR_NAME",
      value = "FIRST_ENV_VAR_VALUE"
    },
    {
      name  = "SECOND_ENV_VAR_NAME",
      value = "SECOND_ENV_VAR_VALUE"
    },
    {
      name  = "THIRD_ENV_VAR_NAME",
      value = "THIRD_ENV_VAR_VALUE"
    } 
  ]
}

IMO that adds a lot of noise to the code (14 lines to declare 3 environment variables), making it much less pleasant to read.

Currently I'm using the following pattern:

module "my_container_definition" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "v0.23.0"
  [...]

  environment = [for name, value in var.environment : { "name" = name, "value" = value }]
}

Then I can declare environment variables like so:

module "my_outer_module" {

  [...]
  environment = {
    FIRST_ENV_VAR_NAME = "FIRST_ENV_VAR_VALUE"
    SECOND_ENV_VAR_NAME = "SECOND_ENV_VAR_VALUE"
    THIRD_ENV_VAR_NAME = "THIRD_ENV_VAR_VALUE"
  }
}

Which is much more readable, especially with a large number of variables.

Would anyone oppose a PR to add this pattern to the module?

Version 0.21 force to have "hostPort" in port_mappings

Hi,

I was using the following port_mapping:

  port_mappings = [
    {
      containerPort = 80,
      protocol = "tcp"
    }
  ]

As far as I know it's perfectly valid when using the awsvpc networking mode, but the latest version 0.21 fails with the error: element 0: attribute "hostPort" is required.

Something is wrong in the module, right?
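
In the meantime, a workaround is to set hostPort explicitly; in awsvpc mode, when hostPort is provided it must equal containerPort anyway:

  port_mappings = [
    {
      containerPort = 80,
      hostPort      = 80,
      protocol      = "tcp"
    }
  ]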

Thanks.

Environment variables should be sorted by name

ECS stores the ENVs on the task definition sorted by name, so every time you run terraform apply with your envs out of order, a new revision is created.

ex:

  environment = [
    {
      name  = "APP_KEY"
      value = "base64:LALALA"
    },
    {
      name  = "APP_URL"
      value = "${var.app_url}"
    },
    {
      name  = "LOG_CHANNEL"
      value = "errorlog"
    },
    {
      name  = "PHP_ARTISAN_MIGRATE"
      value = true
    },
    {
      name  = "COMPOSER_INSTALL"
      value = true
    },
    {
      name  = "PHP_ARTISAN_DB_SEED"
      value = false
    },
    {
      name  = "DB_HOST"
      value = "${module.rds_mysql.db_instance_address}"
    },
    {
      name  = "DB_DATABASE"
      value = "DB"
    },
    {
      name  = "DB_USERNAME"
      value = "foo"
    },
    {
      name  = "DB_PASSWORD"
      value = "foobarbaz"
    },
    {
      name  = "REDIS_HOST"
      value = "${aws_elasticache_cluster.redis.cache_nodes.0.address}"
    },
    {
      name  = "CACHE_DRIVER"
      value = "redis"
    },
    {
      name  = "CACHE_DRIVER"
      value = "redis"
    },
    {
      name  = "SESSION_DRIVER"
      value = "redis"
    },
  ]

When I run terraform apply, Terraform creates a new revision of the task definition, because in the current version the envs are sorted:

"environment": [
        {
          "name": "APP_KEY",
          "value": "base64:LALALALA"
        },
        {
          "name": "APP_URL",
          "value": "${var.app_url}"
        },
        {
          "name": "CACHE_DRIVER",
          "value": "redis"
        },
        {
          "name": "COMPOSER_INSTALL",
          "value": "true"
        },
        {
          "name": "DB_DATABASE",
          "value": "linhadireta"
        },
        {
          "name": "DB_HOST",
          "value": "<DBHOST>"
        },
        {
          "name": "DB_PASSWORD",
          "value": "foobarbaz"
        },
        {
          "name": "DB_USERNAME",
          "value": "foo"
        },
        {
          "name": "LOG_CHANNEL",
          "value": "errorlog"
        },
        {
          "name": "PHP_ARTISAN_DB_SEED",
          "value": "false"
        },
        {
          "name": "PHP_ARTISAN_MIGRATE",
          "value": "true"
        },
        {
          "name": "REDIS_HOST",
          "value": "<REDIS_HOST>"
        },
        {
          "name": "SESSION_DRIVER",
          "value": "redis"
        }
      ],

terraform apply

-/+ resource "aws_ecs_task_definition" "td" {
      ~ arn                      = "arn:aws:ecs:us-east-1:<ACCOUNT>:task-definition/<SERVICE>-td:37" -> (known after apply)
      ~ container_definitions    = jsonencode(
          ~ [ # forces replacement
              ~ {
                  + command                = null
                    cpu                    = 256
                  + dependsOn              = null
                  + dnsServers             = null
                  + entryPoint             = null
                  ~ environment            = [
                      ~ {
                          ~ name  = "REDIS_HOST" -> "APP_KEY"
                          ~ value = "<SERVICE>" -> "<key>"
                        },
                      ~ {
                          ~ name  = "PHP_ARTISAN_MIGRATE" -> "APP_URL"
                          ~ value = "true" -> "<APP_URL>"
                        },
                      ~ {
                          ~ name  = "PHP_ARTISAN_DB_SEED" -> "LOG_CHANNEL"
                          ~ value = "false" -> "errorlog"
                        },
                      + {
                          + name  = "PHP_ARTISAN_MIGRATE"
                          + value = "true"
                        },
                      + {
                          + name  = "COMPOSER_INSTALL"
                          + value = "true"
                        },
                      + {
                          + name  = "PHP_ARTISAN_DB_SEED"
                          + value = "false"
                        },
                      + {
                          + name  = "DB_HOST"
                          + value = "<DB>"
                        },
                      + {
                          + name  = "DB_DATABASE"
                          + value = "DB"
                        },
                      + {
                          + name  = "DB_USERNAME"
                          + value = "<USER>"
                        },
                      + {
                          + name  = "DB_PASSWORD"
                          + value = "<DBPASS>"
                        },
                      + {
                          + name  = "REDIS_HOST"
                          + value = "=<REDIS>"
                        },
                      + {
                          + name  = "CACHE_DRIVER"
                          + value = "redis"
                        },
                      + {
                          + name  = "CACHE_DRIVER"
                          + value = "redis"
                        },
                      + {
                          + name  = "SESSION_DRIVER"
                          + value = "redis"
                        },
                        {
                            name  = "APP_KEY"
                            value = "base64:LALALA"
                        },
                      - {
                          - name  = "DB_HOST"
                          - value = "<SERVICE>.<RANDOM>.us-east-1.rds.amazonaws.com"
                        },
                      - {
                          - name  = "SESSION_DRIVER"
                          - value = "redis"
                        },
                      - {
                          - name  = "DB_USERNAME"
                          - value = "foo"
                        },
                      - {
                          - name  = "LOG_CHANNEL"
                          - value = "errorlog"
                        },
                        {
                            name  = "APP_URL"
                            value = "<APP_URL>"
                        },
                      - {
                          - name  = "CACHE_DRIVER"
                          - value = "redis"
                        },
                        {
                            name  = "COMPOSER_INSTALL"
                            value = "true"
                        },
                        {
                            name  = "DB_DATABASE"
                            value = "<DB>"
                        },
                        {
                            name  = "DB_PASSWORD"
                            value = "<DB_PASSWORD>"
                        },
                    ]
                    essential              = true
                  + healthCheck            = null
                    image                  = "<ACCOUNT>.dkr.ecr.us-east-1.amazonaws.com/<SERVICE>:latest"
                  + links                  = null
                    logConfiguration       = {
                        logDriver = "awslogs"
                        options   = {
                            awslogs-group         = "/ecs/service/<SERVICE>"
                            awslogs-region        = "us-east-1"
                            awslogs-stream-prefix = "ecs"
                        }
                    }
                    memory                 = 512
                    memoryReservation      = 512
                  ~ mountPoints            = [] -> null
                    name                   = "<SERVICE>-api"
                  ~ portMappings           = [
                      ~ {
                            containerPort = 80
                            hostPort      = 80
                          ~ protocol      = "tcp" -> "HTTP"
                        },
                    ]
                    readonlyRootFilesystem = false
                  + repositoryCredentials  = null
                  + secrets                = null
                    stopTimeout            = 30
                  + ulimits                = null
                  + user                   = null
                  ~ volumesFrom            = [] -> null
                  + workingDirectory       = null
                } # forces replacement,
            ]
        )
        cpu                      = "256"
        execution_role_arn       = "arn:aws:iam::<ACCOUNT>:role/<SERVICE>-ecs-task-execution-role"
        family                   = "<SERVICE>-td"
      ~ id                       = "<SERVICE>-td" -> (known after apply)
        memory                   = "512"
        network_mode             = "awsvpc"
        requires_compatibilities = [
            "FARGATE",
        ]
      ~ revision                 = 37 -> (known after apply)
      - tags                     = {} -> null
        task_role_arn            = "arn:aws:iam::<ACCOUNT>:role/<SERVICE>-ecs-task-execution-role"
    }

Plan: 1 to add, 2 to change, 1 to destroy.
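
A caller-side workaround until the module sorts the list itself: build environment from a map. Terraform for expressions iterate map keys in lexical order, so the generated list stays stable and matches what ECS stores (the values below are illustrative):

environment = [
  for name, value in {
    APP_KEY      = "base64:LALALA"
    CACHE_DRIVER = "redis"
    DB_HOST      = module.rds_mysql.db_instance_address
  } : { name = name, value = tostring(value) }
]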

An argument named "log_options" is not expected here.

Error: Unsupported argument
on ../shared/ecs_docker/main.tf line 148, in module "container_definition":
148: log_options = {
An argument named "log_options" is not expected here.

This error occurs in this module:
module "container_definition" {
source = "git::https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=master"
container_name = module.label.id
container_image = "${data.aws_caller_identity.account.account_id}.dkr.ecr.${var.aws_region}.amazonaws.com/${aws_ecr_repository.repo.name}:release"

port_mappings = [
{
containerPort = var.container_port
hostPort = 80
protocol = "tcp"
},
]

secrets = "${var.secrets}"

log_options = {
"awslogs-region" = var.aws_region
"awslogs-group" = aws_cloudwatch_log_group.logs.name
"awslogs-stream-prefix" = "ecs"
}
}

Support initProcessEnabled on ECS Fargate

Describe the Feature

Allow set parameter

linux_parameters = { initProcessEnabled = true }

Expected Behavior

Generate JSON with:

"linuxParameters": { "initProcessEnabled": true }

Use Case

When using ECS Fargate, some options like sharedMemorySize are not supported.
If I set only initProcessEnabled, then this module throws an error:

β”‚ Error: Invalid value for module argument
β”‚
β”‚   on main.tf line 564, in module "container_definition":
β”‚  564:   linux_parameters = {
β”‚  565:     initProcessEnabled = true
β”‚  566:   }
β”‚
β”‚ The given value is not suitable for child module variable
β”‚ "linux_parameters" defined at
β”‚ .terraform/modules/container_definition/variables.tf:140,1-28: attributes
β”‚ "capabilities", "devices", "maxSwap", "sharedMemorySize", "swappiness", and
β”‚ "tmpfs" are required.

But if I set all the options, the aws_ecs_task_definition resource throws an error:

aws_ecs_task_definition.app: Creating...
β”‚
β”‚ Error: ClientException: Fargate compatible task definitions do not support sharedMemorySize
β”‚
β”‚   with aws_ecs_task_definition.app,
β”‚   on main.tf line 619, in resource "aws_ecs_task_definition" "app":
β”‚  619: resource "aws_ecs_task_definition" "app" {
β”‚

https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_LinuxParameters.html

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Error type: undefined. Note: this is a nested preset so please contact the preset author if you are unable to fix it yourself.

log_configuration shouldn't require optional key secretOptions

If you try to define a log_configuration block, it will fail if you do not define any secretOptions, even though they are optional.

  log_configuration = {
    logDriver = "awslogs"
    options = {
      awslogs-group = aws_cloudwatch_log_group.foo.id
      awslogs-region = var.aws_region
      awslogs-stream-prefix = "db"
    }
  }

will throw

The given value is not suitable for child module variable "log_configuration"
defined at .terraform\modules\resat_db\variables.tf:135,1-29: attribute
"secretOptions" is required.

https://github.com/cloudposse/terraform-aws-ecs-container-definition/blob/master/variables.tf#L149

Setting secretOptions = [] works around this.

Add Example Usage

what

  • Add example invocation

why

  • We need this so we can soon enable automated continuous integration testing of the module

Improve interface of the "secrets" input variable

This is a suggestion, very similar to the already-closed improvement proposed in #61.

The current way of passing secrets to the module is a list of objects:

variable "secrets" {
  type = list(object({
    name      = string
    valueFrom = string
  }))
  description = "The secrets to pass to the container. This is a list of maps"
  default     = null
}

e.g. using Parameter Store:

secrets = [
    {
      name  = "SECRET_1",
      valueFrom = "arn:aws:ssm:region:aws_account_id:parameter/parameter1_name"
    },
    {
      name  = "SECRET_2",
      valueFrom = "arn:aws:ssm:region:aws_account_id:parameter/parameter2_name"
    }
  ]

The hardcoded part of the ARN (region and aws_account_id) can, I guess, be managed dynamically with local variables.
Maybe the module could be refactored to accept a type = map(any) variable:

map_secrets = {
      "SECRET_1" = "parameter/parameter1_name"
      "SECRET_2" = "parameter/parameter2_name"
    }
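
Until something like that exists in the module, the conversion can also be done on the caller's side; a sketch, where var.map_secrets and local.ssm_prefix are hypothetical names:

locals {
  ssm_prefix = "arn:aws:ssm:${var.region}:${data.aws_caller_identity.current.account_id}"
}

module "container_definition" {
  # ...
  secrets = [
    for name, suffix in var.map_secrets : {
      name      = name
      valueFrom = "${local.ssm_prefix}:${suffix}"
    }
  ]
}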

Do you think this is possible?

Thanks,
Dejan

Usage with lifecycle

The question

Hi all

We're experiencing a hiccup with Terraform infrastructure changes + CD deployments to ECS. CD changes the container_image value. There appear to be 3 ways to run Terraform without wiping out the changes that CD made.

  1. inject the correct docker image tag & string interpolate it into container_image
  2. get terraform to discover the current container_image.
  3. use lifecycle & ignore_changes with container_image.

Option 1 seems reasonable, although 3 seems to be the more correct approach according to https://www.terraform.io/docs/configuration/resources.html#ignore_changes

On that note, is lifecycle supposed to work with this module, if not is it planned, or even possible?

I imagine something like this:

lifecycle {
  ignore_changes = [ container_image ]
}

That usage returns

The block type name "lifecycle" is reserved for use by Terraform in a future
version.

Coincidentally lifecycle works fine when used with aws_ecs_task_definition as such:

  lifecycle {
    ignore_changes = [ container_definitions ]
  }

However that's not ideal, we need to be able to control things like cpu & memory using terraform.

Thanks for your time & consideration.

Separately for anyone that's interested

Option 2 has been a failure: I can't find the correct strategy using data declarations to get this information. I have tried something like this:

data "aws_ecs_container_definition" "www" {
  task_definition = "${aws_ecs_task_definition.default.arn}"
  container_name  = "www"
  depends_on = ["aws_ecs_task_definition.default"]
}

locals {
  www_container_image = coalesce(
    "${data.aws_ecs_container_definition.www.image}",
    "${aws_ecr_repository.default.repository_url}:0.0.1"
    )
}

module "www_container_definition" {
  source = "github.com/cloudposse/terraform-aws-ecs-container-definition"

  container_name = "www"
  container_image = local.www_container_image
  ...
}

resource "aws_ecs_task_definition" "default" {
  family                = "${var.environ_name}"
  container_definitions = "${module.www_container_definition.json}"
  requires_compatibilities = ["FARGATE"]
  network_mode = "awsvpc"
  cpu = 512
  memory = 1024
  execution_role_arn = "${var.task_execution_role_arn}"
}

However this fails with the following error, which I do not understand (my only guess is that it's due to the circular nature of it all, or maybe I've bungled coalesce).

Error: Cycle: module.frontend.module.www_container_definition.output.json, module.frontend.aws_ecs_task_definition.default, module.frontend.data.aws_ecs_container_definition.www, module.frontend.local.www_container_image, module.frontend.module.www_container_definition.var.container_image, module.frontend.module.www_container_definition.local.container_definition, module.frontend.module.www_container_definition.local.encoded_container_definition, module.frontend.module.www_container_definition.local.json_with_environment, module.frontend.module.www_container_definition.local.json_with_secrets, module.frontend.module.www_container_definition.local.json_with_cpu, module.frontend.module.www_container_definition.local.json_with_memory, module.frontend.module.www_container_definition.local.json_with_memory_reservation, module.frontend.module.www_container_definition.local.json_with_stop_timeout, module.frontend.module.www_container_definition.local.json_map
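
For what it's worth, the cycle in option 2 is real: the data source reads the task definition, the task definition consumes the module's JSON, and the module's image comes back from the data source, so no valid ordering exists. For option 1, a minimal sketch of injecting the tag from CD (the variable name is hypothetical):

variable "www_image_tag" {
  type    = string
  default = "0.0.1"
}

module "www_container_definition" {
  source = "github.com/cloudposse/terraform-aws-ecs-container-definition"

  container_name  = "www"
  container_image = "${aws_ecr_repository.default.repository_url}:${var.www_image_tag}"
}

CD then runs terraform apply -var="www_image_tag=<new tag>" so Terraform and the deploy pipeline agree on the image.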

portMappings.hostPort is required

Describe the Bug

hostPort is optional when using awsvpc network_mode. However, this module requires it no matter what.

Expected Behavior

hostPort should be optional.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create a container definition with the following:
  port_mappings = [
    {
      "protocol"      = "tcp",
      "containerPort" = 3000
    }
  ]

Screenshots

Error: Invalid value for module argument

  on ecs.tf line 36, in module "container_definition":
  36:   port_mappings = [
  37:     {
  38:       "protocol"      = "tcp",
  39:       "containerPort" = 3000
  40:     }
  41:   ]

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: Linux
  • Version:
    • Module 0.47.0

[BUG] - Sensitive values causing error in Terraform apply

Dear all,

I came across this error in Terraform 0.14.3. I believe it has something to do with this:

https://www.terraform.io/upgrade-guides/0-14.html#sensitive-values-in-plan-output

Error: Error in function call

  on .terraform/modules/prizor-chatbot-campaign-worker/main.tf line 6, in locals:
   6:   env_vars_as_map      = zipmap(local.env_vars_keys, local.env_vars_values)
    |----------------
    | local.env_vars_keys is (sensitive value)
    | local.env_vars_values is (sensitive value)

Call to function "zipmap" failed: panic in function implementation: value is
marked, so must be unmarked first

Support "logConfiguration": null

The logConfiguration property is created by default in main.tf

logConfiguration = {
   logDriver = "${var.log_driver}"
   options   = "${var.log_options}"
}

This does not give you the ability to set logConfiguration to null, which is the default when creating an EC2 task definition from the AWS console.
Would it be possible to make it optional?

Thanks for your help

Module ignoring port mappings

Description
Creating a container definition with port_mappings and then removing the mapping doesn't work

Repro steps

  1. Create a container definition with port_mappings defined:
    Example:
module "aws_ecs_container_definition" {
  source          = "git::https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=master"
  container_name = "somecontainer"
  container_image = "5435345345346.dkr.ecr.us-east-1.amazonaws.com/app-blabla:${terraform.workspace}"
  container_cpu = 512
  //  essential = true

  command = [
    "/bin/bash",
    "bin/start.sh"]

  # hard limit
  container_memory = 2048

  # soft limit
  container_memory_reservation = 1024

  log_options = {
    awslogs-group = "${data.terraform_remote_state.global_resources.ecs_cloudwatch_log_group_name}",
    awslogs-region = "us-east-1",
    awslogs-stream-prefix = "someapp"
  }

  mount_points = [
    {
      containerPath = "${var.efs_cache_path}"
      sourceVolume = "${var.efs_cache_volume}"
    }
  ]

  environment = [
    {
      name = "AWS_ENVIRONMENT"
      value = "${terraform.workspace}"
    },
  ]
  port_mappings = [
    {
      hostPort = 80
      containerPort = 80
      protocol = "tcp"
    }
  ]
}
  2. Remove said port_mappings and run terraform plan -out=plan.out

Expected behaviour
Terraform detects the change in the task_definition

Actual behaviour
No change is detected

CPU limit is not optional

The variable defaults to 0, even though AWS allows the CPU limit to be left unset.

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_task_definition#cpu
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html

IMHO, we should allow passing null to this module.

Error: Invalid value for module argument

  on main.tf line 33, in module "container-definition":
  33:   container_cpu                = var.container-cpu

The given value is not suitable for child module variable "container_cpu"
defined at .terraform/modules/container-definition/variables.tf:54,1-25: a
number is required.

Add validation to `secrets` and `map_secrets`'s `valueFrom` to ensure it's an arn

Have a question? Please check out our Slack Community or visit our Slack Archive.

Slack Community

Describe the Feature

Add validation to secrets and map_secrets's valueFrom to ensure it's an arn

Expected Behavior

Fail if any valueFrom does not use a valid arn format

Use Case

At the moment, this will error after an apply. An input validation would catch it earlier.

Describe Ideal Solution

Input var validation using a regex.

For example, this is a valid arn

arn:aws:ssm:us-east-2:snip:parameter/global/snip

Perhaps the regex from here hashicorp/terraform-provider-aws#8307

^arn:[\w-]+:([a-zA-Z0-9\-])+:([a-z]{2}-((?:gov|iso|isob)-)?[a-z]+-\d{1})?:(\d{12})?:(.*)$

or even simpler

^arn:.*
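
A sketch of what such a validation could look like on the secrets variable, using the simpler pattern (illustrative only, not the module's actual code):

variable "secrets" {
  type = list(object({
    name      = string
    valueFrom = string
  }))
  default = null

  validation {
    condition = var.secrets == null ? true : alltrue([
      for s in var.secrets : can(regex("^arn:", s.valueFrom))
    ])
    error_message = "Every valueFrom must be an ARN (it must start with \"arn:\")."
  }
}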

Alternatives Considered

  • Apply, fail, update.
  • Be more vigilant when passing in inputs vars

Additional Context

Issues mapping EFS volume in task definition

Hi - I'm fairly new to this, but I've gotten to like Terraform a lot.
I'm trying to launch a service/task in ECS with a volume mapped to an EFS volume, and after a few days I've failed miserably. I can't find a good end-to-end example.

In your documentation, you mention the following for volumes_from:

A list of VolumesFrom maps which contain "sourceContainer" (name of the container that has the volumes to mount) and "readOnly" (whether the container can write to the volume)

So if I have 1 container in my task and I wanted the volumes mapped on that container, what should I pass for sourceContainer? If I pass the name of the container itself, it detects a loop. It sounds like I must have 2 containers in my task to get an EFS volume mounted, but I'm not sure.

Also, I think we need to install efs-utils? I'm not sure whether this repo does that, or whether it's even needed for EFS to work.

Request: mount_points lack of readOnly flag

I'm passing data to a Fargate container, but at mount_points I only see this:

variable "mount_points" {
  type = list(object({
    containerPath = string
    sourceVolume  = string
  }))

  description = "Container mount points. This is a list of maps, where each map should contain a `containerPath` and `sourceVolume`"
  default     = null
}

And in a mysql container I want to pass a volume as read-only mode. Like this

mount_points = [
    {
      sourceVolume: local.database_volume,
      containerPath: "/docker-entrypoint-initdb.d"
      readOnly: true
    }
  ]
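
A sketch of the extended variable type that would support this, with readOnly optional so existing callers keep working (assumes Terraform >= 1.3 optional() attributes; illustrative only):

variable "mount_points" {
  type = list(object({
    containerPath = string
    sourceVolume  = string
    readOnly      = optional(bool)
  }))

  description = "Container mount points. This is a list of maps, where each map should contain `containerPath` and `sourceVolume`, and may contain `readOnly`"
  default     = null
}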

json: cannot unmarshal string into Go struct field MountPoint.MountPoints.ReadOnly of type bool

Describe the Bug

When you provide a mount point with readOnly = true as a mapped item, it fails to be translated to valid JSON. The same is true of readOnly = "true".

Expected Behavior

A read-only volume to be created.

Steps to Reproduce

Apply an otherwise valid container definition with this code block.

mount_points = [
        {
            sourceVolume = "docker-socket"
            containerPath = "/var/run/docker.sock"
            readOnly        = true
        },
        {
            sourceVolume = "letsencrypt"
            containerPath = "/letsencrypt"
        }
    ]

If relevant, this is one of several definitions running under one service, so the final definition is laid out as:

container_definitions    = "[${module.cache.json_map_encoded},${module.database.json_map_encoded},${module.php-fpm.json_map_encoded},${module.reverse-proxy.json_map_encoded}]"

Screenshots

Error: ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal string into Go struct field MountPoint.MountPoints.ReadOnly of type bool

  on ecs.tf line 62, in resource "aws_ecs_task_definition" "dev_stack_task":
  62:   container_definitions    = "[${module.cache.json_map_encoded},${module.database.json_map_encoded},${module.php-fpm.json_map_encoded},${module.reverse-proxy.json_map_encoded}]"

Environment (please complete the following information):


  • Terraform Cloud, v0.13.5

Provide command as a string


Describe the Feature

It would be nice to provide the command as a string instead of as a list, while maintaining the original command input for backwards compatibility.

Expected Behavior

Use a command_string input that splits the string into a list

Use Case

I want to provide a command without turning it into a list first

Describe Ideal Solution

Create a command_string input that splits the string into a list

Alternatives Considered

We can split it before passing in a command to the container... but that's no fun

  command = split(" ", "bin/rails service -p 3000 -b 0.0.0.0")
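
A sketch of how the module could support this internally; the command_string variable and the local are hypothetical:

variable "command_string" {
  type        = string
  description = "The command passed to the container as a single space-separated string; ignored when `command` is set"
  default     = null
}

locals {
  container_command = var.command != null ? var.command : (
    var.command_string != null ? split(" ", var.command_string) : null
  )
}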

Additional Context

None

Error decoding JSON: json: cannot unmarshal string into Go struct field HealthCheck.Command of type []*string

Can't launch module with healthcheck parameter.

module "hello-rabbitmq" {
  source           = "git::https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=master"
  container_image  = "rabbit:alpine"
  container_name   = "rabbit"
  container_memory = 512
  container_cpu    = 10

  mount_points = [{
    containerPath = "/var/lib/rabbitmq/"
    sourceVolume  = "rabbit_data"
  }]

  healthcheck = {
    command  = "rabbitmqctl status"
    interval = 5
    timeout  = 5
    retries  = 5
  }
  port_mappings = [{
    "containerPort" = 5672
    "hostPort"      = 5672
    "protocol"      = "tcp"
  }]
}

If I remove or comment out healthcheck, I don't get the error!
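
The error message points at the fix: the ECS HealthCheck command must be a list of strings, not a single string. A sketch of the corrected block (depending on the module version, other keys such as startPeriod may also be required):

  healthcheck = {
    command  = ["CMD-SHELL", "rabbitmqctl status"]
    interval = 5
    timeout  = 5
    retries  = 5
  }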

secrets_as_map causing issues with sensitive values

This was originally posted as a comment on closed issue #125, but as this is a different problem, a new issue made more sense.

Running this module past v0.50.0 results in:

Error: Error in function call

  on .terraform/modules/container_grafana/main.tf line 21, in locals:
  21:   secrets_as_map      = zipmap(local.secrets_keys, local.secrets_values)
    |----------------
    | local.secrets_keys is (sensitive value)
    | local.secrets_values is (sensitive value)

Call to function "zipmap" failed: panic in function implementation: value is
marked, so must be unmarked first
goroutine 9695 [running]:
runtime/debug.Stack(0xc004da1ab8, 0x23d80e0, 0x2c41070)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9f
github.com/zclconf/go-cty/cty/function.errorForPanic(...)
	/go/pkg/mod/github.com/zclconf/[email protected]/cty/function/error.go:44
github.com/zclconf/go-cty/cty/function.Function.ReturnTypeForValues.func1(0xc004da20c8,

This can be reproduced using:

/test_module/main.tf:

resource "random_string" "password" {
  length  = 32
  special = false
}

output "test_password" {
  value       = random_string.password.result
  description = "Test password"
  sensitive   = true
}

/main.tf:

module "test" {
  source = "./test_module"
}

module "demo_container" {
  source          = "git::https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=master"
  container_name  = "demo-container"
  container_image = "nginxdemos/hello:latest"

  secrets = [
    {
      name      = "PASSWORD",
      valueFrom = module.test.test_password
    }
  ]

  environment = [
    {
      name  = "APP_ENV",
      value = "production",
    }
  ]

  port_mappings = [
    {
      containerPort = 80
      hostPort      = 80
      protocol      = "tcp"
    }
  ]

  healthcheck = {
    command     = ["CMD-SHELL", "curl -f http://localhost:80/health || exit 1"]
    retries     = 5
    timeout     = 5
    interval    = 30
    startPeriod = 30
  }
}

output "demo_container" {
  value = module.demo_container.json_map_encoded
}

It appears to be an issue in Terraform itself, I've found a few issues about it over the past months.
The most recent one is hashicorp/terraform#27954. In this particular issue it was reported as fixed, this fix was also included in Terraform v0.14.8 but this doesn't fix this particular issue with the module so I created hashicorp/terraform#28049.
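
On Terraform v0.15+, one possible caller-side workaround is to strip the sensitivity mark before passing the value in, accepting that it may then show up in plan output (sketch):

  secrets = [
    {
      name      = "PASSWORD",
      valueFrom = nonsensitive(module.test.test_password)
    }
  ]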

Error when trying to use EFS volumes in task/container definition

Describe the Bug

I'm trying to use an EFS volume in an ECS service definition. The volumes variable is defined such that one has to supply a value for both the efs_volume_configuration and docker_volume_configuration parameters. This seems to be a Terraform syntax limitation having to do with a lack of optional arguments. However, the solution of passing an empty list doesn't work in this case, yielding the following error:

ClientException: When the volume parameter is specified, only one volume configuration type should be used.

Passing null for docker_volume_configuration doesn't work, either:

Error: Invalid dynamic for_each value

  on .terraform/modules/ecs-service/main.tf line 70, in resource "aws_ecs_task_definition" "default":
  70:         for_each = lookup(volume.value, "docker_volume_configuration", [])
    |----------------
    | volume.value is object with 4 attributes

Cannot use a null value in for_each.

Expected Behavior

To be able to use EFS volumes in an ECS service definition.

Steps to Reproduce

Update the example in examples/complete with the following added to main.tf:

  volumes = [{
    name = "html"
    host_path = "/usr/share/nginx/html"
    # docker_volume_configuration = null
    docker_volume_configuration = []
    efs_volume_configuration = [{
      file_system_id = "fs-8de214f2"
      root_directory          = "/home/user/www"
      transit_encryption      = "ENABLED"
      transit_encryption_port = 2999
      authorization_config = []
    }]
  }]


Environment (please complete the following information):

Terraform v0.14.10, MacOS 10.15.7

json: cannot unmarshal object into Go value of type []*ecs.ContainerDefinition

Hey guys,

I guess it's definitely something on my end, but I recently upgraded from version 0.23 to the latest release (to support TF 0.13).
Since then, unfortunately, all my projects have started throwing errors like the one in the title.

Before I had:

resource "aws_ecs_task_definition" "task_definition" {
  ...
  container_definitions = module.my_own_name.json
}

Now I have changed the output name according to your documentation, like below:

resource "aws_ecs_task_definition" "task_definition" {
  ...
  container_definitions = module.my_own_name.json_map_encoded
}

But that gives me:

ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal object into Go value of type []*ecs.ContainerDefinition
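Reading the error, container_definitions seems to expect a JSON *list* rather than a single object. A possible fix (a sketch, assuming the module also exposes a list-shaped json_map_encoded_list output):

```hcl
resource "aws_ecs_task_definition" "task_definition" {
  # ...
  # json_map_encoded is a single JSON object; the task definition
  # wants a JSON list, which json_map_encoded_list provides.
  container_definitions = module.my_own_name.json_map_encoded_list
}
```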

I would really be grateful for any help.

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

This repository currently has no open or pending branches.

Detected dependencies

terraform
versions.tf
  • hashicorp/terraform >= 0.13.0
  • local >= 1.2

  • Check this box to trigger a request for Renovate to run again on this repository

Support for privileged parameter

This module doesn't currently support the privileged parameter in the container definition.

While I'm aware of the security implications of using privileged, it would be a useful addition.

An example of this parameter being required is running a Falco docker container on ECS hosts to detect anomalous activity: https://falco.org/docs/installation/#docker

I've opened PR #38 to include the privileged parameter and tested it with both EC2 and FARGATE launch types.
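Until the PR lands, one hedged workaround relies on the module's container_definition input, which is merged over the generated definition (a sketch; the merge behavior is an assumption based on the module's locals):

```hcl
module "container_definition" {
  source = "cloudposse/ecs-container-definition/aws"
  # version = "x.x.x"

  container_name  = "falco"
  container_image = "falcosecurity/falco"

  # Hypothetical override: extra keys in this map are merged into the
  # rendered JSON, so privileged can be injected without module support.
  container_definition = {
    privileged = true
  }
}
```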

Could not download module "ecs-container-definition" source code from "https://api.github.com/repos/cloudposse/terraform-aws-ecs-container-definition/tarball/0.21.0//*?archive=tar.gz":

My terraform file has below definition

module "ecs-container-definition" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "0.21.0"
...

When I run terraform init I get below error

terraform init
Initializing modules...
Downloading cloudposse/ecs-container-definition/aws 0.21.0 for ecs-container-definition...

Error: Failed to download module

Could not download module "ecs-container-definition" (ecs.tf:106) source code
from
"https://api.github.com/repos/cloudposse/terraform-aws-ecs-container-definition/tarball/0.21.0//*?archive=tar.gz":
Error opening a gzip reader for

The URI that Terraform uses does not seem to be valid; even when I try wget, I get an error:

wget "https://api.github.com/repos/cloudposse/terraform-aws-ecs-container-definition/tarball/0.21.0//*?archive=tar.gz"
Warning: wildcards not supported in HTTP.
--2020-01-09 15:31:18--  https://api.github.com/repos/cloudposse/terraform-aws-ecs-container-definition/tarball/0.21.0//*?archive=tar.gz
Resolving api.github.com (api.github.com)... 140.82.114.6
Connecting to api.github.com (api.github.com)|140.82.114.6|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/cloudposse/terraform-aws-ecs-container-definition/legacy.tar.gz/0.21.0//* [following]
Warning: wildcards not supported in HTTP.
--2020-01-09 15:31:18--  https://codeload.github.com/cloudposse/terraform-aws-ecs-container-definition/legacy.tar.gz/0.21.0//*
Resolving codeload.github.com (codeload.github.com)... 192.30.253.121
Connecting to codeload.github.com (codeload.github.com)|192.30.253.121|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-01-09 15:31:18 ERROR 404: Not Found.

Any help on how to fix this is appreciated. This was working 2 days ago and broke today.

Make FireLens usage optional

Hi,

thanks for updating the module recently :)

However, there is no way to disable the FireLens feature in the module, and I'm running into the error:

Error: ClientException: When a firelensConfiguration object is specified, at least one container has to be configured with the awsfirelens log driver.

Could you please make this optional?
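For context, a sketch of what I would expect to work (hypothetical; assumes the module's firelens_configuration input accepts null to omit the block entirely):

```hcl
module "container_definition" {
  source = "cloudposse/ecs-container-definition/aws"
  # version = "x.x.x"

  container_name  = "app"
  container_image = "nginx"

  # Hypothetical: null should drop firelensConfiguration from the
  # rendered JSON instead of emitting an empty object.
  firelens_configuration = null
}
```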

Distinguishing between outputs

Background

We have local.json_map, which is defined as

json_map = jsonencode(merge(local.container_definition_without_null, var.container_definition))

and we have these outputs

output "json" {
  description = "JSON encoded list of container definitions for use with other terraform resources such as aws_ecs_task_definition"
  value       = "[${local.json_map}]"
}

output "json_map" {
  description = "JSON encoded container definitions for use with other terraform resources such as aws_ecs_task_definition"
  value       = local.json_map
}

  • Since jsonencode returns a string, local.json_map is a JSON string of the container definition.
  • The output json is a string containing a JSON list whose only item is the container definition.
  • The output json_map is a string of the container definition.

Problem

I recently wanted to use the json_map output thinking it was an actual map, but it wasn't; to use the map version of it, I had to wrap the output in jsondecode().
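For illustration, the caller-side workaround looks like this (a sketch using a hypothetical module name):

```hcl
locals {
  # json_map is a JSON *string*, so it must be decoded before it can
  # be used as a real Terraform object.
  container_definition = jsondecode(module.container_definition.json_map)
}

# e.g. reading an attribute off the decoded object
output "container_name" {
  value = local.container_definition.name
}
```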

Proposal

Since we pin module versions and encourage others to do the same, I think we can make the following changes and document them:

  • Rename json -> json_map_encoded_list so it still returns a list with a single item (unsure where this is useful)
  • Rename json_map -> json_map_encoded so it still returns a json string
  • Create json_map_object and set it to jsondecode(local.json_map) so it returns a map

e.g.

output "json_map_encoded_list" {
  description = "JSON string encoded list of container definitions for use with other terraform resources such as aws_ecs_task_definition"
  value       = "[${local.json_map}]"
}

output "json_map_encoded" {
  description = "JSON string encoded container definitions for use with other terraform resources such as aws_ecs_task_definition"
  value       = local.json_map
}

output "json_map_object" {
  description = "JSON map encoded container definition"
  value       = jsondecode(local.json_map)
}

Output refers to sensitive values

Describe the Bug

Running terraform plan and terraform apply using terraform 0.15.0 results in this error message:

β”‚ Error: Output refers to sensitive values
β”‚
β”‚   on .terraform/modules/application.app_container_definition/outputs.tf line 6:
β”‚    6: output "json_map_encoded" {
β”‚
β”‚ Expressions used in outputs can only refer to sensitive values if the sensitive attribute is true.

Expected Behavior

No error when planning / applying.
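A minimal sketch of one possible fix (probably too blunt, since it marks the entire JSON as sensitive for every caller):

```hcl
output "json_map_encoded" {
  description = "JSON string encoded container definitions for use with other terraform resources such as aws_ecs_task_definition"
  value       = local.json_map

  # Naive fix: required by Terraform >= 0.15 whenever any input
  # (e.g. a secret in environment variables) is sensitive.
  sensitive = true
}
```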

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create a terraform script with a module with github.com/cloudposse/terraform-aws-ecs-container-definition?ref=0.56.0 as the source
  2. Run 'terraform plan'
  3. See error

Environment (please complete the following information):

  • OS: MacOS Big Sur
  • Terraform version: 0.15.0

Additional Context

I think this issue is already fixed on master, but there was no release. Can we get this released? We'd like to keep the version locked. EDIT: it's not actually fixed yet. I can give it a go, but just marking the outputs as sensitive would probably be too naive. 😄

Allow to configure volumes


Describe the Feature

This feature request is to add host volumes to container definition, for example, from https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_task_definition:

volume {
  name      = "service-storage"
  host_path = "/ecs/service-storage"
}

Expected Behavior

Local host path can be used to create volumes.

Use Case

Sometimes the volume parameter is required for a container definition, and it is supported by the Terraform provider. Here is a reference: https://registry.terraform.io/modules/lazzurs/ecs-service/aws/latest?tab=inputs.
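To be clear about where this lives: volumes belong on the task definition resource, alongside the module's rendered JSON (a sketch with hypothetical names):

```hcl
resource "aws_ecs_task_definition" "this" {
  family                = "example"
  container_definitions = module.container_definition.json_map_encoded_list

  volume {
    name      = "service-storage"
    host_path = "/ecs/service-storage"
  }
}
```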

[req] Remove defaults that aren't actually ECS/Fargate defaults

I've run into a few issues where I've left settings unset, expecting the default Fargate behavior, but been surprised by the configuration values the module selected.

CPU, memory, memory reservation, and now port mappings. I appreciate what this module is doing, but setting default values that don't match the ECS service's own defaults is surprising behavior.
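A sketch of the behavior I'd expect (hypothetical input names; assumes null/empty means "omit the key" so the ECS/Fargate defaults apply):

```hcl
module "container_definition" {
  source = "cloudposse/ecs-container-definition/aws"
  # version = "x.x.x"

  container_name  = "app"
  container_image = "nginx"

  # Expected: leaving these null/empty should drop the keys from the
  # rendered JSON rather than injecting module-chosen defaults.
  container_cpu                = null
  container_memory             = null
  container_memory_reservation = null
  port_mappings                = []
}
```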

Thank you!
