
This is the public roadmap for AWS container services (ECS, ECR, Fargate, and EKS).

Home Page: https://aws.amazon.com/about-aws/whats-new/containers/


containers-roadmap's Introduction

Containers Roadmap

This is the public roadmap for AWS container services (ECS, ECR, Fargate, and EKS).

Introduction

This is the public roadmap for AWS Container services. Knowing about our upcoming products and priorities helps our customers plan. This repository contains information about what we are working on and allows all AWS customers to give direct feedback.

See the roadmap »

Other AWS Public Roadmaps

Developer Preview Programs

We now have information for developer preview programs within this repository. Issues tagged Developer Preview on the public roadmap are active preview programs.

Current Programs

  • There's more to come! Stay tuned!

Past Programs

Security disclosures

If you think you’ve found a potential security issue, please do not post it in the Issues. Instead, please follow the instructions here or email AWS security directly.

FAQs

Q: Why did you build this?

A: We know that our customers are making decisions and plans based on what we are developing, and we want to provide our customers the insights they need to plan.

Q: Why are there no dates on your roadmap?

A: Because job zero is security and operational stability, we can't provide specific target dates for features. The roadmap is subject to change at any time, and roadmap issues in this repository do not guarantee a feature will be launched as proposed.

Q: What do the roadmap categories mean?

  • Just shipped - obvious, right?
  • Coming soon - coming up. Think a couple of months out, give or take.
  • We're working on it - in progress, but further out. We might still be working through the implementation details, or scoping stuff out.
  • Researching - We're thinking about it. This might mean we're still designing, or thinking through how this might work. This is a great phase to tell us how you'd like to see something implemented! We'd love to see your use case or design ideas here.

Q: Is everything on the roadmap?

A: The majority of our development work for Amazon ECS, Fargate, ECR, EKS, and other AWS-sponsored OSS projects is included on this roadmap. Of course, there will be technologies we are very excited about that we are going to launch without notice to surprise and delight our customers.

Q: How can I provide feedback or ask for more information?

A: Please open an issue!

Q: How can I request a feature be added to the roadmap?

A: Please open an issue! You can read about how to contribute here. Community submitted issues will be tagged "Proposed" and will be reviewed by the team.

Q: Will you accept a pull request?

A: We haven't worked out how pull requests should work for a public roadmap page, but we will take all PRs very seriously and review for inclusion. Read about contributing.

License

This library is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.

To learn more about the services, head here: http://aws.amazon.com/containers

containers-roadmap's People

Contributors

abby-fuller, akshayram-wolverine, andyhopp, colmmacc, dcopestake, hyandell, jamesiri, jayanthvn, joebowbeer, labisso, maishsk, mattshin, mcrute, mhausenblas, micahhausler, mikestef9, mndoci, nithu0115, patmyron, pettitwesley, realvz, rikatz, rpnguyen, schmutze, somanyhs, srini-ram, tabern, toricls, vsiddharth, wongma7


containers-roadmap's Issues

support for curl meta-info on taskDefinition

Can we support resolving metadata (via curl) in a taskDefinition?
I want to spawn a docker container and set --dns to $(curl http://... metadata endpoint here from aws). I couldn't get it to work; from a task definition, how do I point dnsServers: ["??", "??"] at the host this docker container is running on?

thanks
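One workaround (a sketch, not an ECS feature): fetch the host's private IP from the EC2 instance metadata endpoint (http://169.254.169.254/latest/meta-data/local-ipv4) at deploy time and substitute it into the container definition before registering the task definition. The helper name below is hypothetical.

```python
import json

def with_host_dns(container_def, host_ip):
    # Return a copy of the container definition whose dnsServers list
    # points at the given host IP. The IP itself would come from the EC2
    # instance metadata endpoint at deploy/registration time.
    patched = dict(container_def)
    patched["dnsServers"] = [host_ip]
    return patched

base = {"name": "app", "image": "myorg/app"}
print(json.dumps(with_host_dns(base, "10.0.1.23")))
```

The trade-off is that the IP is baked in per instance, so this only works if the task definition is rendered per host (e.g. via user data) rather than shared across the cluster.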

[ECS] Configure log driver for the amazon-ecs-agent container itself (awslogs, splunk, etc)

First of all, I really like the simplicity this project provides!

If I'm not mistaken, it is currently not possible to configure the logging of the agent container itself. The agent will pick up the ecs.config but I see no way to configure the agent's logging.

What I would like is to pass

--log-driver=awslogs \
--log-opt awslogs-region=${aws_region} \
--log-opt awslogs-group=${ecs_log_group_name} \

arguments to the agent container.

Did I miss something or is there a way to hook into the agent start?

Expose AWS credentials on a per-container basis

In reference to #346, I'd like to create a feature request to make credentials even more granular.

Currently, all containers within a single task definition have to share the same set of permissions, so we cannot really use multi-container task definitions in sensitive scenarios. That severely limits some applications, e.g. if we want to pair each "application" container with a co-pilot "service" container on the same host. I.e. instead of hosting an actual ECS service shared by multiple "application" containers, we would like to run one "service" container per each "application" container, with different permissions for each container in the pair. An example of that could be a custom logger, or an S3 management container, or a DB instance, paired with a client-facing application, where each application container and the corresponding "service" container should be completely isolated from others, while running on a dedicated bridge network, and with sensitive permissions granted only to the backend. In that case, currently we have to split them in separate tasks, put them on the host network (which provides less isolation) and write a custom scheduler that would place both tasks on the same EC2 instance.

Of course, there are workarounds, e.g. running a single pair of tasks per EC2 instance. Still, they may introduce an additional layer of complexity and/or unnecessary constraints on the instance size and the total number of instances (limited at the account level).

It seems that adding "container role ARN" as an option to "container overrides" would address this problem without changing the current API. This option could be used in isolation for each container when no task role is configured, so no conflicts with the latter would come up. Or perhaps it could create a superset of the 2 roles, much like currently task role policies apply in addition to instance role policies. Or maybe it could even override the task role policies, though that would require more work to implement while preserving compatibility with the current behavior.
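The proposed option might look like the following. This is a hypothetical API shape to illustrate the idea; a roleArn key inside containerOverrides is not part of the current ECS API.

```python
# Hypothetical per-container role override ("roleArn" inside a container
# override does NOT exist in the ECS API today -- this only sketches the
# proposal).
proposed_overrides = {
    "containerOverrides": [
        {"name": "app", "roleArn": "arn:aws:iam::123456789012:role/app-role"},
        {"name": "sidecar", "roleArn": "arn:aws:iam::123456789012:role/sidecar-role"},
    ]
}
```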

Support of "GroupAdd" configuration.

Summary

The Docker client supports the --group-add option, or equivalently the API parameter:

       "HostConfig": {
         ...
         "GroupAdd": ["newgroup"],
         ...
}

It would be nice to be able to set this parameter from the TaskDefinition. Support for this configuration was introduced in Docker API v1.20.

Mac Address Configuration Option

This relates to #502
I have a use case for the --mac-address option. I know it's not supported by ECS or its agent right now. Are there any workarounds that would let me keep using ECS while satisfying my application requirement?

Unlike some other docker options, I can't add this to /etc/sysconfig/docker.

Allow to register multiple containers in a task for application load balancing

Right now, since a service will only expose one port to a load balancer, we need to run another container in the task that load-balances incoming requests across the individual containers. In our particular case we need to run multiple containers of the same image that expose port n. Today we have to set up a service that includes one haproxy container and five webservice containers that all map host port 0 to port 80 in the container. Haproxy links the containers and exposes port 80 to the service to be load balanced. We could avoid this if ECS allowed more than one container in a service to be load balanced.

Feature request: cluster-wide ENV definitions

Use case: I'm running a mix of ECS, EC2, and AWS resources like RDS and ElastiCache. I have separate staging and production environments on different VPCs. Right now, to convey the connection URL for my ElastiCache redis instance, I have to set an environment variable REDIS_HOST on every single task definition that I create. It's different for staging vs. production, so this means I have to create different tasks for every container in every environment. This is manageable with automation, but also a bit overly complex.

Another use case would be setting RUBY_ENV or RAILS_ENV for every task.

Proposed solution: Add the ability to define env variables per ECS cluster, and have them applied to all containers run within that cluster.
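Until something like this exists server-side, a deploy pipeline can approximate it client-side. The sketch below (hypothetical helper, assuming the standard container-definition shape) folds cluster-wide defaults into every container definition before RegisterTaskDefinition, with per-container values winning on conflict.

```python
def apply_cluster_env(container_defs, cluster_env):
    """Client-side stand-in for the requested feature: merge cluster-wide
    environment defaults into each container definition, letting values
    already set on a container take precedence."""
    result = []
    for cdef in container_defs:
        own = {e["name"]: e["value"] for e in cdef.get("environment", [])}
        merged = {**cluster_env, **own}
        result.append({
            **cdef,
            "environment": [{"name": k, "value": v} for k, v in sorted(merged.items())],
        })
    return result

defs = apply_cluster_env(
    [{"name": "web", "image": "myorg/web"}],
    {"REDIS_HOST": "redis.staging.internal", "RAILS_ENV": "staging"},
)
```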

Extend StartTask API to allow multiple tasks

Currently the StartTask API allows schedulers to start one task on multiple instances, but there is no mechanism to customise the overrides for each instance.

Our company routinely runs massively parallel jobs where the same task is run over and over again with different commands or environment variables. The current limitation of StartTask means our scheduler needs to make an API call for every combination of parameters we want to run, which could be a lot.

We would like to have an API like start-task task1 with override1 on instance1, override2 on instance2 or equivalently start-task task-definition1 on instance1, task-definition2 on instance2.
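To make the gap concrete, here is a sketch of what a scheduler has to prepare today: one StartTask payload per (instance, overrides) combination. The helper name and payload shapes follow the standard API fields; nothing here is a new ECS capability.

```python
def start_task_requests(task_definition, placements):
    """Build one StartTask request payload per (container instance,
    overrides) pair -- the per-combination calls the batched API proposed
    above would collapse into one."""
    return [
        {
            "taskDefinition": task_definition,
            "containerInstances": [instance],
            "overrides": overrides,
        }
        for instance, overrides in placements
    ]

requests = start_task_requests("batch-job:3", [
    ("instance-1", {"containerOverrides": [{"name": "job", "command": ["run", "a"]}]}),
    ("instance-2", {"containerOverrides": [{"name": "job", "command": ["run", "b"]}]}),
])
```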

Feature request: override mount points for each task

Hi Team,

Would it be possible to introduce an option to override the volume mount path for each task? Right now, we have to register a new task definition just to change the mount path on a per-container basis. This introduces an additional headache in keeping track of the various definitions, and is also subject to the task-definition registration rate limit.

Our use case is batch processing of various datasets, where the container parameters don't change but the location of the dataset on the local host must be unique for each container/task. E.g. we would have all of our datasets under /datasets/<job_id> on the host, where job_id is unique for each task. One workaround is to pass an environment variable JOB_ID and have each container read/write only to its prefixed location, e.g. if /datasets is mounted to /data in all containers, then each container would gracefully access only /data/<JOB_ID>. However, this provides only minimal isolation of tasks in untrusted environments, i.e. if one task 'misbehaves', it can access everything under /data, including datasets used by other tasks.

It seems like docker run's -v option maps directly to the containerPath option of a task definition, so is there any reason it cannot currently be overridden for each task, just like environment variables? We wouldn't even need to change the volume itself, which can remain an immutable part of the task definition.

Thanks
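The JOB_ID workaround described above can be expressed as RunTask overrides. This is only a sketch of the described workaround, and as noted it relies on well-behaved containers; nothing enforces the prefix.

```python
def job_overrides(container_name, job_id):
    """Per-task overrides for the JOB_ID workaround: the mount stays at
    /data in the task definition, and each task is told which prefix
    under /data it should confine itself to. No isolation is enforced."""
    return {
        "containerOverrides": [{
            "name": container_name,
            "environment": [{"name": "JOB_ID", "value": job_id}],
        }]
    }

overrides = job_overrides("worker", "job-0042")
```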

[ECS] Support use of swap memory

I have an ECS workload that requires a very large amount of memory. The performance of the workload is secondary, so using swap memory would be the ideal solution.

This issue is related to #124. In that issue, @samuelkarp suggested opening a new issue if swap support is still needed. Since several others added +1 to #124 after it was closed, I'm opening this issue now.

Port mappings: add support of port ranges

Hi there!

Some applications require a wide range of open ports, and the only option we have at the moment is to list them explicitly (i.e. one by one) when defining tasks.

I think that it would be useful to allow use of port ranges, like

{
  "hostPorts": "3000-3025",
  "containerPorts": "3000-3025",
  "protocol": "tcp"
}

Also, I've noticed that ecs-agent throws an error if more than 100 ports are specified in the task definition, regardless of the port numbers:

 Run tasks failed
 Reasons : RESOURCE:PORTS

Please advise.
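Until ranges are supported natively, a deploy script can expand the range into the explicit entries the API accepts today. A small sketch (hypothetical helper; mind the ~100-port agent limit mentioned above):

```python
def expand_port_range(spec, protocol="tcp"):
    """Expand a "3000-3025"-style range string into explicit ECS
    portMappings entries, mapping each host port to the same container
    port."""
    lo, hi = (int(p) for p in spec.split("-"))
    return [
        {"hostPort": p, "containerPort": p, "protocol": protocol}
        for p in range(lo, hi + 1)
    ]

mappings = expand_port_range("3000-3025")
```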

Dynamically update ECS_RESERVED_MEMORY?

Hi Team,

Currently, the only way to "block out" memory external to ECS-scheduled containers (e.g. consumed by a tmpfs volume, or by a container started directly via "docker run") seems to be to reserve a fixed amount in advance via ECS_RESERVED_MEMORY. If this amount needs to change dynamically, we have to modify it in ecs.config, deregister the instance, and then register it again. That seems like a pretty disruptive change, especially since some tasks may still be running and would not normally be expected to fail or be lost track of.

Or is there currently a way (or at least a workaround) to dynamically update the amount of memory not managed by ECS, ideally through an API call, that doesn't disrupt existing tasks on that instance?

Thank you

Get the task id as a log-opt tag in non Cloudwatch logs logging driver

Summary

It would be useful to be able to use the task id in a logging driver's tagged output.

Description

I'm starting to look at shipping logs somewhere other than CloudWatch Logs (it was useful for getting a PoC up and running, but CloudWatch Logs suffers massively from being difficult to view across streams, something that is 10x worse with containers), but I'd still like to be able to see the task id that is logging if possible.

I was planning to use the fluentd log driver and tag the logs with the service name, but it strikes me that it would be near perfect to use the ECS-generated container name.

Looking at the output of docker ps and https://github.com/aws/amazon-ecs-agent/blob/master/agent/engine/docker_task_engine.go#L639 I can see the autogenerated name for the container has the task definition family + version which is useful but then oddly has a random hex string at the end.

I can see you need a source of randomness here in case a service schedules multiple tasks onto the same container instance, but it would be more useful if this was just the task id, which is already unique.

The task struct already contains the Arn which has the task id so I'd suggest parsing it out of that.
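Extracting the task id from the ARN is a one-liner. Sketch (the example ARN is illustrative):

```python
def task_id_from_arn(task_arn):
    # An ECS task ARN looks like
    # arn:aws:ecs:us-east-1:123456789012:task/<task-id>; newer-format ARNs
    # insert the cluster name (...:task/<cluster>/<task-id>). Taking the
    # last path segment yields the task id in either format.
    return task_arn.rsplit("/", 1)[-1]
```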

Customization of image name that gets run

Right now, it looks like the image being run is a constant AgentImageName = "amazon/amazon-ecs-agent:latest". It'd be useful if this could be changed with an environment variable or something so that it's simpler to test and use custom forks of the agent.

env-file support

I open this issue to pick up a point that was made in #127 to support the --env-file parameter. As pointed out in #127 this would be useful e.g. to add some environment variables that contain sensitive information. This way sensitive environment variables could be stored in a private S3 bucket and be pulled in from there either directly or via a mounted volume.

If the --env-file parameter is supported I guess the documentation on Task Definition Parameters could also be improved. Under environment it is mentioned that it is not recommended to put sensitive information in there, however it does not point to a solution on how to do this otherwise.

Extract from issue #127:

[...] Ideally it would allow an s3 endpoint:

"containerDefinitions":[
  {
    "env_file":[
      { "bucket":"my-bucket", "key":"myenvlist" }
    ]
  }
]

Elastic Beanstalk lets you do something similar in the Dockerrun.aws.json for docker private repository configuration:

"Authentication":{
  "Bucket":"my-bucket",
  "Key":"mydockercfg"
},
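Until native support lands, a deploy script can emulate --env-file: fetch the file from a private S3 bucket itself and fold the parsed result into the task definition's environment list. A sketch of the parsing half (hypothetical helper, standard docker env-file syntax assumed):

```python
def parse_env_file(text):
    """Turn docker --env-file style KEY=VALUE lines into the
    'environment' list of an ECS container definition. Blank lines and
    '#' comments are skipped."""
    env = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env.append({"name": key, "value": value})
    return env

environment = parse_env_file("# secrets\nDB_PASSWORD=s3cret\nAPI_KEY=abc123\n")
```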

How to run "docker exec... " command in ECS

Hi
I have three containers running in ECS, but the website only comes up after we run a "docker exec ..." command. I can do this by logging into the server and running the command, but that shouldn't be necessary. So my question is: how can I run "docker exec ..." without logging into the server?
A solution using the Amazon ECS console, the ecs-cli, or anything else you know of would be fine.
Since the ecs-cli lets us create clusters, tasks, etc. from our local machine, how can we also run docker exec against the containers from the local machine?

[Rebalancing] Smarter allocation of ECS resources

@euank

It doesn't seem like ECS rebalances tasks to allocate resources more effectively. For example, if I have task A and task B running on different cluster hosts and try to deploy task C, I'll get a resource error, and task C will fail to deploy. However, if ECS rebalanced tasks A and B to run on the same box, there would be enough resources to deploy task C.

This comes up pretty often for us: with the current placement of tasks, we run out of resources and cannot deploy or achieve 100% utilization of our cluster hosts, because things are balanced inefficiently. At best we're probably getting 75% utilization, and 25% is going to waste.

Configurable Domainname

Currently it is only possible to define the full hostname in the task definition. From a monitoring perspective this is problematic, because in e.g. New Relic (and probably other monitoring tools that use the hostname for identification) all containers are then recognized as a single server.

It would be very useful, if it's possible to define a domainname (like e.g. nginx.myservice.myorganization.org) which will be appended to the container id and then used as the container hostname: 6efe0554d193.nginx.myservice.myorganization.org.
This way it would be possible to identify the containers by hostname and at the same time they keep their unique hostnames.

Or is there already a way to achieve that?
What do you think?

Best
Fabian

Cloudwatch metric for when agent kills task

Is there a metric for when the agent kills a service due to hard memory limits being exceeded? I am trying to configure cloudwatch alarms for this scenario or the scenario where services can't be placed due to constraints and am having a hard time finding a set of metrics or aggregate metrics that describe this.

If these don't already exist, is it possible to get them added?

Overrides all of task definition parameters

The StartTask and RunTask APIs can override the task definition,
but they support only a few parameters (command, environment, and taskRoleArn).
I would like to override other parameters as well (e.g. hostname, logConfiguration, ...).
It would be good to be able to override all task definition parameters.
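For reference, this is roughly the full extent of what the overrides structure accepts at the time of this issue (ARNs and names are illustrative); anything beyond these fields requires registering a new task definition revision.

```python
# Override fields RunTask/StartTask accept today: per-container command
# and environment, plus the task-level taskRoleArn. hostname,
# logConfiguration, etc. cannot be overridden.
overrides = {
    "taskRoleArn": "arn:aws:iam::123456789012:role/batch-role",
    "containerOverrides": [{
        "name": "app",
        "command": ["./run.sh", "--mode", "batch"],
        "environment": [{"name": "MODE", "value": "batch"}],
    }],
}
```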

Can't see task level cpu/memory utilization in cloudwatch

Hi!

In my team we're using ECS to run services. The ECS agent collects a bunch of metrics that we can then view in CloudWatch. Especially useful for us is the memory/CPU utilization metric, which we use to tune how much memory and CPU we allocate to services. This is really nice in that it allows us to catch services running at dangerously high memory before they reach 100% and get killed by the agent.

In our cluster we're also running a bunch of scheduled tasks, like ETLs, daily cleanups, sending emails at specific times etc. These are run by simply starting ECS tasks.

In CloudWatch, we can only see ECS cluster- and service-level metrics, not task-level ones. This means we can't see how much memory/CPU these tasks use (as they are not running under an ECS service).

It would be really nice to get these metrics for these kind of short-lived tasks as well. Are there any plans to support this in cloudwatch?

thanks for any replies!

Support volume mount propagation options

The ECS Agent should have first class support for the Docker volume mount propagation options, "[r]shared", "[r]slave", and "[r]private". These options are documented sparsely here, and more comprehensively here.

This should be a straightforward extension to the mountPoints property in the task definition. For example:

"mountPoints": [
                {
                  "sourceVolume": "string",
                  "containerPath": "string",
                  "readOnly": true|false
                  "propagation": "[[r]shared]|[[r]slave]|[[r]private]"
                }
              ]

Please note that it is currently possible to specify these options by appending them to the containerPath string. For example, the following task definition creates a '/data' mount in the container with propagation set to "shared".

{
  "family": "myapp",
  "volumes": [ {
      "name": "data",
      "host": {
        "sourcePath": "/data"
      }
    } ],
  "containerDefinitions": [ {
      "name": "myapp",
      "image": "myco/myapp",
      "mountPoints": [ {
          "sourceVolume": "data",
          "containerPath": "/data:shared"
        } ]
    } ]
}

This works because the ECS Agent passes the containerPath string through to the Docker remote API unchanged. However, it's an undocumented hack that relies on an internal implementation detail of the agent. We are currently relying on this hack to get our application to work and would appreciate that it be allowed to continue to work at least until this issue is resolved.

[ECS] How to programmatically get event "unable to place a task because the resources could not be found"

I'm trying to get an event for when an ECS task fails to be placed, specifically when a task cannot be placed due to insufficient resources. I want this event to trigger a Lambda, which I can use to respond with scale-out actions.

I have tried listening to the ECS Event Stream, but no event at all was triggered for task placement failure, the Lambda trigger never occurred. I also didn't see anything in CloudWatch logs for ECS at all.

Is there any way to receive notification of this event? We are able to alert on it in DataDog, how do they get it? Do we need to resort to polling?

Pre-caching images

Is it possible to register a new Task Definition (or a new revision of an existing Task) and only pull the image?

It's the same as docker-compose pull app. We need it to save the time spent pulling large (GB-sized) images across AWS regions.

ecs mounts volumes as bind

my userdata

echo "XXX.efs.eu-west-1.amazonaws.com:/ /var/lib/docker/volumes nfs4 vers=4.1,defaults 0 0" >> /etc/fstab
mount -a && docker volume create --label OVPN_DATA OVPN_DATA

my task definition

  volume {
    name      = "OVPN_DATA"
  }
    "mountPoints": [
      {
        "containerPath": "/etc/openvpn",
        "sourceVolume": "OVPN_DATA"
      }

if I run from the EC2 host
docker run -v OVPN_DATA:/etc/openvpn -d -p 443:1194/tcp --cap-add=NET_ADMIN kylemanna/openvpn
docker inspect shows

    "Mounts": [
          {
              "Type": "volume",
              "Name": "OVPN_DATA",
              "Source": "/var/lib/docker/volumes/OVPN_DATA/_data",
              "Destination": "/etc/openvpn",
              "Driver": "local",
              "Mode": "z",
              "RW": true,
              "Propagation": ""
          }
      ],

but when running from ECS

        "Mounts": [
            {
                "Type": "bind",
                "Source": "/var/lib/docker/volumes/OVPN_DATA/_data",
                "Destination": "/etc/openvpn",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],

it mounts the volume as a bind mount.
The documentation at https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html doesn't really help here.

Allow setting a default value for `awslog*` configuration settings, per-service or per-instance

Scenario: I'm trying to ensure that my ECS tasks are fully environment-agnostic, so that the same Task Definition can be deployed to a test/staging/production cluster and will automatically pick up the right configuration. I can achieve this by not baking configuration into the task definition (via environment variables or whatever) but instead get either the instance or the container to pull in its configuration at launch.

I am using the awslogs driver, and so I have to configure awslogs-group and awslogs-stream-prefix in the Task Definition. I would still like to be able to separate the log streams per-environment, however, and I don't think there is a way to do this without changing the Task Definition.

I would like to be able to set a configuration variable (eg DEFAULT_AWSLOGS_STREAM_PREFIX=production) in /etc/ecs/ecs.config, and have that be used as the value of awslogs-stream-prefix if nothing is specified in an incoming TD. I think overriding awslogs-stream-prefix in this way is probably more useful than awslogs-group, but it would be good to have both, I think.

Ideally it would also be possible to specify a default value in the Service configuration, but I appreciate that that's an AWS API change.
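The proposed agent-side behaviour is just a keyed merge where task-definition values win. A sketch (hypothetical helper; the DEFAULT_AWSLOGS_* variables in ecs.config do not exist today):

```python
def apply_log_defaults(task_options, instance_defaults):
    """Fill in awslogs-* options from instance-level defaults (e.g. read
    from /etc/ecs/ecs.config) only where the incoming task definition
    leaves them unset; task-definition values always take precedence."""
    return {**instance_defaults, **task_options}

merged = apply_log_defaults(
    {"awslogs-group": "/myapp/web"},
    {"awslogs-stream-prefix": "production", "awslogs-region": "eu-west-1"},
)
```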

Feature request: per-container Target Group registration

Summary

ECS services comprised of multiple complementary microservices would benefit from the ability to register individual containers with Target Groups.

ECS currently only provides for service-level registration with a single TargetGroup. This registration and subsequent health checks are used as a determination of whether an ECS service is healthy.

For ECS services comprised of multiple containers that should be accessed via a single endpoint (for example https://microservice/ where containers are accessed under https://microservice/foo, https://microservice/bar, https://microservice/baz), there is no capability provided by the ECS agent for registration/deregistration with Target Groups used in ELBv2 Listener Rules. A workaround is to run a reverse-proxy routing container within each task, which is wasteful and unnecessary from an architecture standpoint.

I request that an additional (optional) attribute, such as targetGroup, be added to the containerDefinition in http://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RegisterTaskDefinition.html, allowing a list of target groups to register the container with. At task launch time, the ECS agent should iterate over a task's container definitions for target groups and register the appropriate instance ID/port tuple. At task stop time, the ECS agent should handle deregistration as is currently done.

Containers need additional storage

I'm looking into how to modify the dm.basesize setting. It's not entirely clear where to put these kinds of configuration settings. Could someone please follow up with a stable place to modify storage defaults? Thanks!

I'm seeing related settings in /etc/sysconfig/docker-storage-setup, /etc/sysconfig/docker-storage, and other files.

More info about dm.basesize can be found here: https://docs.docker.com/v1.10/engine/reference/commandline/daemon/

[ec2-user@debug ~]$ cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true --storage-opt dm.fs=ext4"
[ec2-user@debug ~]$ cat /etc/sysconfig/docker-storage-setup
# Edit this file to override any configuration options specified in
# /usr/lib/docker-storage-setup/docker-storage-setup.
#
# For more details refer to "man docker-storage-setup"
DEVS=/dev/xvdcz
VG=docker
DATA_SIZE=99%FREE
AUTO_EXTEND_POOL=yes
LV_ERROR_WHEN_FULL=yes
EXTRA_DOCKER_STORAGE_OPTIONS="--storage-opt dm.fs=ext4"
[ec2-user@debug ~]$

For now, I modified OPTIONS in /etc/sysconfig/docker with --storage-opt dm.basesize=20G

Also, is there a way to allocate storage in a task definition? Do you have any suggestions for when one container needs, say, 100G of storage while the others are fine with 10G?

Add support for plugins

Now that Go 1.8 is released, and has support for plugins, it could be handy to expose some hooks in the ECS agent as a method for people to extend it, without forking.

As an example, in Empire, we'd like to eventually switch to the RunTask api for running "attached" containers, however, #462 is a blocker for that. If the ECS agent had support for a plugin, where we could manipulate the task before it was started, then we could add the OpenStdin and Tty options to CreateContainer before starting (e.g. if it had a docker label set). Without this, our only option is to fork the agent, which is something we'd like to avoid.

running scripts before task start

I have volumes I'd like to delete using docker volume rm VOLUME_NAME, but I don't see a way to do it when the agent receives a new task. Adding the ability to execute a script before a task starts, or to specify a command to run on the instance beforehand, would be awesome.

Seccomp Unconfined Parameter

The seccomp:unconfined parameter is not getting passed to the container.

The parameters allow for label/apparmor; however, adding the parameter as label:seccomp:unconfined does not work either.

Allow environment variables in awslogs-group

At my company, we use multicontainer ElasticBeanstalk tiers for all of our environments. Currently we stream logs to SumoLogic via their agent; but, for our needs, CloudWatch logs would be sufficient and much cheaper. Each of our environments defines an environment variable "ENVIRONMENT", with values like "develop", "qa", etc. I found that recent ECS agents support streaming logs to CloudWatch natively via the awslogs driver. I attempted to do something like this:

{
  "name": "tasks",
  "image": "876270261134.dkr.ecr.us-west-2.amazonaws.com/armada/tasks:{{ BUILD_TAG }}",
  "readonlyRootFilesystem": true,
  "essential": true,
  "memory": 3072,
  "workingDirectory": "/armada/app",
  "command": [
    "./start.sh"
  ],
  "portMappings": [
    {
      "hostPort": 80,
      "containerPort": 80
    }
  ],
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/armada/$ENVIRONMENT/tasks.log",
      "awslogs-region": "us-west-2"
    }
  }

Some background: when our CI system produces a build, it instantiates the Dockerrun.aws.json from a template defined in our source repository. BUILD_TAG is ${branch}-${build_number}-${commit_hash_prefix}. The CI system produces ZIP archives for the web and worker tiers, uploads them to S3, and registers them as application versions which can be reused against any of the other environments via a Slack bot (@bula deploy develop develop-38-abcd1234).

I don't want to hardcode "awslogs-group": "/armada/tasks.log", because then all environments' logs would be grouped together. When I attempted this, however, I got an error: CannotStartContainerError: API error (500): Failed to initialize logging driver: InvalidParameterException: 1 validation error detected: Value '/armada/$ENVIRONMENT/tasks.log' at 'logGroupName' failed to satisfy constraint: Member must satisfy regular ex.

The only workaround I can think of would be to alter our CI system to register only ZIP archives (with Dockerrun.aws.json still in Jinja2 template form) rather than application versions at build time; and, in tandem, alter the deployment bot to download the archive, extract it, instantiate the Dockerrun.aws.json template with the target environment, register a new application version, and update the target environment with it. That seems needlessly convoluted (and, frankly, ugly) to me. If there's some other, cleaner way to accomplish this, I'm open to suggestions. But IMHO, it'd be quite nice to be able to use environment variables in task definitions, to keep them easily reusable.
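
The deploy-time workaround above can be sketched as a small helper. This is a hypothetical script (using Python's string.Template instead of Jinja2 for brevity; the placeholder names are assumptions) that substitutes the target environment into the log group name before registering the application version:

```python
from string import Template

def render_dockerrun(template_text: str, environment: str, build_tag: str) -> str:
    """Substitute deploy-time values into a Dockerrun.aws.json template.

    $ENVIRONMENT and $BUILD_TAG are placeholder names assumed by this
    sketch; the real template described above uses Jinja2 syntax.
    """
    return Template(template_text).safe_substitute(
        ENVIRONMENT=environment,
        BUILD_TAG=build_tag,
    )

# Example: the awslogs-group placeholder is filled in per environment.
template = '"awslogs-group": "/armada/$ENVIRONMENT/tasks.log"'
rendered = render_dockerrun(template, "develop", "develop-38-abcd1234")
print(rendered)
```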

extraHosts param should allow choosing local-ipv4

The extraHosts ipAddress parameter should allow some sort of variable, such as "${local-ipv4}", to select the ECS container instance's own IP rather than a hard-coded IP.

This would be to support applications not yet migrated into service discovery.

For example, it would be easier to migrate hundreds of apps that use statsd by running a statsd on each ECS server (scaling horizontally) than to get hundreds of repos updated.

This model:
ECS server -> local statsd -> graph system
Over this one:
ECS server -> UDP packets to some pet server -> graph system

Ideally applications use service discovery, but this is not done quickly in some places.
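
The proposal might look like this in a container definition (the ${local-ipv4} placeholder is the suggested syntax, not an existing one):

```json
{
  "extraHosts": [
    {
      "hostname": "statsd",
      "ipAddress": "${local-ipv4}"
    }
  ]
}
```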

Please support new auth format

Hello guys,

I pull Docker images on ECS from my private registry (set up with registry 2.6) and keep encountering CannotPullContainerError. I finally found that the problem is still the auth format.

According to this page, "The dockercfg format uses the authentication information stored in the configuration file that is created when you run the docker login command",
However, my installed Docker generates its config in the format below:

{
  "auths": {
    "registry.example.com": {
      "auth": "ZXhhbXBsZTpleGFtcGxlCg=="
    }
  }
}

rather than something like

{
  "https://index.docker.io/v1/": {
    "auth": "zq212MzEXAMPLE7o6T25Dk0i",
    "email": "[email protected]"
  }
}

in the document.

Currently my workaround is a script that transforms the ~/.docker/config.json format after executing docker login, and it works. The Docker versions I've tried are 1.12.5 and some newer versions installed with the instructions here, running the latest (1.14.0) ECS agent pulled from amazon/amazon-ecs-agent.
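
The transformation the workaround script performs is essentially this (a sketch, assuming every registry entry carries at least an "auth" field):

```python
import json

def to_dockercfg(config_json: str) -> str:
    """Convert the newer ~/.docker/config.json layout ({"auths": {...}})
    into the flat dockercfg layout that the ECS agent expects."""
    config = json.loads(config_json)
    # If there is no "auths" wrapper, assume it is already the old format.
    auths = config.get("auths", config)
    return json.dumps(auths, indent=2)

new_format = '{"auths": {"registry.example.com": {"auth": "ZXhhbXBsZTpleGFtcGxlCg=="}}}'
print(to_dockercfg(new_format))
```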

Thanks

Add support for `--stop-signal` `docker run` flag

Hi,

When running a CentOS 7-based container with systemd, there is a graceful systemd service shutdown issue.

If I run the container outside of ECS with `docker run --stop-signal=$(kill -l RTMIN+3) ...`, it all works fine: the systemd service gets the signal and is correctly shut down inside the container.

It would be really great to have this feature implemented in ECS!

P.S. Or maybe there's a widely known workaround for this graceful shutdown issue. Please share!
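
One possible workaround, assuming the image can be rebuilt, is to bake the signal into the image with Dockerfile's STOPSIGNAL instruction, which docker stop honors without any --stop-signal flag (older Docker versions may require the numeric form, 37, instead of the name):

```dockerfile
FROM centos:7
# SIGRTMIN+3 tells systemd inside the container to begin an orderly shutdown.
STOPSIGNAL SIGRTMIN+3
CMD ["/usr/sbin/init"]
```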

Clone ECS Service into another Cluster

Hi,

In my production service, we use two clusters:

  • a permanent cluster
  • an autoscale cluster

There are some services we need in both clusters, which leads to my question: can ECS support cloning a service into multiple clusters in the future? That would help us a lot, and it would be even better to have an API to clone an ECS service.

Thoughts?
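
Until there is a first-class API, cloning can be approximated client-side: read the service with DescribeServices and feed the relevant fields back into CreateService against the target cluster. A sketch of the field mapping (hypothetical helper; field names follow the ECS API response shape):

```python
def clone_service_params(service: dict, target_cluster: str) -> dict:
    """Map a DescribeServices result entry onto CreateService parameters.

    Only commonly needed fields are copied here; a real clone would also
    carry over placement settings, network configuration, etc.
    """
    return {
        "cluster": target_cluster,
        "serviceName": service["serviceName"],
        "taskDefinition": service["taskDefinition"],
        "desiredCount": service["desiredCount"],
        "loadBalancers": service.get("loadBalancers", []),
        "role": service.get("roleArn", ""),
    }

source = {
    "serviceName": "web",
    "taskDefinition": "web:3",
    "desiredCount": 4,
}
params = clone_service_params(source, "autoscale-cluster")
```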

[ECS]Feature Request: Support sd_notify for systemd watchdog

Currently the ECS agent can be managed by a systemd unit file, but we cannot use the full feature set of systemd. Systemd has a feature called WatchdogSec:

Configures the watchdog timeout for a service. The watchdog is activated when the start-up is completed. The service must call sd_notify(3) regularly with "WATCHDOG=1" (i.e. the "keep-alive ping"). If the time between two such calls is larger than the configured time, then the service is placed in a failed state and it will be terminated with SIGABRT. By setting Restart= to on-failure, on-watchdog, on-abnormal or always, the service will be automatically restarted. The time configured here will be passed to the executed service process in the WATCHDOG_USEC= environment variable. This allows daemons to automatically enable the keep-alive pinging logic if watchdog support is enabled for the service. If this option is used, NotifyAccess= (see below) should be set to open access to the notification socket provided by systemd. If NotifyAccess= is not set, it will be implicitly set to main. Defaults to 0, which disables this feature. The service can check whether the service manager expects watchdog keep-alive notifications. See sd_watchdog_enabled(3) for details. sd_event_set_watchdog(3) may be used to enable automatic watchdog notification support.

That would allow systemd to act on hung ECS agents.
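
The keep-alive ping itself is tiny: the sd_notify protocol is just a datagram sent to the socket systemd passes in NOTIFY_SOCKET. A minimal sketch of what the agent would need to send periodically (Python here for illustration; the real agent is written in Go):

```python
import os
import socket

def sd_notify(state: str = "WATCHDOG=1") -> bool:
    """Send an sd_notify message to the systemd notification socket.

    Returns False when not running under systemd (NOTIFY_SOCKET unset).
    A leading '@' marks an abstract-namespace socket, translated per the
    sd_notify(3) convention.
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(state.encode(), addr)
    return True
```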

Register user-defined resources per container instance

The ECS API allows container instances to track user-defined resources beyond CPU and memory. From http://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Resource.html:

name: The name of the resource, such as CPU, MEMORY, PORTS, or a user-defined resource.

The ecs-agent's calls to RegisterContainerInstance should provide some mechanism to register user-defined resources. The API call the agent makes allows this, but as far as I know there's no way for a user to control this aspect of the registration process.

Coupled with TaskDefinition support for reserving user-defined resource types, this would allow users to have much more control over task scheduling in a custom system via StartTask API calls.

My specific interest is in allowing my tasks to reserve disk space on their hosts.
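
For disk space, such a registration might carry a Resource entry like the following (the name "DISK" and the value are hypothetical; the shape follows API_Resource):

```json
{
  "name": "DISK",
  "type": "INTEGER",
  "integerValue": 100
}
```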

[ECS,Fargate] Multiple target groups for a service

Summary

I want to be able to route traffic from two different Path rules in my ALB to the same ECS service and have ECS automatically register targets in both groups.

Description

Pretty much what the summary says. I can manually add targets to the second group, but if the service is restarted, the second group loses its targets.

Expected Behavior

The service registers its tasks in both target groups, so both ALB path rules route to it.

Observed Behavior

Only one target group can be associated with the service; targets added to the second group by hand are removed when the service restarts.

Environment Details

Supporting Log Snippets
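
The desired configuration might be expressed by letting the service's loadBalancers list carry more than one target group (sketch with example ARNs; today ECS accepts only one entry):

```json
{
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/api/abc123",
      "containerName": "app",
      "containerPort": 80
    },
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/admin/def456",
      "containerName": "app",
      "containerPort": 80
    }
  ]
}
```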

Consider overriding role policies in addition to roles themselves

Currently, the only way to override role policies within a task is to define a new role and provide it via overrides.taskRoleArn in runTask/startTask calls.

However, some applications may require a more fine-grained approach, where one needs to override (scope down) individual policies, much like what's already possible with an AssumeRole call for other APIs. There's also a default limit of 250 roles per account, which is restrictive if one needs to run many instances of a particular task definition with a large number of uniform variations in role policies (e.g. each task may have access only to a certain prefix in a particular S3 bucket). Managing these roles also gets harder because we'd need to create and remove them dynamically, while there are no "families" of roles that would signify slight variations on one over-arching role.

It seems that implementing a "role policy overrides" parameter could be done while preserving compatibility with existing APIs.
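
As a point of comparison, AssumeRole already supports this kind of scope-down via an inline session policy. A per-task policy override might take a document like the following (bucket name and prefix are illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-bucket/tenant-a/*"
    }
  ]
}
```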

Feature Request: Add support for disabling volumes

In poorly configured Docker environments, volumes can be used to escalate to root on the Docker host. It would be nice if ECS added a required attribute when a task definition has volumes.

For example, if I have a task definition that specifies mount points on the host, then the task definition would get a required attribute like:

{
  "requiresAttributes": [
    {
      "value": null,
      "name": "com.amazonaws.ecs.capability.volumes",
      "targetId": null,
      "targetType": null
    }
  ]
}

By default, the ECS agent would register this capability, but it could be disabled with a configuration option, so that ECS won't attempt to schedule any tasks that include volumes on that container instance.

[ECS] Need Host environment variable resolution to pass some information to a container

Before moving to ECS, I need to make sure that I can run Consul in that environment. To run Consul, I need to pass some host environment variables to the container, which I would normally do with -e HOST=$HOST etc. However, it seems that if I define a variable with $HOST in a task definition, it is treated as a static string.

Related: I also typically set docker command-line options on the run command for several features I need. I may be able to work around some of this by overriding options in the Docker daemon itself, but for passing host information I am not aware of any workaround.
