
Containers Service Broker for Cloud Foundry

This is a generic Containers broker for the Cloud Foundry v2 services API.

This service broker allows users to provision services that run inside a compatible container backend and to bind applications to those services. The management tasks the broker can perform are:

  • Provision a service container with random credentials and arbitrary service parameters
  • Bind a service container to an application:
    • Expose the credentials to access the provisioned service (see CREDENTIALS.md for details)
    • Provide a syslog drain service for your application logs (see SYSLOG_DRAIN.md for details)
  • Unbind a service container from an application
  • Unprovision a service container
  • Expose a service container management dashboard

More details can be found at this Pivotal P.O.V Blog post.

Disclaimer

This is not presently a production-ready service broker. It is a work in progress, suitable for experimentation, and may not become supported in the future.

Usage

Prerequisites

This service broker does not include any container backend. Instead, it is meant to be deployed alongside any compatible container backend, which it manages:

  • Docker: Instructions to configure the service broker with a Docker backend can be found at DOCKER.md.

Configuration

Configure the application settings according to the instructions found at SETTINGS.md.

Run

Standalone

Start the service broker:

bundle
bundle exec rackup

The service broker listens on port 9292 by default. View the catalog API at http://localhost:9292/v2/catalog. The default basic auth username is containers and the password is secret.
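For example, to verify the broker is up using the default credentials:

curl -u containers:secret http://localhost:9292/v2/catalog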

As a Docker container

Build the image

This step is optional; you can use the prebuilt Docker image available at the Docker Hub Registry.

If you want to build the frodenas/cf-containers-broker image locally (see the Dockerfile), execute the following command in a local clone of the cf-containers-broker repository:

docker build -t frodenas/cf-containers-broker .

Run the image

To run the image and bind it to host port 80:

docker run -d --name cf-containers-broker \
       --publish 80:80 \
       --volume /var/run:/var/run \
       frodenas/cf-containers-broker

Some aspects of the configuration can be overridden with environment variables. See config/settings.yml for the documented variables:

docker run -d --name cf-containers-broker \
       --publish 80:80 \
       --volume /var/run:/var/run \
       -e BROKER_USERNAME=broker \
       -e BROKER_PASSWORD=password \
       -e EXTERNAL_HOST=localhost \
       frodenas/cf-containers-broker

If you want to override the entire configuration, create a directory containing the configuration files and mount it at the container's /config directory:

mkdir -p /tmp/cf-containers-broker/config
cp config/settings.yml /tmp/cf-containers-broker/config
cp config/unicorn.conf.rb /tmp/cf-containers-broker/config
vi /tmp/cf-containers-broker/config/settings.yml
docker run -d --name cf-containers-broker \
       --publish 80:80 \
       --volume /var/run:/var/run \
       --volume /tmp/cf-containers-broker/config:/config \
       frodenas/cf-containers-broker

If you want to expose the application logs, create a host directory and mount it over the container's /app/log directory:

mkdir -p /tmp/cf-containers-broker/logs
docker run -d --name cf-containers-broker \
       --publish 80:80 \
       --volume /var/run:/var/run \
       --volume /tmp/cf-containers-broker/logs:/app/log \
       frodenas/cf-containers-broker

Using CF/BOSH

This service broker can also be deployed using CF/BOSH.

Enable the service broker at your Cloud Foundry environment

Add the service broker to Cloud Foundry as described by the service broker documentation.

A quick way to register the service broker and enable all service offerings is to run:

cf create-service-broker docker-broker containers secret http://<BROKER IP ADDRESS>
while read p __; do
    cf enable-service-access "$p";
done < <(cf service-access | awk '/orgs/{y=1;next}y && NF' | sort | uniq)

Bindings

The way that each service is configured determines how binding credentials are generated.

A service that exposes only a single port and has no other credentials configuration will include a minimal host and port in its credentials:

{ "host": "10.11.12.13", "port": 61234, "ports": ["8080/tcp": 61234] }

In the example above, the container exposed an internal port 8080 and it was bound to port 61234 on the host machine 10.11.12.13.

If a service exposes more than a single port, you must specify the port you want to bind using the credentials.uri.port property; otherwise the binding will not contain a port.

{ "host": "10.11.12.13", "port": 61234, "ports": ["8080/tcp": 61234, "8081/tcp": 61235] }

In the example above, the container exposed internal ports 8080 and 8081, and it was bound to port 61234 on the host machine 10.11.12.13 because the credentials.uri.port property was set to 8080/tcp.
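For reference, a minimal sketch of the corresponding plan configuration in config/settings.yml (the key names follow the sample services shipped with the broker; the 'http' prefix is illustrative):

credentials:
  uri:
    prefix: 'http'
    port: '8080/tcp'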

For more details, see the CREDENTIALS.md file.

Self-discovery of host port bindings

Optionally, each exposed host port for an instantiated container can be passed into the container via environment variables if you enable the enable_host_port_envvar: true setting.

If a Docker image exposes an internal port 5432, then each instantiated container will be provided a DOCKER_HOST_PORT_5432 environment variable containing the host's port allocation.

Implementation detail: In order to support this feature, provisioning new Docker containers requires two steps:

  1. Instantiate a Docker container and allow Docker to allocate host ports.
  2. Restart the Docker container with the additional DOCKER_HOST_PORT_nnnn environment variables.

If you wish to enable this feature, provide enable_host_port_envvar: true in config/settings.yml.
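As an illustration, a hypothetical entrypoint script inside the container could read the injected variable:

#!/bin/sh
# DOCKER_HOST_PORT_5432 is injected by the broker when
# enable_host_port_envvar is true and the image exposes 5432.
echo "Host port mapped to 5432/tcp: ${DOCKER_HOST_PORT_5432}"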

Updating Containers

When new images become available or the configuration of the plans changes, it can be desirable to restart the running containers so they pick up the latest version of their image and/or their updated configuration.

bin/update_all_containers will attempt to find all running containers managed by the broker and restart them with the latest configuration.

The mapping between running containers and configured plans is achieved by adding the labels plan_id and instance_id to the containers at create time. Containers that don't have these labels will be ignored by the update script.
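For example, you can list the containers that carry these labels (and will therefore be picked up by the update script) with:

docker ps --filter label=plan_id --filter label=instance_id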

If you are updating cf-containers-broker from an older version that didn't add the required labels, you can force the broker to recreate the containers via cf update-service <service-id>. After this, the labels will be present and the bin/update_all_containers script will be able to identify the containers for automatic updating.

For cf update-service <service-id> to work, the service must declare plan_updateable: true.

Tests

To run all specs:

bundle
bundle exec rake spec

Be aware that this project does not yet provide a full set of tests. Contributions are welcome!

Contributing

In the spirit of free software, everyone is encouraged to help improve this project.

Here are some ways you can contribute:

  • by using alpha, beta, and prerelease versions
  • by reporting bugs
  • by suggesting new features
  • by writing or editing documentation
  • by writing specifications
  • by writing code (no patch is too small: fix typos, add comments, clean up inconsistent whitespace)
  • by refactoring code
  • by closing issues
  • by reviewing patches

Submitting an Issue

We use the GitHub issue tracker to track bugs and features. Before submitting a bug report or feature request, check to make sure it hasn't already been submitted. You can indicate support for an existing issue by voting it up. When submitting a bug report, please include a Gist that includes a stack trace and any details that may be necessary to reproduce the bug, including your gem version, Ruby version, and operating system. Ideally, a bug report should include a pull request with failing specs.

Submitting a Pull Request

  1. Fork the project.
  2. Create a topic branch.
  3. Implement your feature or bug fix.
  4. Commit and push your changes.
  5. Submit a pull request.

Copyright

See LICENSE for details. Copyright (c) 2014 Pivotal Software, Inc.


cf-containers-broker's Issues

Support habitat exported docker images

https://www.habitat.sh/ can export Docker images, which docker_manager.rb could in turn run.

Except @bodymindarts annoyingly tells me that we can't pass simple env vars like -e REDIS_PASSWORD=2134jfdjhaf into Habitat containers. Instead we pass in -e HAB_REDIS='username="admin", password="adsfasdfsadf", db_name="adsfsadfdfsg"'.

So to support Habitat, we could keep the REDIS_PASSWORD concepts for configuration but change which env var is created (one HAB_REDIS instead of many REDIS_PASSWORD, etc.).

Proposal: cargo-cult docker_manager.rb into habitat_docker_manager.rb (and its _spec.rb) to support Habitat.

Authentication issues with PCF

When deploying with BOSH:

Ferdy, the manage link is not working. One thing I noticed is that the manifest refers to it as cf-containers-broker.<root_domain>, but I could not find a job step that actually creates a route on PCF.

So this ends with the route never being found. I tried to replace it with the broker IP (that's how I registered it), but after being challenged by the PCF (1.2.0) auth screen, it redirects back to the console application. I can't reach the manage endpoint.

I guess I could break this into two "mini issues"

  • Auth issues
  • How to register the broker application with cf

Regards

bosh deploy - busy disk on unmount

Hello,
we finally had the time to check cf-containers-broker. A really nice extension to the current BOSH capabilities.

We experienced issues while testing the broker container and trying a bosh update after a successful first deployment.
It seems that the persistent disk is busy when BOSH tries to unmount the disk from the original VM;
see logs below.

We use the latest versions:

  • bosh-vcloud-esxi-ubuntu-trusty-go_agent 2922, with vcloud CPI
  • bosh 148
  • cf-containers-broker v11

Any idea why the persistent disk can't be unmounted?
Best regards
Pierre

Started updating job broker > broker/0. Failed: Action Failed get_task: Task c3dc2d78-19c4-4bfe-4f91-10b9f1246e21 result: Unmounting persistent disk: Running command: 'umount /dev/sdc1', stdout: '', stderr: 'umount: /var/vcap/store: device is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
': exit status 1 (00:10:03)

Error 450001: Action Failed get_task: Task c3dc2d78-19c4-4bfe-4f91-10b9f1246e21 result: Unmounting persistent disk: Running command: 'umount /dev/sdc1', stdout: '', stderr: 'umount: /var/vcap/store: device is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
': exit status 1

Task 1099 error

Stop containers instead of kill

In actions #destroy and #update we are killing the containers. Are there objections to stopping them instead? That way we would have a chance to trap the SIGTERM signal and run a shutdown script in the container.
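A minimal sketch of the proposed change, assuming the docker-api gem the broker already uses (Docker::Container responds to both methods):

# current behavior: SIGKILL, no chance to clean up
container.kill
# proposed: send SIGTERM first so the container can trap it
container.stop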

@frodenas do you remember why this was originally coded as kill?

cc @drnic

Syslog Drain tcp instead of http

I think you should use the syslog (tcp) protocol by default instead of http, also because in your example with logsearch the default settings listen for tcp connections; to use http you have to hunt for a plugin.

file: cf-containers-broker/app/models/docker_manager.rb LINE 116
'syslog' instead of 'http'

It would be even better to allow changing this via a settings file instead of rebuilding the image.

Edit: all the more so because the http scheme isn't even accepted:
ERR SinkManager: Invalid syslog drain URL (http://xxxxx) for application xxxxxx. Err: Invalid scheme type http, must be https, syslog-tls or syslog
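A sketch of the proposed one-line change (variable names here are hypothetical; the actual code builds the drain URL in docker_manager.rb around line 116):

# before: scheme rejected by the loggregator sink manager
syslog_drain_url = "http://#{host}:#{port}"
# after:
syslog_drain_url = "syslog://#{host}:#{port}"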

Overriding credentials - hostname

Currently the host returned in service credentials is an IP address of the machine running the broker.
I would like to be able to specify a hostname to return instead.

Example:

credentials:
  username:
    key: 'SERVICE_USERNAME'
    value: 'admin'
  password:
    key: 'SERVICE_PASSWORD'
  dbname:
    key: 'SERVICE_DBNAME'
  hostname:
    key: 'SERVICE_HOSTNAME'
    value: 'postgres-01.docker.com'
  uri:
    prefix: 'mongodb'
    port: '27017/tcp'

can't create a service-broker

Hi,
I got something wrong when running "cf create-service-broker". Could you help me?

Here is the detailed output:
ubuntu@bastion:~/workspace/deployments/docker-services-boshworkspace$ CF_TRACE=true cf create-service-broker docker containers containers http://cf-containers-broker.10.239.70.76.xip.io

VERSION:
6.12.2-24abed3

Creating service broker docker as admin...

REQUEST: [2015-09-02T13:42:07Z]
POST /v2/service_brokers HTTP/1.1
Host: api.10.239.70.76.xip.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.12.2-24abed3 / linux

{"name":"docker","broker_url":"http://cf-containers-broker.10.239.70.76.xip.io","auth_username":"containers","auth_password":"containers"}

RESPONSE: [2015-09-02T13:42:07Z]
HTTP/1.1 401 Unauthorized
Content-Length: 97
Content-Type: application/json;charset=utf-8
Date: Wed, 02 Sep 2015 13:42:07 GMT
Server: nginx
X-Cf-Requestid: 30f24168-c994-497f-5be9-f282ce0291bb
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: 1a3b8e8d-0b5c-4c6b-51c3-615ac2c6c478::2583e1eb-b44b-449f-9bbd-3e7467323fdc

{
"code": 1000,
"description": "Invalid Auth Token",
"error_code": "CF-InvalidAuthToken"
}

REQUEST: [2015-09-02T13:42:07Z]
POST /oauth/token HTTP/1.1
Host: login.10.239.70.76.xip.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/x-www-form-urlencoded
User-Agent: go-cli 6.12.2-24abed3 / linux

grant_type=refresh_token&refresh_token=eyJhbGciOiJSUzI1NiJ9.eyJqdGkiOiJjNThkOTRiYy1iMzM4LTQwOWYtYTg1NS1kYWMwODE0YmZmMjIiLCJzdWIiOiIzN2QzYWRhZC01ZmM1LTQ4NjMtYWE3OS00ODg3ZTk4NzFlZWUiLCJzY29wZSI6WyJzY2ltLnJlYWQiLCJjbG91ZF9jb250cm9sbGVyLmFkbWluIiwic2NpbS53cml0ZSIsImNsb3VkX2NvbnRyb2xsZXIud3JpdGUiLCJwYXNzd29yZC53cml0ZSIsIm9wZW5pZCIsImNsb3VkX2NvbnRyb2xsZXIucmVhZCIsImRvcHBsZXIuZmlyZWhvc2UiXSwiaWF0IjoxNDQxMTk0NzI5LCJleHAiOjE0NDM3ODY3MjksImNpZCI6ImNmIiwiY2xpZW50X2lkIjoiY2YiLCJpc3MiOiJodHRwczovL3VhYS4xMC4yMzkuNzAuNzYueGlwLmlvL29hdXRoL3Rva2VuIiwiemlkIjoidWFhIiwiZ3JhbnRfdHlwZSI6InBhc3N3b3JkIiwidXNlcl9uYW1lIjoiYWRtaW4iLCJ1c2VyX2lkIjoiMzdkM2FkYWQtNWZjNS00ODYzLWFhNzktNDg4N2U5ODcxZWVlIiwicmV2X3NpZyI6IjY1YjIzNDhkIiwiYXVkIjpbImNmIiwic2NpbSIsImNsb3VkX2NvbnRyb2xsZXIiLCJwYXNzd29yZCIsIm9wZW5pZCIsImRvcHBsZXIiXX0.DbqwsWoS5kiekIaZwkAiW9QD_OB-jy7Vat72aorDj1vYtqUfOOEBS3IS7KTLdWAtiJ_4g1XKHSehsP-HPKgkjBdQEcRhr9uxkLtVednP3m0yLG-vK3mzz499MedcsqijvUhbKN5VGYTZbUSHU6cCMZXEjz8r6zkAZjws79XKXaA&scope=

RESPONSE: [2015-09-02T13:42:07Z]
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Access-Control-Allow-Origin: *
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Cache-Control: no-store
Content-Type: application/json;charset=UTF-8
Date: Wed, 02 Sep 2015 13:42:06 GMT
Expires: 0
Pragma: no-cache
Pragma: no-cache
Server: Apache-Coyote/1.1
X-Cf-Requestid: 0415ef1b-95a5-4060-589e-cd767874d6bc
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Xss-Protection: 1; mode=block

870
{"access_token":"[PRIVATE DATA HIDDEN]","token_type":"bearer","refresh_token":"[PRIVATE DATA HIDDEN]","expires_in":599,"scope":"scim.read cloud_controller.admin password.write scim.write openid cloud_controller.write cloud_controller.read doppler.firehose","jti":"74d82a56-d39a-4641-9147-3ce1cecd1595"}
0

REQUEST: [2015-09-02T13:42:07Z]
POST /v2/service_brokers HTTP/1.1
Host: api.10.239.70.76.xip.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.12.2-24abed3 / linux

{"name":"docker","broker_url":"http://cf-containers-broker.10.239.70.76.xip.io","auth_username":"containers","auth_password":"containers"}

RESPONSE: [2015-09-02T13:42:43Z]
HTTP/1.1 502 Bad Gateway
Content-Length: 300
Content-Type: application/json;charset=utf-8
Date: Wed, 02 Sep 2015 13:42:43 GMT
Server: nginx
X-Cf-Requestid: 289e3daf-ff7a-43b4-6d33-c3b76659097e
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: e022e464-de62-4515-54b7-72e3c5acca49::19be1a4f-d49a-448f-92f8-64bf9155ca81

{
"code": 10001,
"description": "The service broker could not be reached: http://cf-containers-broker.10.239.70.76.xip.io/v2/catalog",
"error_code": "CF-ServiceBrokerApiUnreachable",
"http": {
"uri": "http://cf-containers-broker.10.239.70.76.xip.io/v2/catalog",
"method": "GET"
}
}

FAILED
Server error, status code: 502, error code: 10001, message: The service broker could not be reached: http://cf-containers-broker.10.239.70.76.xip.io/v2/catalog
FAILED
Server error, status code: 502, error code: 10001, message: The service broker could not be reached: http://cf-containers-broker.10.239.70.76.xip.io/v2/catalog


ubuntu@bastion:~/workspace/deployments/docker-services-boshworkspace$ curl -I -u containers:containers cf-containers-broker.10.239.70.76.xip.io/v2/catalog
HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
Date: Wed, 02 Sep 2015 13:25:10 GMT
Etag: "55c466633ed95d93106f3297f4cdbdd0"
Status: 200 OK
X-Cf-Requestid: 64420ed2-2d21-418a-4ec0-63382bac0d24
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 0fc48803-a0f2-424d-b8a4-e513f81170ea
X-Runtime: 0.035762
X-Xss-Protection: 1; mode=block


root@vm-bcdbe19a-dc0c-4d33-9a82-d9251d9fa122:/var/vcap/bosh_ssh/bosh_sxx1a58pj# monit summary
The Monit daemon 5.2.4 uptime: 31m

Process 'docker' running
Process 'cf-containers-broker' running
System 'system_91ab54f0-8ce9-4a48-bca5-33d16d6da1ac' running


root@vm-bcdbe19a-dc0c-4d33-9a82-d9251d9fa122:/var/vcap/bosh_ssh/bosh_sxx1a58pj# ps aux | grep unicorn
root 1446 0.0 0.8 231376 67024 ? S<l 13:14 0:01 unicorn master --daemonize -c /var/vcap/jobs/cf-containers-broker/config/unicorn.conf.rb
root 1450 0.0 0.7 232040 63188 ? S<l 13:14 0:00 unicorn worker[0] --daemonize -c /var/vcap/jobs/cf-containers-broker/config/unicorn.conf.rb
root 1453 0.0 0.7 232048 63136 ? S<l 13:14 0:00 unicorn worker[1] --daemonize -c /var/vcap/jobs/cf-containers-broker/config/unicorn.conf.rb
root 1456 0.0 0.7 231376 62684 ? S<l 13:14 0:00 unicorn worker[2] --daemonize -c /var/vcap/jobs/cf-containers-broker/config/unicorn.conf.rb

root 1459 0.0 0.7 232220 63224 ? S<l 13:14 0:00 unicorn worker[3] --daemonize -c /var/vcap/jobs/cf-containers-broker/config/unicorn.conf.rb

root@vm-bcdbe19a-dc0c-4d33-9a82-d9251d9fa122:/var/vcap/bosh_ssh/bosh_sxx1a58pj# netstat -anp | grep -i listen
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 559/rpcbind
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1446/unicorn.conf.r
tcp 0 0 127.0.0.1:4243 0.0.0.0:* LISTEN 998/docker
tcp 0 0 127.0.0.1:33331 0.0.0.0:* LISTEN 851/bosh-agent
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 752/sshd
tcp 0 0 127.0.0.1:2822 0.0.0.0:* LISTEN 991/monit
tcp 0 0 127.0.0.1:2825 0.0.0.0:* LISTEN 851/bosh-agent
tcp 0 0 0.0.0.0:56841 0.0.0.0:* LISTEN 634/rpc.statd
tcp6 0 0 :::111 :::* LISTEN 559/rpcbind
tcp6 0 0 :::22 :::* LISTEN 752/sshd
tcp6 0 0 :::49803 :::* LISTEN 634/rpc.statd
unix 2 [ ACC ] STREAM LISTENING 1446 1/init @/com/ubuntu/upstart
unix 2 [ ACC ] SEQPACKET LISTENING 10381 294/systemd-udevd /run/udev/control
unix 2 [ ACC ] STREAM LISTENING 11968 998/docker /var/vcap/sys/run/docker/docker.sock
unix 2 [ ACC ] STREAM LISTENING 9173 559/rpcbind /run/rpcbind.sock


Docs request: installing container as service

Been scratching my head a little. Okay, so I have:

  1. Bosh lite (vagrant with virtualbox)
  2. Cloudfoundry (provisioned on ^)
  3. cf-containers-broker (installed on ^; as per your README.md)

Now scrolling down to DOCKER.md#example, there's a little bit of YAML specifying a Docker image, etc.

Okay, what's the next step? Say I want to use that YAML to pull a Docker image, run the container inside Cloud Foundry, and register it as a Cloud Foundry service… how?

cf-containers-broker as a spec tool for cf app teams

I'm wondering if anyone has considered, or could be interested in, a version of cf-containers-broker that could be used as a specification tool for CF application teams, by accepting a Docker image in arbitrary params.


Background:

One frequent claim for Docker-based orchestration systems such as K8s is that app teams can quickly prototype new data services autonomously (without relying on the availability of a service provider, or taking on the service-provider role and the associated tools learning curve). While Cloud Foundry's strong opinions (separating apps (12 factors) from stateful services) are great, CF app teams currently have no easy way of specifying services that they'd like to get into the marketplace, and of validating that such services would suit their needs. See the notes of related discussions held during the Berlin CF Summit unconference.


To support such usage, the cf-containers-broker would conditionally accept a selection of supported docker properties as arbitrary params in the provision service broker endpoint, in particular:

  • container.image
  • container.tag
  • container.command
  • container.workdir

These arbitrary params would override default values set in the broker settings.yml file.

So the CF app developer experience to prototype/specify a new stateful data service would be:

$ cf m
service          plans          description
docker-proto  512M, 1GB  Prototype a toy data service running a single instance docker container.

$ cf cs docker-proto 1GB -c '{"container.image":"tutum/influxdb"}'

The standard service instance dashboard provides a way to look up the container logs and troubleshoot errors.

This would provide a way for CF app teams to produce "toy" specifications of services that could then be taken by a CF service provider team and turned into a robust service (adding HA, metrics, logs, backup/restore ...). These toy specifications would enable the app team to get short feedback loops on whether a new technology would indeed help solve business problems for their app.

Such a broker obviously turns into a "dangerous remote-exec" capability, so some additional restrictions would be necessary before any platform operations team would accept running it on its network/IaaS, i.e. preventing the on-demand provisioned Docker containers from causing harm:

  • restricted egress traffic rules (avoid having a malicious image scan the network and perform attacks)
  • restricted privilege (i.e. prevent container.user from having root / superuser permissions)
  • restricted resources associated with the container (RAM, volume size), matching the service plan
  • max Docker image size to pull (to avoid DoS from too-large images) ...

In addition, to avoid such "toy" service instances being relied upon by app teams, their intrinsic lack of robustness would be made very explicit, e.g. by a regular service restart (say, every 24 hours) that would start from blank volumes.

Keep all the explicit ports even if offering nice `uri` & `port` credentials

Currently, you can either blindly pass all the 8080/tcp-style port bindings back in the credentials OR you can select one of them to become the traditionally styled hostname/port/uri credentials:

        "credentials": {
          "hostname": "10.3.2.6",
          "port": "49153",
          "uri": "rethinkdb://10.3.2.6:49153"
        },

In the latter case, I'd still like all the other port bindings to be passed through. This would allow me to find the HTTP web UI port for RethinkDB, whilst the binding could still primarily be a binding for the TCP port (as advertised above).

That is, I'd like the credentials above to be:

        "credentials": {
          "hostname": "10.3.2.6",
          "port": "49153",
          "uri": "rethinkdb://10.3.2.6:49153",
          "8080/tcp": ...
        },

Passing credentials to a container

Wondering if someone might have ideas on how to deal with this scenario: I have a Docker image that performs a task but needs credentials to write the output of that task to a shared resource, such as S3. I'd like to run that task as a container service from an application on Cloud Foundry. The application would issue a POST request to /v2/service_instances to create the service; the service would run its process, output the result to S3, and terminate. The benefit is that the process would run in an isolated container instead of on the application.

I don't know how to pass the credentials and some other configuration from the application to the service. I was thinking the parameters attribute in the request body might allow for this, but I'm not sure how the service would use that data. Ideally it could be read through env variables.

Would love any thoughts on how this could work. Thanks.

Running on Stackato v3?

stackato@stackato-w3br:/tmp/cf-containers-broker/logs$ docker ps -a

b44f45a97ab3 frodenas/cf-containers-broker:latest "/app/bin/run.sh bun 22 minutes ago Up 22 minutes 0.0.0.0:80->80/tcp cf-containers-broker

then:

stk add-service-broker \
    --username containers --password secret \
    --url http://10.200.16.49 cf-containers-broker

results in:

Creating new service broker [cf-containers-broker] ... Error 10001: An unknown error occurred. (500)

events in stackato console:

[2014-09-12T23:29:11.000Z] 10.200.16.49 - cloud_controller_ng - Request failed: 500: {"code"=>10001, "description"=>"error response", "error_code"=>"CF-TargetError", "backtrace"=>["/home/stackato/stackato/code/cloud_controller_ng/vendor/cache/cf-uaa-lib-4b94e14aa772/lib/uaa/http.rb:116:in `json_parse_reply'", "/home/stackato/stackato/code/cloud_controller_ng/vendor/cache/cf-uaa-lib-4b94e14aa772/lib/uaa/token_issuer.rb:78:in `request_token'"

Can't create Service Keys

The app_guid parameter in the binding request is no longer mandatory as of Service Broker API v2.5:
https://docs.cloudfoundry.org/services/api-v2.5.html#binding

Well, there's an asterisk by the name of the field, but the comment says that it won't appear when a user creates Service Keys. That's a minor issue in the documentation; the bigger one is in the containers broker code. Take a look at this line:
https://github.com/cf-platform-eng/cf-containers-broker/blob/master/app/controllers/v2/service_bindings_controller.rb#L9

The 'fetch' method validates that the 'app_guid' param exists, so the request ends up with a 400 status code. You can see this when you issue 'cf create-service-key myservice mykey'.

I would suggest removing the line of code linked above. First, we'd be able to create keys for services; second, the app_guid variable is not used at all anyway.
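A sketch of the suggested fix (params.fetch raises when the key is absent, which surfaces as the 400):

# before: fails when cf create-service-key omits app_guid
app_guid = params.fetch(:app_guid)
# after: returns nil instead of failing (or drop the line entirely,
# since app_guid is unused)
app_guid = params[:app_guid]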

Flaky test: `when there are container env vars in files`

In spec/models/docker_manager_spec.rb, the `when there are container env vars in files` test is flaky.

On my system, the env variables list comes out in one particular order, such that I need to re-order the values in the test to make it pass. In your Travis config, they come out in a different order.

As far as I'm aware, there isn't a defined order they should be in, so I would guess the test should be fixed not to care (maybe just sort both lists before comparing?).

I don't know enough Ruby to easily do so, so apologies for no attached PR.
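For anyone picking this up, a minimal order-insensitive comparison with standard RSpec matchers (actual_env and expected_env are placeholder names):

# sort both lists before comparing:
expect(actual_env.sort).to eq(expected_env.sort)
# or use RSpec's order-insensitive matcher:
expect(actual_env).to match_array(expected_env)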

authentication error when registering service broker with CF

I tried to register the service broker with my local CF, but ran into an authentication error.
The Docker container for the service broker is running on a VM with IP 192.168.150.40.
My settings.yml has the following:
cc_api_uri: 'http://api.bosh-lite.com'
external_ip: 192.168.150.40 <----- same as the IP of the VM running the docker container
auth_password: containers

Thanks for the help.
------ERROR----
cf create-service-broker docker containers containers http://192.168.150.40

Unable to connect to the Docker Remote API `unix:///var/run/docker.sock': No such file or directory - connect(2) for /var/run/docker.sock (Errno::ENOENT)

------ The attached output file is marked-up text:
cf-create-service-broker.output.txt

Volumes not cleaned up

@frodenas Sorry to bother you again. I noticed (in Swarm mode at least; not sure about plain Docker mode) that two volume folders pile up for each launched container and do not get deleted after container removal/termination. Some thousand or so container creations later, your persistent disk is full.

The volume folders are located in /var/vcap/store/docker/docker/volumes and /var/vcap/store/cf-containers-broker. I am not even sure why there are two. Maybe because the container is first created without a volume and then started in a consecutive call with a volume?

INFO -- : Creating Docker container `cf-70a66738-03bc-4208-97a3-cf09fcbac208'...
INFO -- : +-> Create options: {"name"=>"cf-70a66738-03bc-4208-97a3-cf09fcbac208", "Hostname"=>"", "Domainname"=>"", "User"=>"", "AttachStdin"=>false, "AttachStdout"=>true, "AttachStderr"=>true, "Tty"=>false, "OpenStdin"=>false, "StdinOnce"=>false, "Env"=>["MONGODB_USERNAME=2mcvaxrmed915bum", "MONGODB_PASSWORD=qo9wzksiylpmfiqq", "MONGODB_DBNAME=ybf61i3esdr4kayi"], "Cmd"=>["--smallfiles", "--httpinterface"], "Entrypoint"=>nil, "Image"=>"frodenas/mongodb:2.6", "Labels"=>{}, "Volumes"=>{}, "WorkingDir"=>nil, "NetworkDisabled"=>false, "ExposedPorts"=>{}, "HostConfig"=>{"Memory"=>0, "MemorySwap"=>0, "CpuShares"=>nil, "PublishAllPorts"=>false, "Privileged"=>false}}

INFO -- : Starting Docker container `cf-70a66738-03bc-4208-97a3-cf09fcbac208'...
INFO -- : +-> Start options: {"Binds"=>["/var/vcap/store/cf-containers-broker/cf-70a66738-03bc-4208-97a3-cf09fcbac208/data:/data"], "Links"=>[], "LxcConf"=>{}, "Memory"=>0, "MemorySwap"=>0, "CpuShares"=>nil, "PortBindings"=>{"27017/tcp"=>[{"HostPort"=>"37056"}], "28017/tcp"=>[{"HostPort"=>"37057"}]}, "PublishAllPorts"=>false, "Privileged"=>false, "ReadonlyRootfs"=>false, "VolumesFrom"=>[], "CapAdd"=>[], "CapDrop"=>[], "RestartPolicy"=>{"Name"=>"always"}, "Devices"=>[], "Ulimits"=>[]}

This at least fits the observation that /var/vcap/store/docker/docker/volumes was always empty while /var/vcap/store/cf-containers-broker contained the real data. In any case, even if you delete the container, the volumes never get deleted, which becomes a problem after some time. I noticed it just now that we have a couple hundred active containers but a couple thousand volume corpses lying around.
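As a stopgap, the Docker-managed side can be cleaned up manually, assuming a Docker version with the volume subcommand (this does not touch the broker's host_directory bind mounts):

docker volume ls -qf dangling=true | xargs -r docker volume rm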

binding API not returning credentials

I think the binding API is not returning a credentials hash in the body, but that seems a weird assertion to make. Perhaps you can help me figure out what I'm doing wrong in programmatically talking to the broker API (for the Subway project):

$ curl -v -X PUT secretusername:secretpassword@subway-broker.cfapps.io/v2/service_instances/1/service_bindings/1 -d '{"plan_id": "40f402e0-047f-11e5-88d5-6c4008a663f0"}'
*   Trying 54.175.173.220...
* Connected to subway-broker.cfapps.io (54.175.173.220) port 80 (#0)
* Server auth using Basic with user 'secretusername'
> PUT /v2/service_instances/1/service_bindings/1 HTTP/1.1
> Host: subway-broker.cfapps.io
> Authorization: Basic c2VjcmV0dXNlcm5hbWU6c2VjcmV0cGFzc3dvcmQ=
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 51
> Content-Type: application/x-www-form-urlencoded
> 
* upload completely sent off: 51 out of 51 bytes
< HTTP/1.1 201 Created
< Content-Type: application/json
< Date: Mon, 21 Sep 2015 21:40:33 GMT
< X-Cf-Requestid: 9c9c38ea-8998-4d3b-4214-6ed415f6a7ec
< Content-Length: 0
< Connection: keep-alive
< 
* Connection #0 to host subway-broker.cfapps.io left intact

The {...} response body isn't being displayed. Odd. (And I was investigating because Subway was receiving '' I think).

In the backend logs I see the credentials being returned as expected:

==> /var/vcap/sys/log/cf-containers-broker/cf-containers-broker.stderr.log <==
54.236.219.204 - - [21/Sep/2015:21:40:16 +0000] "PUT /v2/service_instances/1/service_bindings/1 HTTP/1.1" 404 - 0.0124

==> /var/vcap/sys/log/cf-containers-broker/cf-containers-broker.log <==
I, [2015-09-21T21:40:33.843856 #4752]  INFO -- : Started PUT "/v2/service_instances/1/service_bindings/1" for 54.236.219.204 at 2015-09-21 21:40:33 +0000
I, [2015-09-21T21:40:33.847874 #4752]  INFO -- : Processing by V2::ServiceBindingsController#update as HTML
I, [2015-09-21T21:40:33.848108 #4752]  INFO -- :   Parameters: {"app_guid"=>"", "plan_id"=>"40f402e0-047f-11e5-88d5-6c4008a663f0", "service_id"=>"", "parameters"=>nil, "service_instance_id"=>"1", "id"=>"1"}
I, [2015-09-21T21:40:33.850027 #4752]  INFO -- :   Request: {"headers":{"REMOTE_ADDR":"54.236.219.204","REQUEST_METHOD":"PUT","REQUEST_PATH":"/v2/service_instances/1/service_bindings/1","PATH_INFO":"/v2/service_instances/1/service_bindings/1","REQUEST_URI":"/v2/service_instances/1/service_bindings/1","SERVER_PROTOCOL":"HTTP/1.1","HTTP_VERSION":"HTTP/1.1","HTTP_HOST":"52.11.48.179","HTTP_USER_AGENT":"Go 1.1 package http","CONTENT_LENGTH":"99","HTTP_AUTHORIZATION":"[PRIVATE DATA HIDDEN]","CONTENT_TYPE":"application/json","SERVER_NAME":"52.11.48.179","SERVER_PORT":"80","QUERY_STRING":"","SCRIPT_NAME":"","SERVER_SOFTWARE":"Unicorn 4.9.0"},"body":"{\"app_guid\":\"\",\"plan_id\":\"40f402e0-047f-11e5-88d5-6c4008a663f0\",\"service_id\":\"\",\"parameters\":null}\n"}

==> /var/vcap/sys/log/docker/docker.stderr.log <==
time="2015-09-21T21:40:33.852014049Z" level=info msg="GET /v1.16/version" 
time="2015-09-21T21:40:33.856034779Z" level=info msg="GET /v1.16/containers/cf-1/json" 

==> /var/vcap/sys/log/cf-containers-broker/cf-containers-broker.log <==
I, [2015-09-21T21:40:33.861160 #4752]  INFO -- : Building credentials hash for container `cf-1'...

==> /var/vcap/sys/log/docker/docker.stderr.log <==
time="2015-09-21T21:40:33.862236282Z" level=info msg="GET /v1.16/containers/cf-1/json" 
time="2015-09-21T21:40:33.867600341Z" level=info msg="GET /v1.16/containers/9b293577b466ea996476b018590969abbe2d0a40854fa15859fc4af166c23105/json" 

==> /var/vcap/sys/log/cf-containers-broker/cf-containers-broker.log <==
I, [2015-09-21T21:40:33.872176 #4752]  INFO -- : +-> Credentials: {"hostname"=>"10.10.1.11", "ports"=>{"5432/tcp"=>"32768"}, "port"=>"32768"}

==> /var/vcap/sys/log/docker/docker.stderr.log <==
time="2015-09-21T21:40:33.873160782Z" level=info msg="GET /v1.16/containers/cf-1/json" 

==> /var/vcap/sys/log/cf-containers-broker/cf-containers-broker.log <==
I, [2015-09-21T21:40:33.913725 #4752]  INFO -- : Building syslog_drain_url for container `cf-1'...

==> /var/vcap/sys/log/docker/docker.stderr.log <==
time="2015-09-21T21:40:33.914763711Z" level=info msg="GET /v1.16/containers/9b293577b466ea996476b018590969abbe2d0a40854fa15859fc4af166c23105/json" 

==> /var/vcap/sys/log/cf-containers-broker/cf-containers-broker.log <==
I, [2015-09-21T21:40:33.918954 #4752]  INFO -- : +-> syslog drain port 514/tcp is not exposed
I, [2015-09-21T21:40:33.919473 #4752]  INFO -- :   Response: {"headers":{"X-Frame-Options":"SAMEORIGIN","X-XSS-Protection":"1; mode=block","X-Content-Type-Options":"nosniff"},"body":"{\"credentials\":{\"hostname\":\"10.10.1.11\",\"ports\":{\"5432/tcp\":\"32768\"},\"port\":\"32768\"}}"}
I, [2015-09-21T21:40:33.919800 #4752]  INFO -- : Completed 201 Created in 71ms (Views: 0.2ms)

==> /var/vcap/sys/log/cf-containers-broker/cf-containers-broker.stderr.log <==
54.236.219.204 - - [21/Sep/2015:21:40:33 +0000] "PUT /v2/service_instances/1/service_bindings/1 HTTP/1.1" 201 - 0.0786

Any ideas why I might not be getting the response in the body?

dockerfile requires newer ruby version

$ docker build -t frodenas/cf-containers-broker .
Sending build context to Docker daemon 2.122 MB
Step 1/14 : FROM frodenas/ruby
latest: Pulling from frodenas/ruby
30d541b48fc0: Already exists
8ecd7f80d390: Already exists
46ec9927bb81: Already exists
2e67a4d67b44: Already exists
7d9dd9155488: Already exists
485933a73bf0: Pull complete
d8fbca5d31b4: Pull complete
Digest: sha256:80580b73606051ab4eb34e97f370cc5a2f0da5554d3925e8890d4248f4258da6
Status: Downloaded newer image for frodenas/ruby:latest
 ---> d858d1f84c3b
Step 2/14 : MAINTAINER Ferran Rodenas <[email protected]>
 ---> Running in 48605f5b9e7a
 ---> 411b5601f384
Removing intermediate container 48605f5b9e7a
Step 3/14 : ADD . /app
 ---> f95e64dedb0f
Removing intermediate container b778e6875ddd
Step 4/14 : RUN cd /app &&     bundle package --all &&     RAILS_ENV=assets bundle exec rake assets:precompile &&     rm -rf spec &&     mkdir /config
 ---> Running in 75b017b0e99a
Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
installing your bundle as root will break this application for all non-root
users on this machine.
Fetching gem metadata from https://rubygems.org/.........
Fetching version metadata from https://rubygems.org/..
Fetching dependency metadata from https://rubygems.org/.
Fetching https://github.com/cloudfoundry/omniauth-uaa-oauth2
listen-3.1.5 requires ruby version >= 2.2.3, which is incompatible with the
current version, ruby 2.1.2p95
The command '/bin/sh -c cd /app &&     bundle package --all &&     RAILS_ENV=assets bundle exec rake assets:precompile &&     rm -rf spec &&     mkdir /config' returned a non-zero code: 5

400 Bad Request: malformed Host header for /v2/catalog

I tried running the broker as a docker container from the docker image available at frodenas/cf-containers-broker.

I ran the image using the command:

sudo docker run -d --name cf-containers-broker --publish 80:80 --volume /var/run:/var/run --volume /home/ubuntu/docker/cf-containers-broker/config:/config --volume /home/ubuntu/docker/cf-containers-broker/log:/app/log  frodenas/cf-containers-broker

However, I get this error while trying to query /v2/catalog.

Docker::Error::ClientError in V2::CatalogsController#show
400 Bad Request: malformed Host header

Here are the app logs:

Started GET "/v2/catalog" for 10.0.0.14 at 2016-09-22 09:45:16 +0000
Processing by V2::CatalogsController#show as JSON
  Request: {"headers":{"REMOTE_ADDR":"10.0.0.14","REQUEST_METHOD":"GET","REQUEST_PATH":"/v2/catalog","PATH_INFO":"/v2/catalog","REQUEST_URI":"/v2/catalog","SERVER_PROTOCOL":"HTTP/1.1","HTTP_VERSION":"HTTP/1.1","HTTP_X_BROKER_API_VERSION":"2.10","HTTP_X_VCAP_REQUEST_ID":"dc45b8a1-298d-4aff-6c90-0d90723c2e1d::d6a35a9d-d776-4f05-ab9f-6b48432c391a","HTTP_ACCEPT":"application/json","HTTP_AUTHORIZATION":"[PRIVATE DATA HIDDEN]","HTTP_USER_AGENT":"HTTPClient/1.0 (2.7.1, ruby 2.3.1 (2016-04-26))","HTTP_HOST":"10.0.0.114","SERVER_NAME":"10.0.0.114","SERVER_PORT":"80","QUERY_STRING":"","SCRIPT_NAME":"","SERVER_SOFTWARE":"Unicorn 5.0.0"},"body":""}
Completed 500 Internal Server Error in 7ms

Docker::Error::ClientError (400 Bad Request: malformed Host header):
  app/models/docker_manager.rb:249:in `validate_docker_remote_api'
  app/models/docker_manager.rb:20:in `initialize'
  app/models/plan.rb:66:in `new'
  app/models/plan.rb:66:in `build_container_manager'
  app/models/plan.rb:26:in `initialize'
  app/models/plan.rb:11:in `new'
  app/models/plan.rb:11:in `build'
  app/models/service.rb:11:in `block in build'
  app/models/service.rb:11:in `map'
  app/models/service.rb:11:in `build'
  app/models/catalog.rb:13:in `block in services'
  app/models/catalog.rb:13:in `map'
  app/models/catalog.rb:13:in `services'
  app/controllers/v2/catalogs_controller.rb:5:in `show'


  Rendered /usr/local/lib/ruby/gems/2.1.0/gems/actionpack-4.2.4/lib/action_dispatch/middleware/templates/rescues/_source.erb (5.5ms)
  Rendered /usr/local/lib/ruby/gems/2.1.0/gems/actionpack-4.2.4/lib/action_dispatch/middleware/templates/rescues/_trace.html.erb (2.1ms)
  Rendered /usr/local/lib/ruby/gems/2.1.0/gems/actionpack-4.2.4/lib/action_dispatch/middleware/templates/rescues/_request_and_response.html.erb (0.7ms)
  Rendered /usr/local/lib/ruby/gems/2.1.0/gems/actionpack-4.2.4/lib/action_dispatch/middleware/templates/rescues/diagnostics.html.erb within rescues/layout (15.2ms)

Any help is appreciated.

disk quota for docker container

Can we set a disk volume limitation for each container created by the containers broker?

664e4b3813# docker run -it ubuntu /bin/bash
root@75534586daa1:/# df -h
Filesystem Size Used Avail Use% Mounted on
none 63G 8.4G 52G 15% /
tmpfs 1000M 0 1000M 0% /dev
tmpfs 1000M 0 1000M 0% /sys/fs/cgroup
/dev/xvdf1 63G 8.4G 52G 15% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 1000M 0 1000M 0% /sys/firmware

664e4b3813# docker run -it --storage-opt size=1G ubuntu /bin/bash
/var/vcap/packages/docker/bin/docker: Error response from daemon: --storage-opt is not supported for aufs.

Don't set env SETTINGS_PATH in Dockerfile

This makes it difficult to override, because all files in the host have been copied over to /app. I think this is also why a local build fails.

Alternatively, remove this: ADD ./config/settings.yml /config/settings.yml. It seems to override the custom env var.
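For reference, a sketch of the kind of override the report is about: pointing SETTINGS_PATH at a custom settings file at run time (the /custom-config mount point is illustrative):

docker run -d --name cf-containers-broker \
       --publish 80:80 \
       --volume /var/run:/var/run \
       --volume /tmp/cf-containers-broker/config:/custom-config \
       -e SETTINGS_PATH=/custom-config/settings.yml \
       frodenas/cf-containers-broker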

Docker error: port is already allocated on creating service instance

time="2015-09-21T20:23:44.368858845Z" level=error msg="Handler for POST /containers/{name:.*}/start returned error: Cannot start container 162bf2517993b7c9a0dda2f521c14d60abfacea0410fe3ba3e7cbf9483b37bf3: Bind for 0.0.0.0:32898 failed: port is already allocated"

Full Error
==> /var/vcap/sys/log/docker/docker.stderr.log <==
time="2015-09-21T20:23:44.230560556Z" level=info msg="POST /v1.16/containers/162bf2517993b7c9a0dda2f521c14d60abfacea0410fe3ba3e7cbf9483b37bf3/start"
time="2015-09-21T20:23:44.262932660Z" level=warning msg="Failed to allocate and map port 32898: Bind for 0.0.0.0:32898 failed: port is already allocated"
time="2015-09-21T20:23:44.368858845Z" level=error msg="Handler for POST /containers/{name:.*}/start returned error: Cannot start container 162bf2517993b7c9a0dda2f521c14d60abfacea0410fe3ba3e7cbf9483b37bf3: Bind for 0.0.0.0:32898 failed: port is already allocated"
time="2015-09-21T20:23:44.369028791Z" level=error msg="HTTP Error" err="Cannot start container 162bf2517993b7c9a0dda2f521c14d60abfacea0410fe3ba3e7cbf9483b37bf3: Bind for 0.0.0.0:32898 failed: port is already allocated" statusCode=500
time="2015-09-21T20:23:44.371140377Z" level=info msg="GET /v1.16/containers/162bf2517993b7c9a0dda2f521c14d60abfacea0410fe3ba3e7cbf9483b37bf3/json"
time="2015-09-21T20:23:44.373804162Z" level=info msg="DELETE /v1.16/containers/162bf2517993b7c9a0dda2f521c14d60abfacea0410fe3ba3e7cbf9483b37bf3?force=true&v=true"

==> /var/vcap/sys/log/cf-containers-broker/cf-containers-broker.log <==
I, [2015-09-21T20:23:44.474298 #7511] INFO -- : #<Exceptions::BackendError: Cannot start Docker container `cf-5edc4f2a-fe06-4d4a-bf2c-64160bdaae2a'>
I, [2015-09-21T20:23:44.474549 #7511] INFO -- : /var/vcap/data/packages/cf-containers-broker/cf3a65fcc5d23a958fcc98dcfa61b9cc0070aa99.1-511a4ef372c7725e9347426de45b3d7baec200f8/app/models/docker_manager.rb:77:in `create'
/var/vcap/data/packages/cf-containers-broker/cf3a65fcc5d23a958fcc98dcfa61b9cc0070aa99.1-511a4ef372c7725e9347426de45b3d7baec200f8/app/controllers/v2/service_instances_controller.rb:22:in `update'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/abstract_controller/base.rb:198:in `process_action'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_controller/metal/rendering.rb:10:in `process_action'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/abstract_controller/callbacks.rb:20:in `block in process_action'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/callbacks.rb:115:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/callbacks.rb:115:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/callbacks.rb:553:in `block (2 levels) in compile'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/callbacks.rb:503:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/callbacks.rb:503:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/callbacks.rb:88:in `run_callbacks'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/abstract_controller/callbacks.rb:19:in `process_action'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_controller/metal/rescue.rb:29:in `process_action'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_controller/metal/instrumentation.rb:32:in `block in process_action'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/notifications.rb:164:in `block in instrument'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/notifications/instrumenter.rb:20:in `instrument'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/notifications.rb:164:in `instrument'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_controller/metal/instrumentation.rb:30:in `process_action'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/abstract_controller/base.rb:137:in `process'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionview-4.2.3/lib/action_view/rendering.rb:30:in `process'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_controller/metal.rb:196:in `dispatch'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_controller/metal/rack_delegation.rb:13:in `dispatch'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_controller/metal.rb:237:in `block in action'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/routing/route_set.rb:76:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/routing/route_set.rb:76:in `dispatch'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/routing/route_set.rb:45:in `serve'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/journey/router.rb:43:in `block in serve'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/journey/router.rb:30:in `each'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/journey/router.rb:30:in `serve'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/routing/route_set.rb:821:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/omniauth-1.2.2/lib/omniauth/strategy.rb:186:in `call!'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/omniauth-1.2.2/lib/omniauth/strategy.rb:164:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/omniauth-1.2.2/lib/omniauth/builder.rb:59:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/rack-1.6.4/lib/rack/session/abstract/id.rb:225:in `context'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/rack-1.6.4/lib/rack/session/abstract/id.rb:220:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/rack-1.6.4/lib/rack/etag.rb:24:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/rack-1.6.4/lib/rack/conditionalget.rb:38:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/rack-1.6.4/lib/rack/head.rb:13:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/middleware/params_parser.rb:27:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/middleware/callbacks.rb:29:in `block in call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/callbacks.rb:84:in `run_callbacks'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/middleware/callbacks.rb:27:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/middleware/remote_ip.rb:78:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/middleware/debug_exceptions.rb:17:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/middleware/show_exceptions.rb:30:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/railties-4.2.3/lib/rails/rack/logger.rb:38:in `call_app'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/railties-4.2.3/lib/rails/rack/logger.rb:20:in `block in call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/tagged_logging.rb:68:in `block in tagged'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/tagged_logging.rb:26:in `tagged'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/tagged_logging.rb:68:in `tagged'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/railties-4.2.3/lib/rails/rack/logger.rb:20:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/middleware/request_id.rb:21:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/rack-1.6.4/lib/rack/runtime.rb:18:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/activesupport-4.2.3/lib/active_support/cache/strategy/local_cache_middleware.rb:28:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/actionpack-4.2.3/lib/action_dispatch/middleware/static.rb:116:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/railties-4.2.3/lib/rails/engine.rb:518:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/railties-4.2.3/lib/rails/application.rb:165:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/rack-1.6.4/lib/rack/tempfile_reaper.rb:15:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/rack-1.6.4/lib/rack/lint.rb:49:in `_call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/rack-1.6.4/lib/rack/lint.rb:37:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/rack-1.6.4/lib/rack/showexceptions.rb:24:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/rack-1.6.4/lib/rack/commonlogger.rb:33:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/rack-1.6.4/lib/rack/chunked.rb:54:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/rack-1.6.4/lib/rack/content_length.rb:15:in `call'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/unicorn-4.9.0/lib/unicorn/http_server.rb:580:in `process_client'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/unicorn-4.9.0/lib/unicorn/http_server.rb:674:in `worker_loop'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/unicorn-4.9.0/lib/unicorn/http_server.rb:529:in `spawn_missing_workers'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/unicorn-4.9.0/lib/unicorn/http_server.rb:140:in `start'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/gems/unicorn-4.9.0/bin/unicorn:126:in `<top (required)>'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/bin/unicorn:23:in `load'
/var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.1.0/bin/unicorn:23:in `<main>'
I, [2015-09-21T20:23:44.474901 #7511] INFO -- : Response: {"headers":{"X-Frame-Options":"SAMEORIGIN","X-XSS-Protection":"1; mode=block","X-Content-Type-Options":"nosniff"},"body":"{\"description\":\"#<Exceptions::BackendError: Cannot start Docker container `cf-5edc4f2a-fe06-4d4a-bf2c-64160bdaae2a'>\"}"}
I, [2015-09-21T20:23:44.475020 #7511] INFO -- : Completed 500 Internal Server Error in 552ms (Views: 0.1ms)

Gemfile issues

Hello, trying to build the image, I got this:

(I just edited the Gemfile to use the https protocol for GitHub because I am behind a proxy.)

Fetching https://github.com/cloudfoundry/omniauth-uaa-oauth2.git
Fetching https://github.com/cloudfoundry/cf-registrar.git
Fetching gem metadata from https://rubygems.org/........
Fetching version metadata from https://rubygems.org/...
Fetching dependency metadata from https://rubygems.org/..
Resolving dependencies...
Bundler could not find compatible versions for gem "nats":
  In snapshot (Gemfile.lock):
    nats (= 0.5.0.beta.14)

  In Gemfile:
    cf-registrar (>= 0) ruby depends on
      cf-message-bus (~> 0.3.0) ruby depends on
        nats (< 0.6, >= 0.5.0.beta.16) ruby

    nats (>= 0) ruby

Running `bundle update` will rebuild your snapshot from scratch, using only
the gems in your Gemfile, which may resolve the conflict.

I simply edited the Dockerfile to run bundle update in the RUN step.

Or you can edit Gemfile.lock at line 104 to

nats (0.5.0.beta.16)

to get it working.

Support other URI patterns

When an application is bound to the service, it will receive a credentials hash with a URI following this pattern: mongodb://admin:<RANDOM PASSWORD>@<HOST IP>:<HOST PORT MAPPED TO CONTAINER PORT 27017/tcp>/<RANDOM DBNAME>.

Some database URIs, such as InfluxDB's, don't follow this pattern. Their URIs are defined without a database specified as the URI path (e.g. http://admin:<RANDOM PASSWORD>@<HOST>:<PORT>).

Just wondering if such a pattern could be supported ?

Thanks

undefined local variable or method `protocol'

Even with a custom config settings.yml, I'm still getting undefined local variable or method `protocol' in the logs.

settings.yml

defaults: &defaults
  log_path: 'log/<%= Rails.env %>.log'
  auth_username: 'containers'
  cookie_secret: 'e7247dae-a252-4393-afa3-2219c1c02efd'
  session_expiry: 86400

  cc_api_uri: 'http://api.10.244.0.34.xip.io'
  external_ip: 10.244.0.34
  external_host: 'xip.io'
  external_port: 80
  component_name: 'cf-containers-broker'

  ssl_enabled: false
  skip_ssl_validation: true

  host_directory: '/tmp/store/cf-containers-broker/'
  max_containers: 0

  message_bus_servers:
    - 'nats://10.244.0.6:4222'
emperor@emperor:/tmp/cf-containers-broker$ docker run -d --name cf-containers-broker --publish 8585:8585 --volume /var/run:/var/run --volume /tmp/cf-docker-broker/config:/config --volume /tmp/cf-docker-broker/logs:/app/log frodenas/cf-containers-broker
emperor@emperor:/tmp/cf-containers-broker$ docker logs -f cf-containers-broker
Fetching Containers Images...
(erb):98:in `<main>': undefined local variable or method `protocol' for main:Object (NameError)
        from /usr/local/lib/ruby/2.1.0/erb.rb:850:in `eval'
        from /usr/local/lib/ruby/2.1.0/erb.rb:850:in `result'
        from /usr/local/lib/ruby/gems/2.1.0/gems/settingslogic-2.0.9/lib/settingslogic.rb:103:in `initialize'
        from /usr/local/lib/ruby/gems/2.1.0/gems/settingslogic-2.0.9/lib/settingslogic.rb:60:in `new'
        from /usr/local/lib/ruby/gems/2.1.0/gems/settingslogic-2.0.9/lib/settingslogic.rb:60:in `instance'
        from /usr/local/lib/ruby/gems/2.1.0/gems/settingslogic-2.0.9/lib/settingslogic.rb:66:in `method_missing'
        from /app/config/application.rb:35:in `<class:Application>'
        from /app/config/application.rb:17:in `<module:CfContainersBroker>'
        from /app/config/application.rb:16:in `<top (required)>'
        from /usr/local/lib/ruby/site_ruby/2.1.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from /usr/local/lib/ruby/site_ruby/2.1.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from bin/fetch_container_images:3:in `<main>'
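One way to locate the offending reference (a debugging sketch, not an official tool): the stack trace shows Settingslogic running the settings file through ERB before parsing it as YAML, so evaluating the same file with ERB yourself reproduces the NameError together with its (erb) line number. This assumes the template references nothing outside the stubbed Rails.env:

require 'erb'

# Minimal stub so the template's <%= Rails.env %> reference evaluates
# outside the Rails app.
module Rails
  def self.env
    'development'
  end
end

# Evaluate the mounted settings file the same way Settingslogic does;
# a NameError here points at the (erb) line referencing `protocol'.
puts ERB.new(File.read('/config/settings.yml')).result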

Incorrect "user" and "db" credentials keys in postgresql93 sample config

In config/settings.yml, the postgresql93 sample service uses incorrect "user" (should be "username") and "db" (should be "dbname") keys in its credentials config. If a user derives their initial config from settings.yml, they end up with a VCAP_SERVICES configuration that omits the username and password and contains an invalid URI. In addition, the relevant user/db variables don't make their way into the container environment, so the Postgres DB instance isn't created and bootstrapped as expected.

current broken state

$ cf env cf-env
Getting env variables for app cf-env in org cbuben / space development as [email protected]...
OK

{
  "VCAP_SERVICES": {
    "postgresql93": [
      {
        "credentials": {
          "hostname": "127.0.0.1",
          "password": "3ijmil3ju4whfpd3",
          "port": "32768",
          "ports": {
            "5432/tcp": "32768"
          },
          "uri": "postgres://127.0.0.1:32768"
        },
        "label": "postgresql93",
        "name": "mydb",
        "plan": "free",
        "tags": [
          "postgresql93",
          "postgresql",
          "relational"
        ]
      }
    ]
  }
}
...

fix

$ git diff
diff --git a/config/settings.yml b/config/settings.yml
index a9a6c22..b58cf86 100644
--- a/config/settings.yml
+++ b/config/settings.yml
@@ -50,11 +50,11 @@ defaults: &defaults
             persistent_volumes:
               - '/data'
           credentials:
-            user:
+            username:
               key: 'POSTGRES_USERNAME'
             password:
               key: 'POSTGRES_PASSWORD'
-            db:
+            dbname:
               key: 'POSTGRES_DBNAME'
             uri:
               prefix: 'postgres'

after

$ cf env cf-env
Getting env variables for app cf-env in org cbuben / space development as [email protected]...
OK

...

{
  "VCAP_SERVICES": {
    "postgresql93": [
      {
        "credentials": {
          "dbname": "zyjy00hr55mpmau2",
          "hostname": "127.0.0.1",
          "password": "r80vtvwlb3hkwpm6",
          "port": "32768",
          "ports": {
            "5432/tcp": "32768"
          },
          "uri": "postgres://y5yocitwv2bjr9ye:[email protected]:32768/zyjy00hr55mpmau2",
          "username": "y5yocitwv2bjr9ye"
        },
        "label": "postgresql93",
        "name": "mydb",
        "plan": "free",
        "tags": [
          "postgresql93",
          "postgresql",
          "relational"
        ]
      }
    ]
  }
}

Host ports being reassigned on restart

@frodenas we're observing that docker/broker is not reusing the same host:container port combination on restart (which means that apps cannot connect to the services, and the binding needs to be recreated). Is this the old behavior, or a regression from upgrading to Docker 1.6?

/cc @djsplice
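If it helps to reproduce this, the published mapping can be compared before and after a restart with `docker port` (the container name below is illustrative):

docker port cf-5edc4f2a-fe06-4d4a-bf2c-64160bdaae2a 5432/tcp
docker restart cf-5edc4f2a-fe06-4d4a-bf2c-64160bdaae2a
docker port cf-5edc4f2a-fe06-4d4a-bf2c-64160bdaae2a 5432/tcp

If the two outputs differ (e.g. 0.0.0.0:32768 before and 0.0.0.0:32801 after), any existing binding still points at the old port.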

cf-containers-broker fails to work with UAA and self-signed SSL certificates

The skip_ssl_validation setting isn't passed to omniauth and cf-uaa-lib. This causes cf-uaa-lib to fail to acquire a UAA token when a self-signed SSL certificate is used.

Please note that cf-uaa-lib, from version 3.0.0 on, tries to validate the SSL certificate by default (see the cf-uaa-lib 3.0.0 release notes).

Please fix this by passing skip_ssl_validation through to cf-uaa-lib.
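A minimal sketch of the requested fix, assuming the broker obtains tokens through cf-uaa-lib's CF::UAA::TokenIssuer; the target, client id, and secret below are placeholders, and skip_ssl_validation is the options key documented by cf-uaa-lib 3.x:

require 'uaa'

# Hedged sketch, not the broker's actual code: forward the broker's
# skip_ssl_validation setting into cf-uaa-lib's options hash so that
# self-signed UAA certificates are accepted when the operator opts in.
token_issuer = CF::UAA::TokenIssuer.new(
  'https://uaa.example.com',      # token server URL (placeholder)
  'broker-client-id',             # OAuth client id (placeholder)
  'broker-client-secret',         # OAuth client secret (placeholder)
  skip_ssl_validation: Settings.skip_ssl_validation
)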

Stack trace from cf-containers-broker.log when requesting a UAA token for the first time:

  I, [2016-03-07T22:29:57.270940 #4693]  INFO -- : (cloudfoundry) Fetching access token
  I, [2016-03-07T22:29:57.271069 #4693]  INFO -- : (cloudfoundry) Client: p-couchdb16-client auth_server: http://login.<cf_domain> token_server: https://uaa.<cf_domain>
  F, [2016-03-07T22:29:57.288340 #4693] FATAL -- :
  CF::UAA::SSLException (Invalid SSL Cert for https://uaa.<cf_domain>/oauth/token. Use '--skip-ssl-validation' to continue with an insecure target):
  cf-uaa-lib (3.2.4) lib/uaa/http.rb:169:in `rescue in net_http_request'
  cf-uaa-lib (3.2.4) lib/uaa/http.rb:157:in `net_http_request'
  cf-uaa-lib (3.2.4) lib/uaa/http.rb:145:in `request'
  cf-uaa-lib (3.2.4) lib/uaa/token_issuer.rb:77:in `request_token'
  cf-uaa-lib (3.2.4) lib/uaa/token_issuer.rb:234:in `authcode_grant'
  /var/vcap/packages/cf-containers-broker/vendor/cache/omniauth-uaa-oauth2-f892fd3415f9/lib/omniauth/strategies/cloudfoundry.rb:176:in `build_access_token'
  /var/vcap/packages/cf-containers-broker/vendor/cache/omniauth-uaa-oauth2-f892fd3415f9/lib/omniauth/strategies/cloudfoundry.rb:127:in `callback_phase'
  omniauth (1.2.2) lib/omniauth/strategy.rb:227:in `callback_call'
  omniauth (1.2.2) lib/omniauth/strategy.rb:184:in `call!'
  omniauth (1.2.2) lib/omniauth/strategy.rb:164:in `call'
  omniauth (1.2.2) lib/omniauth/builder.rb:59:in `call'
  rack (1.6.4) lib/rack/session/abstract/id.rb:225:in `context'
  rack (1.6.4) lib/rack/session/abstract/id.rb:220:in `call'
  rack (1.6.4) lib/rack/etag.rb:24:in `call'
  rack (1.6.4) lib/rack/conditionalget.rb:25:in `call'
  rack (1.6.4) lib/rack/head.rb:13:in `call'
  actionpack (4.2.4) lib/action_dispatch/middleware/params_parser.rb:27:in `call'
  actionpack (4.2.4) lib/action_dispatch/middleware/callbacks.rb:29:in `block in call'
  activesupport (4.2.4) lib/active_support/callbacks.rb:88:in `__run_callbacks__'
  activesupport (4.2.4) lib/active_support/callbacks.rb:778:in `_run_call_callbacks'
  activesupport (4.2.4) lib/active_support/callbacks.rb:81:in `run_callbacks'
  actionpack (4.2.4) lib/action_dispatch/middleware/callbacks.rb:27:in `call'
  actionpack (4.2.4) lib/action_dispatch/middleware/remote_ip.rb:78:in `call'
  actionpack (4.2.4) lib/action_dispatch/middleware/debug_exceptions.rb:17:in `call'
  actionpack (4.2.4) lib/action_dispatch/middleware/show_exceptions.rb:30:in `call'
  railties (4.2.4) lib/rails/rack/logger.rb:38:in `call_app'
  railties (4.2.4) lib/rails/rack/logger.rb:20:in `block in call'
  activesupport (4.2.4) lib/active_support/tagged_logging.rb:68:in `block in tagged'
  activesupport (4.2.4) lib/active_support/tagged_logging.rb:26:in `tagged'
  activesupport (4.2.4) lib/active_support/tagged_logging.rb:68:in `tagged'
  railties (4.2.4) lib/rails/rack/logger.rb:20:in `call'
  actionpack (4.2.4) lib/action_dispatch/middleware/request_id.rb:21:in `call'
  rack (1.6.4) lib/rack/runtime.rb:18:in `call'
  activesupport (4.2.4) lib/active_support/cache/strategy/local_cache_middleware.rb:28:in `call'
  actionpack (4.2.4) lib/action_dispatch/middleware/static.rb:116:in `call'
  railties (4.2.4) lib/rails/engine.rb:518:in `call'
  railties (4.2.4) lib/rails/application.rb:165:in `call'
  rack (1.6.4) lib/rack/tempfile_reaper.rb:15:in `call'
  rack (1.6.4) lib/rack/lint.rb:49:in `_call'
  rack (1.6.4) lib/rack/lint.rb:37:in `call'
  rack (1.6.4) lib/rack/showexceptions.rb:24:in `call'
  rack (1.6.4) lib/rack/commonlogger.rb:33:in `call'
  rack (1.6.4) lib/rack/chunked.rb:54:in `call'
  rack (1.6.4) lib/rack/content_length.rb:15:in `call'
  vendor/bundle/ruby/2.2.0/gems/unicorn-5.0.0/lib/unicorn/http_server.rb:562:in `process_client'
  vendor/bundle/ruby/2.2.0/gems/unicorn-5.0.0/lib/unicorn/http_server.rb:658:in `worker_loop'
  vendor/bundle/ruby/2.2.0/gems/unicorn-5.0.0/lib/unicorn/http_server.rb:508:in `spawn_missing_workers'
  vendor/bundle/ruby/2.2.0/gems/unicorn-5.0.0/lib/unicorn/http_server.rb:132:in `start'
  unicorn (5.0.0) bin/unicorn:126:in `<top (required)>'
  /var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.2.0/bin/unicorn:23:in `load'
  /var/vcap/packages/cf-containers-broker/vendor/bundle/ruby/2.2.0/bin/unicorn:23:in `<main>'

Stack trace when refreshing the UAA token:

I, [2016-03-10T12:30:01.634196 #23104]  INFO -- : Completed 500 Internal Server Error in 124ms
F, [2016-03-10T12:30:01.635141 #23104] FATAL -- :
CF::UAA::SSLException (Invalid SSL Cert for https://uaa.<domain>/oauth/token. Use '--skip-ssl-validation' to continue with an insecure target):
  lib/uaa_session.rb:39:in `refreshed_token_info'
  lib/uaa_session.rb:9:in `build'
  app/controllers/manage/instances_controller.rb:64:in `build_uaa_session'
