accenture / adop-docker-compose

Talk to us on Gitter: https://gitter.im/Accenture/ADOP

Home Page: https://accenture.github.io/adop-docker-compose

License: Apache License 2.0

Shell 79.05% Ruby 3.06% HTML 4.94% CSS 12.27% Dockerfile 0.67%
Topics: adop, docker-machine

adop-docker-compose's Introduction

As of summer 2020 this repository has been deprecated and is no longer maintained.


The DevOps Platform: Overview

The DevOps Platform is a tools environment for continuously testing, releasing and maintaining applications. Reference code, delivery pipelines, automated testing and environments can be loaded in via the concept of Cartridges.

The platform runs on a Docker container cluster, so it can be stood up for evaluation purposes on just one server using local storage, or stood up in a multi-data-centre cluster with distributed network storage. It will also run anywhere that Docker runs.

Here is the front page:

[Screenshot: ADOP home page]

Once you have a stack up and running, you can log in with the username and password created upon start-up.

If you provisioned your stack using the start-up CLI, an example workspace containing an example project and an example cartridge will all have been pre-loaded in Jenkins:

[Screenshot: pre-loaded example workspace in Jenkins]

Once you have explored this, the next step is to create your own Workspace and Project, and then load another cartridge using the 'Load Cartridge' job in the 'Cartridge Management' folder (which is automatically created in every Project). The cartridge development cartridge also helps you create your own cartridges.

Quickstart Instructions

These instructions will spin up an instance on a single server in AWS (for evaluation purposes). Please check the prerequisites.

N.B. The instructions will also work anywhere supported by Docker Machine: just follow the relevant Docker Machine instructions for your target platform, then start at step 3 below (you can set AWS_VPC_ID to NA).

  1. Create a VPC using the VPC wizard in the AWS console by selecting the first option with 1 public subnet.
  2. On the "Step 2: VPC with a Single Public Subnet" page give your VPC a meaningful name and specify the availability zone as 'a', e.g. select eu-west-1a from the pulldown.
  3. Once the VPC is created, note the VPC ID (e.g. vpc-1ed3sfgw).
  4. Clone this repository and then, in a terminal window (this has been tested in Git Bash):
    • Run ./quickstart.sh with no arguments to see its usage:

      $ ./quickstart.sh
      Usage: ./quickstart.sh -t aws
                             -m <MACHINE_NAME>
                             -c <AWS_VPC_ID>
                             -r <AWS_DEFAULT_REGION>
                             -z <VPC_AVAIL_ZONE>(optional)
                             -a <AWS_ACCESS_KEY>(optional)
                             -s <AWS_SECRET_ACCESS_KEY>(optional)
                             -u <INITIAL_ADMIN_USER>
                             -p <INITIAL_ADMIN_PASSWORD>(optional) ...
      • You will need to supply:
        • the type of machine to create (aws, in this example)
        • a machine name (anything you want)
        • the target VPC
        • If you don't have your AWS credentials and default region stored locally in ~/.aws you will also need to supply:
          • your AWS access key ID and your secret access key (see getting your AWS access key) via command line options, environment variables or using aws configure
          • the AWS region id in this format: eu-west-1
        • a username and password (optional) to act as credentials for the initial admin user (you will be prompted to re-enter your password if it is considered weak)
          • The initial admin username cannot be set to 'admin' to avoid duplicate entries in LDAP.
        • AWS parameters i.e. a subnet ID, the name of a keypair and an EC2 instance type (these parameters are useful if you would like to extend the platform with additional AWS EC2 services)
    • For example (if you don't have ~/.aws set up):

      ./quickstart.sh -t aws -m adop1 -a AAA -s BBB -c vpc-123abc -r eu-west-1 -u user.name -p userPassword

      • N.B. If you see an error saying that docker-machine cannot find an associated subnet in a zone, go back to the VPC Dashboard in AWS and check the availability zone of the subnet you've created. Then rerun the startup script with the -z option to specify the zone of your subnet, e.g. for a zone of eu-west-1c the above command becomes:

        ./quickstart.sh -t aws -m adop1 -a AAA -s BBB -c vpc-123abc -r eu-west-1 -u user.name -p userPassword -z c

  5. If all goes well, you will see the following output and can view the DevOps Platform in your browser:
    ##########################################################
    
    SUCCESS, your new ADOP instance is ready!
    
    Run this command in your shell:
      source ./conf/env.provider.sh
      source credentials.generate.sh
      source env.config.sh
      
    You can check if any variables are missing with: ./adop compose config  | grep 'WARNING'
    
    Navigate to http://<PROXY IP> in your browser to use your new DevOps Platform!
    Login using the following credentials:
      Username: YOUR_USERNAME
      Password: YOUR_PASSWORD
    
  6. Log in using the username and password you specified in the quickstart script:

<INITIAL_ADMIN_USER> / <INITIAL_ADMIN_PASSWORD>

  7. Update the docker-machine security group in the AWS console to permit inbound HTTP traffic on ports 80 and 443 (only from the machine(s) from which you want access), plus UDP on ports 25826 and 12201 from 127.0.0.1/32 (see the sketch below).
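Step 7 can also be scripted with the AWS CLI. The sketch below is an assumption rather than part of the project: it presumes your security group is named docker-machine (the Docker Machine default) and that <YOUR_CIDR> is the address range you want to allow in:

  # allow web traffic from your own network only
  aws ec2 authorize-security-group-ingress --group-name docker-machine \
      --protocol tcp --port 80 --cidr <YOUR_CIDR>
  aws ec2 authorize-security-group-ingress --group-name docker-machine \
      --protocol tcp --port 443 --cidr <YOUR_CIDR>
  # logging ports, loopback only
  aws ec2 authorize-security-group-ingress --group-name docker-machine \
      --protocol udp --port 25826 --cidr 127.0.0.1/32
  aws ec2 authorize-security-group-ingress --group-name docker-machine \
      --protocol udp --port 12201 --cidr 127.0.0.1/32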

General Getting Started Instructions

The platform is designed to run on any container platform.

Provision Docker Engine(s)

To run in AWS (single instance) manually

  • Create a VPC using the VPC wizard in the AWS console by selecting the first option with 1 public subnet

  • Create a Docker Engine in AWS (replace the placeholders and their <> markers):

docker-machine create --driver amazonec2 --amazonec2-access-key <YOUR_ACCESS_KEY> --amazonec2-secret-key <YOUR_SECRET_KEY> --amazonec2-vpc-id <YOUR_VPC_ID> --amazonec2-instance-type m4.xlarge --amazonec2-region <YOUR_AWS_REGION, e.g. eu-west-1> <YOUR_MACHINE_NAME>
  • Update the docker-machine security group to permit inbound HTTP traffic on port 80 (only from the machine(s) from which you want access), plus UDP on ports 25826 and 12201 from 127.0.0.1/32

  • Set your local environment variables to point docker-machine to your new instance:

eval $(docker-machine env <YOUR_MACHINE_NAME>)
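Before continuing, it is worth checking that your client is really talking to the new engine. A quick sanity check (nothing here is specific to ADOP):

  docker-machine ls                       # the new machine should be listed as ACTIVE
  docker-machine ip <YOUR_MACHINE_NAME>   # note this IP; it becomes TARGET_HOST later
  docker info                             # should describe the remote engine, not a local one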

To run locally

  • Create a local Docker Engine (replace the placeholders and their <> markers):
docker-machine create --driver virtualbox --virtualbox-memory 2048 <YOUR_MACHINE_NAME>
  • Set your local environment variables to point docker-machine to your new instance:
eval $(docker-machine env <YOUR_MACHINE_NAME>)
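Note that 2048 MB is a bare minimum; an issue reported further down this page suggests at least 8192 MB for a usable local instance, and docker-machine ip is a handy way to find the address you will need later:

  docker-machine create --driver virtualbox --virtualbox-memory 8192 <YOUR_MACHINE_NAME>
  docker-machine ip <YOUR_MACHINE_NAME>   # use this as IP_OF_PUBLIC_HOST when launching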

To run with Docker Swarm

Create a Docker Swarm that has a publicly accessible Engine with the label "tier=public", in order to bind Nginx and Logstash to that node (a sketch of creating such a node follows).
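A minimal sketch of creating such a node with Docker Machine, assuming the classic standalone Swarm (the discovery token and the choice of driver are illustrative):

  docker-machine create --driver amazonec2 \
      --swarm --swarm-discovery token://<SWARM_TOKEN> \
      --engine-label tier=public \
      <PUBLIC_NODE_NAME>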

Launching

  • Run: export TARGET_HOST=<IP_OF_PUBLIC_HOST>
  • Run: export CUSTOM_NETWORK_NAME=<CUSTOM_NETWORK_NAME>
  • Create a custom network: docker network create $CUSTOM_NETWORK_NAME
  • Run: docker-compose -f compose/elk.yml up -d
  • Run: export LOGSTASH_HOST=<IP_OF_LOGSTASH_HOST>
  • Source all the required parameters for your chosen cloud provider.
    • For example, for AWS you will need to source AWS_VPC_ID, AWS_SUBNET_ID, AWS_KEYPAIR, AWS_INSTANCE_TYPE and AWS_DEFAULT_REGION. To do this, make a copy of the env.aws.provider.sh.example file in /conf/provider/examples and save it as env.provider.aws.sh in /conf/provider. You can then replace all the tokens with your values.
    • You should then run: source ./conf/env.provider.sh (this will source all the provider-specific environment variable files you have specified).
    • The provider-specific environment variable files should not be uploaded to a remote repository; hence they should not be removed from the .gitignore file.
  • Run: source credentials.generate.sh [This creates a file containing your generated passwords, platform.secrets.sh, which is then sourced. If the file already exists, it will not be re-created.]
    • platform.secrets.sh should not be uploaded to a remote repository either, so do not remove this file from the .gitignore file.
  • Run: source env.config.sh
    • If you delete platform.secrets.sh or if you edit any of the variables manually, you will need to re-run credentials.generate.sh in order to recreate the file or re-source the variables.
    • If you change the values in platform.secrets.sh, you will need to remove your existing docker containers and re-run docker-compose in order to re-create the containers with the new password values.
    • When creating a new instance of ADOP, you must delete platform.secrets.sh and regenerate it using credentials.generate.sh, else your old environment variables will get sourced as opposed to the new ones.
  • Choose a volume driver - "local" and "nfs" are provided; if the latter is chosen, an NFS server is expected, along with the NFS_HOST environment variable
  • Pull the images first (this is because we can't set dependencies in Compose yet so we want everything to start at the same time): docker-compose pull
  • Run (the logging driver file is optional): docker-compose -f docker-compose.yml -f etc/volumes/<VOLUME_DRIVER>/default.yml -f etc/logging/syslog/default.yml up -d (a consolidated sketch of this sequence follows below)
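Put together, the launch sequence might look like the following sketch (placeholders and the choice of the "local" volume driver are illustrative):

  export TARGET_HOST=<IP_OF_PUBLIC_HOST>
  export CUSTOM_NETWORK_NAME=<CUSTOM_NETWORK_NAME>
  docker network create $CUSTOM_NETWORK_NAME
  docker-compose -f compose/elk.yml up -d
  export LOGSTASH_HOST=<IP_OF_LOGSTASH_HOST>

  source ./conf/env.provider.sh
  source credentials.generate.sh
  source env.config.sh

  docker-compose pull
  docker-compose -f docker-compose.yml \
      -f etc/volumes/local/default.yml \
      -f etc/logging/syslog/default.yml up -d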

Required environment variables on the host

  • MACHINE_NAME: the name of your Docker machine
  • TARGET_HOST: the DNS name/IP of the proxy
  • LOGSTASH_HOST: the DNS name/IP of Logstash
  • CUSTOM_NETWORK_NAME: the name of the pre-created custom network to use
  • [OPTIONAL] NFS_HOST: the DNS name/IP of your NFS server

Using the platform

Generate SSL certificates

Create an SSL certificate for Jenkins to allow connectivity with the Docker engine.

  • Run: source ./conf/env.provider.sh
  • Run: source credentials.generate.sh
  • Run: source env.config.sh
  • Run: ./adop compose gen-certs ${DOCKER_CLIENT_CERT_PATH}

Note: on Windows, run these commands from a terminal (Git Bash) as administrator.

Load Platform
  • Access the target host URL, http://<TARGET_HOST>, with your username and password.
  • This page presents the links to all the tools.
  • Click the Jenkins link.
  • Run the Load_Platform job.
  • Once the Load_Platform job and its downstream jobs have finished, your platform is ready to be used.
  • This job generates an example workspace folder, an example project folder, and Jenkins jobs/pipelines for the Java reference application.
  • Create an environment to deploy the reference application:
  • Navigate to http://<TARGET_HOST>/jenkins/job/ExampleWorkspace/job/ExampleProject/job/Create_Environment
  • Build with Parameters, keeping the default values.
  • Run the example pipeline:
  • Navigate to http://<TARGET_HOST>/jenkins/job/ExampleWorkspace/job/ExampleProject/view/Java_Reference_Application/
  • Click run.
  • Browse the environment:
  • Click the URL for your environment from the deploy job.
  • You should be able to see the Spring PetClinic application.
  • Now you can clone the repository from Gerrit and make a code change to see the example pipeline triggered automatically (a sketch follows below).
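A hedged sketch of that last step; the repository name is illustrative (Gerrit shows the exact clone URL for each project) and port 29418 matches the Gerrit SSH URLs that appear elsewhere on this page:

  git clone ssh://<YOUR_USERNAME>@<TARGET_HOST>:29418/ExampleWorkspace/ExampleProject/spring-petclinic
  cd spring-petclinic
  # edit a file, then commit and push to trigger the pipeline
  git commit -am "Trivial change to trigger the pipeline"
  git push origin master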

Define Default Elasticsearch Index Pattern

Kibana 4 does not provide a configuration property for defining the default index pattern, so the following manual procedure should be used to define one:

  • Navigate to Settings > Indices in the Kibana dashboard
  • Set the index name or pattern to "logstash-*"
  • In the Time-field name drop-down below, select @timestamp
  • Click the Create button (a scripted alternative is sketched below)
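A scripted alternative, offered purely as an assumption: Kibana 4.x keeps its settings in the .kibana Elasticsearch index, so the default index pattern can often be set with a single authenticated request through the proxy (the 4.3.1 document ID must match your Kibana version):

  curl -XPUT -u <user>:<password> \
      "http://<TARGET_HOST>/elasticsearch/.kibana/config/4.3.1" \
      -d '{"defaultIndex": "logstash-*"}'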

User Feedback

Documentation

Documentation can be found on our GitHub Pages site.

Issues

If you have any problems with or questions about this project, please contact us through Gitter or a GitHub issue.

Contribute

You are invited to contribute new features, fixes, or updates, large or small; we are always thrilled to receive pull requests, and do our best to process them as fast as we can. You can find more information in our documentation.

Before you start to code, we recommend discussing your plans through a GitHub issue, especially for more ambitious contributions. This gives other contributors a chance to point you in the right direction, give you feedback on your design, and help you find out if someone else is working on the same thing.

Roadmap

We use this working Roadmap to evolve and summarise plans for future features and the merge of existing PRs.

adop-docker-compose's People

Contributors

alexpinhel, anton-kasperovich, dantarl, deors, dependabot[bot], dsingh07, ifourmanov, javimoya, kramos, kristapsm, larrywright, luismsousa, marisbahtins, mc-slava, michael-t-dukes, nickdgriffin, oscarrenalias, quirinobrizi, robertnorthard, robwells57, sachinksingh28, samizdam, stephenmcnicholas


adop-docker-compose's Issues

'Nginx was unavailable' message loop during quickstart installation

I am new to ADOP and trying to get started with the quickstart script. After setting up the AWS credentials and prerequisites for ADOP evaluation mode, I tried executing quickstart.sh using the default region eu-west-1 (Ireland), where the script executes well up until the Nginx installation.

The script fails at the "Waiting for Nginx to become available" message, followed by an "Nginx was unavailable" message loop. Is there any workaround for this? Am I missing something here? I looked for a solution but couldn't find any.

Kibana cannot be configured, it says 'Unable to fetch mapping'

As per the instructions, when a brand new ADOP instance is provisioned, I try to configure indices in the Kibana dashboard. However, when I type logstash-* as the index name, a message appears below saying 'Unable to fetch mapping. Do you have indices matching the pattern?'.
Any ideas about what could be causing this, or any known workaround?
Thanks in advance.

Sensitive passwords show up as environment variables in jobs

When installing ADOP on-premise we noticed that the sensitive passwords were shown in the ENVIRONMENT VARIABLES of all jobs.

This means any person with access to "View" any job in Jenkins will be able to see these sensitive passwords; we picked out the following:

  • CREDENTIALS_LDAP_SERVICE_USER_PASSWORD
  • GERRIT_JENKINS_PASSWORD
  • INITIAL_ADMIN_PASSWORD
  • LDAP_MANAGER_PASSWORD
  • SONAR_ACCOUNT_PASSWORD

I suggest that we change the way Docker provisions Jenkins so they are loaded in as credentials rather than system variables.

[Screenshot: passwords visible in a Jenkins job's environment variables]

Selenium Grid will not work when using a docker overlay network

I'm currently using a swarm cluster with docker overlay network and had the same issue as below. I could not access Selenium Grid.

Problem Reference: SeleniumHQ/docker-selenium#51
http://automatictester.co.ukk/2012/10/27/selenium-grid-2-0-and-remotehost-parameter/

Logs from selenium hub container:
selenium-hub | 16:59:45.925 INFO - Registered a node http://172.20.0.4:5555
selenium-hub | 16:59:50.253 INFO - Registered a node http://172.20.0.2:5555

The above is fine when running the selenium stack on a docker bridge type network but will not work against a docker overlay type network.

To recreate the issue:
use a CUSTOM_NETWORK_NAME that is a docker overlay network type instead of the default bridge.

To fix the issue I have to add a new environment variable entry.
For chrome:
REMOTE_HOST: "http://selenium-node-chrome:5555"
For firefox:
REMOTE_HOST: "http://selenium-node-firefox:5555"

As said in the topic link above and as I've observed, selenium by default uses the bridge network container ip instead of their overlay network ip.

Logs from selenium-hub after applying the fix:
selenium-hub | 17:02:47.526 INFO - Registered a node http://selenium-node-chrome:5555
selenium-hub | 17:02:51.736 INFO - Registered a node http://selenium-node-firefox:5555

Startup shell script fails with >1 line in AWS config

When using quickstart.sh or startup.sh with the default AWS region, the scripts fail when more than one property line is present in ~/.aws/config - for example, I set my output format to 'table', which added a new config line to 'config'. The sed command "eval $(grep -v '^[' ~/.aws/config | sed 's/^(region)\s?=\s?/export AWS_DEFAULT_REGION=/')" does not filter for only the line beginning with region, but rather returns all lines in the file. The script fails with the following error: "./quickstart.sh: line 84: export: `=': not a valid identifier".

Unfortunately, simply hardcoding the region with '-r' does not prevent this error, as the config file appears to be parsed regardless of the input value.

Removing the format config line from the file resolves this issue.
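A stricter parse that matches only the region line would avoid this; a sketch, not the project's actual fix:

  # print only the value of the 'region' key, ignoring every other line in the file
  export AWS_DEFAULT_REGION=$(sed -n 's/^region[[:space:]]*=[[:space:]]*//p' ~/.aws/config)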

Test connection at "Docker Builder" on Jenkins fails

Using ADOP with the default configuration on my local machine, under the Jenkins global configuration, I have specified the Docker REST API URL (something like http://x.x.x.x:2376, where x.x.x.x is the Docker host IP, i.e. the ADOP machine) to use the docker command; however it gives this result:

Something went wrong, cannot connect to http://x.x.x.x:2376, cause: null

Could anyone advise this please?

Below is the Docker version (version should be new):

Client:
Version: 1.12.5
API version: 1.24
Go version: go1.6.4
Git commit: 7392c3b
Built: Fri Dec 16 06:14:34 2016
OS/Arch: linux/amd64

Server:
Version: 1.12.5
API version: 1.24
Go version: go1.6.4
Git commit: 7392c3b
Built: Fri Dec 16 06:14:34 2016
OS/Arch: linux/amd64

Below is the "Docker build step plugin" version of Jenkins:

Release 1.35 (2016-05-11)
https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin

Please update quick start documentation

In the process of setting up ADOP locally, I believe there are a couple of things that would be helpful to the consumer in the documentation.

  1. In the section on running locally, on the setup of the machine:
    docker-machine create --driver virtualbox --virtualbox-memory 2048 YOUR_MACHINE_NAME

2048 is not enough memory to run ADOP; it is suggested to have at least 8192 (or 8G).

  2. Adding an example command to get the IP for IP_OF_PUBLIC_HOST. Something like:
    docker-machine ip YOUR_MACHINE_NAME

Thanks,
ADOP User

Update the Provisioning Documentation

Opening this ticket to track the improvement of the site section on provisioning.

Plan:

  • Copy the quickstart guide from the main readme to the provisioning section
  • Add a page on using the adop CLI to stand up the platform on an existing instance
  • Update the troubleshooting steps page

Selenium Grid is not respecting timeout values

Having a long-running Selenium test, or a test running against a page that loads very slowly, can cause unexpected timeouts.

To fix this I have to increase the timeout in the Selenium node containers' SE_OPTS variable.

Example:

selenium-node-chrome:
    restart: always
    image: selenium/node-chrome:2.53.0
    environment:
      SE_OPTS: "-nodeConfig /var/selenium-config/config-chrome.json -browserTimeout 86400 -timeout 86400"
      REMOTE_HOST: "http://selenium-node-chrome:5555"
      HUB_PORT_4444_TCP_ADDR: "selenium-hub"
      HUB_PORT_4444_TCP_PORT: "4444"

Similar issue: SeleniumHQ/docker-selenium#150

compose is not tested before commit

Line 26: -printf " %-22s %s\n" "init" <--without-pull> "Initialises ADOP without pulling images"

where <--without-pull> should be enclosed in double quotes.

This errors out when executing from Git Bash on the Windows platform (Win 7).

It works fine from a Mac.
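One way to apply the quoting fix suggested above, shown as a sketch (folding the optional flag into the quoted command name keeps the two-column help layout intact):

  # quoting stops Git Bash from treating < and > as redirection operators
  printf "  %-22s %s\n" "init <--without-pull>" "Initialises ADOP without pulling images"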

Docker daemon not starting with quickstart.sh

Hi,
I'm getting the below error while running ADOP on AWS with quickstart.sh

./quickstart.sh -t aws -m sealabadop -c vpc-xxx -r us-east-1 -z a  -a xxxx -s xxxxx -u rajiv -p xxxxxx

      ###    ########   #######  ########
     ## ##   ##     ## ##     ## ##     ##
    ##   ##  ##     ## ##     ## ##     ##
   ##     ## ##     ## ##     ## ########
   ######### ##     ## ##     ## ##
   ##     ## ##     ## ##     ## ##
   ##     ## ########   #######  ##

Creating a new AWS variables file...
Running pre-create checks...
Creating machine...
(sealabadop) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host "107.23.221.162:2376": read tcp 10.0.0.17:45111->107.23.221.162:2376: read: connection reset by peer
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.

Here is my docker-machine version:

docker-machine version
docker-machine version 0.6.0, build e27fb87

I got the below logs from docker-machine:

 journalctl -fu docker.service
-- Logs begin at Mon 2017-02-20 04:21:48 UTC. --
Feb 20 04:23:25 sealabadop systemd[1]: Stopping Docker Application Container Engine...
Feb 20 04:23:25 sealabadop dockerd[3927]: time="2017-02-20T04:23:25.948709206Z" level=info msg="Processing signal 'terminated'"
Feb 20 04:23:26 sealabadop dockerd[3927]: time="2017-02-20T04:23:26.025723719Z" level=info msg="stopping containerd after receiving terminated"
Feb 20 04:23:27 sealabadop systemd[1]: Stopped Docker Application Container Engine.
Feb 20 04:23:29 sealabadop systemd[1]: Started docker.service.
Feb 20 04:23:29 sealabadop docker[4639]: time="2017-02-20T04:23:29.094308614Z" level=info msg="libcontainerd: new containerd process, pid: 4649"
Feb 20 04:23:30 sealabadop docker[4639]: time="2017-02-20T04:23:30.128295561Z" level=fatal msg="Error starting daemon: error initializing graphdriver: driver not supported"
Feb 20 04:23:30 sealabadop systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 04:23:30 sealabadop systemd[1]: docker.service: Unit entered failed state.
Feb 20 04:23:30 sealabadop systemd[1]: docker.service: Failed with result 'exit-code'.

I started quickstart.sh from an Ubuntu 10.04 LTS VM.
Kindly help me resolve this issue.

Custom network name parameter is ignored in startup.sh

Currently the startup.sh script, at line 32, seems to be ignoring the custom network name parameter that, if passed through the CLI (-n), should be used instead of the default name (adopnetwork). In previous versions of the script, the assignment to the env var was correct.
Should line 32 be restored as found in previous versions? That is:
export CUSTOM_NETWORK_NAME=${OPTARG}

Jenkins Slave restarting on ADOP creation

I used the quickstart.sh script to create a new ADOP instance. I noticed that the Load_Platform job was hanging, and on closer inspection I found that the Jenkins slave keeps restarting. The Docker logs had multiple lines of

Error: Invalid or corrupt jarfile /bin/swarm-client.jar

Reference_Application_Code_Analysis fails when Reference_Application_Build build number is ahead of Reference_Application_Unit_Tests

Hi,

In the Java_Reference_Application pipeline in Jenkins, it seems that Reference_Application_Code_Analysis fails when the build number of Reference_Application_Build is higher than the build number of Reference_Application_Unit_Tests (this would happen if a build fails at some point). I was getting the following error:

[EnvInject] - Inject global passwords.
Started by upstream project "ExampleWorkspace/ExampleProject/Reference_Application_Unit_Tests" build number 6
originally caused by:
 Started by upstream project "ExampleWorkspace/ExampleProject/Reference_Application_Build" build number 7
 originally caused by:
  Triggered by Gerrit: http://gerrit:8080/gerrit/
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Injecting as environment variables the properties content 
WORKSPACE_NAME=ExampleWorkspace
PROJECT_NAME=ExampleWorkspace/ExampleProject
PROJECT_NAME_KEY=exampleworkspace-exampleproject

[EnvInject] - Variables injected successfully.
[EnvInject] - Injecting contributions.
Building remotely on Swarm_Slave-6839cf1d (swarm java8 ldap aws docker) in workspace /workspace/ExampleWorkspace/ExampleProject/Reference_Application_Code_Analysis

Deleting project workspace... done

[ssh-agent] Using credentials jenkins (ADOP Jenkins Master)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent]   Java/JNR ssh-agent
[ssh-agent] Started.
[ssh-agent] Stopped.
ERROR: Unable to find a build for artifact copy from: Reference_Application_Unit_Tests
Warning: you have no plugins providing access control for builds, so falling back to legacy behavior of permitting any downstream builds to be triggered
Finished: FAILURE

After triggering a manual build on Reference_Application_Unit_Tests the whole flow was working again.

Got 404 while creating index in elasticsearch

I'm trying to create a new index in elasticsearch from jenkins, here is the code that I'm executing:

curl -XPUT -u <user>:<password> -H "kbn-version: 4.3.1" -i "http://<ip>/elasticsearch/edms?pretty"

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    38  100    38    0     0  17335      0 --:--:-- --:--:-- --:--:-- 19000
HTTP/1.1 404 Not Found
Server: nginx
Date: Mon, 03 Oct 2016 22:45:08 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 38
Connection: keep-alive
Vary: Accept-Encoding
kbn-name: kibana
kbn-version: 4.3.1
cache-control: no-cache

{"statusCode":404,"error":"Not Found"}

I'm following instructions from here: elasticsearch

Several env vars are not properly sourced when re-running adop compose command

Once you have created a platform, if at a later point you wish to run docker-compose commands, for example to restart some of the services, you first need to source 3 files defining env vars. However, there are a few variables that seem not to be sourced, or at least so it appears. For example, when running docker-compose run proxy, the output is this:
WARNING: The CUSTOM_NETWORK_NAME variable is not set. Defaulting to a blank string.
WARNING: The TARGET_HOST variable is not set. Defaulting to a blank string.
WARNING: The JENKINS_PWD variable is not set. Defaulting to a blank string.
WARNING: The INITIAL_ADMIN_PASSWORD variable is not set. Defaulting to a blank string.
WARNING: The GERRIT_PWD variable is not set. Defaulting to a blank string.

Should those environment variables be properly sourced to ensure repeatability?

As a workaround, the "adop compose init" command still works, although it works on the entire stack and not on individual containers/services.

Create a Non Production or Lite version CF Swarm Template that deploys ADOP

A CF template that deploys the ADOP Docker container tools in a swarm cluster, but only in a single availability zone, with a minimal setup and security, for a simple development environment.

Here is a sample work that I did for one of my projects: https://github.com/bzon/adop-docker-compose-1/blob/master/provision/aws/swarm/CF-ADOP-Cluster-Lite.json

Some extensions have been made here, like adding GitLab, extending adop jenkins, extending adop nginx, the docker container, and override docker-compose files. But it can definitely deploy the vanilla ADOP Gen5 tool set.

Below is the diagram of how CF deploys the instances and ADOP tools.

ADOP-CF-template-lite.pdf

Kindly review. Thanks!

Edit: I've modified the CF template to launch the swarm infrastructure without deploying any ADOP containers.

https://github.com/bzon/adop-docker-compose/blob/master/provision/aws/quickstart-swarm/CF-ADOP-Cluster-Lite.json

Selenium link on release note broken if stack created using Swarm

If stack has been created using swarm, Selenium link on release note page is broken:
https://selenium.swarm-tes-proxyela-random-number.us-east-1.elb.amazonaws.com.xip.io

It tries to create the same link as it would have done if we had been using just the IP address of the stack, meaning it has a subdomain and also adds the extra "xip.io".

Reference_Application_Deploy_ProdA failing

The Reference_Application_Deploy_ProdA task in the pre-packaged example is failing for me with the following error. Am I doing something wrong when executing it?

+ docker cp /workspace/ExampleWorkspace/ExampleProject/Reference_Application_Deploy_ProdA/target/petclinic.war ExampleWorkspace_ExampleProject_PRODA:/usr/local/tomcat/webapps/
no such directory
Build step 'Execute shell' marked build as failure
[ssh-agent] Stopped.
Warning: you have no plugins providing access control for builds, so falling back to legacy behavior of permitting any downstream builds to be triggered
Finished: FAILURE

Generating certificates in OS X 10.11 doesn't work

Overall output:

bash -x ./adop compose gen_certs ~/.foo
++ basename ./adop
+ CMD_NAME=adop
+++ echo ./adop
+++ sed -e 's,\\,/,g'
++ dirname ./adop
+ export CLI_DIR=.
+ CLI_DIR=.
+ export CONF_DIR=.
+ CONF_DIR=.
+ export CONF_PROVIDER_DIR=./conf/provider
+ CONF_PROVIDER_DIR=./conf/provider
+ CLI_CMD_DIR=./cmd
+ main compose gen_certs /Users/oscar.renalias/.foo
+ '[' 3 -lt 1 ']'
+ SUBCOMMAND=compose
+ shift
+ '[' '!' -e ./cmd/compose ']'
+ . ./cmd/compose gen_certs /Users/oscar.renalias/.foo
++ SUB_CMD_NAME=compose
++ DEFAULT_MACHINE_NAME=default
++ export MACHINE_NAME=default
++ MACHINE_NAME=default
++ export VOLUME_DRIVER=local
++ VOLUME_DRIVER=local
++ export LOGGING_DRIVER=syslog
++ LOGGING_DRIVER=syslog
++ export CUSTOM_NETWORK_NAME=local_network
++ CUSTOM_NETWORK_NAME=local_network
++ export OVERRIDES=
++ OVERRIDES=
++ export TOTAL_OVERRIDES=
++ TOTAL_OVERRIDES=
++ export PULL=YES
++ PULL=YES
++ getopts m:f:F:v:l:n:i: opt
++ shift 0
++ SUBCOMMAND_OPT=gen_certs
++ '[' 2 -ge 1 ']'
++ shift
++ ADOPFILEOPTS='-f ./docker-compose.yml -f ./etc/volumes/local/default.yml -f ./etc/logging/syslog/default.yml'
++ ELKFILEOPTS='-f ./compose/elk.yml'
++ case ${SUBCOMMAND_OPT} in
++ gen_certs /Users/oscar.renalias/.foo
++ echo 'Generating client certificates for TLS-enabled Engine'
Generating client certificates for TLS-enabled Engine
++ CERT_PATH=/Users/oscar.renalias/.foo
++ '[' -z /Users/oscar.renalias/.foo ']'
+++ uname
++ HOST_OS=Darwin
++ CLIENT_SUBJ=/CN=client
++ echo Darwin
++ grep -E 'MINGW*'
++ TEMP_CERT_PATH=/Users/oscar.renalias/docker_certs
++ rm -rf /Users/oscar.renalias/docker_certs
++ mkdir -p /Users/oscar.renalias/docker_certs
++ set +e
++ openssl genrsa -out /Users/oscar.renalias/docker_certs/key.pem 4096
++ openssl req -subj /CN=client -new -key /Users/oscar.renalias/docker_certs/key.pem -out /Users/oscar.renalias/docker_certs/client.csr
++ echo 'extendedKeyUsage = clientAuth'
++ openssl x509 -req -days 365 -sha256 -in /Users/oscar.renalias/docker_certs/client.csr -CA /Users/oscar.renalias/.docker/machine/certs/ca.pem -CAkey /Users/oscar.renalias/.docker/machine/certs/ca-key.pem -CAcreateserial -out /Users/oscar.renalias/docker_certs/cert.pem -extfile /Users/oscar.renalias/docker_certs/extfile.cnf
++ set -e
++ CERT_FILE=/Users/oscar.renalias/docker_certs/cert.pem
++ [[ -s /Users/oscar.renalias/docker_certs/cert.pem ]]
++ echo '/Users/oscar.renalias/docker_certs/cert.pem was not created successfully and is empty.'
/Users/oscar.renalias/docker_certs/cert.pem was not created successfully and is empty.
++ echo 'This is because you have not run your shell window in Administrator mode or with root access.'
This is because you have not run your shell window in Administrator mode or with root access.
++ echo 'Please run your shell window in Administrator mode or with root access and re-run the quickstart script with the same flags provided in this run.'
Please run your shell window in Administrator mode or with root access and re-run the quickstart script with the same flags provided in this run.
++ exit 1

The last openssl command is failing. When running it manually, this is the output:

Signature ok
subject=/CN=client
Getting CA Private Key
/Users/oscar.srl: Permission denied
43913:error:02001002:system library:fopen:No such file or directory:/BuildRoot/Library/Caches/com.apple.xbs/Sources/OpenSSL098/OpenSSL098-59.40.2/src/crypto/bio/bss_file.c:356:fopen('/Users/oscar.srl','r')
43913:error:20074002:BIO routines:FILE_CTRL:system lib:/BuildRoot/Library/Caches/com.apple.xbs/Sources/OpenSSL098/OpenSSL098-59.40.2/src/crypto/bio/bss_file.c:358:
43913:error:0200100D:system library:fopen:Permission denied:/BuildRoot/Library/Caches/com.apple.xbs/Sources/OpenSSL098/OpenSSL098-59.40.2/src/crypto/bio/bss_file.c:356:fopen('/Users/oscar.srl','w')
43913:error:20074002:BIO routines:FILE_CTRL:system lib:/BuildRoot/Library/Caches/com.apple.xbs/Sources/OpenSSL098/OpenSSL098-59.40.2/src/crypto/bio/bss_file.c:358:

I'm not sure why it's trying to create the file /Users/oscar.srl, because that's never passed as a parameter anywhere. And I don't know enough about the openssl set of commands to troubleshoot this myself.

In case it helps, the version of OpenSSL is OpenSSL 0.9.8zh 14 Jan 2016, as provided in OS X 10.11.4.

Any ideas?

Move temporary folders out of $HOME

Quoting @oscarrenalias in #75:

When creating temporary folders as in TEMP_CERT_PATH, we may want to use something like mktemp to create temporary folders in the correct place for doing so (/tmp or /var/tmp, let the shell decide) instead of creating them in places like $HOME/docker_certs, as users may be led to believe that this is an official folder.

Alternatively, we could create the folder relative to the script's current path, so that everything remains within the adop directory.
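A minimal sketch of the mktemp approach:

  # let the system pick a safe temporary location instead of writing into $HOME
  TEMP_CERT_PATH=$(mktemp -d)
  trap 'rm -rf "$TEMP_CERT_PATH"' EXIT   # remove the folder again when the script exits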

Environment Variable Fixed

Hi,

Do you know how we can change the Jenkins environment variables?

It seems that these variables are hardcoded, and for my part this has created some issues.

For example, whatever the IP address of your ADOP instance (192.168.99.XXX), the variable 'Jenkins_Url' is fixed to '192.168.99.103/Jenkins' instead of '192.168.99.XXX/Jenkins'.

My solution was to set the ADOP instance IP address to 192.168.99.103, but that is not a clean solution; it would be better to be able to change the environment variable to what you need.

I tried to install this platform and am getting the following errors.

Attempt 1:

No availability zone specified - using default [a].
Your AWS parameters file already exists, deleting it...
Creating a new AWS variables file...
Running pre-create checks...
Creating machine...
(myadop) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Error creating machine: Error running provisioning: Unable to verify the Docker daemon is listening: Maximum number of retries (10) exceeded

Attempt 2:
No availability zone specified - using default [a].
Your AWS parameters file already exists, deleting it...
Creating a new AWS variables file...
Docker machine 'myadop' already exists

      ###    ########   #######  ########  
     ## ##   ##     ## ##     ## ##     ## 
    ##   ##  ##     ## ##     ## ##     ## 
   ##     ## ##     ## ##     ## ########  
   ######### ##     ## ##     ## ##        
   ##     ## ##     ## ##     ## ##        
   ##     ## ########   #######  ##        
  • Initialising ADOP
    Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "...:2376": dial tcp ...:2376: getsockopt: connection refused
    You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
    Be advised that this will trigger a Docker daemon restart which will stop running containers.

Sourcing provider-specific environment files...
Sourcing ./conf/provider/env.provider.aws.sh parameters file...
Your secrets file already exists, will not re-create...
Sourcing variables from platform.secrets.sh file...
The version of your secrets file is up to date, moving on...

  • Setting up Docker Network
    Network already exists: local_network
  • Pulling Docker Images
    Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "...:2376": dial tcp ...:2376: getsockopt: connection refused
    You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
    Be advised that this will trigger a Docker daemon restart which will stop running containers.

Sourcing provider-specific environment files...
Sourcing ./conf/provider/env.provider.aws.sh parameters file...
Your secrets file already exists, will not re-create...
Sourcing variables from platform.secrets.sh file...
The version of your secrets file is up to date, moving on...
Pulling elasticsearch (elasticsearch:2.1.1)...
ERROR: Couldn't connect to Docker daemon - you might need to run docker-machine start default.

Please let me know what may be wrong.

Reference_Application_Performance_Tests job failing due to not finding apache-jmeter-2.13.tgz

The Reference_Application_Performance_Tests job is failing because it cannot find the apache-jmeter-2.13.tgz file specified in the "Execute shell" section. The file is not listed at https://www.apache.org/dist/jmeter/binaries/. The file listed is apache-jmeter-3.0.tgz.

I modified the job (on a deployed Jenkins) to use apache-jmeter-3.0.tgz and I am getting the error: "BUILD FAILED
/workspace/ExampleWorkspace/ExampleProject/Reference_Application_Performance_Tests/jmeter-test/apache-jmeter-3.0/extras/build.xml:132: stylesheet /workspace/ExampleWorkspace/ExampleProject/Reference_Application_Performance_Tests/jmeter-test/apache-jmeter-3.0/extras/jmeter-results-detail-report_21.xsl doesn't exist."

I checked /workspace/ExampleWorkspace/ExampleProject/Reference_Application_Performance_Tests/jmeter-test/apache-jmeter-3.0/extras/build.xml and in it comes "". It seems the style_version depends on the value of the format property. As the value is 2.1, the style_version value is returned as "_21".

I added this to the "Execute shell" section:


if [ ! -f ${WORKSPACE}/$JMETER_TESTDIR/apache-jmeter-3.0/extras/jmeter-results-detail-report_21.xsl  ]; then
    echo "File jmeter-results-detail-report_21.xsl not found!"
    echo "Copying jmeter-results-detail-report.xsl as file jmeter-results-detail-report_21.xsl"
    cp ${WORKSPACE}/$JMETER_TESTDIR/apache-jmeter-3.0/extras/jmeter-results-detail-report.xsl ${WORKSPACE}/$JMETER_TESTDIR/apache-jmeter-3.0/extras/jmeter-results-detail-report_21.xsl
fi

and the job was successful.

Reading variables from ~/.aws/config and ~/.aws/credentials is broken

If data is present in either ~/.aws/config or ~/.aws/credentials, whatever bash magic is happening in the scripts isn't working. I enabled debug mode in the scripts by adding #!/bin/bash -ex at the top to see what's going on, and this is what I get:

+ echo ' 
      ###    ########   #######  ########  
     ## ##   ##     ## ##     ## ##     ## 
    ##   ##  ##     ## ##     ## ##     ## 
   ##     ## ##     ## ##     ## ########  
   ######### ##     ## ##     ## ##        
   ##     ## ##     ## ##     ## ##        
   ##     ## ########   #######  ##        
'

      ###    ########   #######  ########  
     ## ##   ##     ## ##     ## ##     ## 
    ##   ##  ##     ## ##     ## ##     ## 
   ##     ## ##     ## ##     ## ########  
   ######### ##     ## ##     ## ##        
   ##     ## ##     ## ##     ## ##        
   ##     ## ########   #######  ##        

+ getopts t:m:a:s:c:z:r:u:p: opt
+ case ${opt} in
+ export MACHINE_TYPE=aws
+ MACHINE_TYPE=aws
+ getopts t:m:a:s:c:z:r:u:p: opt
+ case ${opt} in
+ export AWS_ACCESS_KEY_ID=AKIAIR4X6GNTAQLFE36Q
+ AWS_ACCESS_KEY_ID=AKIAIR4X6GNTAQLFE36Q
+ getopts t:m:a:s:c:z:r:u:p: opt
+ case ${opt} in
+ export AWS_SECRET_ACCESS_KEY=rWqNA6sq87qjusvG/vF1x/UnRmYpUDlTsQYSxiwV
+ AWS_SECRET_ACCESS_KEY=rWqNA6sq87qjusvG/vF1x/UnRmYpUDlTsQYSxiwV
+ getopts t:m:a:s:c:z:r:u:p: opt
+ case ${opt} in
+ export MACHINE_NAME=ilmarinen-adop
+ MACHINE_NAME=ilmarinen-adop
+ getopts t:m:a:s:c:z:r:u:p: opt
+ case ${opt} in
+ export AWS_VPC_ID=vpc-c303f0a7
+ AWS_VPC_ID=vpc-c303f0a7
+ getopts t:m:a:s:c:z:r:u:p: opt
+ case ${opt} in
+ export AWS_DEFAULT_REGION=eu-west-1
+ AWS_DEFAULT_REGION=eu-west-1
+ getopts t:m:a:s:c:z:r:u:p: opt
+ '[' -z aws ']'
+ CLI_COMPOSE_OPTS=
+ case ${MACHINE_TYPE} in
+ provision_aws
+ '[' -z ilmarinen-adop ']'
+ '[' -z vpc-c303f0a7 ']'
+ '[' -z ']'
+ echo 'No availability zone specified - using default [a].'
No availability zone specified - using default [a].
+ export VPC_AVAIL_ZONE=a
+ VPC_AVAIL_ZONE=a
+ '[' -f /Users/oscar.renalias/.aws/credentials ']'
+ echo 'Using default AWS credentials from ~/.aws/credentials'
Using default AWS credentials from ~/.aws/credentials
+ '[' -z AKIAIR4X6GNTAQLFE36Q ']'
++ grep -v '^\[' /Users/oscar.renalias/.aws/credentials
++ sed 's/^\(.*\)\s=\s/export \U\1=/'
+ eval aws_access_key_id = AKIAIR4X6GNTAQLFE36Q aws_secret_access_key = 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
++ aws_access_key_id = AKIAIR4X6GNTAQLFE36Q aws_secret_access_key = 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
./quickstart.sh: line 119: aws_access_key_id: command not found

The version of bash in OS X 10.11 definitely doesn't like whatever is going on.

Job /job/ExampleWorkspace/job/ExampleProject/job/Create_Environment fails

Job /job/ExampleWorkspace/job/ExampleProject/job/Create_Environment doesn't work:

[EnvInject] - Inject global passwords.
Started by user Admin User
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Injecting as environment variables the properties content 
WORKSPACE_NAME=ExampleWorkspace
PROJECT_NAME=ExampleWorkspace/ExampleProject

[EnvInject] - Variables injected successfully.
[EnvInject] - Injecting contributions.
Building remotely on Swarm_Slave-68f7e58b (swarm java8 ldap aws docker) in workspace /workspace/ExampleWorkspace/ExampleProject/Create_Environment

Deleting project workspace... done

[ssh-agent] Using credentials jenkins (ADOP Jenkins Master)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent]   Java/JNR ssh-agent
[ssh-agent] Started.
Cloning the remote Git repository
Cloning repository ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template
 > git init /workspace/ExampleWorkspace/ExampleProject/Create_Environment # timeout=10
Fetching upstream changes from ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template
 > git --version # timeout=10
using GIT_SSH to set credentials ADOP Jenkins Master
 > git -c core.askpass=true fetch --tags --progress ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template # timeout=10
Fetching upstream changes from ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template
using GIT_SSH to set credentials ADOP Jenkins Master
 > git -c core.askpass=true fetch --tags --progress ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 86f2e0b110d7569d7933d77c6478d7e8e425c483 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 86f2e0b110d7569d7933d77c6478d7e8e425c483
 > git rev-list 86f2e0b110d7569d7933d77c6478d7e8e425c483 # timeout=10
[Create_Environment] $ /bin/sh -xe /tmp/hudson3927040214662628747.sh
+ set +x
CI, tomcat.conf
TLS configuration is invalid - make sure your DOCKER_TLS_VERIFY and DOCKER_CERT_PATH are set correctly.
You might need to run `eval "$(docker-machine env default)"`
Build step 'Execute shell' marked build as failure
[ssh-agent] Stopped.
Warning: you have no plugins providing access control for builds, so falling back to legacy behavior of permitting any downstream builds to be triggered
Finished: FAILURE

I tried defining DOCKER_TLS_VERIFY and DOCKER_CERT_PATH within the shell script that is embedded in the job, but it does not seem to make any difference. Where exactly are the TLS certs for the engine, and how should they be transferred to the node?

Please note that this instance is a single node ADOP server, created from OS X, where the gen-certs step was skipped because it won't work on OS X (see related issue #73)

Host Timeout on OSX

local platform: OSX 10.10.5
Docker version 1.12.2, build bb80604
docker-machine version 0.8.2, build e18a919

I'm trying to get ADOP running in my personal AWS instance using the quickstart guide, but I keep getting the error below.

I've run it about 6 different times: a few times from master, a few times from the tagged branch 0.2.2. I've tried it in different regions.

Any help would be appreciated.

      ###    ########   #######  ########
     ## ##   ##     ## ##     ## ##     ##
    ##   ##  ##     ## ##     ## ##     ##
   ##     ## ##     ## ##     ## ########
   ######### ##     ## ##     ## ##
   ##     ## ##     ## ##     ## ##
   ##     ## ########   #######  ##

Your AWS parameters file already exists, deleting it...
Creating a new AWS variables file...
Running pre-create checks...
Creating machine...
(sa-adop) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host "52.211.83.224:2376": dial tcp 52.211.83.224:2376: i/o timeout
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.

ADOP setup in Client Environment.

Hi,

I was trying to set up the ADOP platform in a client environment. ADOP is not starting because Jenkins is unavailable.

Regards,
-Ankush

UnicodeDecodeError thrown by docker-compose while running nexus image

Today I refreshed my adop-docker-compose local repo to the latest bytes and tried to provision a new instance in AWS. Everything went well as usual until the "Bringing up ADOP" phase. Shortly after that, docker-compose returned a UnicodeDecodeError. This is the console output:

* Bringing up ADOP...
Your secrets file already exists, moving on...
Sourcing variables from platform.secrets.sh file...
Creating elasticsearch
Creating logstash
Creating kibana

Creating selenium-hub
Creating nexus
Traceback (most recent call last):
File "", line 3, in
File "compose\cli\main.py", line 56, in main
File "compose\cli\docopt_command.py", line 23, in sys_dispatch
File "compose\cli\docopt_command.py", line 26, in dispatch
File "compose\cli\main.py", line 191, in perform_command

File "compose\project.py", line 318, in up
File "compose\service.py", line 351, in execute_convergence_plan

File "compose\service.py", line 614, in _get_container_create_options

File "compose\utils.py", line 87, in json_hash
File "json__init__.py", line 251, in dumps
File "json\encoder.py", line 209, in encode
File "json\encoder.py", line 434, in _iterencode
File "json\encoder.py", line 408, in _iterencode_dict
File "json\encoder.py", line 408, in _iterencode_dict
File "json\encoder.py", line 390, in _iterencode_dict
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe7 in position 11: invalid continuation byte
docker-compose returned -1

Any ideas of what could be causing the issue, and possible workarounds?

docker-machine unable to find a subnet in the zone if VPC is not in the AZ "a"

Context

Follow the QuickStart instructions.
In my case, on the step 1, I created VPC in eu-west-1 region.
On step 3 run startup.sh without AWS credentials and default region stored locally in ~/.aws.
I used the following command :

./startup.sh -m adop1 -c <newly_vpcid> -r eu-west-1 -a AWS_ACCESS_KEY -s AWS_SECRET_ACCESS_KEY -v local -n adopnetwork

Expected

docker-machine EC2 instance creation is OK

Actual

docker-machine exited with the following error:

"Error creating machine: Error with pre-create check: unable to find a subnet in the zone: eu-west-1a"

Analysis

In fact, docker-machine create for Amazon EC2 will always use the AZ "a" by default.

But on step "1. Create a VPC using the VPC wizard in the AWS console by selecting the first option with 1 public subnet" I did not fix the AZ to "a" and left the predefined "Availability Zone" field at "No preference".

So either you must fix the "Availability Zone" to "a" in the VPC creation wizard screen in step 1, or startup.sh needs to be fixed to introduce a new argument for docker-machine, "--amazonec2-zone" (see the sketch below).
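A sketch of the second option, passing the zone explicitly to docker-machine (other flags elided; note that quickstart.sh already exposes this through its -z option, as described in the quickstart section above):

  docker-machine create --driver amazonec2 \
      --amazonec2-vpc-id <newly_vpcid> \
      --amazonec2-region eu-west-1 \
      --amazonec2-zone b \
      adop1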

Unknown log opt 'syslog-tag' for syslog log driver

My environment:

  • OSX 10.11.6
  • Docker version 1.12.0-rc4, build e4a0dbc, experimental

Docker info:

Ismars-iMac:~ ismarslomic$ docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 61
Server Version: 1.12.0-rc4
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 47
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.15-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.788 GiB
Name: moby
ID: 6KZS:ZCKP:VICU:BVCX:UBPK:GYBT:5UPI:G5QY:4LCZ:4RV6:EMJB:R5ON
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 16
 Goroutines: 28
 System Time: 2016-07-23T12:23:09.78445108Z
 EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8

Steps to reproduce error:

Error displayed in console (after pulling Docker images is done):

ERROR: for sensu-rabbitmq  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for sensu-client  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for nexus  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for gerrit-mysql  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for gerrit  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for jenkins-slave  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for sensu-api  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for sonar  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for selenium-hub  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for sensu-redis  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for selenium-node-chrome  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for proxy  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for ldap  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for selenium-node-firefox  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for sensu-uchiwa  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for jenkins  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for sonar-mysql  unknown log opt 'syslog-tag' for syslog log driver

ERROR: for sensu-server  unknown log opt 'syslog-tag' for syslog log driver
ERROR: Encountered errors while bringing up the project.

Gerrit and SonarQube URLs in Jenkins should refer to public URL, not internal

Currently in the docker-compose.yml file, lines 224 and 227, inside the Jenkins configuration, we are using internal URLs for the front-ends of Gerrit and SonarQube. As a result, links in Jenkins do not work. We should replace them with public URLs, for example:

224: GERRIT_FRONT_END_URL: "http://gerrit:8080/gerrit" => "${PROTO}://${TARGET_HOST}/gerrit/"
227: SONAR_SERVER_URL: "http://sonar:9000/sonar/" => "${PROTO}://${TARGET_HOST}/sonar/"

Pre-creation network access validation

I've heard about some folks having a bad experience spinning up ADOP from a location with various ports blocked. Has anyone experienced this, and if so, what failed and when? Obviously if we can move the failures earlier in provisioning, people will be happier.

Broken TravisCI builds with no reason

From time to time we have broken TravisCI builds even when the functionality wasn't changed at all, e.g. changes that only touch README files. Some examples:

If we re-trigger these builds they come back "green", which is a bit confusing: why do they fail on the first, automatically triggered run? This needs investigation.

ADOP VM memory specs

I have been playing with ADOP, and getting it up (installed) and running takes a very good deal of time! I did not change the default memory allocated when creating the VM, and was wondering if something was not working fine. After a while, I simply increased the default memory from 2GB to 4GB and creation/startup times got "more reasonable".

Do you have any experience of the "minimal" memory that should be allocated to the VM in order to have the whole platform running smoothly?

Cheers!

Latest Tag of sstarcher/uchiwa broke ADOP sensu-uchiwa service

I just noticed a while ago, while testing ADOP, that the sensu-uchiwa service is not working as it was last night, and I saw that a new tag was pushed to this repo. To fix it, I had to hardcode the last working version, 0.15.0.

sensu-uchiwa:
  container_name: sensu-uchiwa
  restart: always
  image: sstarcher/uchiwa:0.15.0
  net: ${CUSTOM_NETWORK_NAME}
  environment:
    SENSU_HOSTNAME: sensu-api
  expose:
    - "3000"

quickstart issue on linux

The quickstart continually fails on AWS using Amazon Linux AMI 2016.03.3 (HVM) with the error

ERROR: .IOError: [Errno 2] No such file or directory: u'././compose/elk.yml'

This path is the parameter passed to the docker-compose command. I've confirmed that compose/elk.yml exists. I've also started with a fresh image and a fresh clone of the code, and the quickstart continually fails at this step.
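
A few hedged sanity checks, run from the repository root (the -f arguments are assumptions inferred from the error message):

pwd                        # should be the adop-docker-compose checkout
ls -l compose/elk.yml      # confirm the file exists where compose will look for it
# 'config' validates the compose files without starting anything
docker-compose -f docker-compose.yml -f compose/elk.yml config > /dev/null && echo "compose files parse OK"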

Jenkins Nexus integration

After all the pipelines have finished, I don't find any package in the Nexus repositories.

So I tried to deploy petclinic.war with a curl command, using an Execute shell step in the build task after the Maven install goal, but Nexus responds with "500 Server Error".

Using deploy:deploy-file instead (I am sure the administrator password is correct in settings.xml), the Nexus response is Unauthorized:
[WARNING] Could not transfer metadata org.springframework.samples:spring-petclinic:4.2.6-SNAPSHOT/maven-metadata.xml from/to snapshots (http://nexus:8081/nexus/content/repositories/snapshots): Not authorized, ReasonPhrase: Unauthorized.

Properties in the Jenkins mvn pipeline:
groupId=org.springframework.samples
artifactId=spring-petclinic
version=4.2.6-SNAPSHOT
generatePom=false
packaging=war
repositoryId=snapshots
url=http://nexus:8081/nexus/content/repositories/snapshots/
file=target/petclinic.war

Curl logs:

  • cd target

  • curl -v -F r=snapshots -F hasPom=false -F e=war -F g=com.spring.test -F a=petclinic -F v=1.0 -F p=war -F file=@petclinic.war -u administrator:qwerty11 http://nexus:8081/nexus/content/repositories/snapshots
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed

    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* About to connect() to nexus port 8081 (#0)

  • Trying 172.18.0.5...
  • Connected to nexus (172.18.0.5) port 8081 (#0)
  • Server auth using Basic with user 'administrator'

POST /nexus/content/repositories/snapshots HTTP/1.1
Authorization: Basic
User-Agent: curl/7.29.0
Host: nexus:8081
Accept: */*
Content-Length: 39280496
Expect: 100-continue
Content-Type: multipart/form-data; boundary=----------------------------e707398e1d13

< HTTP/1.1 100 Continue
} [data not shown]
2 37.4M 0 0 2 1088k 0 1811k 0:00:21 --:--:-- 0:00:21 1811k
7 37.4M 0 0 7 2704k 0 1711k 0:00:22 0:00:01 0:00:21 1711k
11 37.4M 0 0 11 4288k 0 1634k 0:00:23 0:00:02 0:00:21 1633k
16 37.4M 0 0 16 6416k 0 1792k 0:00:21 0:00:03 0:00:18 1792k
24 37.4M 0 0 24 9264k 0 2022k 0:00:18 0:00:04 0:00:14 2022k
49 37.4M 0 0 49 18.5M 0 3410k 0:00:11 0:00:05 0:00:06 3603k
77 37.4M 0 0 77 29.1M 0 4532k 0:00:08 0:00:06 0:00:02 5422k< HTTP/1.1 500 Server Error
< Date: Mon, 27 Feb 2017 19:30:27 GMT
< Server: Nexus/2.11.3-01
< X-Frame-Options: SAMEORIGIN
< X-Content-Type-Options: nosniff
< Accept-Ranges: bytes
< Content-Length: 0

  • HTTP error before end of send, stop sending
    <

100 37.4M 0 0 100 37.4M 0 5228k 0:00:07 0:00:07 --:--:-- 7230k

  • Closing connection 0
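
For what it's worth, two hedged observations. First, with deploy:deploy-file an Unauthorized response usually means the <server> id in settings.xml does not match the repositoryId passed to the goal ('snapshots' here). Second, the multipart form fields used above (r, g, a, v, p, e, file, hasPom) belong to Nexus 2's artifact-upload REST resource rather than the raw repository path, which may explain the 500; a sketch with the same credentials and coordinates (untested here):

cd target
# Hypothetical: POST the form to the Nexus 2 upload resource, not the repository URL
curl -v -u administrator:qwerty11 \
  -F r=snapshots -F hasPom=false -F e=war \
  -F g=com.spring.test -F a=petclinic -F v=1.0 -F p=war \
  -F file=@petclinic.war \
  http://nexus:8081/nexus/service/local/artifact/maven/content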

Enhancement: Platform Extension for CloudFormation to be orchestrated by Ansible

Enhancement Summary

Instead of using the AWS CLI in the Platform Extension job's execute shell to run CloudFormation templates, we can use Ansible to orchestrate this and have a flexible way of launching users' templates without locking them into predefined CF parameters. For now, let's focus on CloudFormation.

[Update]

Attached the sample screenshot from my Jenkins instance.

I just tested the whole suggestion and it works, using the following:

Terminologies

  • Playbook - the Ansible file or script; analogous to a Chef cookbook.
  • Ansible CF module - the cloudformation module

Requirements to accomplish this:

  • Update the jenkins-slave - yum install ansible python-boto in the jenkins-slave image.
  • Update the platform extension specification project - standardize on a playbook name such as cf-runner.yml. Users will place cf-runner.yml in the aws directory:
service
|__aws
   |__ service.template
   |__ cf-runner.yml
  • Users customize their own cf-runner.yml so they can add their own CloudFormation parameters if they have to. The example below runs the ADOP Chef server extension's service.template using the Ansible cloudformation module.
- hosts: localhost
  connection: local
  tasks:
    - name: Launch cloudformation for Chef extension
      cloudformation:
        stack_name: "ansible-cloudformation"
        state: "present"
        region: "{{ lookup('env','AWS_REGION') }}"
        disable_rollback: true
        template: "service.template"
        # User defined parameters
        template_parameters:
          KeyName: "{{ lookup('env','AWS_KEYPAIR') }}"
          InstanceType: "t2.large"
          EnvironmentName: "ChefServer"
          EnvironmentSubnet: "{{ lookup('env','AWS_SUBNET_ID') }}"
          VPCId: "{{ lookup('env','AWS_VPC_ID') }}"
          InboundCIDR: "0.0.0.0/0"
        tags:
          Stack: "chef-stack"

      # Register the cloudformation output facts into a variable
      register: cf_out

    # Print the output to stdout
    - debug: var=cf_out

    - name: Copy all the output facts into a file
      copy: content={{ cf_out }} dest=./cf_out.json

    # Get the important data and create a file for it.
    # In this example, we want to save the ip address of the provisioned IP so we can parse it to a variable
    # for modifying the configuration file that will be copied over the ADOP nginx proxy container.
    - name: Copy the ip address attribute of the output facts into a file
      copy: content={{ cf_out["stack_outputs"]["EC2InstancePrivateIp"] }} dest=./instance_ip.txt
  • The Jenkins execute shell will now contain something like this:
# Provision any EC2 instances in the AWS folder
if [ -d ${WORKSPACE}/service/aws ]; then

    if [ -f ${WORKSPACE}/service/aws/service.template ] && [ -f ${WORKSPACE}/service/aws/cf-runner.yml ]; then

        echo "#######################################"
        echo "Adding EC2 platform extension on AWS..."

        cd service/aws/
        ansible-playbook cf-runner.yml
        if [[ $? -gt 0 ]]; then exit 1; fi
        cd -

        if [ -f ${WORKSPACE}/service/aws/ec2-extension.conf ]; then

            echo "#######################################"
            echo "Adding EC2 instance to NGINX config using xip.io..."

            export SERVICE_NAME="EC2-Service-Extension-${BUILD_NUMBER}"
            cp ${WORKSPACE}/service/aws/ec2-extension.conf ec2-extension.conf
            NODE_IP=$(cat ${WORKSPACE}/service/aws/instance_ip.txt)

            ## Add nginx configuration
            sed -i "s/###EC2_SERVICE_NAME###/${SERVICE_NAME}/" ec2-extension.conf
            sed -i "s/###EC2_HOST_IP###/${NODE_IP}/" ec2-extension.conf
            docker cp ec2-extension.conf proxy:/etc/nginx/sites-enabled/${SERVICE_NAME}.conf

            ## Reload nginx
            docker exec proxy /usr/sbin/nginx -s reload

            ## Don't make Jenkins exit with an error if the nginx reload has failed.
            if [[ $? -gt 0 ]]; then
              echo "An error has been encountered while reloading nginx. There might be some upstreams that are not reachable."
              echo "Please run 'docker exec proxy /usr/sbin/nginx -s reload' to debug nginx."
            fi

            echo "You can check that your EC2 instance has been succesfully proxied by accessing the following URL: ${SERVICE_NAME}.${PUBLIC_IP}.xip.io"
        else
            echo "INFO: /service/aws/ec2-extension.conf not found"
        fi

    else
        echo "INFO: /service/aws/service.template or /service/aws/cf-runner.yml not found."
    fi
fi

Console Output

[Screenshot attached: adop-ext]

When using the startup.sh script my inline options get overridden by what is in the .aws folder

When I run startup.sh with my AWS access key in order to spin up an instance in AWS and use that for my ADOP, whatever I pass inline gets overridden by the contents of my .aws folder. This behaviour is bad, as it means I cannot use the command line for one-off operations and I have to fill in new parameters in my .aws options every time I want to use new access credentials. I was also met with the error ./startup.sh: line 115: output: command not found.

A workaround is to temporarily move the .aws folder and then run startup.sh, as sketched below.
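
A minimal sketch of that workaround (the startup.sh options are elided - substitute your own):

mv ~/.aws ~/.aws.bak      # park the folder so its contents cannot override the CLI options
./startup.sh ...          # run with the one-off inline options
mv ~/.aws.bak ~/.aws      # restore the folder afterwards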

Jenkins Create_Environment job fails with TLS Error

When running Create_Environment, Docker complained that the TLS configuration was invalid:
"TLS configuration is invalid - make sure your DOCKER_TLS_VERIFY and DOCKER_CERT_PATH are set correctly"

The project console output is pasted below. I added 'env' to the build script and 'set -x' so that all commands are displayed.
'''
[EnvInject] - Inject global passwords.
Started by user Admin User
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Injecting as environment variables the properties content
WORKSPACE_NAME=ExampleWorkspace
PROJECT_NAME=ExampleWorkspace/ExampleProject

[EnvInject] - Variables injected successfully.
[EnvInject] - Injecting contributions.
Building remotely on Swarm_Slave-a9fd4801 (swarm java8 ldap aws docker) in workspace /workspace/ExampleWorkspace/ExampleProject/Create_Environment

Deleting project workspace... done

[ssh-agent] Using credentials jenkins (ADOP Jenkins Master)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Java/JNR ssh-agent
[ssh-agent] Started.
Cloning the remote Git repository
Cloning repository ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template

git init /workspace/ExampleWorkspace/ExampleProject/Create_Environment # timeout=10
Fetching upstream changes from ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template
git --version # timeout=10
using GIT_SSH to set credentials ADOP Jenkins Master
git -c core.askpass=true fetch --tags --progress ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template +refs/heads/*:refs/remotes/origin/*
git config remote.origin.url ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template # timeout=10
git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
git config remote.origin.url ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template # timeout=10
Fetching upstream changes from ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template
using GIT_SSH to set credentials ADOP Jenkins Master
git -c core.askpass=true fetch --tags --progress ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template +refs/heads/*:refs/remotes/origin/*
git rev-parse refs/remotes/origin/master^{commit} # timeout=10
git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 86f2e0b110d7569d7933d77c6478d7e8e425c483 (refs/remotes/origin/master)
git config core.sparsecheckout # timeout=10
git checkout -f 86f2e0b110d7569d7933d77c6478d7e8e425c483
git rev-list 86f2e0b110d7569d7933d77c6478d7e8e425c483 # timeout=10
[Create_Environment] $ /bin/sh -xe /tmp/hudson4894424651019497212.sh

  • set -x
  • env
    BUILD_URL=http://52.91.218.227/jenkins/job/ExampleWorkspace/job/ExampleProject/job/Create_Environment/12/
    JAVA_TARBALL=server-jre-8u45-linux-x64.tar.gz
    HOSTNAME=1b861298ccd5
    SWARM_MASTER=http://jenkins:8080/jenkins/
    HUDSON_SERVER_COOKIE=65676e8ba7afc02b
    WORKSPACE_NAME=ExampleWorkspace
    DOCKER_HOST=tcp://52.91.218.227:2376
    BUILD_TAG=jenkins-ExampleWorkspace-ExampleProject-Create_Environment-12
    DOCKER_NETWORK_NAME=local_network
    GIT_PREVIOUS_COMMIT=86f2e0b110d7569d7933d77c6478d7e8e425c483
    ROOT_BUILD_CAUSE=MANUALTRIGGER
    ENVIRONMENT_TYPE=DEV
    WORKSPACE=/workspace/ExampleWorkspace/ExampleProject/Create_Environment
    JOB_URL=http://52.91.218.227/jenkins/job/ExampleWorkspace/job/ExampleProject/job/Create_Environment/
    SLAVE_LABELS=aws ldap java8 docker
    SWARM_USER=jenkins
    SSH_AUTH_SOCK=/tmp/jenkins2994977569656450663.jnr
    GIT_AUTHOR_NAME=ADOP Jenkins
    GIT_COMMITTER_NAME=ADOP Jenkins
    DOCKER_TLS_VERIFY=1
    SLAVE_EXECUTORS=1
    SLAVE_DESCRIPTION=Core Jenkins Slave
    NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
    GIT_COMMIT=86f2e0b110d7569d7933d77c6478d7e8e425c483
    SWARM_PASSWORD=ee42d31a287510567
    JENKINS_HOME=/var/jenkins_home
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    PROJECT_NAME=ExampleWorkspace/ExampleProject
    BUILD_CAUSE_MANUALTRIGGER=true
    GIT_COMMITTER_EMAIL=[email protected]
    INITIAL_ADMIN_PASSWORD=freeman87
    PWD=/workspace/ExampleWorkspace/ExampleProject/Create_Environment
    JAVA_HOME=/opt/java/jdk1.8.0_45
    HUDSON_URL=http://52.91.218.227/jenkins/
    JAVA_VERSION=1.8.0_45
    SLAVE_NAME=Swarm_Slave
    JOB_NAME=ExampleWorkspace/ExampleProject/Create_Environment
    XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
    BUILD_DISPLAY_NAME=#12
    BUILD_CAUSE=MANUALTRIGGER
    BUILD_ID=12
    JENKINS_URL=http://52.91.218.227/jenkins/
    HOME=/root
    DOCKER_CERT_PATH=//root/.docker/
    SHLVL=2
    INITIAL_ADMIN_USER=davidefreeman
    GIT_BRANCH=origin/master
    EXECUTOR_NUMBER=0
    JENKINS_SERVER_COOKIE=65676e8ba7afc02b
    GIT_URL=ssh://jenkins@gerrit:29418/ExampleWorkspace/ExampleProject/adop-cartridge-java-environment-template
    NODE_LABELS=Swarm_Slave-a9fd4801 aws docker java8 ldap swarm
    HUDSON_HOME=/var/jenkins_home
    NODE_NAME=Swarm_Slave-a9fd4801
    BUILD_NUMBER=12
    ROOT_BUILD_CAUSE_MANUALTRIGGER=true
    HUDSON_COOKIE=98984de5-1ef7-41bd-8823-6501c003df19
    GIT_AUTHOR_EMAIL=[email protected]
    SLAVE_MODE=exclusive
    _=/usr/bin/env
  • '[' DEV == DEV ']'
  • createDockerContainer CI tomcat.conf
  • echo CI, tomcat.conf
    CI, tomcat.conf
  • export ENVIRONMENT_NAME=CI
  • ENVIRONMENT_NAME=CI
    ++ echo ExampleWorkspace/ExampleProject
    ++ tr / _
  • export SERVICE_NAME=ExampleWorkspace_ExampleProject_CI
  • SERVICE_NAME=ExampleWorkspace_ExampleProject_CI
  • docker-compose -p ExampleWorkspace_ExampleProject_CI up -d
    TLS configuration is invalid - make sure your DOCKER_TLS_VERIFY and DOCKER_CERT_PATH are set correctly.
    You might need to run eval "$(docker-machine env default)"
    Build step 'Execute shell' marked build as failure
    [ssh-agent] Stopped.
    Warning: you have no plugins providing access control for builds, so falling back to legacy behavior of permitting any downstream builds to be triggered
    Finished: FAILURE
    '''
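
A hedged sketch for debugging this on the slave (the docker CLI flags are standard; whether valid certs actually exist at DOCKER_CERT_PATH is exactly what is in question):

# Verify the client certificates the Docker CLI and compose will use
ls -l "$DOCKER_CERT_PATH"            # expect ca.pem, cert.pem and key.pem
docker --tlsverify \
  --tlscacert="$DOCKER_CERT_PATH/ca.pem" \
  --tlscert="$DOCKER_CERT_PATH/cert.pem" \
  --tlskey="$DOCKER_CERT_PATH/key.pem" \
  -H "$DOCKER_HOST" version          # prints server info if the TLS handshake succeeds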
