
Deploying Microservices on AWS Cloud


This repo contains a simple application that consists of three microservices:

  1. webapp: A web application microservice that calls the greeting and name microservices to generate a greeting for a person.

  2. greeting: A microservice that returns a greeting.

  3. name: A microservice that returns a person’s name based upon {id} in the URL.

Each application is deployed using different AWS Compute options.

Build and Test Services using Maven

  1. Each microservice is in a different repo:

    greeting

    https://github.com/arun-gupta/microservices-greeting

    name

    https://github.com/arun-gupta/microservices-name

    webapp

    https://github.com/arun-gupta/microservices-webapp

  2. Clone all the repos. Open each one in a separate terminal.

  3. Run greeting service: mvn wildfly-swarm:run

  4. Run name service: mvn wildfly-swarm:run

  5. Run webapp service: mvn wildfly-swarm:run

  6. Run the application: curl http://localhost:8080/

Docker

Create Docker Images

Running mvn package -Pdocker in each repo creates the Docker image.

By default, the Docker image name is arungupta/<service>, where <service> is greeting, name, or webapp. The image can be created in your own repository:

mvn package -Pdocker -Ddocker.repo=<repo>

By default, the latest tag is used for the image. A different tag may be specified as:

mvn package -Pdocker -Ddocker.tag=<tag>

Push Docker Images to Registry

Push Docker images to the registry:

mvn install -Pdocker

Deployment to Docker Swarm

  1. docker swarm init

  2. cd apps/docker

  3. docker stack deploy --compose-file docker-compose.yaml myapp

  4. Access the application: curl http://localhost:8080

    1. Optionally test the endpoints:

  5. Remove the stack: docker stack rm myapp

Debug

  1. List stack:

    docker stack ls
  2. List services in the stack:

    docker stack services myapp
  3. List containers:

    docker container ls -f name=myapp*
  4. Get logs for all the containers in the webapp service:

    docker service logs myapp_webapp-service

Amazon ECS and AWS Fargate

This section will explain how to deploy these microservices using AWS Fargate on an Amazon ECS cluster.

Note
AWS Fargate is not supported in all AWS regions. These instructions will only work in supported regions. Check the AWS Regions Table for details.

Deployment: Create Cluster using AWS Console

This section will explain how to create an ECS cluster using the AWS Console.

Use the cluster name fargate-cluster.

Deployment: Create Cluster using AWS CloudFormation

This section will explain how to create an ECS cluster using CloudFormation.

The following resources are needed in order to deploy the sample application:

  • Private Application Load Balancer for greeting and name and a public ALB for webapp

  • Target groups registered with the ALB

  • Security Group that allows the services to talk to each other and be externally accessible

    1. Create an ECS cluster with these resources:

      cd apps/ecs/fargate/templates
      aws cloudformation deploy \
        --stack-name fargate-cluster \
        --template-file infrastructure.yaml \
        --region us-east-1 \
        --capabilities CAPABILITY_IAM
    2. View the output from the cluster:

      aws cloudformation \
        describe-stacks \
        --region us-east-1 \
        --stack-name fargate-cluster \
        --query 'Stacks[].Outputs[]' \
        --output text

Deployment: Simple ECS Cluster

This section explains how to create an ECS cluster with no additional resources. The cluster can be created with a private or a public VPC. The CloudFormation templates for the different types are available at https://github.com/awslabs/aws-cloudformation-templates/tree/master/aws/services/ECS/EC2LaunchType/clusters.

This section will create a 3-instance cluster using a public VPC:

curl -O https://raw.githubusercontent.com/awslabs/aws-cloudformation-templates/master/aws/services/ECS/EC2LaunchType/clusters/public-vpc.yml
aws cloudformation deploy \
  --stack-name MyECSCluster \
  --template-file public-vpc.yml \
  --region us-east-1 \
  --capabilities CAPABILITY_IAM

List the cluster using aws ecs list-clusters command:

{
    "clusterArns": [
        "arn:aws:ecs:us-east-1:091144949931:cluster/MyECSCluster-ECSCluster-197YNE1ZHPSOP"
    ]
}
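
The cluster name is the last path segment of the returned ARN; a quick way to pull it out in the shell (the ARN below is a sample, not a real account):

```shell
# Sample ARN of the shape returned by `aws ecs list-clusters`.
arn="arn:aws:ecs:us-east-1:123456789012:cluster/MyECSCluster-ECSCluster-197YNE1ZHPSOP"

# "##*/" strips the longest prefix ending in "/", leaving the cluster name.
cluster_name="${arn##*/}"
echo "$cluster_name"
```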

Deployment: Create Cluster and Deploy Services using Fargate CLI

This section explains how to create a Fargate cluster and run services on it.

  1. Download CLI from http://somanymachines.com/fargate/

  2. Create the LoadBalancer:

    fargate lb create \
      microservices-lb \
      --port 80
  3. Create greeting service:

    fargate service create greeting-service \
      --lb microservices-lb \
      -m 1024 \
      -i arungupta/greeting \
      -p http:8081 \
      --rule path=/resources/greeting
  4. Create name service:

    fargate service create name-service \
      --lb microservices-lb \
      -m 1024 \
      -i arungupta/name \
      -p http:8082 \
      --rule path=/resources/names/*
  5. Get URL of the LoadBalancer:

    fargate lb info microservices-lb
  6. Create webapp service:

    fargate service create webapp-service \
      --lb microservices-lb \
      -m 1024 \
      -i arungupta/webapp \
      -p http:8080 \
      -e GREETING_SERVICE_HOST=<lb> \
      -e GREETING_SERVICE_PORT=80 \
      -e GREETING_SERVICE_PATH=/resources/greeting \
      -e NAME_SERVICE_HOST=<lb> \
      -e NAME_SERVICE_PORT=80 \
      -e NAME_SERVICE_PATH=/resources/names
  7. Test the application:

    curl http://<lb>
    curl http://<lb>/0
  8. Scale the service: fargate service scale webapp-service +3

  9. Clean up the resources:

    fargate service scale greeting-service 0
    fargate service scale name-service 0
    fargate service scale webapp-service 0
    fargate lb destroy microservices-lb
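
The <lb> placeholder in steps 6 and 7 stands for the load balancer's DNS name reported by fargate lb info. A minimal sketch of wiring it into the webapp environment flags (the DNS name below is made up):

```shell
# Pretend output of `fargate lb info microservices-lb` (sample value).
lb_dns="microservices-lb-123456789.us-east-1.elb.amazonaws.com"

# Both backing services are reached through the same load balancer on port 80.
env_flags="-e GREETING_SERVICE_HOST=$lb_dns -e NAME_SERVICE_HOST=$lb_dns"
echo "$env_flags"
```
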
Note
As described at https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_limits.html, the number of tasks using the Fargate launch type, per region, per account is 20. This limit can be increased by filing a support ticket from the AWS Console.

Deployment: Deploy Tasks and Service using ECS CLI

This section will explain how to create an ECS cluster using a CloudFormation template. The tasks are then deployed using ECS CLI and Docker Compose definitions.

Pre-requisites

  1. Install ECS CLI.

  2. Install Perl.

Deploy the application

  1. Run the CloudFormation template to create the AWS resources:

    Region: N. Virginia (us-east-1). Launch the template using the "deploy to aws" button.
  2. Run the following command to capture the output from the CloudFormation template as key/value pairs in the file ecs-cluster.props. These are used to set up environment variables used subsequently.

    aws cloudformation describe-stacks \
      --stack-name aws-microservices-deploy-options-ecscli \
      --query 'Stacks[0].Outputs' \
      --output=text | \
      perl -lpe 's/\s+/=/g' | \
      tee ecs-cluster.props
  3. Set up the environment variables using this file:

    set -o allexport
    source ecs-cluster.props
    set +o allexport
  4. Configure ECS CLI:

    ecs-cli configure --cluster $ECSCluster --region us-east-1 --default-launch-type FARGATE
  5. Create the task definition parameters for each of the services:

    ecs-params-create.sh greeting
    ecs-params-create.sh name
    ecs-params-create.sh webapp
  6. Bring the greeting service up:

    ecs-cli compose --verbose \
      --file greeting-docker-compose.yaml \
      --task-role-arn $ECSRole \
      --ecs-params ecs-params_greeting.yaml \
      --project-name greeting \
      service up \
      --target-group-arn $GreetingTargetGroupArn \
      --container-name greeting-service \
      --container-port 8081
  7. Bring the name service up:

    ecs-cli compose --verbose \
      --file name-docker-compose.yaml \
      --task-role-arn $ECSRole \
      --ecs-params ecs-params_name.yaml  \
      --project-name name \
      service up \
      --target-group-arn $NameTargetGroupArn \
      --container-name name-service \
      --container-port 8082
  8. Bring the webapp service up:

    ecs-cli compose --verbose \
      --file webapp-docker-compose.yaml \
      --task-role-arn $ECSRole \
      --ecs-params ecs-params_webapp.yaml \
      --project-name webapp \
      service up \
      --target-group-arn $WebappTargetGroupArn \
      --container-name webapp-service \
      --container-port 8080

    Docker Compose supports environment variable substitution. The webapp-docker-compose.yaml uses $PrivateALBCName to refer to the private Application Load Balancer for the greeting and name services.

  9. Check the healthy status of different services:

    aws elbv2 describe-target-health \
      --target-group-arn $GreetingTargetGroupArn \
      --query 'TargetHealthDescriptions[0].TargetHealth.State' \
      --output text
    aws elbv2 describe-target-health \
      --target-group-arn $NameTargetGroupArn \
      --query 'TargetHealthDescriptions[0].TargetHealth.State' \
      --output text
    aws elbv2 describe-target-health \
      --target-group-arn $WebappTargetGroupArn \
      --query 'TargetHealthDescriptions[0].TargetHealth.State' \
      --output text
  10. Once all the services are in a healthy state, get a response from the webapp service:

    curl http://"$ALBPublicCNAME"
    Hello Sheldon

Tear down the resources

ecs-cli compose --verbose \
      --file greeting-docker-compose.yaml \
      --task-role-arn $ECSRole \
      --ecs-params ecs-params_greeting.yaml \
      --project-name greeting \
      service down
ecs-cli compose --verbose \
      --file name-docker-compose.yaml \
      --task-role-arn $ECSRole \
      --ecs-params ecs-params_name.yaml \
      --project-name name \
      service down
ecs-cli compose --verbose \
      --file webapp-docker-compose.yaml \
      --task-role-arn $ECSRole \
      --ecs-params ecs-params_webapp.yaml \
      --project-name webapp \
      service down
aws cloudformation delete-stack --region us-east-1 --stack-name aws-microservices-deploy-options-ecscli

Deployment: Create Cluster and Deploy Fargate Tasks using CloudFormation

This section creates an ECS cluster and deploys Fargate tasks to the cluster:

Region: N. Virginia (us-east-1). Launch the template using the "deploy to aws" button.

Retrieve the public endpoint to test your application deployment:

aws cloudformation \
  describe-stacks \
  --region us-east-1 \
  --stack-name aws-compute-options-fargate \
  --query 'Stacks[].Outputs[?OutputKey==`PublicALBCNAME`].[OutputValue]' \
  --output text

Use the command to test:

curl http://<public_endpoint>

Deployment: Create Cluster and Deploy EC2 Tasks using CloudFormation

This section creates an ECS cluster and deploys EC2 tasks to the cluster:

Region: N. Virginia (us-east-1). Launch the template using the "deploy to aws" button.

Retrieve the public endpoint to test your application deployment:

aws cloudformation \
  describe-stacks \
  --region us-east-1 \
  --stack-name aws-compute-options-ecs \
  --query 'Stacks[].Outputs[?OutputKey==`PublicALBCNAME`].[OutputValue]' \
  --output text

Use the command to test:

curl http://<public_endpoint>

Deployment Pipeline: Fargate with AWS CodePipeline

This section will explain how to deploy a Fargate task via CodePipeline.

  1. Fork each of the repositories in the Build and Test Services using Maven section.

  2. Clone the forked repositories to your local machine:

    git clone https://github.com/<your_github_username>/microservice-greeting
    git clone https://github.com/<your_github_username>/microservice-name
    git clone https://github.com/<your_github_username>/microservice-webapp
  3. Create the CloudFormation stack:

    Region: N. Virginia (us-east-1). Launch the template using the "deploy to aws" button.

The CloudFormation template requires the following input parameters:

  1. Cluster Configuration

    1. Launch Type: Select Fargate.

  2. GitHub Configuration

    1. Repo: The repository name for each of the sample services. These have been populated for you.

    2. Branch: The branch of the repository to deploy continuously, e.g. master.

    3. User: Your GitHub username.

    4. Personal Access Token: A token for the user specified above. Use https://github.com/settings/tokens to create a new token. See Creating a personal access token for the command line for more details.

The CloudFormation stack has the following outputs:

  1. ServiceUrl: The URL of the sample service that is being deployed.

  2. PipelineUrl: A deep link for the pipeline in the AWS Management Console.

Once the stack has been provisioned, click the link for the PipelineUrl. This will open the CodePipeline console. Clicking on the pipeline will display a diagram that looks like this:

Fargate Pipeline

Now that a deployment pipeline has been established for the services, you can modify files in the repositories you cloned earlier and push your changes to GitHub. This will cause the following actions to occur:

  1. The latest changes will be pulled from GitHub.

  2. A new Docker image will be created and pushed to ECR.

  3. A new revision of the task definition will be created using the latest version of the Docker image.

  4. The service definition will be updated with the latest version of the task definition.

  5. ECS will deploy a new version of the Fargate task.

Cleaning up the example resources

To remove all the resources created by the example, do the following:

  1. Delete the main CloudFormation stack, which deletes the sub-stacks and resources.

  2. Manually delete the resources which may contain content:

    1. S3 Bucket: ArtifactBucket

    2. ECR Repository: Repository

Monitoring: AWS X-Ray

#55

Monitoring: Prometheus and Grafana

#78

Kubernetes

Deployment: Create EKS Cluster

Create an EKS cluster based upon Limited Preview instructions.

Deployment: Create Cluster using kops

  1. Install kops

    brew update && brew install kops
  2. Create an S3 bucket and setup KOPS_STATE_STORE:

    aws s3 mb s3://kubernetes-aws-io
    export KOPS_STATE_STORE=s3://kubernetes-aws-io
  3. Define an environment variable with the Availability Zones for the cluster:

    export AWS_AVAILABILITY_ZONES="$(aws ec2 describe-availability-zones --query 'AvailabilityZones[].ZoneName' --output text | awk -v OFS="," '$1=$1')"
  4. Create cluster:

    kops create cluster \
      --name=cluster.k8s.local \
      --zones=$AWS_AVAILABILITY_ZONES \
      --yes

By default, this creates a cluster with a single master and two workers spread across the AZs.
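
The awk -v OFS="," '$1=$1' idiom in step 3 rewrites whitespace-separated zone names as a comma-separated list; it can be checked locally with sample zone names:

```shell
# Sample `describe-availability-zones --output text` result.
zones="us-east-1a us-east-1b us-east-1c"

# Assigning $1 to itself forces awk to rebuild the record using OFS
# as the field separator, turning whitespace into commas.
csv="$(echo "$zones" | awk -v OFS="," '$1=$1')"
echo "$csv"
```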

Deployment: Standalone Manifests

Make sure kubectl CLI is installed and configured for the Kubernetes cluster.

  1. Apply the manifests: kubectl apply -f apps/k8s/standalone/manifest.yml

  2. Access the application: curl http://$(kubectl get svc/webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

  3. Delete the application: kubectl delete -f apps/k8s/standalone/manifest.yml

Deployment: Helm

Make sure kubectl CLI is installed and configured for the Kubernetes cluster. Also, make sure Helm is installed on that Kubernetes cluster.

  1. Install the Helm CLI: brew install kubernetes-helm

  2. Install Helm in Kubernetes cluster: helm init

  3. Install the Helm chart: helm install --name myapp apps/k8s/helm/myapp

    1. By default, the latest tag for an image is used. Alternatively, a different tag for the image can be used:

      helm install --name myapp apps/k8s/helm/myapp --set "docker.tag=<tag>"
  4. Access the application:

    curl http://$(kubectl get svc/myapp-webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  5. Delete the Helm chart: helm delete --purge myapp

Deployment: Ksonnet

Make sure kubectl CLI is installed and configured for the Kubernetes cluster.

  1. Install ksonnet from homebrew tap: brew install ksonnet/tap/ks

  2. Change into the ksonnet sub directory: cd apps/k8s/ksonnet/myapp

  3. Add the environment: ks env add default

  4. Deploy the manifests: ks apply default

  5. Access the application: curl http://$(kubectl get svc/webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

  6. Delete the application: ks delete default

Deployment: Kubepack

This section will explain how to use Kubepack to deploy your Kubernetes application.

  1. Install kubepack CLI:

    wget -O pack https://github.com/kubepack/pack/releases/download/0.1.0/pack-darwin-amd64 \
      && chmod +x pack \
      && sudo mv pack /usr/local/bin/
  2. Move to package root directory: cd apps/k8s/kubepack

  3. Pull dependent packages:

    pack dep -f .

    This will generate the manifests/vendor folder.

  4. Generate the final manifests by combining the manifests for this package, its dependencies, and any patches:

    pack up -f .

    This will create the manifests/output folder with an installer script and the final manifests.

  5. Install package: ./manifests/output/install.sh

  6. Access the application:

    curl http://$(kubectl get svc/webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  7. Delete the application: kubectl delete -R -f manifests/output

Deployment: Local Dev & Test using Draft

  1. Install Draft:

    brew tap Azure/draft
    brew install Azure/draft/draft
  2. Initialize:

    draft init
  3. Create Draft artifacts to containerize and deploy to k8s:

    draft create


Deployment Pipeline: AWS CodePipeline

This section explains how to setup a deployment pipeline using AWS CodePipeline.

CloudFormation templates for different regions are listed at https://github.com/aws-samples/aws-kube-codesuite. us-west-2 is listed below.

Region: Oregon (us-west-2). Launch the template using the "deploy to aws" button.

  1. Create Git credentials for HTTPS connections to AWS CodeCommit: https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html?icmpid=docs_acc_console_connect#setting-up-gc-iam

  2. Reset any stored git credentials for CodeCommit in the keychain. Open Keychain Access, search for codecommit and remove any related entries.

  3. Get CodeCommit repo URL from CloudFormation output and follow the instructions at https://github.com/aws-samples/aws-kube-codesuite#test-cicd-platform.

Deployment Pipeline: Jenkins

Create a deployment pipeline using Jenkins X.

  1. Install Jenkins X CLI:

    brew tap jenkins-x/jx
    brew install jx
  2. Create the Kubernetes cluster:

    jx create cluster aws

    This will create a Kubernetes cluster on AWS using kops. This cluster will have RBAC enabled. It will also have insecure registries enabled. These are needed by the pipeline to store Docker images.

  3. Clone the repo:

    git clone https://github.com/arun-gupta/docker-kubernetes-hello-world
  4. Import the project in Jenkins X:

    jx import

    This will generate a Dockerfile and Helm charts, if they don’t already exist. It also creates a Jenkinsfile with the different build stages identified. Finally, it triggers a Jenkins build and deploys the application to a staging environment by default.

  5. View Jenkins console using jx console. Select the user, project and branch to see the deployment pipeline.

  6. Get the staging URL using jx get apps and view the output from the application in a browser window.

  7. Now change the message displayed by HelloHandler and push to the GitHub repo. Make sure to change the corresponding test as well; otherwise the pipeline will fail. Wait for the deployment to complete, then refresh the browser page to see the updated output.

Deployment Pipeline: Gitkube

#88

  1. Deploy the greeting service

  2. Install Gitkube:

    kubectl create -f https://storage.googleapis.com/gitkube/gitkube-setup-stable.yaml
    kubectl --namespace kube-system expose deployment gitkubed --type=LoadBalancer --name=gitkubed
  3. Configure secret for Docker registry in the cluster:

    kubectl create secret \
      docker-registry gitkube-secret \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=arungupta \
      --docker-password='<password>' \
      [email protected]
  4. Create a Remote resource manifest based upon greeting-remote.yaml

  5. Create the Remote resource:

    kubectl apply -f greeting-remote.yaml
  6. Add remote to git repo:

    git remote add gitkube `kubectl get remote greeting -o jsonpath='{.status.remoteUrl}'`

Deployment Pipeline: Spinnaker

Deployment Pipeline: Skaffold

Deployment: Canary Deployment with Istio

Istio allows the deployment of canary services. This is done by using a simple DSL that controls how API calls and layer-4 traffic flow across various services in the application deployment.

  1. Install Istio in the Kubernetes cluster:

    curl -L https://git.io/getLatestIstio | sh -
    cd istio-0.7.1/
    kubectl apply -f install/kubernetes/istio.yaml
  2. Istio uses the Envoy proxy to manage all inbound/outbound traffic in the service mesh. The Envoy proxy needs to be injected as a sidecar into the application pods, so deploy the application as:

    kubectl apply -f <(istioctl kube-inject -f apps/k8s/istio/manifest.yaml)

    This will deploy the application with 3 microservices. Each microservice is deployed in its own pod, with the Envoy proxy injected into the pod; Envoy will now take over all network communications between the pods.

  3. Create route rules:

    kubectl apply -f apps/k8s/istio/route-50-50.yaml
  4. Access the application:

    curl http://$(kubectl get svc/webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

    Access the endpoint multiple times and notice how the Hello and Howdy greetings are returned. The split is not round-robin; over 100 requests, roughly 50% go to each greeting message.

    This is causing #239.

Here are some convenient commands to manage route rules:

  1. istioctl get routerules shows the list of all route rules

  2. istioctl delete routerule <name> deletes a route rule by name

Another route with the traffic split of 90% and 10% is at apps/k8s/istio/route-90-10.yaml.
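
For orientation, a 50/50 rule like route-50-50.yaml would look roughly like the fragment below in the v1alpha2 RouteRule schema used by Istio 0.7; the destination name and version labels here are assumptions for illustration, not values copied from the repo:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: greeting-50-50
spec:
  destination:
    name: greeting        # assumed service name
  route:
  - labels:
      version: v1         # assumed label for the "Hello" variant
    weight: 50
  - labels:
      version: v2         # assumed label for the "Howdy" variant
    weight: 50
```
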

Monitoring: AWS X-Ray

  1. The arungupta/xray:us-west-2 Docker image is already available on Docker Hub. Optionally, you may build the image:

    cd config/xray
    docker build -t arungupta/xray:latest .
    docker image push arungupta/xray:us-west-2
  2. Deploy the DaemonSet: kubectl apply -f xray-daemonset.yaml

  3. Deploy the application using Helm charts:

    helm install --name myapp apps/k8s/helm/myapp
  4. Access the application:

    curl http://$(kubectl get svc/myapp-webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  5. Open the X-Ray console and watch the service map and traces.

The X-Ray service map looks like this:

k8s xray service map

X-Ray traces look like this:

k8s xray traces

Monitoring: Conduit

Conduit is a small, ultralight, incredibly fast service mesh centered around a zero config approach. It can be used for gaining remarkable visibility in your Kubernetes deployments.

  1. Confirm that both Kubernetes client and server versions are v1.8.0 or greater using kubectl version --short

  2. Install the Conduit CLI on your local machine:

    curl https://run.conduit.io/install | sh
  3. Add the conduit command into your PATH:

    export PATH=$PATH:$HOME/.conduit/bin
  4. Verify the CLI is installed and running correctly. You will see a message that says 'Server version: unavailable' because you have not installed Conduit in your deployments.

    conduit version
  5. Install Conduit on your Kubernetes cluster. It will install into a separate conduit namespace, where it can be easily removed.

    conduit install | kubectl apply -f -
  6. Verify installation of Conduit into your cluster. Your Client and Server versions should now be the same.

    conduit version
  7. Verify the Conduit dashboard opens and that you can connect to Conduit in your cluster.

    conduit dashboard
  8. Install the demo app to see how Conduit handles monitoring of your Kubernetes applications.

    curl https://raw.githubusercontent.com/runconduit/conduit-examples/master/emojivoto/emojivoto.yml | conduit inject - | kubectl apply -f -
  9. You now have a demo application running on your Kubernetes cluster and added to the Conduit service mesh. You can see a live version of this app (not in your cluster) to understand what the demo app is. Click to vote for your favorite emoji. One of them has an error. Which one is it? You can also see the local version of this app running in your cluster:

    kubectl get svc web-svc -n emojivoto -o jsonpath="{.status.loadBalancer.ingress[0].*}"

The demo app includes a service (vote-bot) constantly running traffic through the demo app. Look back at the conduit dashboard. You should be able to browse all the services that are running as part of the application to view success rate, request rates, latency distribution percentiles, upstream and downstream dependencies, and various other bits of information about live traffic.

You can also see useful data about live traffic from the conduit CLI.

  1. Check the status of the demo app (emojivoto) deployment named web. You should see good latency, but a success rate indicating some errors.

    conduit stat -n emojivoto deployment web
  2. Determine what other deployments in the emojivoto namespace talk to the web deployment.

    conduit stat deploy --all-namespaces --from web --from-namespace emojivoto
  3. You should see that web talks to both the emoji and voting services. Based on their success rates, you should see that the voting service is responsible for the low success rate of requests to web. Determine what else talks to the voting service.

    conduit stat deploy --to voting --to-namespace emojivoto --all-namespaces
  4. You should see that it only talks to web. You now have a plausible target to investigate further since the voting service is returning a low success rate. From here, you might look into the logs, or traces, or other forms of deeper investigation to determine how to fix the error.

Monitoring: Istio and Prometheus

Istio is deployed as a sidecar proxy into each of your pods; this means it can see and monitor all the traffic flows between your microservices and generate a graphical representation of your mesh traffic.

  1. Prometheus addon will obtain the metrics from Istio. Install Prometheus:

    kubectl apply -f install/kubernetes/addons/prometheus.yaml
  2. Install the Servicegraph addon; Servicegraph queries Prometheus, which obtains details of the mesh traffic flows from Istio:

    kubectl apply -f install/kubernetes/addons/servicegraph.yaml
  3. Generate some traffic to the application:

    curl http://$(kubectl get svc/webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  4. View the ServiceGraph UI:

    kubectl -n istio-system \
      port-forward $(kubectl -n istio-system \
        get pod \
        -l app=servicegraph \
        -o jsonpath='{.items[0].metadata.name}') \
        8088:8088 &
    open http://localhost:8088/dotviz
  5. You should see a distributed trace that looks something like this. It may take a few seconds for Servicegraph to become available, so refresh the browser if you do not receive a response.

    istio servicegraph

Monitoring: Prometheus and Grafana

#79

AWS Lambda

Deployment: Package Lambda Functions

Running mvn clean package -Plambda in each repo builds the deployment package for each microservice.

Deployment: Test Lambda Functions Locally

Serverless Application Model (SAM) defines a standard application model for serverless applications. It extends AWS CloudFormation to provide a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.

sam is the AWS CLI tool for managing serverless applications written with SAM. Install the SAM CLI:

npm install -g aws-sam-local

The complete installation steps for SAM CLI are at https://github.com/awslabs/aws-sam-local#installation.
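
For orientation, a minimal SAM template for the greeting function might look like the fragment below; the handler class, runtime, and CodeUri are assumptions for illustration, not the actual values in greeting-sam.yaml:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  GreetingFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: org.sample.GreetingHandler::handleRequest   # assumed
      Runtime: java8
      CodeUri: ./target/greeting.zip                       # assumed
      MemorySize: 512
      Timeout: 30
      Events:
        GreetingApi:
          Type: Api
          Properties:
            Path: /resources/greeting
            Method: get
```
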

On macOS

All commands are run from the apps/lambda directory.

  1. Start greeting service:

    sam local start-api --template greeting-sam.yaml --port 3001
  2. Test greeting endpoint:

    curl http://127.0.0.1:3001/resources/greeting
  3. Start name service:

    sam local start-api --template name-sam.yaml --port 3002
  4. Test name endpoint:

    curl http://127.0.0.1:3002/resources/names
    curl http://127.0.0.1:3002/resources/names/1
  5. Start webapp service:

    sam local start-api --template webapp-sam.yaml --env-vars test/env-mac.json --port 3000
  6. Test webapp endpoint:

    curl http://127.0.0.1:3000/1
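
The --env-vars file passed to the webapp service maps a function's logical ID to environment variables; this is how the webapp finds the locally running greeting and name endpoints. A hypothetical shape (the logical ID and values are assumptions, not the contents of test/env-mac.json):

```json
{
  "WebappFunction": {
    "GREETING_SERVICE_HOST": "127.0.0.1",
    "GREETING_SERVICE_PORT": "3001",
    "GREETING_SERVICE_PATH": "/resources/greeting",
    "NAME_SERVICE_HOST": "127.0.0.1",
    "NAME_SERVICE_PORT": "3002",
    "NAME_SERVICE_PATH": "/resources/names"
  }
}
```
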

On Windows

First start the greeting and name services as on macOS, then start the webapp service using the following command:

  1. sam local start-api --template webapp-sam.yaml --env-vars test/env-win.json --port 3000

  2. Test the URLs above in a browser.

Deployment: Debug using IntelliJ

This section will explain how to debug your Lambda functions locally using SAM Local and IntelliJ.

  1. Start functions using SAM Local and a debug port:

    sam local start-api \
      --env-vars test/env-mac.json \
      --template sam.yaml \
      --debug-port 5858
  2. In IntelliJ, setup a break point in your Lambda function.

  3. Go to Run, Debug, Edit Configurations, specify the port 5858 and click on Debug. The breakpoint will hit and you can see the debug state of the function.

Deployment: Deploy using Serverless Application Model

  1. Serverless applications are stored as deployment packages in an S3 bucket. Create an S3 bucket:

    aws s3api create-bucket \
      --bucket aws-microservices-deploy-options \
      --region us-west-2 \
      --create-bucket-configuration LocationConstraint=us-west-2

    Make sure to use a bucket name that is unique.

  2. Package the SAM application. This uploads the deployment package to the specified S3 bucket and generates a new file with the code location:

    sam package \
      --template-file sam.yaml \
      --s3-bucket aws-microservices-deploy-options \
      --output-template-file \
      sam.transformed.yaml
  3. Create the resources:

    sam deploy \
      --template-file sam.transformed.yaml \
      --stack-name aws-microservices-deploy-options-lambda \
      --capabilities CAPABILITY_IAM
  4. Test the application:

    1. Greeting endpoint:

      curl `aws cloudformation \
        describe-stacks \
        --stack-name aws-microservices-deploy-options-lambda \
        --query "Stacks[].Outputs[?OutputKey=='GreetingApiEndpoint'].[OutputValue]" \
        --output text`
    2. Name endpoint:

      curl `aws cloudformation \
        describe-stacks \
        --stack-name aws-microservices-deploy-options-lambda \
        --query "Stacks[].Outputs[?OutputKey=='NamesApiEndpoint'].[OutputValue]" \
        --output text`
    3. Webapp endpoint:

      curl `aws cloudformation \
        describe-stacks \
        --stack-name aws-microservices-deploy-options-lambda \
        --query "Stacks[].Outputs[?OutputKey=='WebappApiEndpoint'].[OutputValue]" \
        --output text`/1

Deployment: Deploy to Serverless Application Repository

The AWS Serverless Application Repository (SAR) enables you to quickly deploy code samples, components, and complete applications for common use cases such as web and mobile back-ends, event and data processing, logging, monitoring, IoT, and more. Each application is packaged with an AWS Serverless Application Model (SAM) template that defines the AWS resources used.

The complete list of applications can be seen at https://serverlessrepo.aws.amazon.com/applications.

This section explains how to publish your SAM application to SAR. Detailed instructions are at https://docs.aws.amazon.com/serverlessrepo/latest/devguide/serverless-app-publishing-applications.html.

  1. Applications packaged as SAM can be published at https://console.aws.amazon.com/serverlessrepo/home?locale=en&region=us-east-1#/published-applications

  2. Add the following policy to your S3 bucket:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service":  "serverlessrepo.amazonaws.com"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::<your-bucket-name>/*"
            }
        ]
    }
  3. Use sam.transformed.yaml as the SAM template

  4. Publish the application

  5. Test the application:

    curl `aws cloudformation \
      describe-stacks \
      --stack-name aws-serverless-repository-aws-microservices \
      --query "Stacks[].Outputs[?OutputKey=='WebappApiEndpoint'].[OutputValue]" \
      --output text`/1
  6. List of your published applications: https://console.aws.amazon.com/serverlessrepo/home?locale=en&region=us-east-1#/published-applications
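
    The bucket policy from step 2 can be written to a file and applied with the AWS CLI. The sketch below only generates and validates the JSON locally; the credential-requiring put-bucket-policy call is left commented out:

    ```shell
    # Write the SAR read policy for the bucket (replace <your-bucket-name>).
    cat > sar-policy.json <<'EOF'
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "serverlessrepo.amazonaws.com"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::<your-bucket-name>/*"
            }
        ]
    }
    EOF

    # Sanity-check that the file is valid JSON.
    python3 -m json.tool sar-policy.json > /dev/null && echo "policy OK"

    # Apply it (requires AWS credentials):
    #   aws s3api put-bucket-policy --bucket <your-bucket-name> --policy file://sar-policy.json
    ```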

Deployment Pipeline: AWS CodePipeline

This section explains how to deploy Lambda and API Gateway via CodePipeline.

  1. Generate a new GitHub personal access token.

  2. Create CloudFormation stack:

    1. Create pipelines for the greeting and name services. The default repository can be overridden with a forked public repo by passing its URL in the Git parameter:

      cd apps/lambda
      aws cloudformation deploy \
        --template-file microservice-pipeline.yaml \
        --stack-name lambda-microservices-greeting-pipeline \
        --parameter-overrides ServiceName=greeting GitHubOAuthToken=<github-token> \
        --capabilities CAPABILITY_IAM
      aws cloudformation deploy \
        --template-file microservice-pipeline.yaml \
        --stack-name lambda-microservices-name-pipeline \
        --parameter-overrides ServiceName=name GitHubOAuthToken=<github-token> \
        --capabilities CAPABILITY_IAM
    2. Wait for greeting and name pipelines to be created successfully. Then, create the pipeline for the webapp service:

      aws cloudformation deploy \
        --template-file microservice-pipeline.yaml \
        --stack-name lambda-microservices-webapp-pipeline \
        --parameter-overrides ServiceName=webapp GitHubOAuthToken=<github-token> \
        --capabilities CAPABILITY_IAM
  3. Get the Deployment Pipeline URL:

    aws cloudformation \
      describe-stacks \
      --stack-name lambda-microservices-greeting-pipeline \
      --query "Stacks[].Outputs[?OutputKey=='CodePipelineUrl'].[OutputValue]" \
      --output text
    aws cloudformation \
      describe-stacks \
      --stack-name lambda-microservices-name-pipeline \
      --query "Stacks[].Outputs[?OutputKey=='CodePipelineUrl'].[OutputValue]" \
      --output text
    aws cloudformation \
      describe-stacks \
      --stack-name lambda-microservices-webapp-pipeline \
      --query "Stacks[].Outputs[?OutputKey=='CodePipelineUrl'].[OutputValue]" \
      --output text
  4. Get the URL to test microservices:

    curl `aws cloudformation \
      describe-stacks \
      --stack-name aws-compute-options-lambda-greeting \
      --query "Stacks[].Outputs[?OutputKey=='greetingApiEndpoint'].OutputValue" \
      --output text`
    curl `aws cloudformation \
      describe-stacks \
      --stack-name aws-compute-options-lambda-name \
      --query "Stacks[].Outputs[?OutputKey=='nameApiEndpoint'].OutputValue" \
      --output text`
    curl `aws cloudformation \
      describe-stacks \
      --stack-name aws-compute-options-lambda-webapp \
      --query "Stacks[].Outputs[?OutputKey=='webappApiEndpoint'].OutputValue" \
      --output text`/1

    The deployment pipeline in the AWS console looks as shown:

    Lambda Pipeline
  5. After one run of the webapp pipeline, access the endpoint:

    curl `aws cloudformation \
      describe-stacks \
      --stack-name lambda-microservices-webapp \
      --query "Stacks[].Outputs[?OutputKey=='webappApiEndpoint'].[OutputValue]" \
      --output text`/1
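
    The three describe-stacks calls in step 3 differ only in the service name, so they can be driven by a loop. This is a sketch; the aws call needs credentials and is left commented out:

    ```shell
    # Build the three pipeline stack names and (optionally) fetch each URL.
    STACKS=""
    for svc in greeting name webapp; do
      stack="lambda-microservices-${svc}-pipeline"
      STACKS="${STACKS}${stack} "
      # Requires AWS credentials; uncomment to fetch each pipeline URL:
      # aws cloudformation describe-stacks --stack-name "$stack" \
      #   --query "Stacks[].Outputs[?OutputKey=='CodePipelineUrl'].[OutputValue]" \
      #   --output text
    done
    echo "$STACKS"
    ```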

Deployment: Canary Deployment for Lambda Functions

The greeting service has implemented Lambda SAM Safe Deployment. By default, the function is deployed using the Canary10Percent5Minutes deployment type. This means that 10% of the traffic is shifted to the new Lambda function version. If no errors occur and no CloudWatch alarms are triggered, the remaining traffic is shifted after 5 minutes. This is further explained at https://docs.aws.amazon.com/lambda/latest/dg/automating-updates-to-serverless-apps.html.

In the microservice-greeting repository, the prepared greeting-sam.yaml template allows users to change the deployment type used by safe deployments. You can update the default setting to any other supported deployment type.
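
For reference, a safe-deployment configuration in a SAM template looks like the fragment below (the function logical ID is illustrative; AutoPublishAlias is required for traffic shifting to work):

```yaml
Resources:
  GreetingFunction:
    Type: AWS::Serverless::Function
    Properties:
      # ...Handler, Runtime, CodeUri, etc....
      AutoPublishAlias: live
      DeploymentPreference:
        # Any supported type works here, e.g. Linear10PercentEvery1Minute
        # or AllAtOnce; the default used in this repo is Canary10Percent5Minutes.
        Type: Canary10Percent5Minutes
```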

To test the Canary deployment, follow these steps:

  1. Fork the microservice-greeting GitHub repository.

  2. Check out the forked repository locally.

  3. Modify the Lambda function source code src/main/java/org/aws/samples/compute/greeting/GreetingEndpoint.java to return the response "Hi" instead of "Hello".

  4. Use the following commands to commit and push the change.

    git add src/main/java/org/aws/samples/compute/greeting/GreetingEndpoint.java
    git commit -m "say hi to canary"
    git push origin master
  5. Navigate to this repo, and run the following command to update the CodePipeline stack for the greeting service.

    cd apps/lambda
    aws cloudformation deploy \
      --template-file microservice-pipeline.yaml \
      --stack-name lambda-microservices-greeting-pipeline \
      --parameter-overrides ServiceName=greeting GitHubOAuthToken=<github-token> GitHubSetting=OVERRIDE GitHubRepo=<forked-repo-name> GitHubOwner=<github-owner-user-name> GitHubBranch=master \
      --capabilities CAPABILITY_IAM

To check the Canary deployment progress, navigate to the CodeDeploy service in the AWS Console and open the application lambda-microservices-greeting-ServerlessDeploymentApplication-<random-string>. See the following example:

Lambda CodeDeploy

To monitor the deployment progress, select the in-progress deployment link; you will see progress similar to the following screenshot.

Lambda Canary

Deployment: Composition using AWS Step Functions

#76

Monitoring: AWS X-Ray

AWS X-Ray is fully integrated with AWS Lambda. It can be enabled for functions published using SAM by setting the following property:

Tracing: Active

More details about the AWS Lambda and X-Ray integration are at https://docs.aws.amazon.com/lambda/latest/dg/lambda-x-ray.html.

Deploying the functions as explained above will generate X-Ray service map and traces.
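
In a SAM template this property goes either on an individual function or, to cover every function at once, in the Globals section (a fragment; the template as a whole still needs its Transform and Resources sections):

```yaml
Globals:
  Function:
    # Enables X-Ray active tracing for all functions in this template.
    Tracing: Active
```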

Deployment: Remove the stack

aws cloudformation delete-stack \
  --stack-name aws-microservices-deploy-options-lambda

License

This sample code is made available under the MIT-0 license. See the LICENSE file.

aws-microservices-deploy-options's People

Contributors

allinwonder, arun-gupta, chriscoombs, christopherhein, ckassen, gmiranda23, hyandell, jicowan, kenfinnigan, mattsb42-aws, omarlari, paulopontesm, sapessi, tamalsaha, tiffanyfay, yanaga


aws-microservices-deploy-options's Issues

Cannot apply ksonnet environment

myapp $ ks apply default
WARNING Your application's apiVersion is below 0.1.0. In order to use all ks features, you can upgrade your application using `ks upgrade`.
ERROR generate objects for namespace : unable to read /Users/argu/workspaces/aws-compute-options/apps/k8s/ksonnet/myapp/environments/default/main.jsonnet: RUNTIME ERROR: Couldn't open import "k.libsonnet": No match locally or in the Jsonnet library paths.
-------------------------------------------------
	/Users/argu/workspaces/aws-compute-options/apps/k8s/ksonnet/myapp/components/greeting.jsonnet:2:11-30	thunk <k> from <$>

local k = import "k.libsonnet";

-------------------------------------------------
	/Users/argu/workspaces/aws-compute-options/apps/k8s/ksonnet/myapp/components/greeting.jsonnet:28:1-2	$

k.core.v1.list.new([appService, appDeployment])

-------------------------------------------------
	<extvar:__ksonnet/components>:2:13-114	object <anonymous>

  greeting: import "/Users/argu/workspaces/aws-compute-options/apps/k8s/ksonnet/myapp/components/greeting.jsonnet",

-------------------------------------------------
	During manifestation

Configuration:

$ ks version
ksonnet version: 0.9.0
jsonnet version: v0.9.5
client-go version: 1.8

Kubernetes:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-09T21:51:06Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:23:29Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

FMP cannot generate Helm charts

Trying to generate Helm charts using FMP and getting the following error:

services $ mvn -pl webapp fabric8:resource fabric8:helm -Pdocker
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building webapp 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- fabric8-maven-plugin:3.5.34:resource (default-cli) @ webapp ---
[INFO] F8: Running in Kubernetes mode
[INFO] F8: Running generator wildfly-swarm
[INFO] F8: wildfly-swarm: Using Docker image fabric8/java-jboss-openjdk8-jdk:1.3 as base / builder
[INFO] F8: fmp-controller: Adding a default Deployment
[INFO] F8: fmp-service: Adding a default service 'webapp' with ports [8080]
[INFO] F8: fmp-revision-history: Adding revision history limit to 2
[INFO] F8: f8-icon: Adding icon for deployment
[INFO] F8: f8-icon: Adding icon for service
[INFO] F8: validating /Users/argu/workspaces/aws-compute-options/services/webapp/target/classes/META-INF/fabric8/openshift/webapp-deploymentconfig.yml resource
[INFO] F8: validating /Users/argu/workspaces/aws-compute-options/services/webapp/target/classes/META-INF/fabric8/openshift/webapp-route.yml resource
[INFO] F8: validating /Users/argu/workspaces/aws-compute-options/services/webapp/target/classes/META-INF/fabric8/openshift/webapp-svc.yml resource
[INFO] F8: validating /Users/argu/workspaces/aws-compute-options/services/webapp/target/classes/META-INF/fabric8/kubernetes/webapp-deployment.yml resource
[INFO] F8: validating /Users/argu/workspaces/aws-compute-options/services/webapp/target/classes/META-INF/fabric8/kubernetes/webapp-svc.yml resource
[INFO] 
[INFO] --- fabric8-maven-plugin:3.5.34:helm (default-cli) @ webapp ---
[WARNING] F8: Chart source directory /Users/argu/workspaces/aws-compute-options/services/webapp/target/classes/META-INF/fabric8/k8s-template does not exist so cannot make chart webapp. Probably you need run 'mvn fabric8:resource' before.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5.040 s
[INFO] Finished at: 2018-02-13T15:00:04-08:00
[INFO] Final Memory: 34M/612M
[INFO] ------------------------------------------------------------------------

Docker Swarm ingress ports

The docker-compose.yaml has statically assigned ingress ports as:

version: '3'
services:
  greeting-service:
    image: arungupta/greeting:latest
    ports:
      - 8081:8080
  name-service:
    image: arungupta/name:latest
    ports:
      - 8082:8080
  webapp-service:
    image: arungupta/webapp:latest
    ports:
      - 80:8080

How to make sure that ingress ports are dynamically assigned?
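
One option (not applied in this repo's docker-compose.yaml) is to publish only the container port; Docker then assigns an ephemeral host port per published service:

```yaml
# Sketch: omit the host port so Compose/Swarm picks an ephemeral one.
version: '3'
services:
  greeting-service:
    image: arungupta/greeting:latest
    ports:
      - "8080"   # host port assigned dynamically; inspect it with `docker port`
```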

webapp pod is not getting deployed by the Helm chart

webapp pod is not able to find the Main class:

1. Deploy the Helm chart: helm install --name myapp apps/app-helm
2. Check the pods:

$ kubectl get pods
NAME                                   READY     STATUS    RESTARTS   AGE
greeting-deployment-85d8477864-hptvp   1/1       Running   0          22s
myapp-app-helm-8856f8fcf-spc6f         0/1       Running   0          22s
name-deployment-b47747956-2xr7x        1/1       Running   0          21s
webapp-deployment-788fbbbc4b-rvbkh     0/1       Error     2          21s

3. Get pod logs:

$ kubectl logs webapp-deployment-788fbbbc4b-rvbkh
exec java -XX:+UseParallelGC -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:+ExitOnOutOfMemoryError -cp .:/deployments/* org.aws.samples.compute.webapp.Main
Error: Could not find or load main class org.aws.samples.compute.webapp.Main

This is related to #5

JerseyClientBuilder CFNE when running greeting tests

Running greeting tests as mvn -pl greeting package gives the following error:

services $ mvn -pl greeting clean package
[INFO] Scanning for projects...
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Building greeting 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ greeting ---
[INFO] Deleting /Users/argu/workspaces/aws-compute-options/services/greeting/target
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ greeting ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ greeting ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to /Users/argu/workspaces/aws-compute-options/services/greeting/target/classes
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ greeting ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /Users/argu/workspaces/aws-compute-options/services/greeting/src/test/resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ greeting ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /Users/argu/workspaces/aws-compute-options/services/greeting/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ greeting ---
[INFO] Surefire report directory: /Users/argu/workspaces/aws-compute-options/services/greeting/target/surefire-reports

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.aws.samples.compute.greeting.GreetingTest
Scanning for needed WildFly Swarm fractions with mode: when_missing
Detected fractions: jaxrs:2018.2.0
Adding fractions: jaxrs-cdi:2018.2.0, jaxrs:2018.2.0, security:2018.2.0, undertow:2018.2.0
Resolving 0 out of 524 artifacts
Fri Feb 23 19:39:20 PST 2018 INFO [org.wildfly.swarm.bootstrap] (main) Dependencies not bundled; resolving from M2REPO.
2018-02-23 19:39:22,893 INFO  [org.wildfly.swarm] (main) WFSWARM0013: Installed fraction:               Arquillian - STABLE          org.wildfly.swarm:arquillian:2018.2.0
2018-02-23 19:39:22,903 INFO  [org.wildfly.swarm] (main) WFSWARM0013: Installed fraction:                  Logging - STABLE          org.wildfly.swarm:logging:2018.2.0
2018-02-23 19:39:22,904 INFO  [org.wildfly.swarm] (main) WFSWARM0013: Installed fraction:                  Elytron - STABLE          org.wildfly.swarm:elytron:2018.2.0
2018-02-23 19:39:22,904 INFO  [org.wildfly.swarm] (main) WFSWARM0013: Installed fraction:                   JAX-RS - STABLE          org.wildfly.swarm:jaxrs:2018.2.0
2018-02-23 19:39:22,904 INFO  [org.wildfly.swarm] (main) WFSWARM0013: Installed fraction:                 Undertow - STABLE          org.wildfly.swarm:undertow:2018.2.0
2018-02-23 19:39:24,455 INFO  [org.jboss.msc] (main) JBoss MSC version 1.2.7.SP1
2018-02-23 19:39:24,645 INFO  [org.jboss.as] (MSC service thread 1-7) WFLYSRV0049: WildFly Swarm 2018.2.0 (WildFly Core 3.0.8.Final) starting
2018-02-23 19:39:24,686 INFO  [org.wildfly.swarm] (MSC service thread 1-7) WFSWARM0019: Install MSC service for command line args: []
2018-02-23 19:39:24,872 INFO  [org.wildfly.swarm.arquillian.daemon.server.Server] (MSC service thread 1-2) Arquillian Daemon server started on localhost:12345
2018-02-23 19:39:25,329 INFO  [org.wildfly.security] (ServerService Thread Pool -- 10) ELY00001: WildFly Elytron version 1.1.6.Final
2018-02-23 19:39:25,360 INFO  [org.jboss.as.jaxrs] (ServerService Thread Pool -- 12) WFLYRS0016: RESTEasy version 3.0.24.Final
2018-02-23 19:39:25,363 INFO  [org.jboss.as.security] (ServerService Thread Pool -- 11) WFLYSEC0002: Activating Security Subsystem
2018-02-23 19:39:25,367 INFO  [org.jboss.as.naming] (ServerService Thread Pool -- 14) WFLYNAM0001: Activating Naming Subsystem
2018-02-23 19:39:25,371 INFO  [org.jboss.as.security] (MSC service thread 1-2) WFLYSEC0001: Current PicketBox version=5.0.2.Final
2018-02-23 19:39:25,397 INFO  [org.xnio] (ServerService Thread Pool -- 18) XNIO version 3.5.4.Final
2018-02-23 19:39:25,424 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-5) WFLYUT0003: Undertow 1.4.18.Final starting
2018-02-23 19:39:25,446 INFO  [org.xnio.nio] (ServerService Thread Pool -- 18) XNIO NIO Implementation Version 3.5.4.Final
2018-02-23 19:39:25,476 INFO  [org.jboss.as.naming] (MSC service thread 1-6) WFLYNAM0003: Starting Naming Service
2018-02-23 19:39:25,500 INFO  [org.wildfly.extension.io] (ServerService Thread Pool -- 18) WFLYIO001: Worker 'default' has auto-configured to 16 core threads with 128 task threads based on your 8 available processors
2018-02-23 19:39:25,692 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0012: Started server default-server.
2018-02-23 19:39:25,748 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0006: Undertow HTTP listener default listening on 0.0.0.0:8080
2018-02-23 19:39:25,820 INFO  [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
2018-02-23 19:39:25,823 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Swarm 2018.2.0 (WildFly Core 3.0.8.Final) started in 1428ms - Started 102 of 119 services (27 services are lazy, passive or on-demand)
2018-02-23 19:39:26,247 INFO  [org.wildfly.swarm.runtime.deployer] (main) deploying 3311e10b-14b4-48ba-b954-1a856b3467f3.war
2018-02-23 19:39:26,268 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-6) WFLYSRV0027: Starting deployment of "3311e10b-14b4-48ba-b954-1a856b3467f3.war" (runtime-name: "3311e10b-14b4-48ba-b954-1a856b3467f3.war")
2018-02-23 19:39:26,583 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0018: Host default-host starting
2018-02-23 19:39:26,799 INFO  [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 4) RESTEASY002225: Deploying javax.ws.rs.core.Application: class org.aws.samples.compute.greeting.MyApplication
2018-02-23 19:39:26,816 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 4) WFLYUT0021: Registered web context: '/' for server 'default-server'
2018-02-23 19:39:26,858 INFO  [org.jboss.as.server] (main) WFLYSRV0010: Deployed "3311e10b-14b4-48ba-b954-1a856b3467f3.war" (runtime-name : "3311e10b-14b4-48ba-b954-1a856b3467f3.war")
2018-02-23 19:39:26,868 INFO  [org.wildfly.swarm] (main) WFSWARM99999: WildFly Swarm is Ready
2018-02-23 19:39:30,517 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0008: Undertow HTTP listener default suspending
2018-02-23 19:39:30,517 INFO  [stdout] (MSC service thread 1-1) [Server] Requesting shutdown...
2018-02-23 19:39:30,517 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 10) WFLYUT0022: Unregistered web context: '/' from server 'default-server'
2018-02-23 19:39:30,516 INFO  [null] (MSC service thread 1-1) Requesting shutdown...
2018-02-23 19:39:30,518 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0007: Undertow HTTP listener default stopped, was bound to 0.0.0.0:8080
2018-02-23 19:39:30,525 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0019: Host default-host stopping
2018-02-23 19:39:30,525 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0004: Undertow 1.4.18.Final stopping
2018-02-23 19:39:30,526 INFO  [stdout] (MSC service thread 1-1) [Server] Server shutdown.
2018-02-23 19:39:30,526 INFO  [null] (MSC service thread 1-1) Server shutdown.
2018-02-23 19:39:30,545 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-1) WFLYSRV0028: Stopped deployment 3311e10b-14b4-48ba-b954-1a856b3467f3.war (runtime-name: 3311e10b-14b4-48ba-b954-1a856b3467f3.war) in 38ms
2018-02-23 19:39:30,550 INFO  [org.jboss.as] (MSC service thread 1-7) WFLYSRV0050: WildFly Swarm 2018.2.0 (WildFly Core 3.0.8.Final) stopped in 39ms
2018-02-23 19:39:30,559 INFO  [org.jboss.weld.Bootstrap] (pool-1-thread-1) WELD-ENV-002001: Weld SE container internal shut down

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 14.041 sec <<< FAILURE!
testGreeting(org.aws.samples.compute.greeting.GreetingTest)  Time elapsed: 0.037 sec  <<< ERROR!
java.lang.RuntimeException: java.lang.ClassNotFoundException: org.glassfish.jersey.client.JerseyClientBuilder
	at javax.ws.rs.client.ClientBuilder.newBuilder(ClientBuilder.java:103)
	at javax.ws.rs.client.ClientBuilder.newClient(ClientBuilder.java:114)
	at org.aws.samples.compute.greeting.GreetingTest.setUp(GreetingTest.java:43)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.jboss.arquillian.junit.Arquillian$StatementLifecycleExecutor.invoke(Arquillian.java:459)
	at org.jboss.arquillian.container.test.impl.execution.ClientBeforeAfterLifecycleEventExecuter.execute(ClientBeforeAfterLifecycleEventExecuter.java:99)
	at org.jboss.arquillian.container.test.impl.execution.ClientBeforeAfterLifecycleEventExecuter.on(ClientBeforeAfterLifecycleEventExecuter.java:72)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
	at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
	at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
	at org.jboss.arquillian.container.test.impl.client.ContainerEventController.createContext(ContainerEventController.java:142)
	at org.jboss.arquillian.container.test.impl.client.ContainerEventController.createBeforeContext(ContainerEventController.java:124)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
	at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
	at org.jboss.arquillian.test.impl.TestContextHandler.createTestContext(TestContextHandler.java:130)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
	at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
	at org.jboss.arquillian.test.impl.TestContextHandler.createClassContext(TestContextHandler.java:92)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
	at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
	at org.jboss.arquillian.test.impl.TestContextHandler.createSuiteContext(TestContextHandler.java:73)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
	at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
	at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:145)
	at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:116)
	at org.jboss.arquillian.test.impl.EventTestRunnerAdaptor.before(EventTestRunnerAdaptor.java:108)
	at org.jboss.arquillian.junit.Arquillian$4.evaluate(Arquillian.java:241)
	at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:422)
	at org.jboss.arquillian.junit.Arquillian.access$200(Arquillian.java:54)
	at org.jboss.arquillian.junit.Arquillian$5.evaluate(Arquillian.java:259)
	at org.jboss.arquillian.junit.Arquillian$7$1.invoke(Arquillian.java:315)
	at org.jboss.arquillian.container.test.impl.execution.ClientBeforeAfterLifecycleEventExecuter.execute(ClientBeforeAfterLifecycleEventExecuter.java:99)
	at org.jboss.arquillian.container.test.impl.execution.ClientBeforeAfterLifecycleEventExecuter.on(ClientBeforeAfterLifecycleEventExecuter.java:72)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
	at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
	at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
	at org.jboss.arquillian.container.test.impl.client.ContainerEventController.createContext(ContainerEventController.java:142)
	at org.jboss.arquillian.container.test.impl.client.ContainerEventController.createBeforeContext(ContainerEventController.java:124)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
	at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
	at org.jboss.arquillian.test.impl.TestContextHandler.createTestContext(TestContextHandler.java:130)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
	at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
	at org.jboss.arquillian.test.impl.TestContextHandler.createClassContext(TestContextHandler.java:92)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
	at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
	at org.jboss.arquillian.test.impl.TestContextHandler.createSuiteContext(TestContextHandler.java:73)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
	at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
	at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:145)
	at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:116)
	at org.jboss.arquillian.test.impl.EventTestRunnerAdaptor.fireCustomLifecycle(EventTestRunnerAdaptor.java:159)
	at org.jboss.arquillian.junit.Arquillian$7.evaluate(Arquillian.java:311)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.jboss.arquillian.junit.Arquillian$2.evaluate(Arquillian.java:204)
	at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:422)
	at org.jboss.arquillian.junit.Arquillian.access$200(Arquillian.java:54)
	at org.jboss.arquillian.junit.Arquillian$3.evaluate(Arquillian.java:218)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.jboss.arquillian.junit.Arquillian.run(Arquillian.java:166)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
Caused by: java.lang.ClassNotFoundException: org.glassfish.jersey.client.JerseyClientBuilder
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at javax.ws.rs.client.FactoryFinder.newInstance(FactoryFinder.java:113)
	at javax.ws.rs.client.FactoryFinder.find(FactoryFinder.java:206)
	at javax.ws.rs.client.ClientBuilder.newBuilder(ClientBuilder.java:86)
	... 125 more


Results :

Tests in error: 
  testGreeting(org.aws.samples.compute.greeting.GreetingTest): java.lang.ClassNotFoundException: org.glassfish.jersey.client.JerseyClientBuilder

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 17.125 s
[INFO] Finished at: 2018-02-23T19:39:31-08:00
[INFO] Final Memory: 39M/408M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.4:test (default-test) on project greeting: There are test failures.
[ERROR] 
[ERROR] Please refer to /Users/argu/workspaces/aws-compute-options/services/greeting/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

Running the webapp container fails to find the Main class

When a Docker image is generated for webapp using FMP (fabric8-maven-plugin), the Docker context has the following directory structure:

maven/webapp.war

The packaged WAR file has:

WEB-INF/classes/org/aws/samples/compute/webapp/Main.class

The java command to run the app is:

java -XX:+UseParallelGC -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:+ExitOnOutOfMemoryError -cp .:/maven/* org.aws.samples.compute.webapp.Main

Running the Docker container arungupta/webapp as:

docker container run -d arungupta/webapp

gives the following error:

Error: Could not find or load main class org.aws.samples.compute.webapp.Main

This is expected with a plain java -cp: the JVM loads classes only from the root of a JAR or a directory on the classpath, not from WEB-INF/classes inside a WAR, so the WAR must be unpacked (or the classes repackaged as a plain JAR) before Main can be found.

WARNING when creating a ksonnet environment

$ ks env add default
INFO  Using context 'example.cluster.k8s.local' from the kubeconfig file specified at the environment variable $KUBECONFIG
WARNING Your application's apiVersion is below 0.1.0. In order to use all ks features, you can upgrade your application using `ks upgrade`.
INFO  Creating environment "default" with namespace "default", pointing to cluster at address "https://api-example-cluster-k8s-l-1dt7vk-1038153803.us-west-2.elb.amazonaws.com"

We should look at how to get rid of the WARNING. The message itself suggests the fix: running ks upgrade from the application directory upgrades the app's apiVersion.

Configuration:

$ ks version
ksonnet version: 0.9.0
jsonnet version: v0.9.5
client-go version: 1.8

Services not compiling in default profile

mvn clean package fails:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project greeting: Compilation failure: Compilation failure: 
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[4,47] package com.amazonaws.serverless.proxy.internal does not exist
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[5,57] package com.amazonaws.serverless.proxy.internal.testutils does not exist
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[6,45] package com.amazonaws.serverless.proxy.jersey does not exist
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[7,44] package com.amazonaws.serverless.proxy.model does not exist
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[8,44] package com.amazonaws.serverless.proxy.model does not exist
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[9,45] package com.amazonaws.services.lambda.runtime does not exist
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[10,45] package com.amazonaws.services.lambda.runtime does not exist
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[12,36] package org.glassfish.jersey.jackson does not exist
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[13,35] package org.glassfish.jersey.server does not exist
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[21,17] package org.slf4j does not exist
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[22,17] package org.slf4j does not exist
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[24,41] cannot find symbol
[ERROR]   symbol: class RequestStreamHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[25,26] cannot find symbol
[ERROR]   symbol:   class ResourceConfig
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[28,26] cannot find symbol
[ERROR]   symbol:   class JerseyLambdaContainerHandler
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[28,55] cannot find symbol
[ERROR]   symbol:   class AwsProxyRequest
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[28,72] cannot find symbol
[ERROR]   symbol:   class AwsProxyResponse
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[31,26] cannot find symbol
[ERROR]   symbol:   class Logger
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[38,83] cannot find symbol
[ERROR]   symbol:   class Context
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[27,72] cannot find symbol
[ERROR]   symbol:   class JacksonFeature
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[25,65] cannot find symbol
[ERROR]   symbol:   class ResourceConfig
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[29,15] cannot find symbol
[ERROR]   symbol:   variable JerseyLambdaContainerHandler
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[31,42] cannot find symbol
[ERROR]   symbol:   variable LoggerFactory
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[37,5] method does not override or implement a method from a supertype
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[41,9] cannot find symbol
[ERROR]   symbol:   class AwsProxyRequest
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[41,99] cannot find symbol
[ERROR]   symbol:   class AwsProxyRequest
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[41,35] cannot find symbol
[ERROR]   symbol:   variable LambdaContainerHandler
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[43,9] cannot find symbol
[ERROR]   symbol:   class AwsProxyResponse
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] /Users/argu/workspaces/aws-compute-options/services/greeting/src/main/java/org/aws/samples/compute/greeting/GreetingHandler.java:[45,9] cannot find symbol
[ERROR]   symbol:   variable LambdaContainerHandler
[ERROR]   location: class org.aws.samples.compute.greeting.GreetingHandler
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :greeting
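The unresolved packages map onto known libraries: com.amazonaws.services.lambda.runtime comes from aws-lambda-java-core, the com.amazonaws.serverless.proxy.* classes come from aws-serverless-java-container-jersey (which also pulls in Jersey), and org.slf4j comes from slf4j-api. A hedged sketch of the pom.xml dependencies that would satisfy those imports in the default profile; the versions are illustrative assumptions and should be aligned with the rest of the project:

```xml
<!-- Sketch only: dependencies matching the unresolved imports above.
     Versions are assumptions. -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-core</artifactId>
    <version>1.2.0</version>
</dependency>
<dependency>
    <groupId>com.amazonaws.serverless</groupId>
    <artifactId>aws-serverless-java-container-jersey</artifactId>
    <version>1.0</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.25</version>
</dependency>
```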

Developer workflow for Lambda

  • How can a different parameter value be passed to the name service from the webapp service?
  • What is the minimal workflow after one of the Lambda functions is changed?
    • How can this be done for SAM Local using the CLI?
  • Is the workflow to test the functions locally using SAM Local and then deploy them manually to AWS?
  • Is there a way to create test, dev and staging environments?
  • X-Ray integration?
  • A/B deployment

Resteasy exception when running tests

Running the tests as mvn clean package gives the following error:

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 14.637 sec <<< FAILURE!
testGreeting(org.aws.samples.compute.greeting.GreetingTest)  Time elapsed: 0.049 sec  <<< ERROR!
java.lang.NoSuchMethodError: org.jboss.resteasy.spi.ResteasyProviderFactory.newInstance()Lorg/jboss/resteasy/spi/ResteasyProviderFactory;
	at org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder.getProviderFactory(ResteasyClientBuilder.java:333)
	at org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder.build(ResteasyClientBuilder.java:367)
	at org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder.build(ResteasyClientBuilder.java:35)
	at javax.ws.rs.client.ClientBuilder.newClient(ClientBuilder.java:114)
	at org.aws.samples.compute.greeting.GreetingTest.setUp(GreetingTest.java:43)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

It may be related to https://stackoverflow.com/questions/24139097/resteasy-client-nosuchmethoderror.

Following up with @kenfinnigan

Add Pure K8s Manifests

Add a pure k8s manifest directory apps/k8s to show more examples of deployment models.

Original Question:

Should we have a directory under apps/ that holds the un-templatized manifest files? That way we can support all kinds of deployments: with Helm, pure k8s, Docker, Fargate, ECS, etc.

Create `apps/k8s/helm` directory

apps/k8s should have a helm directory, and myapp should be moved there. This will clarify the two deployment models for k8s.

Rename webapp.* classes

Rename org.aws.samples.compute.webapp.GreetingController -> org.aws.samples.compute.webapp.WebappController

Rename org.aws.samples.compute.webapp.GreetingHandler -> org.aws.samples.compute.webapp.WebappHandler

Pushing the Docker image using FMP gives access denied

Pushing the Docker image using FMP gives the following error:

services $ mvn fabric8:push -Pdocker
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO] 
[INFO] services
[INFO] name
[INFO] greeting
[INFO] webapp
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Building services 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- fabric8-maven-plugin:3.5.34:push (default-cli) @ services ---
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Building name 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- fabric8-maven-plugin:3.5.34:push (default-cli) @ name --
[INFO] F8> Running generator wildfly-swarm
[INFO] F8> wildfly-swarm: Using Docker image fabric8/java-jboss-openjdk8-jdk:1.3 as base / builder
[INFO] F8> The push refers to repository [docker.io/arungupta/name]
0e014839ff57: Preparing   
f14ef5f204ff: Preparing   
85bb608219b7: Preparing   
160c15b8811e: Preparing   
498396e67e5c: Preparing   
2e544e952df4: Waiting     
e07090bc1b2d: Waiting     
c8af1cb1c035: Waiting     
64062a6d7af2: Waiting     
dfb51d70f8af: Waiting     
56939e74679c: Waiting     
d1be66a59bc5: Waiting     
[ERROR] F8> Unable to push 'arungupta/name' : denied: requested access to the resource is denied  [denied: requested access to the resource is denied ]
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] services ........................................... SUCCESS [  3.287 s]
[INFO] name ............................................... FAILURE [  1.382 s]
[INFO] greeting ........................................... SKIPPED
[INFO] webapp ............................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5.026 s
[INFO] Finished at: 2018-02-13T15:22:44-08:00
[INFO] Final Memory: 35M/415M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal io.fabric8:fabric8-maven-plugin:3.5.34:push (default-cli) on project name: Unable to push 'arungupta/name' : denied: requested access to the resource is denied  -> [Help 1]

Pushing the image using the Docker CLI works fine.
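One plausible cause (an assumption, not confirmed in the issue): fabric8-maven-plugin reads registry credentials from Maven's settings.xml rather than from the Docker CLI credential store, which would explain why docker push succeeds while fabric8:push is denied. A sketch of the server entry FMP looks up, with placeholder credentials:

```xml
<!-- ~/.m2/settings.xml (sketch; placeholder credentials) -->
<servers>
    <server>
        <id>docker.io</id>
        <username>arungupta</username>
        <password>********</password>
    </server>
</servers>
```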

FMP profile needs to be specified in child pom.xml when there is no Main class

The profile to create the Docker image is defined in the parent services/pom.xml and inherited by the child pom.xml files. The Docker image is generated correctly for webapp, which has a Main class. The greeting and name services do not have a Main class, and creating a Docker image with the inherited profile gives the following error:

[ERROR] Failed to execute goal io.fabric8:fabric8-maven-plugin:3.5.34:build (default) on project name: Execution default of goal io.fabric8:fabric8-maven-plugin:3.5.34:build failed: Cannot extract generator config: org.apache.maven.plugin.MojoExecutionException: Cannot extract main class to startup -> [Help 1]

If the profile is defined in the child pom.xml instead, the image is generated correctly. The same error is shown for both greeting and name.
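An alternative worth trying (a sketch based on FMP's generator configuration; whether it applies to this wildfly-swarm setup is an assumption): force the wildfly-swarm generator in the child pom.xml so FMP does not fall back to a generator that requires a main class.

```xml
<!-- child pom.xml fragment (sketch) -->
<plugin>
    <groupId>io.fabric8</groupId>
    <artifactId>fabric8-maven-plugin</artifactId>
    <configuration>
        <generator>
            <includes>
                <include>wildfly-swarm</include>
            </includes>
        </generator>
    </configuration>
</plugin>
```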

Strip unnecessary dependencies from Lambda zip

The Lambda zip assembly currently includes all of the dependencies, even those pulled in from Wildfly Swarm and JBoss. Those are not required in Lambda and make the deployment package unnecessarily big.
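Assuming the zip is built with the maven-assembly-plugin, a dependencySet excludes fragment is one way to strip them; the group patterns below are assumptions about which artifacts are safe to drop and need verifying against mvn dependency:tree:

```xml
<!-- assembly descriptor fragment (sketch) -->
<dependencySets>
    <dependencySet>
        <outputDirectory>lib</outputDirectory>
        <excludes>
            <exclude>org.wildfly.swarm:*</exclude>
            <exclude>org.jboss.resteasy:*</exclude>
        </excludes>
    </dependencySet>
</dependencySets>
```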

CloudFormation stack query returns empty results

Following up from #53 ...

The stack output is:

{
    "Stacks": [
        {
            "StackId": "arn:aws:cloudformation:us-east-1:091144949931:stack/aws-compute-options-ecs/9923cd70-23c5-11e8-a71f-500c217dbefe", 
            "Description": "This template illustrates how to use Fargate to deploy a three service microservice architecture.\n", 
            "Tags": [], 
            "EnableTerminationProtection": false, 
            "CreationTime": "2018-03-09T18:13:42.072Z", 
            "Capabilities": [
                "CAPABILITY_NAMED_IAM"
            ], 
            "StackName": "aws-compute-options-ecs", 
            "NotificationARNs": [], 
            "StackStatus": "CREATE_COMPLETE", 
            "DisableRollback": false, 
            "RollbackConfiguration": {
                "RollbackTriggers": []
            }
        }
    ]
}

So the query:

aws cloudformation \
    describe-stacks \
    --region us-east-1 \
    --stack-name aws-compute-options-ecs \
    --query "Stacks[].Outputs[?OutputKey=='PublicALBCNAME'.[OutputValue]]" \
    --output text

returns empty results.
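Two separate problems are visible. The describe-stacks output above contains no Outputs key at all, so any Outputs query returns empty regardless of syntax; the stack's template needs an Outputs section that exports PublicALBCNAME. In addition, the JMESPath bracketing is misplaced: the filter should close before projecting, i.e. Stacks[].Outputs[?OutputKey=='PublicALBCNAME'].OutputValue. A stdlib Python sketch of what that corrected selection computes (the sample response is hypothetical):

```python
import json

# Hypothetical describe-stacks response WITH an Outputs section present.
response = json.loads("""
{"Stacks": [{"StackName": "aws-compute-options-ecs",
             "Outputs": [{"OutputKey": "PublicALBCNAME",
                          "OutputValue": "alb.example.com"}]}]}
""")

# Equivalent of Stacks[].Outputs[?OutputKey=='PublicALBCNAME'].OutputValue.
# Note .get("Outputs", []): with no Outputs key at all (as in the issue),
# this yields an empty list, which is exactly the empty result observed.
values = [o["OutputValue"]
          for s in response["Stacks"]
          for o in s.get("Outputs", [])
          if o["OutputKey"] == "PublicALBCNAME"]
print(values)
```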

ECS + Fargate: Greeting service is not able to find the container

Bringing the greeting service up gives the following error:

$ ecs-cli compose --verbose \
>       --file greeting/greeting-docker-compose.yaml \
>       --task-role-arn $ECSRole \
>       --ecs-params greeting/ecs-params_greeting.yml \
>       service up \
>       --target-group-arn $GreetingTargetGroupArn \
>       --container-name greeting \
>       --container-port 8081
DEBU[0000] Parsing the compose yaml...                  
DEBU[0000] Opening compose files: greeting/greeting-docker-compose.yaml 
DEBU[0000] [0/1] [greeting-service]: Adding             
DEBU[0000] [0/1] [default]: EventType: 32               
DEBU[0000] Parsing the ecs-params yaml...               
DEBU[0000] Transforming yaml to task definition...      
WARN[0000] Skipping unsupported YAML option for service...  option name=container_name service name=greeting-service
DEBU[0003] Finding task definition in cache or creating if needed  TaskDefinition="{\n  ContainerDefinitions: [{\n      Command: [],\n      Cpu: 0,\n      DnsSearchDomains: [],\n      DnsServers: [],\n      DockerLabels: {\n\n      },\n      DockerSecurityOptions: [],\n      EntryPoint: [],\n      Environment: [],\n      Essential: true,\n      ExtraHosts: [],\n      Image: \"arungupta/greeting:latest\",\n      Links: [],\n      LinuxParameters: {\n        Capabilities: {\n\n        }\n      },\n      LogConfiguration: {\n        LogDriver: \"awslogs\",\n        Options: {\n          awslogs-group: \"/ecs/ecs-demo\",\n          awslogs-region: \"us-east-1\",\n          awslogs-stream-prefix: \"greeting\"\n        }\n      },\n      Memory: 512,\n      MountPoints: [],\n      Name: \"greeting-service\",\n      PortMappings: [{\n          ContainerPort: 8081,\n          HostPort: 8081,\n          Protocol: \"tcp\"\n        }],\n      Privileged: false,\n      ReadonlyRootFilesystem: false,\n      Ulimits: [],\n      VolumesFrom: []\n    }],\n  Cpu: \"256\",\n  ExecutionRoleArn: \"ecs-demo-ECSRole-L1KRVRCY68IH\",\n  Family: \"greeting\",\n  Memory: \"0.5GB\",\n  NetworkMode: \"awsvpc\",\n  RequiresCompatibilities: [\"FARGATE\"],\n  TaskRoleArn: \"ecs-demo-ECSRole-L1KRVRCY68IH\",\n  Volumes: []\n}"
INFO[0005] Using ECS task definition                     TaskDefinition="greeting:1"
ERRO[0006] Error creating service                        error="InvalidParameterException: The container greeting does not exist in the task definition.\n\tstatus code: 400, request id: 2123a60b-2483-11e8-a6aa-3dc3cca34c43" service=greeting
FATA[0006] InvalidParameterException: The container greeting does not exist in the task definition.
	status code: 400, request id: 2123a60b-2483-11e8-a6aa-3dc3cca34c43 

Note the earlier WARN line: the unsupported container_name option was skipped, so the container is registered in the task definition under the compose service name greeting-service, which does not match --container-name greeting.

ECS cluster creation failed using CloudFormation template

Creating an ECS cluster using CloudFormation gives the following error:

Embedded stack arn:aws:cloudformation:us-east-1:091144949931:stack/aws-compute-options-ecs-Infrastructure-AHCL2W4UVW4I/8e090760-2312-11e8-969a-500c288f18d1 was not successfully created: The following resource(s) failed to create: [Route2, ALBPublic, PublicLaunchConfiguration, ALBPrivate, NAT1].

(A screenshot of the failed CloudFormation stack events accompanied the original issue.)

X-Ray service map and traces are not shown

Followed the blog https://aws.amazon.com/blogs/compute/application-tracing-on-kubernetes-with-aws-x-ray/

  • Attached AWSXrayFullAccess to the IAM user
  • k8s cluster is running in us-west-2, so updated Dockerfile to download daemon from that region
  • Pushed the image to Docker Hub
  • Deployed the Daemon Set
  • Updated Helm charts by adding AWS_XRAY_DAEMON_ADDRESS
  • Invoked the app, but no traces were found
  • Service name is resolvable from greeting:
sh-4.2$ ping xray-service.default
PING xray-service.default.svc.cluster.local (172.20.32.113) 56(84) bytes of data.
64 bytes from ip-172-20-32-113.us-west-2.compute.internal (172.20.32.113): icmp_seq=1 ttl=63 time=1.15 ms
64 bytes from ip-172-20-32-113.us-west-2.compute.internal (172.20.32.113): icmp_seq=2 ttl=63 time=1.36 ms
64 bytes from ip-172-20-32-113.us-west-2.compute.internal (172.20.32.113): icmp_seq=3 ttl=63 time=1.30 ms
64 bytes from ip-172-20-32-113.us-west-2.compute.internal (172.20.32.113): icmp_seq=4 ttl=63 time=1.26 ms
^C
--- xray-service.default.svc.cluster.local ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 1.156/1.274/1.367/0.088 ms

But no service map or traces appear in the console. Note that ping only verifies DNS resolution of the service name; it does not confirm that the daemon is reachable on UDP port 2000, which is where the X-Ray SDK sends segments.
