spring-cloud-deployer-kubernetes's Introduction

Spring Cloud Connectors

Spring Cloud Connectors provides a simple abstraction that JVM-based applications can use to discover information about the cloud environment on which they are running, connect to services, and have discovered services registered as Spring beans. It provides out-of-the-box support for discovering common services on Heroku and Cloud Foundry cloud platforms, and it supports custom service definitions through Java Service Provider Interfaces (SPI).

Note
This project is in maintenance mode, in favor of the newer Java CFEnv project. We will continue to release security-related updates but will not address enhancement requests.

Learn more

Build

The project is built with Gradle. The Gradle wrapper lets you build the project on multiple platforms, even without a local Gradle installation; run it in place of the gradle command (as ./gradlew) from the root of the main project directory.

To compile the project and run tests

./gradlew build

To build a JAR

./gradlew jar

To generate Javadoc API documentation

./gradlew api

To list all available tasks

./gradlew tasks

Contributing

Spring Cloud is released under the non-restrictive Apache 2.0 license and follows a very standard GitHub development process, using the GitHub issue tracker for issues and merging pull requests into master. If you want to contribute, even something trivial, please do not hesitate, but follow the guidelines below.

Sign the Contributor License Agreement

Before we accept a non-trivial patch or pull request we will need you to sign the Contributor License Agreement. Signing the contributor’s agreement does not grant anyone commit rights to the main repository, but it does mean that we can accept your contributions, and you will get an author credit if we do. Active contributors might be asked to join the core team, and given the ability to merge pull requests.

Code of Conduct

This project adheres to the Contributor Covenant code of conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to [email protected].

Code Conventions and Housekeeping

None of these is essential for a pull request, but they will all help. They can also be added after the original pull request but before a merge.

  • Use the Spring Framework code format conventions. If you use Eclipse you can import formatter settings using the eclipse-code-formatter.xml file from the Spring Cloud Build project. If using IntelliJ, you can use the Eclipse Code Formatter Plugin to import the same file.

  • Make sure all new .java files have a simple Javadoc class comment with at least an @author tag identifying you, and preferably at least a paragraph on what the class is for.

  • Add the ASF license header comment to all new .java files (copy from existing files in the project).

  • Add yourself as an @author to the .java files that you modify substantially (more than cosmetic changes).

  • Add some Javadocs and, if you change the namespace, some XSD doc elements.

  • A few unit tests would help a lot as well — someone has to do it.

  • If no-one else is using your branch, please rebase it against the current master (or other target branch in the main project).

  • When writing a commit message, please follow these conventions. If you are fixing an existing issue, please add Fixes gh-XXXX at the end of the commit message (where XXXX is the issue number).


spring-cloud-deployer-kubernetes's Issues

Non-default external port is appended to container args

The app's external port (default is 8080) can be modified by setting the server.port key in the AppDefinition properties map.
However, all properties in the map are later appended to the container args (implemented in DefaultContainerFactory).
This behavior is incorrect: server.port is not a known argument when the container is run with --server.port, so the app fails to deploy.

I managed to work-around this issue in DefaultContainerFactory by checking if property key is 'server.port' before appending to container args:

Map<String, String> args = request.getDefinition().getProperties();
for (Map.Entry<String, String> entry : args.entrySet()) {
    if (!entry.getKey().equals("server.port")) { // work-around
        cmdArgs.add(String.format("--%s=%s", entry.getKey(), entry.getValue()));
    }
}
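A slightly more general form of this workaround could keep the exclusions in one place instead of a hard-coded comparison. This is only an illustrative sketch, not the project's actual fix; the class name and exclusion set are assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ContainerArgs {

    // Properties that should not become container arguments. Only
    // server.port comes from the issue; the set form is illustrative.
    private static final Set<String> EXCLUDED = Set.of("server.port");

    public static List<String> toArgs(Map<String, String> properties) {
        List<String> cmdArgs = new ArrayList<>();
        for (Map.Entry<String, String> entry : properties.entrySet()) {
            if (!EXCLUDED.contains(entry.getKey())) {
                cmdArgs.add(String.format("--%s=%s", entry.getKey(), entry.getValue()));
            }
        }
        return cmdArgs;
    }
}
```

This keeps the filtering logic easy to extend if other non-argument properties turn up later.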

Ability to add app environment variables

Currently, application specific environment variables are not being added to the container. By allowing application specific environment variables, the user can better tailor Spring Stream apps to their needs. For instance, if the developer creates a stream app with the base image "fabric8/java-alpine-openjdk8-jdk" that allows for more environment variables, they would then be able to deploy with:

stream create --name mystream --definition "mytime | log"
stream deploy mystream --properties "app.mytime.spring.cloud.deployer.kubernetes.memory=256Mi, app.mytime.spring.cloud.deployer.kubernetes.environmentVariables=JAVA_OPTIONS=-Xmx64m"
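The proposed environmentVariables property would carry comma-delimited KEY=VALUE pairs, as in the deploy command above. A minimal parsing sketch (the property format is the issue's proposal; the parsing details and class name are assumptions):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EnvVarParser {

    // Parses a comma-delimited list of KEY=VALUE pairs, e.g.
    // "JAVA_OPTIONS=-Xmx64m,FOO=bar". Splitting on the first '='
    // preserves values that themselves contain '='.
    public static Map<String, String> parse(String value) {
        Map<String, String> vars = new LinkedHashMap<>();
        if (value == null || value.isBlank()) {
            return vars;
        }
        for (String pair : value.split(",")) {
            String[] kv = pair.trim().split("=", 2);
            if (kv.length == 2) {
                vars.put(kv[0], kv[1]);
            }
        }
        return vars;
    }
}
```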

Related issues:
spring-cloud/spring-cloud-dataflow-server-kubernetes#38

Add support for volumes

I have an app that I deploy as part of a stream with Spring Cloud Dataflow on a Kubernetes cluster. The Docker image for the app contains a VOLUME instruction and I'd like to specify a directory on the host to mount the volume to. (This is network-attached storage that all hosts in the cluster can access.)

Retrieve app count correctly

The deployment property spring.cloud.deployer.count needs to be retrieved from deployment request environment variables instead of request definition.

Currently, the countProperty is always null, so the replication controller always ends up with a count of 1.
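The fix amounts to reading the count from the deployment properties rather than the app definition, defaulting to a single instance. A sketch (helper name assumed; the property key is from the issue):

```java
import java.util.Map;

public class AppCount {

    static final String COUNT_PROPERTY_KEY = "spring.cloud.deployer.count";

    // Reads the replica count from the deployment properties (not the
    // app definition), falling back to one instance when unset.
    public static int getCount(Map<String, String> deploymentProperties) {
        String countProperty = deploymentProperties.get(COUNT_PROPERTY_KEY);
        return countProperty != null ? Integer.parseInt(countProperty) : 1;
    }
}
```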

Make loadbalancer retry count configurable

public void undeploy(String appId) {
    logger.debug("Undeploying module: {}", appId);

    try {
        if ("LoadBalancer".equals(client.services().withName(appId).get().getSpec().getType())) {
            Service svc = client.services().withName(appId).get();
            int tries = 0;
            while (tries++ < 30) {
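Making the hard-coded limit of 30 configurable could look roughly like this; the property name and class are hypothetical, only the default of 30 comes from the snippet above:

```java
import java.util.Map;

public class LoadBalancerRetries {

    // Hypothetical property name for illustration; the issue only asks
    // that the hard-coded limit of 30 become configurable.
    static final String RETRY_PROPERTY = "spring.cloud.deployer.kubernetes.loadBalancerRetries";
    static final int DEFAULT_RETRIES = 30;

    public static int maxTries(Map<String, String> properties) {
        String configured = properties.get(RETRY_PROPERTY);
        return configured != null ? Integer.parseInt(configured) : DEFAULT_RETRIES;
    }
}
```

The retry loop would then read `while (tries++ < maxTries)` instead of the literal 30.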

NullPointerException when running against Kubernetes 1.5.2

2017-02-10 08:29:55.395 DEBUG 1 --- [nio-9393-exec-9] o.s.c.d.s.k.AbstractKubernetesDeployer : Deploying app: xxx
2017-02-10 08:29:58.748 DEBUG 1 --- [nio-9393-exec-9] o.s.c.d.s.k.AbstractKubernetesDeployer : Building AppStatus for app: xxx
2017-02-10 08:29:58.763 ERROR 1 --- [nio-9393-exec-9] o.s.c.d.s.k.AbstractKubernetesDeployer : null

java.lang.NullPointerException: null
at org.springframework.cloud.deployer.spi.kubernetes.KubernetesAppDeployer.status(KubernetesAppDeployer.java:174) ~[spring-cloud-deployer-kubernetes-1.0.4.RELEASE.jar!/:1.0.4.RELEASE]
at org.springframework.cloud.deployer.spi.kubernetes.KubernetesAppDeployer.deploy(KubernetesAppDeployer.java:79) ~[spring-cloud-deployer-kubernetes-1.0.4.RELEASE.jar!/:1.0.4.RELEASE]
at org.springframework.cloud.dataflow.server.controller.StreamDeploymentController.deployStream(StreamDeploymentController.java:250) [spring-cloud-dataflow-server-core-1.0.1.RELEASE.jar!/:1.0.1.RELEASE]
at org.springframework.cloud.dataflow.server.controller.StreamDeploymentController.deploy(StreamDeploymentController.java:186) [spring-cloud-dataflow-server-core-1.0.1.RELEASE.jar!/:1.0.1.RELEASE]

Standardize instance id env-var name

In the case of partitioning, the running application should know which partition/instance id it is handling.

Set spring.application.index as an environment variable for the deployed app when using partitioning.

Allow customization of how properties are passed when launching the Docker image

This should improve compatibility with Docker images for Spring Boot apps that can use either exec or shell form entry points.

For Boot apps we should use SPRING_APPLICATION_JSON env var for app properties.

Add a config property spring.cloud.deployer.kubernetes.entryPointStyle with values of EXEC, SHELL, and BOOT. EXEC should be the default, passing properties as arguments; for SHELL we would pass properties as env vars; and with BOOT we would use SPRING_APPLICATION_JSON for app properties.
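The three styles could be dispatched roughly as follows. This is a sketch of the idea, not the actual implementation; the class, method names, and the naive JSON rendering are assumptions:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class EntryPointStyleDemo {

    enum EntryPointStyle { EXEC, SHELL, BOOT }

    // EXEC: properties become --key=value container arguments.
    static List<String> asArgs(Map<String, String> props) {
        List<String> args = new ArrayList<>();
        props.forEach((k, v) -> args.add("--" + k + "=" + v));
        return args;
    }

    // SHELL: properties become environment variables (dots and dashes
    // to underscores, upper-cased, per Spring Boot's relaxed binding).
    static Map<String, String> asEnvVars(Map<String, String> props) {
        Map<String, String> env = new LinkedHashMap<>();
        props.forEach((k, v) -> env.put(k.replace('.', '_').replace('-', '_').toUpperCase(), v));
        return env;
    }

    // BOOT: all properties collapse into a single SPRING_APPLICATION_JSON
    // value (naive JSON escaping, for illustration only).
    static String asSpringApplicationJson(Map<String, String> props) {
        StringBuilder json = new StringBuilder("{");
        props.forEach((k, v) -> {
            if (json.length() > 1) json.append(',');
            json.append('"').append(k).append("\":\"").append(v).append('"');
        });
        return json.append('}').toString();
    }
}
```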

Allow "task launch" to use existing job

When launching tasks we create a job and execute it. To be able to re-launch a task we need to detect an existing job and reuse that for the next execution. Currently the launch fails.

Implement Kubernetes TaskLauncher

As a developer, I'd like to implement TaskLauncher contracts for k8s deployer.

Acceptance:

  • launch() API allows launching Task applications
  • status() API queries the current state of the Task application
  • delete()/destroy() API removes the running Task application
  • Unit tests included
  • Docs included

Pass an environment variable for the stream name

Much like ${xd.stream.name} was used in XD, the deployed application should have information about which stream it is a part of.

This is important for MetricWriter implementations to associate which applications belong to a given stream.

spring.cloud.application.group as the environment variable?
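If spring.cloud.application.group were chosen, the property-to-env-var conversion would presumably follow Spring Boot's relaxed-binding convention. A minimal sketch (the final name was still an open question; the helper is hypothetical):

```java
public class EnvVarNames {

    // Converts a property name to the conventional environment-variable
    // form: dots and dashes to underscores, upper-cased.
    public static String toEnvVarName(String propertyName) {
        return propertyName.replace('.', '_').replace('-', '_').toUpperCase();
    }
}
```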

Resource limits per deployment should be taken from environment variables.

See https://github.com/spring-cloud/spring-cloud-deployer/blob/master/spring-cloud-deployer-spi/src/main/java/org/springframework/cloud/deployer/spi/core/AppDeploymentRequest.java#L30-L37

The CF implementation is a good one to model; see https://github.com/spring-cloud/spring-cloud-cloudfoundry-deployer/blob/master/src/main/java/org/springframework/cloud/deployer/spi/cloudfoundry/CloudFoundryAppDeployer.java#L147

This applies to disk, memory and instances.

Need to figure out how to give distinct property/env-vars names in each case.

Following the CloudFoundryAppDeployer model, the prefix would be spring.cloud.deployer.kubernetes, so spring.cloud.deployer.kubernetes.memory would be the default; the name used per app instance at the moment is kubernetes.memory, which should probably have some prefix.

Perhaps kubernetes.memory could become spring.cloud.deployer.kubernetes.app.memory?

This needs to be aligned with other deployer impls.

Enable deployer to deploy non-boot apps

In order to deploy non-boot apps we need to be able to set the container port without using the --server.port app property. We should also allow override of the command used to start the docker container.

I suggest we add two new deployer properties that can be set on the deployer via the KubernetesDeployerProperties class or as part of each deployment in the AppDeploymentRequest#deploymentProperties.

We can call them spring.cloud.deployer.kubernetes.containerPort and spring.cloud.deployer.kubernetes.containerCommand respectively. We might want to allow multiple values so these properties might be pluralized and accept arrays.

If server.port is defined in the AppDefinition properties (Spring Boot apps), it will override the spring.cloud.deployer.kubernetes.containerPort property.
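The precedence rule could be sketched like this; the class and method are hypothetical, the property names and precedence come from the proposal above:

```java
import java.util.Map;
import java.util.Optional;

public class ContainerPortResolver {

    // Resolves the container port: server.port from the app definition
    // (Spring Boot apps) wins over the deployer-level containerPort
    // property; empty when neither is set.
    public static Optional<Integer> resolve(Map<String, String> definitionProps,
                                            Map<String, String> deploymentProps) {
        String port = definitionProps.get("server.port");
        if (port == null) {
            port = deploymentProps.get("spring.cloud.deployer.kubernetes.containerPort");
        }
        return Optional.ofNullable(port).map(Integer::parseInt);
    }
}
```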

Cannot deploy to Kubernetes v1.5.1 cluster

I've upgraded my cluster to Kubernetes 1.5.1 (from 1.4.4) and I'm now getting the following exception when deploying streams using SCDF 1.0.1.RELEASE:

2016-12-21 08:40:36.938 DEBUG 1 --- [io-9393-exec-10] o.s.c.d.s.k.AbstractKubernetesDeployer   : Deploying app: xxx
2016-12-21 08:40:37.767 DEBUG 1 --- [io-9393-exec-10] o.s.c.d.s.k.AbstractKubernetesDeployer   : Building AppStatus for app: xxx
2016-12-21 08:40:37.783 ERROR 1 --- [io-9393-exec-10] o.s.c.d.s.k.AbstractKubernetesDeployer   : null
java.lang.NullPointerException: null
	at org.springframework.cloud.deployer.spi.kubernetes.KubernetesAppDeployer.status(KubernetesAppDeployer.java:174) ~[spring-cloud-deployer-kubernetes-1.0.4.RELEASE.jar!/:1.0.4.RELEASE]
	at org.springframework.cloud.deployer.spi.kubernetes.KubernetesAppDeployer.deploy(KubernetesAppDeployer.java:79) ~[spring-cloud-deployer-kubernetes-1.0.4.RELEASE.jar!/:1.0.4.RELEASE]
	at org.springframework.cloud.dataflow.server.controller.StreamDeploymentController.deployStream(StreamDeploymentController.java:250) [spring-cloud-dataflow-server-core-1.0.1.RELEASE.jar!/:1.0.1.RELEASE]
	at org.springframework.cloud.dataflow.server.controller.StreamDeploymentController.deploy(StreamDeploymentController.java:186) [spring-cloud-dataflow-server-core-1.0.1.RELEASE.jar!/:1.0.1.RELEASE]

The real problem is that the status() method doesn't handle the situation where no pod at all has been scheduled for the stream. I verified this with kubectl get pod.

After downgrading the cluster to Kubernetes v1.4.7 everything works as expected again.
