
Version controlled multi-cluster deployment manager

License: MIT License

JavaScript 0.72% TypeScript 96.77% Dockerfile 0.22% CSS 0.01% HTML 2.08% Nix 0.20%
multicluster deployment-manager kubernetes

spacegun's Introduction

Spacegun


Straightforward deployment management to get your docker images to kubernetes, without the headaches of a fancy ui.

Features

  • deployment pipelines as yaml
  • managing multiple kubernetes clusters
  • managing a single kubernetes cluster with namespaces
  • version controlled configuration
  • generating configuration from existing clusters
  • slack integration
  • some colorful cli
  • a static but informative ui
  • more cool features in the backlog

Getting Started

There is a neat tutorial and a Medium article.

Installing

If you only want the cli you can install it with

npm install -g spacegun

and then run it from the console. You will have spacegun, spg and spacegun-server as commands available in your console. These are the standalone, client and server builds respectively.

Build the sources

Just run

yarn build

then you can run the cli with

node bin/spacegun

There is also a Dockerfile in the repo in case you want to run spacegun in a container.

Three modes of operation

Spacegun comes in three flavors

  1. Server (bin/spacegun-server)
  2. Client (bin/spg)
  3. Standalone (bin/spacegun)

The server build is meant to be deployed in an environment that can reach all your clusters and your image repository, but can also be reached by the clients. The server build runs cron jobs, keeps caches of the current state of all resources and runs an HTTP API (autogenerated, messy REST).

The client build is meant to be run on developers' consoles as a cli interface to the server.

The standalone build is just the client and server functionality compiled directly into one executable, so you can play around with your configurations before deploying to an actual environment.

Configuring Spacegun

Config.yml

Spacegun's main configuration file is just a yml file containing information about your cluster and image repository. By default Spacegun will look under ./config.yml relative to its working directory.

A configuration may look like this

docker: https://my.docker.repository.com
artifact: artifactsFolder
pipelines: pipelinesFolder
kube: kube/config
slack: https://hooks.slack.com/services/SOMEFUNKY/ID
namespaces: ["service1", "service2"]
server:
  host: http://localhost
  port: 8080
git:
  remote: https://some.git
  cron: "0 */5 * * * MON-FRI"

docker gives the url of a docker repository
artifact folder for spacegun to put cluster snapshots in
pipelines folder for spacegun to load pipelines from
kube gives a path to a kubernetes config file (relative to the config.yml)
slack optional webhook to get notifications of cluster updates
namespaces gives a list of namespaces for spacegun to operate on
server gives hostname and port of the server (the client uses both, the server only the port). Additionally spacegun can be started with the --port parameter, so you can override this value.
git contains the path to the remote git repository where all configurations are kept. The (optional) crontab configures how often the service should poll for configuration changes.

Pipelines

Spacegun is driven by deployment pipelines. A pipeline is configured as a <pipelinename>.yml. By default spacegun scans the configPath/pipelines folder relative to its configuration file for such files.

Here is an example of a pipeline that deploys the newest images that are tagged as latest from your docker registry to your develop kubernetes cluster

cluster: k8s.develop.my.cluster.com
cron: "0 */5 * * * MON-FRI"
start: "plan1"
steps:
- name: "plan1"
  type: "planImageDeployment"
  tag: "latest"
  onSuccess: "apply1"

- name: "apply1"
  type: "applyDeployment"

cluster is the url of your cluster
cron is just a crontab. This one is defined to trigger the job every 5 minutes from Monday to Friday.
start the step to start the execution of the pipeline.
steps is a list of deployment steps. Please note: Spacegun will not validate the semantic correctness of your pipeline. It will only check that you are not missing any fields or have typos in step types.

type describes the type of the Pipeline Step. planImageDeployment will look into your cluster and compare the deployments in each namespace with the tag given. In this case, it will plan to update all deployments to the newest image tagged with latest. See the next section for more information about deploying using tags.

onSuccess defines the action that should be taken after this. planImageDeployment will be followed by the apply1 step. The applyDeployment step will apply all previously planned deployments.
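The start and onSuccess fields effectively wire the named steps into a chain. Walking such a chain could be sketched like this (an illustration only, not Spacegun's actual code):

```typescript
// Illustrative sketch only -- not Spacegun's actual implementation.
// Walks a pipeline from its start step, following the onSuccess or
// onFailure link depending on each step's outcome.
interface Step {
    name: string
    type: string
    onSuccess?: string
    onFailure?: string
}

function runPipeline(
    steps: Step[],
    start: string,
    execute: (step: Step) => boolean // true means the step succeeded
): string[] {
    const byName = new Map(steps.map(s => [s.name, s] as [string, Step]))
    const visited: string[] = []
    let current = byName.get(start)
    while (current) {
        visited.push(current.name)
        const next = execute(current) ? current.onSuccess : current.onFailure
        current = next ? byName.get(next) : undefined
    }
    return visited
}

// The plan1 -> apply1 pipeline from above, with every step succeeding:
const order = runPipeline(
    [
        { name: "plan1", type: "planImageDeployment", onSuccess: "apply1" },
        { name: "apply1", type: "applyDeployment" },
    ],
    "plan1",
    () => true
)
console.log(order) // [ 'plan1', 'apply1' ]
```

Steps reached via onFailure (such as a rollback step) would simply be additional entries in the steps list.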

Here is an example of a job that deploys from a develop to a live environment

cluster: k8s.live.my.cluster.com
start: "plan"
steps:
- name: "plan"
  type: "planClusterDeployment"
  cluster: "k8s.develop.my.cluster.com"
  onSuccess: "apply"
  onFailure: "rollback1"

- name: "apply"
  type: "applyDeployment"
  onSuccess: "snapshot1"
  onFailure: "rollback1"

The planClusterDeployment step will plan updates by looking into the develop cluster and comparing the versions running with the live cluster. Wherever there is a difference in the image tag or hash it will plan a deployment.

If cron is not present the server will not create a cronjob and the deployment needs to be manually run by a client.

Deploying to differing namespaces

If the namespaces in the clusters are not called the same, you can use the planNamespaceDeployment step, which allows you to provide a source and a target namespace. Deployments present in both namespaces will be compared and updated analogously to the planClusterDeployment step. Here is an example:

cluster: k8s.live.my.cluster.com
start: 'plan'
steps:
    - name: 'plan'
      type: 'planNamespaceDeployment'
      cluster: 'k8s.prelive.my.cluster.com'
      source: 'namespace1'
      target: 'namespace2'
      onSuccess: 'apply'

    - name: 'apply'
      type: 'applyDeployment'

This will update all deployments on the live cluster in namespace2 which have more recent versions on the prelive cluster in namespace1.

A special case for this is the deployment in a different namespace inside the same cluster. For this you can either omit the cluster inside the step or fill it with the same url as the global cluster.

Deciding which tag to deploy

Spacegun will always check for image differences using tag and image hash. So if you just want to deploy latest, do so like in the pipeline above. This will ensure that if you push a new image tagged with a specific tag, Spacegun will deploy it.

The tag field is not mandatory, however. If you leave it out, Spacegun will choose the lexicographically largest tag. So if you tag your images by unix timestamp, it will deploy the most recent tag. Granted, very implicitly. That is why there is also the semanticTagExtractor field that can either hold a regex or a plain string. Spacegun will extract the first match of this regex from each tag and use it as a sorting key. Then it will deploy the lexicographically largest tag according to that sorting key. If you have this step:

- name: "semanticPlan"
  type: "planImageDeployment"
  semanticTagExtractor: /^\d{4}\-\d{1,2}\-\d{1,2}$/
  onSuccess: "apply1"

Spacegun will extract a very simple date format. Say you have tags rev_98ac7cc9_2018-12-24, rev_5da58cc9_2018-12-25, rev_12ff8cff_2018-12-26. Then Spacegun will extract the trailing dates and deploy the lexicographically largest tag according to this extracted sorting key, which is rev_12ff8cff_2018-12-26.
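The described selection can be sketched as follows. This is an illustration, not Spacegun's implementation, and it anchors the date at the end of the tag so it matches the date suffix of prefixed tags like the ones above:

```typescript
// Illustrative sketch -- not Spacegun's actual implementation.
// Picks the tag whose extracted sorting key is lexicographically largest;
// without an extractor the whole tag serves as the key.
function pickTag(tags: string[], extractor?: RegExp): string | undefined {
    const key = (tag: string): string => {
        if (!extractor) return tag
        const match = tag.match(extractor)
        return match ? match[0] : ""
    }
    return tags.slice().sort((a, b) => key(a).localeCompare(key(b))).pop()
}

const tags = ["rev_98ac7cc9_2018-12-24", "rev_5da58cc9_2018-12-25", "rev_12ff8cff_2018-12-26"]
console.log(pickTag(tags, /\d{4}-\d{1,2}-\d{1,2}$/)) // rev_12ff8cff_2018-12-26
```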

Deploy a subset of your cluster

The planning steps can be filtered on namespaces and deployments.

- name: "plan1"
  type: "planImageDeployment"
  tag: "latest"
  filter:
    namespaces:
      - "namespace1"
      - "namespace2"
    resources:
      - "deployment1"
      - "deployment2"
      - "deployment3"
      - "batch1"
  onSuccess: "apply1"

This planning step would only run for the two namespaces and in each namespace only update the four resources listed. Note that filtering by namespaces only makes sense if your deployments are not uniquely named; otherwise you could omit it.
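The effect of such a filter can be sketched as a predicate over namespace and resource name (an illustration under the semantics described above, not Spacegun's actual code):

```typescript
// Illustrative sketch -- not Spacegun's actual code; field names follow
// the yaml above. A resource passes the filter if both its namespace and
// its name are whitelisted; an absent list means "no restriction".
interface Filter {
    namespaces?: string[]
    resources?: string[]
}

function matchesFilter(namespace: string, resource: string, filter?: Filter): boolean {
    if (!filter) return true
    const namespaceOk = !filter.namespaces || filter.namespaces.includes(namespace)
    const resourceOk = !filter.resources || filter.resources.includes(resource)
    return namespaceOk && resourceOk
}

const filter: Filter = {
    namespaces: ["namespace1", "namespace2"],
    resources: ["deployment1", "deployment2", "deployment3", "batch1"],
}
console.log(matchesFilter("namespace1", "deployment1", filter)) // true
console.log(matchesFilter("namespace3", "deployment1", filter)) // false
```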

Note that once you use filtering in one deployment pipeline, you likely have to add filtering to all your deployments. It might be a good idea to have such special deployments running in a separate namespace, and you might even manage them using a dedicated Spacegun instance.

For planNamespaceDeployment you cannot filter on namespaces and Spacegun will tell you so if you try. Filtering on deployments is still possible.

Deploy only working clusters

You might want to only deploy clusters that meet certain criteria. For example you might check that all systems are healthy and the acceptance tests are green or some other metrics are fine. To tell Spacegun about your cluster state you can add a cluster probe step to your pipeline.

- name: "probe1"
  type: "clusterProbe"
  hook: "https://some.hook.com"
  timeout: 20000
  onSuccess: "plan1"

The hook is an endpoint that Spacegun will call using the GET method. If it returns status code 200, Spacegun will proceed with the onSuccess step. Otherwise the step will fail and proceed with the onFailure step. The timeout is an optional field giving the timeout for the hook call in milliseconds. If no timeout is set, Spacegun will not cancel the connection on its own.

Deployments and Cronjobs

Kubernetes deployments are called deployments in Spacegun. Kubernetes Cronjobs, however, are called batches to avoid confusion with the cron jobs that drive Spacegun deployments (also, Cronjobs are part of the Kubernetes Batch API). Deploying, restarting and snapshots work exactly the same for batches and deployments.

A snapshot might look like this. (Note that namespace undefined is resolved to the default namespace in kubernetes)

.
└── minikube
    └── undefined
        ├── batches
        │   └── hello.yml
        └── deployments
            └── nginx-deployment.yml

Here hello.yml is taken from the kubernetes tutorial on cron jobs. A spacegun apply would create your batch job. (In production you would push hello.yml to your config repository and your spacegun server would pick it up automatically once merged on master)

Git

All configuration files can be maintained in a git repository. Spacegun can be configured to poll for changes and will automatically load them while running.

A git repository could have such a folder structure

.
├── config.yml
├── pipelines
│   ├── dev.yml
│   ├── live.yml
│   └── pre.yml
└── artifacts
    └── ...

You probably do not want to have your Kubernetes config in version control, because it should be considered sensitive data for most users. You should rather generate one dynamically on startup of your node running Spacegun. If you are running on AWS you can use kops for this.

Example of a startup script

Install Node, Spacegun and create a user for Spacegun

#!/usr/bin/env bash

set -e
set -x

if [ "$(id -un)" != "root" ]; then
  exec sudo -E -u root "$0" "$@"
fi

export DEBIAN_FRONTEND=noninteractive
curl -sSL https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add -
VERSION=node_8.x
DISTRO="$(lsb_release -s -c)"
echo "deb https://deb.nodesource.com/$VERSION $DISTRO main" | tee /etc/apt/sources.list.d/nodesource.list
echo "deb-src https://deb.nodesource.com/$VERSION $DISTRO main" | tee -a /etc/apt/sources.list.d/nodesource.list

apt-get update
apt-get install -y nodejs

npm install -g --unsafe-perm spacegun

# Create Spacegun user
useradd -d /var/lib/spacegun -U -M -r spacegun

Install Kubectl and Kops (if you need them)

# Install kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl

# Install kops
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
mv kops-linux-amd64 /usr/local/bin/kops

Creating a daemon and start Spacegun

cat > /etc/systemd/system/spacegun.service <<EOF
[Service]
ExecStart=/usr/bin/spacegun-server
Restart=always
StartLimitBurst=0
StartLimitInterval=60s
PermissionsStartOnly=true
User=spacegun
WorkingDirectory=/var/lib/spacegun/config

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable spacegun.service
systemctl start spacegun.service

Cluster Snapshots

Running spacegun snapshot will download the current state of your cluster's deployments and save them as artifacts (yml files) in your artifacts folder (by default configPath/artifacts).

Now you can just update your deployments by committing changes of the artifacts into your config repository. Spacegun will then apply the changes on config reload.

If you want to apply local changes to your deployments configuration, run spacegun apply.

Note that spacegun will not apply changes to the deployment's image. That is what deployment jobs and the spacegun deploy command are for.

Running the tests

run the tests with

yarn test

Authors

  • Maximilian Schuler - Initial work - dvallin

License

This project is licensed under the MIT License - see the LICENSE.md file for details

Dependencies

spacegun's People

Contributors

alexanderkogan, dvallin, priegger


spacegun's Issues

Show server version when running Spg

Is your feature request related to a problem? Please describe.
When running spg it might be running against a different version of Spacegun. The user should at least be informed about that.

Describe the solution you'd like
Add a server version endpoint. If the server can be reached, fetch the version on ANY command and compare it to the local version. If the command is version or help, also show the server version and a warning on version mismatch.
For any other command: print a warning before executing the command on version mismatch.

Describe alternatives you've considered
Just looking at the server version on the server's webpage does not seem to work as well as a direct CLI integration.

Multi-Step jobs

Is your feature request related to a problem? Please describe.
Jobs can currently just deploy stuff, but we will need to customize them more in the future.

Describe the solution you'd like
Jobs should become Pipelines and Steps. (Or something similar). Pipeline Steps are classically implemented like this

Step.apply(input, [head, ...steps]) {
  onEnter(input)
  output = head.apply(input, steps)
  return onLeave(input, output)
}

We will need jobs like

jobName.yml

steps:
  - type: "deploy"
    ....
  - type: "clusterProbe"
    ....
  - type: "takeSnapshot"
    ...

NOT in this issue are persisting state and retries!

PlanSemanticImageStep

Is your feature request related to a problem? Please describe.
Deploying by versioned image tag is a common approach. Spacegun only supports ImageHash deployments on a fixed tag.

Describe the solution you'd like
Add a new PlanSemanticImageStep step. This step fetches all tags of an image and uses a regex to extract a match from each tag. These can then be lexicographically sorted. Let's say:
tags = ["rev_12_12_a", "rev_13_13_b", "rev_13_13_a"]
then extracting with regex "\d\d_\d\d_." -> ["12_12_a", "13_13_b", "13_13_a"]
then take the lexicographically largest -> 13_13_b

If multiple images are possible, slack it as an error!

  1. Add basic support
  2. Add a flag to take the smallest.
  3. Support multiple matches: First sort by first match, if multiple images are possible go to the next match...

Pipeline to same cluster in different namespace

Is your feature request related to a problem? Please describe.
We are thinking about deploying the prelive and live pods in the same cluster separated by namespaces. This would currently not work with the pipeline, since it can only deploy into the same namespace in different clusters.

Describe the solution you'd like
The pipeline should support a definition where you can provide a cluster plus a source and a target namespace. Spacegun would then compare the deployments of these namespaces and update the target namespace to the deployment versions of the source namespace.

I guess the pipeline could look something like this:

cluster: <source (and target) cluster>
namespace: <source namespace> (optional)
start: "plan"
steps:
- name: "plan"
  type: "planDeployment"
  cluster: <target cluster>        | either one
  namespace: <target namespace>    |
...

Describe alternatives you've considered

  • The step type could stay the same and allow only one of cluster or namespace.
  • Alternatively you could add the type planNamespaceDeployment to have a strong differentiation.
  • A target field with the members cluster and namespace would also be possible, but I think that's too much indentation.

Namespaces in logs when planning deployment

Is your feature request related to a problem? Please describe.
When planning a deployment with multiple namespaces you don't see which namespaces spacegun is currently in. If I have deployments with the same name in 2 different namespaces, I can't tell which will be applied when I don't filter one or the other out.

Describe the solution you'd like
Add the namespace to the plan logs. For example:
planning in namespace first:
planning image deployment my-importer in importers
planning image deployment other-importer in importers
planning in namespace second:
planning image deployment my-importer in importers-live
Same for the planned deployment log:
namespace1:
my-importer ... => ...
namespace2:
other-importer ... => ...

Describe alternatives you've considered
You could also incorporate the importer in the same line like:
planning image deployment my-importer in namespace first in importers
other-importer namespace1 ... => ...
but that seems a bit cluttered.

All configs should be configurable by environment variables

Is your feature request related to a problem? Please describe.
In Spring I can use SERVER_PORT to override the value of server.port. It would be nice to have such functionality, so the config can be agnostic of the deployment.

Describe the solution you'd like
Parse the env into an appropriate object. Use it to override values during config creation.
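A sketch of what this could look like (an assumption about the mapping, not existing Spacegun functionality):

```typescript
// Sketch of the requested (not yet existing) feature. Assumption: a
// variable like SERVER_PORT is lower-cased and split on underscores to
// form the nested config path server.port.
function envOverrides(env: Record<string, string>): Record<string, unknown> {
    const result: Record<string, unknown> = {}
    for (const [name, value] of Object.entries(env)) {
        const path = name.toLowerCase().split("_")
        let node = result
        for (const part of path.slice(0, -1)) {
            // create intermediate objects as needed
            if (typeof node[part] !== "object" || node[part] === null) node[part] = {}
            node = node[part] as Record<string, unknown>
        }
        node[path[path.length - 1]] = value
    }
    return result
}

console.log(JSON.stringify(envOverrides({ SERVER_PORT: "8081" }))) // {"server":{"port":"8081"}}
```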

A terraform provider

Is your feature request related to a problem? Please describe.
When the whole platform is managed in Terraform, it makes sense to also manage Spacegun with it.

Describe the solution you'd like
Write a terraform provider. This could be used to send a configuration to spacegun. Spacegun would evaluate the configuration, try to commit it to the master branch, try to push and reload. Then it would return whatever status code terraform wants.

Additional context
writing a provider

PlanHookStep

Is your feature request related to a problem? Please describe.
I do not want Spacegun to talk to my Docker Registry (because I do not have one, or it has a weird API). Instead I want to call a hook with some image information.

Describe the solution you'd like
A PlanHookStep that receives an image and plans a Deployment with it. This is basically a call to spg run but with additional JSON in the POST body that can be interpreted by the pipeline. The pipeline will fail if it is missing.

  1. Add an optional Post Body to the run endpoint
  2. Parse it in the PlanHookStep and plan a deployment with it

No log when working directory not clean

Describe the bug
When the workspace of the config is not clean, there is no log to indicate that the config reload did not happen.

To Reproduce

  1. Start spacegun normally with the config in a git.
  2. Change a file.

Expected behavior
I expected a message in the syslog that the config could not be pulled.

Desktop (please complete the following information):

  • OS: debian
  • Version 0.0.27

Deployment information on landing page

Is your feature request related to a problem? Please describe.
As an addition to the welcome page, some pages that give more detailed information about planned deployments would be nice. Though this can easily be done using the CLI, it would make for a nicer user experience.

Describe the solution you'd like

  • List of deployments on landing page
  • Detail page for each deployment with all information about them
  • A button to manually start one

Version controlled deployments

Is your feature request related to a problem? Please describe.
Currently only image versions can be controlled with spacegun. But deployment settings are often modified by developers. It is common practice to update environment variables, health and readiness probes, etc... Though this can be done with terraform, it is probably better if Spacegun could do it.

Describe the solution you'd like
Add descriptions of deployments per server group to the config. On config reload iterate over them and diff them against their current state. Put (or patch?) to the target state (from config + current image)

Describe alternatives you've considered
Deployment resources can be described using terraform, but since Spacegun modifies the image tag one has to compensate for that. And I don’t know if this works.

Add age to the Pod overview

Is your feature request related to a problem? Please describe.
Restarts of a pod usually happen in bursts but just might as well accumulate over time. So the age of a pod is an interesting metric to consider.

Describe the solution you'd like
Add Pods age to the model and display it in the cli and frontend.

Artifacts should be loaded from their own repository

Describe the bug
Currently, the config repository has two responsibilities: reload config, load artefacts

The Solution you would like
Add an artefact repository. Rename all artifacts to artefacts. Move the loading of yaml files into an io module. Test more layers.

Npm/Yarn Audit is failing

│ moderate      │ Sandbox Breakout / Arbitrary Code Execution      │
│ Package       │ static-eval                                      │
│ Patched in    │ No patch available                               │
│ Dependency of │ @kubernetes/client-node                          │
│ Path          │ @kubernetes/client-node > jsonpath > static-eval │
│ More info     │ https://nodesecurity.io/advisories/758           │

Workaround: remove yarn audit from travis build.
When fixable, reintroduce yarn audit and close this bug.

ClusterProbe Step

A Step to validate the stability of a cluster. Just calling a hook should be enough for this milestone

Create a landing page

Is your feature request related to a problem? Please describe.
Currently the server does not render a welcome page, which makes it unnecessarily difficult for the user to see when the server is up and correctly configured. So in a first step, show an overview of the clusters when ready.

Describe the solution you'd like
Server side rendered relatively static html with a nerdy css framework. Show some welcome information, information about things inside the config and a list of all pods.
Also keep the dispatcher Module isolated against Spacegun stuff.

Describe alternatives you've considered
An SPA, but that is too much overhead.
React server side could be a viable alternative, but again I think keeping the rendering as simple as possible is the best option (https://github.com/koajs/react-view)

Additional context
https://github.com/queckezz/koa-views/

More RX

Interfaces should expose Observables instead of Promises

Add cluster probe to the CLI

Is your feature request related to a problem? Please describe.
I want to try out all or most steps of my pipelines individually. Cluster Probe should be no exception.

Describe the solution you'd like
Add a CLI command probe and call the appropriate backend.

Provide a simple way to check the spacegun configuration

Is your feature request related to a problem? Please describe.
In case feature branches and merges are used to update the spacegun configuration, it might be nice to have an automated way to "lint" the new spacegun config before submitting it.

Describe the solution you'd like
A command line switch for one of the spacegun tools (probably standalone) should exist that makes spacegun read the complete config (either from the filesystem or from a given git branch). The command terminates as soon as possible, and the return value indicates whether the config is ok or not.

Describe alternatives you've considered
It might be possible to start spacegun and parse the (log) output, but this is messy. So there's no real alternative.

Additional context

  • This should be usable from CI (e.g. Jenkins).
  • It might be nice to have more advanced checks, probably later on: Is the k8s config ok? Do the mentioned config maps and secrets exist?

Canary Deployments

Is your feature request related to a problem? Please describe.
It is not clear what is needed to do canary releases. Perhaps it is already possible with the current feature set. Perhaps we need finer control for that.

Describe the solution you'd like
An example setup for canary deployment. Perhaps additional pipeline steps.

CronRegistry Tests are flaky and fail on upgrade of peer dependencies

Describe the bug
CronRegistry Tests fail sometimes, and every time on peer dependency upgrade

To Reproduce
Steps to reproduce the behavior:

  1. run yarn upgrade

It does not look like it is the cron library itself, but some weird interplay with a peer dependency and jest timer mocks.

Expected behavior
Timer mocks work and can test the cron library reliably

Better type safety for dispatchers

Is your feature request related to a problem? Please describe.
Changing an API in a module does not break code on the invoking side. This is due to the missing type safety and the necessary type casts.

Describe the solution you'd like
Couple the invocation with the function parameters. Each function should have its function parameter class that encapsulates the mapper function and how to map into a requestparams object. Instead of exposing modulenames and function names, factory methods to such objects should be exposed.

Describe alternatives you've considered
Typing the dispatcher directly is not feasible

Approve Via Slack Step

Before applying changes to a productive Cluster I want Approval by some random person on Slack.

This can be done as a Pipeline Step

- name: "plan1"
  type: "planImageDeployment"
  tag: "latest"
  onSuccess: "approve1"

- name: "approve1"
  type: "getApprovalViaSlack"
  onSuccess: "apply1"

- name: "apply1"
  type: "applyDeployment"

Possible Workflow
Post to slack with an interactive message: https://api.slack.com/docs/message-buttons . This will finish the execution of the pipeline. Slack will call a POST endpoint with a defined payload. The payload being the current deployment plan and a description of the Pipeline. Here the pipeline must resume at the right spot again.

Cluster Rollbacks

Is your feature request related to a problem? Please describe.
Rolling back a single pod does not necessarily bring the Cluster into a stable state and it might make things even worse. So you need a tool to roll back the whole cluster into a known stable state.

Describe the solution you'd like
After each deployment: put the current state of each servergroup in a JSON representation, as a cluster snapshot. Push it into a dedicated Git repository as a commit, on a branch per cluster. Tag it with the current timestamp.

UI: Show a list of Timestamps per Servergroup. Show the state of the selected cluster snapshot.
CLI: Command cluster rollback. Show a list of Timestamps of the Servergroup. Show the plan to rollback to this state. (basically the current deploy command)

Upgrading to new Spacegun versions might be problematic. Though we can easily keep things backward compatible, the diff to an older cluster description might not look clean anymore. In git, history can be rewritten, but this might be too complicated.

Describe alternatives you've considered
Putting the state into a MongoDB or similar. But we do not need a real database here.

Additional context
Developers can benefit from a Git based solution because it is easy to work with. One can just view the diff of the clusters state between two dates.

Use Minikube and other services in Docker containers to test/develop spacegun

Is your feature request related to a problem? Please describe.
It is difficult to develop spacegun because it is necessary to create a non-production-critical k8s cluster, have a docker registry and a git repo for the config to have the whole feature set.

Describe the solution you'd like
It should be possible to automatically set up a k8s cluster (e.g. using Minikube) and the other dependencies (e.g. using docker) to allow developers to try out/use spacegun.

Describe alternatives you've considered

  • Mocking is possible, but not the same. Having the dependencies gives much more insight into the system.
  • It is possible to set up the necessary dependencies using cloud providers (AWS free tier, github, ...). But this might be more complicated, more complex or overkill for simple checks.

Additional context
This could also lead to automated tests on a higher level (integration tests).

Kill all cron

Is your feature request related to a problem? Please describe.
A lot is done using crontabs right now: polling docker for new images, time-based deployments, ...

Spacegun could be a lot simpler and easier to test if it did not have cron support but used hooks (webhooks) as triggers. That way git commits or docker pushes could directly trigger actions and no polling would be necessary. In case of time-based deployments, web cron or the system crontab and the cli can be used.

Describe the solution you'd like

  • Get rid of cron, replace it with web hooks for all necessary triggers.
  • Provide documentation for the hooks and examples, how to use web cron or maybe Docker containers with cron to trigger actions.

Describe alternatives you've considered
Keep the complexity:

  • Keep cron and fix the flaky tests.
  • Provide the hooks as an alternative to the cron feature.

Additional context

  • It might be nice to have a well documented external cron tool. Maybe a crontab (file) with a default configuration or a docker container which triggers the hooks using the cli or a web endpoint.
  • Authentication for some of the hooks will be necessary (trigger, which image should be deployed).

RollbackStep

Is your feature request related to a problem? Please describe.
When any pipeline step fails, I want to roll back to the last stable snapshot

Describe the solution you'd like
When we have Multi-Step jobs #38 and are far enough in Cluster Rollbacks #23 we can implement rollbacks to stable cluster states. Actually finding out that a cluster is stable is another thing. But knowing that the deployment plan half succeeded is enough right now.

Namespace and deployment filters for deployment jobs

Is your feature request related to a problem? Please describe
Currently, deployments are run against all known namespaces. This might be a problem for some people.

Describe the solution you'd like
Two lists of whitelisted namespaces and deployments.

Green Blue Deployments

Is your feature request related to a problem? Please describe.
Blue green has the benefit of quickly rolling back to the previous version.

Describe the solution you'd like
If done right this feature could also enable canary deployments.

Pin / unpin deployments

Is your feature request related to a problem? Please describe.
Sometimes you do not want a deployment to happen but want to stay at a defined service version.

Describe the solution you'd like
Pinning and unpinning deployments to and from the current version. Unpinning just removes the pin but does not deploy a newer version. A manual deployment does not pin deployments.

Describe alternatives you've considered
Just moving forward and using reverts, or using feature branches...

Additional context
I will only implement this if it is requested enough.

filter in pipeline doesn't work

Describe the bug
When using a filter on planClusterDeployment, spacegun does not filter the plan. It works with neither ClusterDeployment nor ImageDeployment.

To Reproduce

  1. add "filter" to a pipeline in the plan step
  2. run spacegun run -p plan

Expected behavior
Only the defined namespaces or deployments are planned.

Desktop (please complete the following information):

  • OS: arch linux
  • Version: 27

Do not always deploy only newer images

Is your feature request related to a problem? Please describe.
Currently, deployments only ever move to newer images, but sometimes I want to deploy an older image.

Describe the solution you'd like
A deploy criteria flag in the job description: newer will deploy only newer images, notEqual will also deploy older images.
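A minimal sketch of such a criteria flag, assuming images expose a comparable creation timestamp (the flag values and helper name are assumptions for illustration):

```typescript
// Hypothetical deploy criteria flag:
// "newer" keeps today's behavior, "notEqual" also allows deploying older images.
type DeployCriteria = "newer" | "notEqual"

function shouldDeploy(criteria: DeployCriteria, currentCreated: number, candidateCreated: number): boolean {
    switch (criteria) {
        case "newer":
            // only move forward in time
            return candidateCreated > currentCreated
        case "notEqual":
            // any different image, including older ones
            return candidateCreated !== currentCreated
    }
}
```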

Slack support

Is your feature request related to a problem? Please describe.
I want to be notified about deployments in my Slack channel.

Describe the solution you'd like
Add a new event log module that posts each event to Slack using a webhook.
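A sketch of such a module's formatting step: Slack incoming webhooks accept a minimal JSON payload of the form {"text": ...}, so each event could be rendered into one (the SlackEvent shape is an assumption, not spacegun's event type):

```typescript
// Hypothetical event shape; the {"text": ...} payload is Slack's documented
// minimal incoming-webhook format.
interface SlackEvent {
    message: string
    timestamp: number
    topics: string[]
}

function toSlackPayload(event: SlackEvent): string {
    const date = new Date(event.timestamp).toISOString()
    return JSON.stringify({ text: `[${date}] ${event.message} (${event.topics.join(", ")})` })
}

// The event log module would then POST this payload to the configured
// webhook URL with Content-Type: application/json.
```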

Additional context
Perhaps we can think about the I in IO?

Dependabot PRs

This looks like super cool software, but I can't tell if it's abandoned. All the dependabot PRs make it look like the project is no longer paid attention to. If that's not the case, would you mind closing/merging them all (and possibly disabling dependabot if the vulnerability PRs aren't useful)?

Compiling the test files does not work with the fully strict tsconfig

Describe the bug
Adding a test tsconfig

    {
        "extends": "../tsconfig",
        "compilerOptions": {
            "allowJs": true,
            "typeRoots": ["../node_modules/@types", "../types"]
        },
        "include": ["../test", "../types"]
    }

and prepending tsc --noEmit -p test/tsconfig.test.json && to the test task throws a lot of compile errors.

The easier compile errors are just missing typings and similar things, but there are some heavier ones caused by my hippy way of mocking stuff. Correctly mocking some dependencies is hard, but it should be done.

Configure the tests as above. Fix all compile errors!

In memory Event log

Is your feature request related to a problem? Please describe.
Except for Slack messages, you cannot see what the cronjobs do.

Describe the solution you'd like
Add an in-memory event log to the event log module. Log all events to this log and show it in the frontend. Only log high-priority events to Slack.
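A minimal sketch of such an in-memory log, assuming a bounded buffer (so memory stays constant) and a priority field that decides what gets forwarded to Slack; all names are assumptions for illustration:

```typescript
// Hypothetical event shape with a priority used to decide forwarding.
interface LogEvent {
    message: string
    priority: "low" | "high"
}

class InMemoryEventLog {
    private events: LogEvent[] = []

    constructor(
        private readonly capacity: number,
        private readonly forward: (event: LogEvent) => void // e.g. the slack module
    ) {}

    log(event: LogEvent): void {
        this.events.push(event)
        if (this.events.length > this.capacity) {
            this.events.shift() // drop the oldest event to keep memory bounded
        }
        if (event.priority === "high") {
            this.forward(event)
        }
    }

    // Returned by the frontend API to render the recent activity.
    recent(): LogEvent[] {
        return [...this.events]
    }
}
```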

Describe alternatives you've considered
Persisting the events would be nice, but should be done separately.

Empty config.yml leads to a misleading error

Describe the bug
When you create an empty config.yml and try to start spacegun, the error reads:

An error occured
Cannot destructure property `kube` of 'undefined' or 'null'.

This makes it look like you have to add kube to the config, although only docker is obligatory.

To Reproduce

  1. Create an empty config.yml.
  2. Start spacegun.

Expected behavior
I expect a list of obligatory entries for the config or at least docker instead of kube in the error message.

Additional context
When adding docker: https://docker.com to the config.yml, spacegun responds with:

ENOENT: no such file or directory, scandir './jobs'

But that's fine and creating the folder solves it.

Starting any spacegun executable after installing it shows a stack trace

Describe the bug
I just installed spacegun into a new, empty directory and started it. For every spacegun executable, I get an exception.

To Reproduce
Steps to reproduce the behavior:

  1. mkdir -p ~/tmp/spacegun; cd ~/tmp/spacegun; yarn add -D spacegun
  2. Start any of the spacegun executables on the command line
  3. See errors

Expected behavior
A clear and concise description of how to use spacegun should be shown.

Desktop (please complete the following information):

  • OS: Linux (NixOS)
  • Version: 0.0.33

Errors

$ yarn spacegun
yarn run v1.9.4
warning package.json: No license field
$ /home/philipp/tmp/spacegun2/node_modules/.bin/spacegun

        /\ *    
       /__\     Spacegun-CLI   version 0.0.33
      /\  /
     /__\/      Space age deployment manager
    /\  /\     
   /__\/__\     Usage: `spacegun <command> [options ...]`
  /\  /    \

(node:10389) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'clusters' of undefined
    at Object.clusters (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/standalone.bundle.js:1:819148)
    at Object.a [as cluster/clusters] (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/standalone.bundle.js:1:816769)
    at t (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/standalone.bundle.js:1:815906)
    at We (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/standalone.bundle.js:1:830743)
    at Rs.run (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/standalone.bundle.js:1:869781)
    at Module.<anonymous> (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/standalone.bundle.js:1:870498)
    at s (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/standalone.bundle.js:1:172)
    at /home/philipp/tmp/spacegun2/node_modules/spacegun/dist/standalone.bundle.js:1:964
    at Object.<anonymous> (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/standalone.bundle.js:1:974)
    at Module._compile (module.js:653:30)
(node:10389) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:10389) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Done in 0.35s.

$ yarn spg
yarn run v1.9.4
warning package.json: No license field
$ /home/philipp/tmp/spacegun2/node_modules/.bin/spg

        /\ *    
       /__\     Spacegun-CLI   version 0.0.33
      /\  /
     /__\/      Space age deployment manager
    /\  /\     
   /__\/__\     Usage: `spacegun <command> [options ...]`
  /\  /    \

(node:10362) UnhandledPromiseRejectionWarning: Error: getaddrinfo ENOTFOUND undefined undefined:80
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26)
(node:10362) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:10362) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Done in 0.38s.

$ yarn spacegun-server
yarn run v1.9.4
warning package.json: No license field
$ /home/philipp/tmp/spacegun2/node_modules/.bin/spacegun-server

        /\ *    
       /__\     Spacegun-CLI   version 0.0.33
      /\  /
     /__\/      Space age deployment manager
    /\  /\     
   /__\/__\     Usage: `spacegun <command> [options ...]`
  /\  /    \

(node:10413) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'clusters' of undefined
    at Object.clusters (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/server.bundle.js:1:819589)
    at Object.a [as cluster/clusters] (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/server.bundle.js:1:817214)
    at t (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/server.bundle.js:1:816355)
    at We (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/server.bundle.js:1:831184)
    at Ss.run (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/server.bundle.js:1:870645)
    at Module.<anonymous> (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/server.bundle.js:1:871618)
    at s (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/server.bundle.js:1:172)
    at /home/philipp/tmp/spacegun2/node_modules/spacegun/dist/server.bundle.js:1:964
    at Object.<anonymous> (/home/philipp/tmp/spacegun2/node_modules/spacegun/dist/server.bundle.js:1:974)
    at Module._compile (module.js:653:30)
    at Object.Module._extensions..js (module.js:664:10)
    at Module.load (module.js:566:32)
    at tryModuleLoad (module.js:506:12)
    at Function.Module._load (module.js:498:3)
    at Module.require (module.js:597:17)
    at require (internal/module.js:11:18)
(node:10413) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:10413) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Done in 0.45s.

Health Checks

Is your feature request related to a problem? Please describe
To deploy Spacegun itself, a dedicated health check would be useful.

Describe the solution you'd like
Add a health check endpoint to the service.
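A sketch of what such a check could report, assuming the server aggregates the reachability of its dependencies (the dependency names and the /health route mentioned below are assumptions, not spacegun's actual API):

```typescript
// Hypothetical aggregated health status over the server's dependencies
// (e.g. the image registry and the configured clusters).
interface DependencyStatus {
    name: string
    reachable: boolean
}

function healthStatus(deps: DependencyStatus[]): { status: "ok" | "degraded"; dependencies: DependencyStatus[] } {
    const allReachable = deps.every(d => d.reachable)
    return { status: allReachable ? "ok" : "degraded", dependencies: deps }
}

// The HTTP layer would expose this under e.g. GET /health and answer
// with status 200 when "ok" and 503 when "degraded".
```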
