osscda / kedahttp

kedahttp implements a prototype of auto-scaling containers using Kubernetes and KEDA.

Home Page: https://keda.sh

License: MIT License

Topics: keda, proxy, kubernetes, serverless, autoscaling

**Notice:** this repository has moved to github.com/kedacore/http-add-on. Please do not commit here anymore.

# KEDA HTTP

This project implements a prototype of auto-scaling containers based on HTTP requests. As requests come into the system, the container(s) that are equipped to handle that request may or may not be running and ready to accept it. If there are sufficient containers available, the request is routed to one of them. If there are not, a container is started and the request is routed to it when it's ready, courtesy of KEDA.

## Installation

To install the application, you'll need a running Kubernetes cluster.

### Install KEDA

You need to install KEDA first. Do so with these commands:

```shell
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda -n capps --create-namespace
```

These commands are similar to those on the official install page, except that KEDA is installed into the `capps` namespace rather than the default one.

### Install the Proxy

The proxy is responsible for receiving the requests, so you'll need to install it.

```shell
helm upgrade --install capps ./charts/cscaler-proxy -n capps --create-namespace
```

You can use the same command to upgrade your installation as well.

After the install, run the following command to fetch the public IP of the proxy service:

```shell
kubectl get svc cscaler-proxy -n capps -o=jsonpath='{.status.loadBalancer.ingress[*].ip}'
```

To delete the proxy, but not KEDA, run this:

```shell
helm delete -n capps capps
```


### Build the app

Simply run `make cli` within the root directory. This will create a new binary called `capps` in the `bin` directory.

You can then install it into your `PATH`, add `./bin` to your `PATH`, or just run it by typing `./bin/capps` (assuming you're at the repository root).

## CLI API

```shell
./bin/capps
```

Running with no parameters will give you the general help for the commands.

Root commands:

  • help: General help; use it as ./bin/capps help <cmd> to get help on any command
  • rm: Removes a created app; has its own set of flags
  • run: Creates a new app; has its own set of flags
  • version: Provides the version name and number

### Create an App

```shell
./bin/capps run <app-name> --image <repository>/<image>:<tag> --port <number> --server-url <url>
```

Runs a new application based on the given parameters.

Flags

  • -i, --image: The image to pull from the repository.

    Since this command creates a new set of workloads, any image registry the cluster is already logged into will work

  • -s, --server-url: (Required) The URL for the admin server. To get this, run kubectl get svc cscaler-proxy -n capps -o=jsonpath="{.status.loadBalancer.ingress[*].ip}"

    Without the correct admin URL the scaler will not work

  • -p, --port: The port number to expose; it should be the port on which the app listens for incoming connections.

  • --use-http: When set, the server URL will use the HTTP protocol instead of HTTPS (default: false)
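For illustration, the `run` flag set above can be mirrored with Go's standard `flag` package. This is a sketch only: the real CLI may use a different flag library, the default port shown here is an assumption, and stdlib `flag` does not provide the short `-i`/`-s`/`-p` aliases without registering them separately.

```go
package main

import (
	"flag"
	"fmt"
)

// runOptions mirrors the documented flags of `capps run`. This is a sketch;
// the actual CLI's parser and defaults may differ.
type runOptions struct {
	image     string
	serverURL string
	port      int
	useHTTP   bool
}

// parseRun parses the flags for a hypothetical `run` subcommand and
// enforces that --server-url is present, as the docs require.
func parseRun(args []string) (*runOptions, error) {
	fs := flag.NewFlagSet("run", flag.ContinueOnError)
	o := &runOptions{}
	fs.StringVar(&o.image, "image", "", "image to run, e.g. <repository>/<image>:<tag>")
	fs.StringVar(&o.serverURL, "server-url", "", "admin server URL (required)")
	fs.IntVar(&o.port, "port", 8080, "port the app listens on (8080 is this sketch's default)")
	fs.BoolVar(&o.useHTTP, "use-http", false, "use HTTP instead of HTTPS for the server URL")
	if err := fs.Parse(args); err != nil {
		return nil, err
	}
	if o.serverURL == "" {
		return nil, fmt.Errorf("--server-url is required")
	}
	return o, nil
}

func main() {
	o, err := parseRun([]string{"--image", "myrepo/myapp:v1", "--server-url", "1.2.3.4", "--port", "3000"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("deploying %s on port %d via %s\n", o.image, o.port, o.serverURL)
}
```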

### Remove an App

```shell
./bin/capps rm <app-name> --server-url <url>
```

Removes a previously created app.

  • -s, --server-url: (Required) The URL for the admin server. To get this, run kubectl get svc cscaler-proxy -n capps -o=jsonpath="{.status.loadBalancer.ingress[*].ip}"

    Without the correct admin URL the scaler will not work

  • --use-http: When set, the server URL will use the HTTP protocol instead of HTTPS (default: false)

## Access the app

Once deployed with `capps run`, you'll be able to access the application through the proxy IP.

However, the proxy only understands DNS hostnames. This means that if your service is called foo, you'll have to access it through a DNS name like foo.domain.com, and that DNS zone needs an A record named foo pointing to the proxy IP. In effect, this is an automatic ingress rule.

You can either use your own domain or an Azure provided one.

Important: if you can't reach the endpoint, HTTPS may not be enabled; try again with the --use-http flag.

### Access through your domain

  1. Go to your DNS zone settings in your domain registrar
  2. Add a new A record with the same name as your service, so that it serves <service>.yourdomain.com
  3. Point this DNS record to the proxy IP
  4. Give it a few minutes, or check dnschecker.org for propagation
  5. Access the domain

You can check for incoming requests in the logs with kubectl logs deploy/cscaler-proxy -f -n capps.

### Access through Azure-provided domains

  1. If your cluster is not created yet, check the "Enable HTTP Application Routing Addon" box when creating it
  2. If your cluster already exists, run az aks enable-addons -n <cluster-name> -g <resource-group-name> --addons http_application_routing
  3. Once the command completes, open the Azure Portal and navigate to the resource group named MC_<group-name>_<cluster-name>_<location>
  4. Find the Azure DNS zone the addon created for you
  5. Follow steps 2 to 4 from the previous section
  6. Access the service using <service-name>.<dns-zone-name>

You can check for incoming requests in the logs with kubectl logs deploy/cscaler-proxy -f -n capps.

## Debugging

If you are using VS Code, you can open the dev environment in a container using the Remote - Containers extension. The NATS server will be available in the dev container at nats-server:4222.

If you need to do any DNS work from inside a container that's running Alpine Linux, install dig with this command:

```shell
curl -L https://github.com/sequenceiq/docker-alpine-dig/releases/download/v9.10.2/dig.tgz | tar -xzv -C /usr/local/bin/
```

Courtesy of https://github.com/sequenceiq/docker-alpine-dig

## More Information

See this document for details on the components of this system.

## Contributors

arschles, iennae, khaosdoctor, pplavetzki, scotty-c


## kedahttp's Issues

### Add an ingress & ingress controller

Put it in front of the proxy so that you get automatic SSL (and more!).

The critical-path request flow would be:

ingress controller (nginx?) --> proxy k8s service --> backend app

### Split up the admin API, proxy, and external scaler

This way we can deploy them separately to Kubernetes.

  • Need to investigate rewriting any or all of them in Rust; the Go code is a simple prototype
  • For a Rust rewrite, there's already #11 for rewriting the CLI, so a second Rust project would need a top-level Cargo.toml that defines a workspace

### Sidestep the proxy when 1 or more pods are running

If there is at least 1 pod running, an incoming request will connect somewhere, so there's little reason for a proxy to sit in the middle. When the system has no pods running, the proxy will need to hold the request and forward it after the first pod has scaled up.

The proxy will always need to keep metrics to control scaling.

### Multi-container support

Currently, apps are only allowed to have 1 container in them. Multi-container apps should be supported. It's trivial to enable them in a pod, but the CLI interface will need to be figured out. We also need to decide on a few more things:

  • Should we allow each container to have its own env. vars?
  • Should each container get to expose its own ports?
  • If so, how should we route those ports internally?
    • And if we do, should we allow other apps to talk to this one?

Additionally, we might want to build specific sidecars in, to enable things like logging adapters. Some ideas:

  • Logging sidecar to send logs to splunk, papertrail, azure monitor, stackdriver, etc...
  • Opentracing sidecar
  • A service mesh sidecar
