
loft-sh / loft

701 stars · 11 watchers · 62 forks · 139.93 MB

Namespace & Virtual Cluster Manager for Kubernetes - Lightweight Virtual Clusters, Self-Service Provisioning for Engineers and 70% Cost Savings with Sleep Mode

Home Page: https://loft.sh/docs/introduction

License: Other

Go 100.00%
kubernetes multi-tenancy namespaces development devops gitops isolation sandboxing dev environment

loft's Introduction

loft

Namespace & Virtual Cluster Manager for Kubernetes

  • Lightweight Virtual Clusters that are flexible like namespaces but much more powerful
  • Sleep Mode to put idle namespaces and virtual clusters to sleep and save up to 70% of cloud costs
  • Accounts & Account Users to separate tenants in a shared Kubernetes cluster
  • Self-Service Namespace Provisioning for account users
  • Account Limits to ensure quality of service and fairness when sharing a cluster
  • Namespace Templates for secure tenant isolation and self-service namespace initialization
  • Multi-Cluster Tenant Management for sharing a pool of clusters
  • GitOps-Ready: Custom Resource Definitions for everything loft does

loft demo video


Architecture

loft architecture


loft's People

Contributors

danielthiry, dependabot[bot], derekddecker, excmax, fabiankramm, gkarthiks, kzap, lizardruss, loft-bot, lukasgentele, mpetason, neogopher, ovaldi, richburroughs, snyk-bot, thomask33


loft's Issues

Kubectl port-forward not working on a bare metal kubernetes installation with loft.

Setup

I recently installed a Kubernetes cluster on a bare-metal server in my closet for testing purposes. I installed loft on it with an ingress controller, a working subdomain, and a valid SSL certificate issued by Let's Encrypt. I can access the loft instance fine from both the browser and the CLI.

Problem

Overall the loft installation works fine, but I ran into one problem in particular. kubectl port-forward worked fine with the initial cluster installation before loft. Since installing loft, once I switch my space with the "loft use space damien --cluster loft-cluster" command, kubectl port-forward stops working and fails with: error: error upgrading connection: Upgrade request required.

Whenever I switch my kubeconfig's current context back to the context I was using before installing loft, kubectl port-forward works again. That context uses the cluster's raw IP instead of the loft subdomain.

Am I missing something or have I perhaps misconfigured something?

Add support for other ingress annotations

The current ingress template is tailored specifically to NGINX. In my case, I'd like to use an AWS ALB, which requires a lot of annotations to configure. To achieve this I had to hack ingress.yaml like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.ingress.name }}
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ template "loft.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
  annotations:
  {{- with .Values.ingress.annotations }}
    {{- tpl (toYaml .) $ | nindent 4 }}
  {{- end }}
  {{- if .Values.ingress.tls.enabled }}
    cert-manager.io/cluster-issuer: {{ .Values.ingress.tls.clusterIssuer }}
  {{- end }}

Using tpl (toYaml .) $ makes it possible to put template strings in the values file, like:

      external-dns.alpha.kubernetes.io/hostname: "loft.{{ .Values.deploymentName }}.my-domain.com"
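For reference, a hedged sketch of what the matching values.yaml section could look like once ingress.yaml renders annotations through tpl as above. The ALB and external-dns annotation keys are real annotations from the AWS Load Balancer Controller and ExternalDNS, but which ones you need depends on your setup, and deploymentName is a hypothetical value used only to demonstrate the templated hostname:

ingress:
  enabled: true
  name: loft-ingress
  annotations:
    # rendered via tpl, so Helm template expressions are allowed here
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/target-type: "ip"
    external-dns.alpha.kubernetes.io/hostname: "loft.{{ .Values.deploymentName }}.my-domain.com"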

Browser error: SEC_ERROR_REUSED_ISSUER_AND_SERIAL

I am running air-gapped using the loft:1.3.0 image and the 1.2.1 chart.

The install goes fine and both the loft and kiosk pods are running. I use the port-forward command, go to https://localhost:8080 in Firefox, and get the following message:

Error code: SEC_ERROR_REUSED_ISSUER_AND_SERIAL

I suspect this may have something to do with the self-signed certs that are generated, but I don't know how to proceed.

Also, I'd rather serve HTTPS through an nginx-ingress controller, but I couldn't get that working either.

Loft has not set a limit on cpu

Loft Version: 0.3.6

The current install of loft does not have a CPU limit set and is using far too many resources.

image
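A minimal sketch of a namespace-level workaround while the chart itself sets no limits, assuming loft is installed into the loft namespace: a LimitRange gives default requests and limits to any container that does not declare its own. It only affects pods created after it exists, so the loft deployment needs a restart to pick it up.

apiVersion: v1
kind: LimitRange
metadata:
  name: loft-default-limits
  namespace: loft            # assumes the release lives in the "loft" namespace
spec:
  limits:
    - type: Container
      default:               # applied as limits to containers that set none
        cpu: "1"
        memory: 1Gi
      defaultRequest:        # applied as requests to containers that set none
        cpu: 250m
        memory: 256Mi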

Canceling a space delete before it's done renders the space unusable for the user

When a user cancels a delete before it finishes, it looks like the namespace gets deleted. However, it is not removed from Loft. When the user then tries to use the space, they get the following message:

Error: space.tenancy.kiosk.sh "insert-name-here" is forbidden: User "joeuser" cannot delete resource "spaces" in API group "tenancy.kiosk.sh" at the cluster scope (Forbidden)

Once an admin removes the space in Loft, the user can then create the space and use it again
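Going only by the error above, the orphaned object lives in the cluster-scoped tenancy.kiosk.sh API group, so a cluster admin could presumably clean it up directly; the space name here is just the placeholder from the error message:

# run with cluster-admin credentials against the connected cluster
kubectl delete spaces.tenancy.kiosk.sh insert-name-here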

loft pod is failing with CrashLoopBackOff

During the helm install, the deployment fails with the error below.

W0221 22:06:35.365278 34735 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Error: timed out waiting for the condition

Below are the pod logs

register instance: Post "https://license.loft.sh/register": dial tcp: lookup license.loft.sh on 10.96.0.10:53: read udp 10.32.0.3:39996->10.96.0.10:53: read: connection refused
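The log points at cluster DNS (10.96.0.10) refusing lookups rather than at loft itself. One quick, generic way to confirm that, sketched with a throwaway busybox pod, is:

# a failing lookup here indicates a CoreDNS/kube-dns or network problem,
# independent of loft
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup license.loft.sh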

Installing loft via Argo CD gets stuck in Syncing, except the first time

I am using OpenShift + Argo CD to install loft; the following is my Argo CD manifest. It tells Argo CD to install loft via helm and feeds in some values for customization.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: loft
spec:
  destination:
    namespace: loft
    server: https://kubernetes.default.svc
  project: default
  source:
    repoURL: https://charts.loft.sh
    targetRevision: 1.15.0
    chart: loft
    helm:
      values: |
        ingress:
          enabled: false
  syncPolicy:
    syncOptions:
    - CreateNamespace=true 
    automated:
      prune: true
      selfHeal: true

Argo CD can sync loft successfully the first time.

But after deleting all resources installed by loft, including loft-agent, the APIService, CRDs, RBAC... everything, it seems unable to sync loft again and Argo CD stays stuck in the Syncing status.

Any suggestions or experience with installing loft via Argo CD or any other GitOps tool?

unable to pull images from private registry inside vcluster

Hi

I am trying to deploy an application whose image is hosted in a private ACR registry. It is failing because of an image pull issue.

I set up the Kubernetes secret as follows:
kubectl -n aqua create secret docker-registry registry-creds --docker-server=#####.azurecr.io --docker-username=##### --docker-password=##### --docker-email=#####

and the deployment of the application is set with
imagePullSecrets:
- name: registry-creds

And a service account that uses the same image pull secret.

Deployment failed with
aqua-db-c988798b4-j8vsn 0/1 ImagePullBackOff 0 15s

kubectl describe pod shows these Events:

Type Reason Age From Message


Normal Scheduled 13m default-scheduler Successfully assigned aqua/aqua-db-c988798b4-j8vsn to gke-gke8010-default-pool-9eb8eb33-pnbd
Warning SyncError 13m pod-syncer Error updating pod: Operation cannot be fulfilled on pods "aqua-db-c988798b4-j8vsn": the object has been modified; please apply your changes to the latest version and try again
Normal BackOff 11m (x6 over 13m) kubelet, gke-gke8010-default-pool-9eb8eb33-pnbd Back-off pulling image "#####.azurecr.io/database:5.0.0"
Normal Pulling 11m (x4 over 13m) kubelet, gke-gke8010-default-pool-9eb8eb33-pnbd Pulling image "#####.azurecr.io/database:5.0.0"
Warning Failed 11m (x4 over 13m) kubelet, gke-gke8010-default-pool-9eb8eb33-pnbd Error: ErrImagePull
Warning Failed 11m (x4 over 13m) kubelet, gke-gke8010-default-pool-9eb8eb33-pnbd Failed to pull image "#####.azurecr.io/database:5.0.0": rpc error: code = Unknown desc = Error response from daemon: Get https://*****.azurecr.io/v2/database/manifests/5.0.0: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
Warning Failed 2m55s (x43 over 13m) kubelet, gke-gke8010-default-pool-9eb8eb33-pnbd Error: ImagePullBackOff

The above works fine on any “regular” cluster.
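For comparison, a minimal deployment sketch wiring the pull secret in at the pod level; the names mirror the commands above and are otherwise illustrative, and the secret must exist in the same namespace inside the vcluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aqua-db
  namespace: aqua
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aqua-db
  template:
    metadata:
      labels:
        app: aqua-db
    spec:
      imagePullSecrets:
        - name: registry-creds          # the secret created above
      containers:
        - name: database
          image: "#####.azurecr.io/database:5.0.0"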

Cannot install prometheus stack using default values from the UI

I was looking at the monitoring tab in the cluster details screen and it says that I need prometheus/grafana in order to use it.

I tried to deploy the stack from within the Loft UI with just the default values; I only entered my domain since it was required.
The cluster is a standard EKS cluster with Traefik installed. I'm not sure how to debug the error; it seems to be Loft-specific.

trying-to-install-prometheus

Zookeeper operator fails to start pods due to headless service DNS issue

I noticed an issue when trying to deploy Pravega's zookeeper-operator (https://github.com/pravega/zookeeper-operator) in a virtual cluster. The operator installs fine, but when it tries to bring up the pods, they fall into a crash loop with the following error:

2021-02-02 06:57:17,626 [myid:1] - ERROR [QuorumPeer[myid=1](plain=0.0.0.0:2181)(secure=disabled):Leader@318] - Couldn't bind to zookeeper-0.zookeeper-headless.m10-zookeeper.svc.cluster.local:2888
java.net.SocketException: Unresolved address
	at java.base/java.net.ServerSocket.bind(Unknown Source)
	at java.base/java.net.ServerSocket.bind(Unknown Source)
	at org.apache.zookeeper.server.quorum.Leader.createServerSocket(Leader.java:315)
	at org.apache.zookeeper.server.quorum.Leader.lambda$new$0(Leader.java:294)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source)
	at java.base/java.util.concurrent.ConcurrentHashMap$KeySpliterator.forEachRemaining(Unknown Source)
	at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
	at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(Unknown Source)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(Unknown Source)
	at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
	at java.base/java.util.stream.ReferencePipeline.forEach(Unknown Source)
	at org.apache.zookeeper.server.quorum.Leader.<init>(Leader.java:297)
	at org.apache.zookeeper.server.quorum.QuorumPeer.makeLeader(QuorumPeer.java:1260)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1467)

I did some digging, and it seems like the root cause is a problem with DNS for headless services in a virtual cluster. The DNS entry for zookeeper-0.zookeeper-headless.m10-zookeeper.svc.cluster.local is not being created. I suspect this is because pods are renamed when they are synced into the host-cluster, and the host cluster control plane is responsible for service creation.

Any help with this issue would be appreciated.
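One hedged way to narrow this down from inside the virtual cluster is to compare what the API server knows about the headless service with what DNS returns; if the endpoints look healthy while the per-pod record does not resolve, that supports the sync/DNS theory above. Names are taken from the error message:

# run inside the virtual cluster context
kubectl -n m10-zookeeper get service zookeeper-headless
kubectl -n m10-zookeeper get endpoints zookeeper-headless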

Helm: Loft install deadlocks after a transient failure

During a loft installation via helm, there was a transient failure:

Error: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://nginx-ingress-nginx-controller-admission.ingress.svc:443/networking/v1/ingresses?timeout=10s": context deadline exceeded

Rerunning the helm command just ends up with this error:

Error: failed pre-install: warning: Hook pre-install loft/templates/admin/user.yaml failed: object is being deleted: users.storage.loft.sh "admin" already exists

which results in a single loft-agent pod, no other loft pods.

Here's the values for the helm chart fwiw:

    admin:
      create: true
      username: admin
      password: PASSWORD
    ingress:
      enabled: true
      name: loft-ingress
      host: DOMAIN
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt
      tls:
        enabled: true
        secret: tls-loft

loft start was able to resolve the issue.
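A hedged cleanup sketch for the retry loop described above: the failed pre-install hook leaves the admin user object behind (the error names it explicitly), so deleting it, or waiting for its in-progress deletion to finish, before re-running helm should clear the collision. The resource name and chart repo follow the error text and the values above; adjust to your release:

# remove the leftover hook object named in the error, then retry the install
kubectl delete users.storage.loft.sh admin --ignore-not-found
helm upgrade --install loft loft --repo https://charts.loft.sh --namespace loft -f values.yaml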

How to apply configuration dev cluster configuration?

I'm trying to be able to programmatically bring up a loft cluster with everything pre-configured (I'm limited to pure k8s constructs atm, but if I have to, I'll try to script with the loft-cli -- though I'm not sure it can do all of the things I want to do).

Here's the vcluster template kubectl is trying to apply. I can't help but feel I'm missing something obvious...

apiVersion: management.loft.sh/v1
kind: VirtualClusterTemplate
metadata:
  name: development-cluster
spec:
  displayName: Development Cluster
  owner:
    user: roblanders
  template:
    metadata:
      annotations:
        vcluster.loft.sh/skip-translate: 'false'
    helmRelease:
      values: |-
        storage:
          size: 5Gi
        
        syncer:
          extraArgs:
            - --fake-persistent-volumes=false
            - --enable-storage-classes
            - --enable-priority-classes
            - --fake-nodes=false
            - --sync-all-nodes

        rbac:
          clusterRole:
            create: true
    apps:
      - name: metric-server
        namespace: kube-system
  spaceTemplateRef:
    name: dev-space
  access:
    - verbs:
        - '*'
      subresources:
        - '*'
      users:
        - '*'
status:
  apps:
    - name: metric-server
      displayName: Metric Server

However, this is the error I receive:

failed: failed to create development-cluster management.loft.sh/v1, Kind=VirtualClusterTemplate for  kube-system/dev-cluster: invalid owner. must specify user or team or leave empty"

This also happens for the space template and app as well, fwiw.
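Going purely by the error text ("must specify user or team or leave empty"), one sketch that sidesteps the validation is to drop the owner block entirely, or alternatively to make sure the referenced user already exists in Loft before applying the template. This is illustrative, not a confirmed fix:

apiVersion: management.loft.sh/v1
kind: VirtualClusterTemplate
metadata:
  name: development-cluster
spec:
  displayName: Development Cluster
  # owner omitted ("leave empty"); alternatively, owner.user must reference
  # an existing Loft user object
  template:
    helmRelease:
      values: |-
        storage:
          size: 5Gi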

Kubectl exec with Loft generated Kubeconfig Fails with "Upgrade Request Required"

Having issues with devspace enter and kubectl exec failing. My kubectl version matches the cluster's, and when using a context generated outside of loft (EKS), kubectl exec works fine.

The only output from the API is "[fatal] Upgrade request required"

Loft Server Version: 1.7.1
Loft CLI Versions tested: 1.7.1, 1.7.0, 1.6.2

Hook pre-install error with loft chart v1.2.1

I am getting the following error when I go to install loft using v1.2.1 of the chart:

Error: failed pre-install: warning: Hook pre-install loft/templates/cluster/cluster-account.yaml failed: Internal error occurred: failed calling webhook "config.kiosh.sh": Post https://kiosk.loft.svc: 443/validate?timeout=30s: service "kiosk" not found

NOTE: I am in an airgapped environment

Streaming error in vcluster: Error streaming logs for ... INTERNAL_ERROR

I am using devspace together with loft.

I very often get the following error when I am connected to a vcluster and streaming the output (devspace dev).
I also get the same error when I am doing builds with kaniko using devspace. The build fails after that error.

Error streaming logs for default/<pod name>/<container name>: stream error: stream ID <random int>; INTERNAL_ERROR

I tried it on different instances of Kubernetes (k3s).
The error occurs on my local machine and also in a GitHub runner.

The problem does not arise when I am operating on a "real" cluster.

Can't go past the "Complete your profile" page

Submitting the form on the "Complete your profile" page spins the "Finish" button indefinitely. The loft pod logs say:

loft-6b647d94dc-ld6tl manager 2021-05-02 12:47:41.819818 I | http: TLS handshake error from 127.0.0.1:57608: remote error: tls: unknown certificate

I have TLS disabled in the values.yaml:

# TLS configuration with a custom cert and key
# Make sure the secret exists prior to deploying loft,
# otherwise the loft pod will not be able to start
tls:
  enabled: false
  secret: loft-tls
  crtKey: tls.crt
  keyKey: tls.key

...
ingress:
...
  tls:
    enabled: false
    secret: tls-loft
    clusterIssuer: lets-encrypt-http-issuer

Improve UX for Helm apps

It is not immediately clear that the warning message about the ingress only refers to the cluster subdomain feature.

I don't understand why the Helm apps are shown as "disabled"/greyed out.

I think the warning should clarify its scope. Right now it looks like it refers to both UI segments.

No license

Without a license file this product is essentially unusable for enterprises. Please clarify which license model you are using.

cannot install loft on minikube (Error during helm command)

Hi guys,

Hope you are all well !

$ kubectl auth can-i create clusterrole -A
yes
$ loft start

[info]   Welcome to the loft installation.
[info]   This installer will guide you through the installation.
[info]   If you prefer installing loft via helm yourself, visit https://loft.sh/docs/getting-started/setup
[info]   Thanks for trying out loft!

? Seems like your cluster is running locally (docker desktop, minikube, kind etc.). Is that correct?
 Yes

? Enter an email address for your admin user [email protected]

[info]   This will install loft without an externally reachable URL and instead use port-forwarding to connect to loft

[info]   Executing command: helm install loft loft --repository-config  --repo https://charts.devspace.sh/ --kube-context minikube --namespace loft --set certIssuer.create=false --set ingress.enabled=false --set cluster.connect.local=true --set admin.password=821b6889-1d39-4c44-b6e7-852a714f6a95 --set [email protected] --wait

[fatal]  Error during helm command: W1124 07:38:38.582665  517752 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1124 07:38:38.780785  517752 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1124 07:38:39.178691  517752 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1124 07:38:41.377876  517752 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1124 07:38:41.481505  517752 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1124 07:38:41.580392  517752 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1124 07:38:45.479906  517752 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1124 07:39:02.975126  517752 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
Error: timed out waiting for the condition
 (exit status 1)

What can I do to fix this?

Thanks for any inputs or insights on this issue.

Cheers,
Luc
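"timed out waiting for the condition" generally just means the release's pods never became ready within helm's default five-minute --wait window. A hedged first step is to look at the pods directly rather than at helm:

# see whether the pods are pending, crash-looping, or failing probes
kubectl -n loft get pods
kubectl -n loft describe pods
# then check the logs of whichever loft pod shows up above
kubectl -n loft logs <loft-pod-name>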

Rebooting the cluster results in a completely unusable cluster

I installed Loft on a single-node, bare-metal cluster. It uses Longhorn to supply PVs, just like the production cluster. The node was rebooted for security patches, and when it came back up, it never recovered.

After discovering that Longhorn wasn't coming up, I learned that it was failing to connect to loft's proxy, which couldn't come up because loft was unable to access its PVs. It seems there is a chicken-and-egg problem here.

Do you have any suggestions or best-practices to ensure clusters that suffer catastrophic failures (all nodes going down) can come back up without loft being available yet?

Fails to install helm chart in vcluster

Helm fails when trying to install https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq on a fresh vcluster. Also, during/after installing the above, kubectl get pods errors with "Unable to connect to the server: net/http: TLS handshake timeout", and the port forwarding of port 9898 to the underlying pod for the web UI failed -- the browser disconnected and wouldn't reconnect.

The same chart installed fine both on straight k8s and in a loft space.

This was on a Macbook Pro running k8s in the latest version of Docker Desktop and a fresh download of loft.

loft panics when kubectl is used

loft 1.8.0 installed into GKE.
In a GitHub Actions run, loft login / loft use succeed, but kubectl get all --all-namespaces fails with a 502:

Run ./kubectl --kubeconfig='kube_config' get po --all-namespaces
  ./kubectl --kubeconfig='kube_config' get po --all-namespaces
  shell: /usr/bin/bash -e {0}
  env:
    LOFT_VERSION: 1.8.0
    LOFT_URL: https://loft.l3v.co
    LOFT_USERNAME: ***
    LOFT_ACCESS_KEY: ***
    VCLUSTER_NAME: ci
Error from server (InternalError): an error on the server ("<html>\r\n<head><title>502 Bad Gateway</title></head>\r\n<body>\r\n<center><h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>") has prevented the request from succeeding
Error: Process completed with exit code 1.

Oddly, it gives the error almost every time; once in a while it works.
However, kubectl on my laptop uses the vcluster without any problems. Puzzled, but I wanted to report it anyway.

Loft pod logs:

I0305 20:21:08.316280       1 deleg.go:130] start loft-cluster/vcluster-ci/ci port forwarder on port 10009
2021-03-05 20:21:08.317012 I | http: panic serving 10.64.1.107:36048: runtime error: invalid memory address or nil pointer dereference
goroutine 7894 [running]:
net/http.(*conn).serve.func1(0xc0015a52c0)
        /usr/local/go/src/net/http/server.go:1801 +0x147
panic(0x217fa00, 0x37cffe0)
        /usr/local/go/src/runtime/panic.go:975 +0x47a
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/portforward.(*cache).startForwading(0xc000c5e7c0, 0xc0029497e0, 0x1b, 0xc000600d80, 0x28cb920, 0xc001ef22c0, 0x0, 0x2719, 0x0, 0x453514, ...)
        /loft/pkg/apigateway/virtualcluster/portforward/cache.go:138 +0x19c
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/portforward.(*cache).GetPort(0xc000c5e7c0, 0x2893c60, 0xc001e595f0, 0xc0029497c1, 0xc, 0xc0029497e0, 0x1b, 0x0, 0x0, 0x0)
        /loft/pkg/apigateway/virtualcluster/portforward/cache.go:121 +0x350
github.com/loft-sh/loft/pkg/apigateway/virtualcluster.(*portForwarder).GetPort(0xc000ee1ea0, 0x2893c60, 0xc001e595f0, 0xc0029497c1, 0xc, 0xc0029497ce, 0xb, 0xc0029497da, 0x2, 0xc001e595f0, ...)
        /loft/pkg/apigateway/virtualcluster/cache.go:45 +0x115
github.com/loft-sh/loft/pkg/apigateway/virtualcluster.Handler.func1(0x287cd20, 0xc002353598, 0xc002de6200)
        /loft/pkg/apigateway/virtualcluster/handler.go:21 +0x109
net/http.HandlerFunc.ServeHTTP(0xc000ee1f00, 0x287cd20, 0xc002353598, 0xc002de6200)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.ServeWithSleepMode(0x283eb60, 0xc000ee1f00, 0x287cd20, 0xc002353598, 0xc002de6200, 0xc0029497c1, 0xc, 0xc0029497ce, 0xb, 0x289e520, ...)
        /loft/pkg/apigateway/filters/sleepmode.go:80 +0x153
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/filters.WithSleepMode.func1(0x287cd20, 0xc002353598, 0xc002de6200)
        /loft/pkg/apigateway/virtualcluster/filters/sleepmode.go:29 +0x1a5
net/http.HandlerFunc.ServeHTTP(0xc00093c270, 0x287cd20, 0xc002353598, 0xc002de6200)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.WithMetrics.func1(0x288ec60, 0xc000e99880, 0xc002de6200)
        /loft/pkg/apigateway/filters/metrics.go:201 +0x257
net/http.HandlerFunc.ServeHTTP(0xc000ee1f20, 0x288ec60, 0xc000e99880, 0xc002de6200)
        /usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x288ec60, 0xc000e99880, 0xc002de6100)
        /loft/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x269
net/http.HandlerFunc.ServeHTTP(0xc00093c2a0, 0x288ec60, 0xc000e99880, 0xc002de6100)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/filters.WithVirtualClusterRequestInfo.func1(0x288ec60, 0xc000e99880, 0xc002de6000)
        /loft/pkg/apigateway/virtualcluster/filters/requestinfo.go:90 +0x602
net/http.HandlerFunc.ServeHTTP(0xc000ee1f60, 0x288ec60, 0xc000e99880, 0xc002de6000)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/util/serverhelper.StripLeaveSlash.func1(0x288ec60, 0xc000e99880, 0xc002de6000)
        /loft/pkg/util/serverhelper/helper.go:26 +0xd5
net/http.HandlerFunc.ServeHTTP(0xc00093c300, 0x288ec60, 0xc000e99880, 0xc002de6000)
        /usr/local/go/src/net/http/server.go:2042 +0x44
net/http.(*ServeMux).ServeHTTP(0xc000c5e1c0, 0x288ec60, 0xc000e99880, 0xc002de6000)
        /usr/local/go/src/net/http/server.go:2417 +0x1ad
github.com/loft-sh/loft/pkg/apigateway/filters.WithOriginalRequestPath.func1(0x288ec60, 0xc000e99880, 0xc000737f00)
        /loft/pkg/apigateway/filters/originalrequestpath.go:10 +0x1c7
net/http.HandlerFunc.ServeHTTP(0xc000ef9ac0, 0x288ec60, 0xc000e99880, 0xc000737f00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1(0x288ec60, 0xc000e99880, 0xc000737f00)
        /loft/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0xa8
net/http.HandlerFunc.ServeHTTP(0xc000ef9ae0, 0x288ec60, 0xc000e99880, 0xc000737f00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.WithCORS.func1(0x288ec60, 0xc000e99880, 0xc000737f00)
        /loft/pkg/apigateway/filters/cors.go:12 +0x2ee
net/http.HandlerFunc.ServeHTTP(0xc000ef9b20, 0x288ec60, 0xc000e99880, 0xc000737f00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
net/http.serverHandler.ServeHTTP(0xc000de30a0, 0x288ec60, 0xc000e99880, 0xc000737f00)
        /usr/local/go/src/net/http/server.go:2843 +0xa3
net/http.(*conn).serve(0xc0015a52c0, 0x2893ba0, 0xc001e17d40)
        /usr/local/go/src/net/http/server.go:1925 +0x8ad
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2969 +0x36c
I0305 20:21:08.327087       1 deleg.go:130] start loft-cluster/vcluster-ci/ci port forwarder on port 10010
2021-03-05 20:21:08.327523 I | http: panic serving 10.64.1.107:36050: runtime error: invalid memory address or nil pointer dereference
goroutine 7919 [running]:
net/http.(*conn).serve.func1(0xc002ad4d20)
        /usr/local/go/src/net/http/server.go:1801 +0x147
panic(0x217fa00, 0x37cffe0)
        /usr/local/go/src/runtime/panic.go:975 +0x47a
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/portforward.(*cache).startForwading(0xc000c5e7c0, 0xc005b68d40, 0x1b, 0xc000600d80, 0x28cb920, 0xc002225ce0, 0x0, 0x271a, 0x0, 0x453514, ...)
        /loft/pkg/apigateway/virtualcluster/portforward/cache.go:138 +0x19c
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/portforward.(*cache).GetPort(0xc000c5e7c0, 0x2893c60, 0xc002b8e8a0, 0xc005b68d01, 0xc, 0xc005b68d40, 0x1b, 0x0, 0x0, 0x0)
        /loft/pkg/apigateway/virtualcluster/portforward/cache.go:121 +0x350
github.com/loft-sh/loft/pkg/apigateway/virtualcluster.(*portForwarder).GetPort(0xc000ee1ea0, 0x2893c60, 0xc002b8e8a0, 0xc005b68d01, 0xc, 0xc005b68d0e, 0xb, 0xc005b68d1a, 0x2, 0xc002b8e8a0, ...)
        /loft/pkg/apigateway/virtualcluster/cache.go:45 +0x115
github.com/loft-sh/loft/pkg/apigateway/virtualcluster.Handler.func1(0x287cd20, 0xc002353768, 0xc002de7100)
        /loft/pkg/apigateway/virtualcluster/handler.go:21 +0x109
net/http.HandlerFunc.ServeHTTP(0xc000ee1f00, 0x287cd20, 0xc002353768, 0xc002de7100)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.ServeWithSleepMode(0x283eb60, 0xc000ee1f00, 0x287cd20, 0xc002353768, 0xc002de7100, 0xc005b68d01, 0xc, 0xc005b68d0e, 0xb, 0x289e520, ...)
        /loft/pkg/apigateway/filters/sleepmode.go:80 +0x153
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/filters.WithSleepMode.func1(0x287cd20, 0xc002353768, 0xc002de7100)
        /loft/pkg/apigateway/virtualcluster/filters/sleepmode.go:29 +0x1a5
net/http.HandlerFunc.ServeHTTP(0xc00093c270, 0x287cd20, 0xc002353768, 0xc002de7100)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.WithMetrics.func1(0x288ec60, 0xc000e99c00, 0xc002de7100)
        /loft/pkg/apigateway/filters/metrics.go:201 +0x257
net/http.HandlerFunc.ServeHTTP(0xc000ee1f20, 0x288ec60, 0xc000e99c00, 0xc002de7100)
        /usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x288ec60, 0xc000e99c00, 0xc002de7000)
        /loft/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x269
net/http.HandlerFunc.ServeHTTP(0xc00093c2a0, 0x288ec60, 0xc000e99c00, 0xc002de7000)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/filters.WithVirtualClusterRequestInfo.func1(0x288ec60, 0xc000e99c00, 0xc002de6f00)
        /loft/pkg/apigateway/virtualcluster/filters/requestinfo.go:90 +0x602
net/http.HandlerFunc.ServeHTTP(0xc000ee1f60, 0x288ec60, 0xc000e99c00, 0xc002de6f00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/util/serverhelper.StripLeaveSlash.func1(0x288ec60, 0xc000e99c00, 0xc002de6f00)
        /loft/pkg/util/serverhelper/helper.go:26 +0xd5
net/http.HandlerFunc.ServeHTTP(0xc00093c300, 0x288ec60, 0xc000e99c00, 0xc002de6f00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
net/http.(*ServeMux).ServeHTTP(0xc000c5e1c0, 0x288ec60, 0xc000e99c00, 0xc002de6f00)
        /usr/local/go/src/net/http/server.go:2417 +0x1ad
github.com/loft-sh/loft/pkg/apigateway/filters.WithOriginalRequestPath.func1(0x288ec60, 0xc000e99c00, 0xc000cdd700)
        /loft/pkg/apigateway/filters/originalrequestpath.go:10 +0x1c7
net/http.HandlerFunc.ServeHTTP(0xc000ef9ac0, 0x288ec60, 0xc000e99c00, 0xc000cdd700)
        /usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1(0x288ec60, 0xc000e99c00, 0xc000cdd700)
        /loft/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0xa8
net/http.HandlerFunc.ServeHTTP(0xc000ef9ae0, 0x288ec60, 0xc000e99c00, 0xc000cdd700)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.WithCORS.func1(0x288ec60, 0xc000e99c00, 0xc000cdd700)
        /loft/pkg/apigateway/filters/cors.go:12 +0x2ee
net/http.HandlerFunc.ServeHTTP(0xc000ef9b20, 0x288ec60, 0xc000e99c00, 0xc000cdd700)
        /usr/local/go/src/net/http/server.go:2042 +0x44
net/http.serverHandler.ServeHTTP(0xc000de30a0, 0x288ec60, 0xc000e99c00, 0xc000cdd700)
        /usr/local/go/src/net/http/server.go:2843 +0xa3
net/http.(*conn).serve(0xc002ad4d20, 0x2893ba0, 0xc001a5e400)
        /usr/local/go/src/net/http/server.go:1925 +0x8ad
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2969 +0x36c
I0305 20:21:08.338047       1 deleg.go:130] start loft-cluster/vcluster-ci/ci port forwarder on port 10011
2021-03-05 20:21:08.338446 I | http: panic serving 10.64.1.107:36052: runtime error: invalid memory address or nil pointer dereference
goroutine 7921 [running]:
net/http.(*conn).serve.func1(0xc002ad4e60)
        /usr/local/go/src/net/http/server.go:1801 +0x147
panic(0x217fa00, 0x37cffe0)
        /usr/local/go/src/runtime/panic.go:975 +0x47a
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/portforward.(*cache).startForwading(0xc000c5e7c0, 0xc0029f5cc0, 0x1b, 0xc000600d80, 0x28cb920, 0xc0020ca6e0, 0x0, 0x271b, 0x0, 0x453514, ...)
        /loft/pkg/apigateway/virtualcluster/portforward/cache.go:138 +0x19c
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/portforward.(*cache).GetPort(0xc000c5e7c0, 0x2893c60, 0xc0028161e0, 0xc0029f5ca1, 0xc, 0xc0029f5cc0, 0x1b, 0x0, 0x0, 0x0)
        /loft/pkg/apigateway/virtualcluster/portforward/cache.go:121 +0x350
github.com/loft-sh/loft/pkg/apigateway/virtualcluster.(*portForwarder).GetPort(0xc000ee1ea0, 0x2893c60, 0xc0028161e0, 0xc0029f5ca1, 0xc, 0xc0029f5cae, 0xb, 0xc0029f5cba, 0x2, 0xc0028161e0, ...)
        /loft/pkg/apigateway/virtualcluster/cache.go:45 +0x115
github.com/loft-sh/loft/pkg/apigateway/virtualcluster.Handler.func1(0x287cd20, 0xc0039f1878, 0xc000cddb00)
        /loft/pkg/apigateway/virtualcluster/handler.go:21 +0x109
net/http.HandlerFunc.ServeHTTP(0xc000ee1f00, 0x287cd20, 0xc0039f1878, 0xc000cddb00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.ServeWithSleepMode(0x283eb60, 0xc000ee1f00, 0x287cd20, 0xc0039f1878, 0xc000cddb00, 0xc0029f5ca1, 0xc, 0xc0029f5cae, 0xb, 0x289e520, ...)
        /loft/pkg/apigateway/filters/sleepmode.go:80 +0x153
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/filters.WithSleepMode.func1(0x287cd20, 0xc0039f1878, 0xc000cddb00)
        /loft/pkg/apigateway/virtualcluster/filters/sleepmode.go:29 +0x1a5
net/http.HandlerFunc.ServeHTTP(0xc00093c270, 0x287cd20, 0xc0039f1878, 0xc000cddb00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.WithMetrics.func1(0x288ec60, 0xc000accc40, 0xc000cddb00)
        /loft/pkg/apigateway/filters/metrics.go:201 +0x257
net/http.HandlerFunc.ServeHTTP(0xc000ee1f20, 0x288ec60, 0xc000accc40, 0xc000cddb00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x288ec60, 0xc000accc40, 0xc000cdda00)
        /loft/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x269
net/http.HandlerFunc.ServeHTTP(0xc00093c2a0, 0x288ec60, 0xc000accc40, 0xc000cdda00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/filters.WithVirtualClusterRequestInfo.func1(0x288ec60, 0xc000accc40, 0xc000cdd900)
        /loft/pkg/apigateway/virtualcluster/filters/requestinfo.go:90 +0x602
net/http.HandlerFunc.ServeHTTP(0xc000ee1f60, 0x288ec60, 0xc000accc40, 0xc000cdd900)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/util/serverhelper.StripLeaveSlash.func1(0x288ec60, 0xc000accc40, 0xc000cdd900)
        /loft/pkg/util/serverhelper/helper.go:26 +0xd5
net/http.HandlerFunc.ServeHTTP(0xc00093c300, 0x288ec60, 0xc000accc40, 0xc000cdd900)
        /usr/local/go/src/net/http/server.go:2042 +0x44
net/http.(*ServeMux).ServeHTTP(0xc000c5e1c0, 0x288ec60, 0xc000accc40, 0xc000cdd900)
        /usr/local/go/src/net/http/server.go:2417 +0x1ad
github.com/loft-sh/loft/pkg/apigateway/filters.WithOriginalRequestPath.func1(0x288ec60, 0xc000accc40, 0xc000cdd800)
        /loft/pkg/apigateway/filters/originalrequestpath.go:10 +0x1c7
net/http.HandlerFunc.ServeHTTP(0xc000ef9ac0, 0x288ec60, 0xc000accc40, 0xc000cdd800)
        /usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1(0x288ec60, 0xc000accc40, 0xc000cdd800)
        /loft/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0xa8
net/http.HandlerFunc.ServeHTTP(0xc000ef9ae0, 0x288ec60, 0xc000accc40, 0xc000cdd800)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.WithCORS.func1(0x288ec60, 0xc000accc40, 0xc000cdd800)
        /loft/pkg/apigateway/filters/cors.go:12 +0x2ee
net/http.HandlerFunc.ServeHTTP(0xc000ef9b20, 0x288ec60, 0xc000accc40, 0xc000cdd800)
        /usr/local/go/src/net/http/server.go:2042 +0x44
net/http.serverHandler.ServeHTTP(0xc000de30a0, 0x288ec60, 0xc000accc40, 0xc000cdd800)
        /usr/local/go/src/net/http/server.go:2843 +0xa3
net/http.(*conn).serve(0xc002ad4e60, 0x2893ba0, 0xc001a5e540)
        /usr/local/go/src/net/http/server.go:1925 +0x8ad
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2969 +0x36c
I0305 20:21:08.439713       1 deleg.go:130] start loft-cluster/vcluster-ci/ci port forwarder on port 10012
2021-03-05 20:21:08.439901 I | http: panic serving 10.64.1.107:36054: runtime error: invalid memory address or nil pointer dereference
goroutine 7901 [running]:
net/http.(*conn).serve.func1(0xc00322e6e0)
        /usr/local/go/src/net/http/server.go:1801 +0x147
panic(0x217fa00, 0x37cffe0)
        /usr/local/go/src/runtime/panic.go:975 +0x47a
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/portforward.(*cache).startForwading(0xc000c5e7c0, 0xc005c4caa0, 0x1b, 0xc000600d80, 0x28cb920, 0xc00248a9a0, 0x0, 0x271c, 0x0, 0x453514, ...)
        /loft/pkg/apigateway/virtualcluster/portforward/cache.go:138 +0x19c
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/portforward.(*cache).GetPort(0xc000c5e7c0, 0x2893c60, 0xc002bd3350, 0xc005c4ca81, 0xc, 0xc005c4caa0, 0x1b, 0x0, 0x0, 0x0)
        /loft/pkg/apigateway/virtualcluster/portforward/cache.go:121 +0x350
github.com/loft-sh/loft/pkg/apigateway/virtualcluster.(*portForwarder).GetPort(0xc000ee1ea0, 0x2893c60, 0xc002bd3350, 0xc005c4ca81, 0xc, 0xc005c4ca8e, 0xb, 0xc005c4ca9a, 0x2, 0xc002bd3350, ...)
        /loft/pkg/apigateway/virtualcluster/cache.go:45 +0x115
github.com/loft-sh/loft/pkg/apigateway/virtualcluster.Handler.func1(0x287cd20, 0xc002353990, 0xc000a2a800)
        /loft/pkg/apigateway/virtualcluster/handler.go:21 +0x109
net/http.HandlerFunc.ServeHTTP(0xc000ee1f00, 0x287cd20, 0xc002353990, 0xc000a2a800)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.ServeWithSleepMode(0x283eb60, 0xc000ee1f00, 0x287cd20, 0xc002353990, 0xc000a2a800, 0xc005c4ca81, 0xc, 0xc005c4ca8e, 0xb, 0x289e520, ...)
        /loft/pkg/apigateway/filters/sleepmode.go:80 +0x153
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/filters.WithSleepMode.func1(0x287cd20, 0xc002353990, 0xc000a2a800)
        /loft/pkg/apigateway/virtualcluster/filters/sleepmode.go:29 +0x1a5
net/http.HandlerFunc.ServeHTTP(0xc00093c270, 0x287cd20, 0xc002353990, 0xc000a2a800)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.WithMetrics.func1(0x288ec60, 0xc00253e0e0, 0xc000a2a800)
        /loft/pkg/apigateway/filters/metrics.go:201 +0x257
net/http.HandlerFunc.ServeHTTP(0xc000ee1f20, 0x288ec60, 0xc00253e0e0, 0xc000a2a800)
        /usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x288ec60, 0xc00253e0e0, 0xc000a2a700)
        /loft/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x269
net/http.HandlerFunc.ServeHTTP(0xc00093c2a0, 0x288ec60, 0xc00253e0e0, 0xc000a2a700)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/filters.WithVirtualClusterRequestInfo.func1(0x288ec60, 0xc00253e0e0, 0xc000a2a600)
        /loft/pkg/apigateway/virtualcluster/filters/requestinfo.go:90 +0x602
net/http.HandlerFunc.ServeHTTP(0xc000ee1f60, 0x288ec60, 0xc00253e0e0, 0xc000a2a600)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/util/serverhelper.StripLeaveSlash.func1(0x288ec60, 0xc00253e0e0, 0xc000a2a600)
        /loft/pkg/util/serverhelper/helper.go:26 +0xd5
net/http.HandlerFunc.ServeHTTP(0xc00093c300, 0x288ec60, 0xc00253e0e0, 0xc000a2a600)
        /usr/local/go/src/net/http/server.go:2042 +0x44
net/http.(*ServeMux).ServeHTTP(0xc000c5e1c0, 0x288ec60, 0xc00253e0e0, 0xc000a2a600)
        /usr/local/go/src/net/http/server.go:2417 +0x1ad
github.com/loft-sh/loft/pkg/apigateway/filters.WithOriginalRequestPath.func1(0x288ec60, 0xc00253e0e0, 0xc000a2a400)
        /loft/pkg/apigateway/filters/originalrequestpath.go:10 +0x1c7
net/http.HandlerFunc.ServeHTTP(0xc000ef9ac0, 0x288ec60, 0xc00253e0e0, 0xc000a2a400)
        /usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1(0x288ec60, 0xc00253e0e0, 0xc000a2a400)
        /loft/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0xa8
net/http.HandlerFunc.ServeHTTP(0xc000ef9ae0, 0x288ec60, 0xc00253e0e0, 0xc000a2a400)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.WithCORS.func1(0x288ec60, 0xc00253e0e0, 0xc000a2a400)
        /loft/pkg/apigateway/filters/cors.go:12 +0x2ee
net/http.HandlerFunc.ServeHTTP(0xc000ef9b20, 0x288ec60, 0xc00253e0e0, 0xc000a2a400)
        /usr/local/go/src/net/http/server.go:2042 +0x44
net/http.serverHandler.ServeHTTP(0xc000de30a0, 0x288ec60, 0xc00253e0e0, 0xc000a2a400)
        /usr/local/go/src/net/http/server.go:2843 +0xa3
net/http.(*conn).serve(0xc00322e6e0, 0x2893ba0, 0xc001ce3040)
        /usr/local/go/src/net/http/server.go:1925 +0x8ad
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2969 +0x36c
I0305 20:21:08.449379       1 deleg.go:130] start loft-cluster/vcluster-ci/ci port forwarder on port 10013
2021-03-05 20:21:08.449565 I | http: panic serving 10.64.1.107:36056: runtime error: invalid memory address or nil pointer dereference
goroutine 7904 [running]:
net/http.(*conn).serve.func1(0xc00259e640)
        /usr/local/go/src/net/http/server.go:1801 +0x147
panic(0x217fa00, 0x37cffe0)
        /usr/local/go/src/runtime/panic.go:975 +0x47a
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/portforward.(*cache).startForwading(0xc000c5e7c0, 0xc005c4d320, 0x1b, 0xc000600d80, 0x28cb920, 0xc0025d6000, 0x0, 0x271d, 0x0, 0x453514, ...)
        /loft/pkg/apigateway/virtualcluster/portforward/cache.go:138 +0x19c
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/portforward.(*cache).GetPort(0xc000c5e7c0, 0x2893c60, 0xc002cc16e0, 0xc005c4d301, 0xc, 0xc005c4d320, 0x1b, 0x0, 0x0, 0x0)
        /loft/pkg/apigateway/virtualcluster/portforward/cache.go:121 +0x350
github.com/loft-sh/loft/pkg/apigateway/virtualcluster.(*portForwarder).GetPort(0xc000ee1ea0, 0x2893c60, 0xc002cc16e0, 0xc005c4d301, 0xc, 0xc005c4d30e, 0xb, 0xc005c4d31a, 0x2, 0xc002cc16e0, ...)
        /loft/pkg/apigateway/virtualcluster/cache.go:45 +0x115
github.com/loft-sh/loft/pkg/apigateway/virtualcluster.Handler.func1(0x287cd20, 0xc002353b68, 0xc000a2b700)
        /loft/pkg/apigateway/virtualcluster/handler.go:21 +0x109
net/http.HandlerFunc.ServeHTTP(0xc000ee1f00, 0x287cd20, 0xc002353b68, 0xc000a2b700)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.ServeWithSleepMode(0x283eb60, 0xc000ee1f00, 0x287cd20, 0xc002353b68, 0xc000a2b700, 0xc005c4d301, 0xc, 0xc005c4d30e, 0xb, 0x289e520, ...)
        /loft/pkg/apigateway/filters/sleepmode.go:80 +0x153
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/filters.WithSleepMode.func1(0x287cd20, 0xc002353b68, 0xc000a2b700)
        /loft/pkg/apigateway/virtualcluster/filters/sleepmode.go:29 +0x1a5
net/http.HandlerFunc.ServeHTTP(0xc00093c270, 0x287cd20, 0xc002353b68, 0xc000a2b700)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.WithMetrics.func1(0x288ec60, 0xc00253e1c0, 0xc000a2b700)
        /loft/pkg/apigateway/filters/metrics.go:201 +0x257
net/http.HandlerFunc.ServeHTTP(0xc000ee1f20, 0x288ec60, 0xc00253e1c0, 0xc000a2b700)
        /usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x288ec60, 0xc00253e1c0, 0xc000a2b600)
        /loft/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x269
net/http.HandlerFunc.ServeHTTP(0xc00093c2a0, 0x288ec60, 0xc00253e1c0, 0xc000a2b600)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/filters.WithVirtualClusterRequestInfo.func1(0x288ec60, 0xc00253e1c0, 0xc000a2b500)
        /loft/pkg/apigateway/virtualcluster/filters/requestinfo.go:90 +0x602
net/http.HandlerFunc.ServeHTTP(0xc000ee1f60, 0x288ec60, 0xc00253e1c0, 0xc000a2b500)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/util/serverhelper.StripLeaveSlash.func1(0x288ec60, 0xc00253e1c0, 0xc000a2b500)
        /loft/pkg/util/serverhelper/helper.go:26 +0xd5
net/http.HandlerFunc.ServeHTTP(0xc00093c300, 0x288ec60, 0xc00253e1c0, 0xc000a2b500)
        /usr/local/go/src/net/http/server.go:2042 +0x44
net/http.(*ServeMux).ServeHTTP(0xc000c5e1c0, 0x288ec60, 0xc00253e1c0, 0xc000a2b500)
        /usr/local/go/src/net/http/server.go:2417 +0x1ad
github.com/loft-sh/loft/pkg/apigateway/filters.WithOriginalRequestPath.func1(0x288ec60, 0xc00253e1c0, 0xc000a2b300)
        /loft/pkg/apigateway/filters/originalrequestpath.go:10 +0x1c7
net/http.HandlerFunc.ServeHTTP(0xc000ef9ac0, 0x288ec60, 0xc00253e1c0, 0xc000a2b300)
        /usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1(0x288ec60, 0xc00253e1c0, 0xc000a2b300)
        /loft/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0xa8
net/http.HandlerFunc.ServeHTTP(0xc000ef9ae0, 0x288ec60, 0xc00253e1c0, 0xc000a2b300)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.WithCORS.func1(0x288ec60, 0xc00253e1c0, 0xc000a2b300)
        /loft/pkg/apigateway/filters/cors.go:12 +0x2ee
net/http.HandlerFunc.ServeHTTP(0xc000ef9b20, 0x288ec60, 0xc00253e1c0, 0xc000a2b300)
        /usr/local/go/src/net/http/server.go:2042 +0x44
net/http.serverHandler.ServeHTTP(0xc000de30a0, 0x288ec60, 0xc00253e1c0, 0xc000a2b300)
        /usr/local/go/src/net/http/server.go:2843 +0xa3
net/http.(*conn).serve(0xc00259e640, 0x2893ba0, 0xc00148af00)
        /usr/local/go/src/net/http/server.go:1925 +0x8ad
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2969 +0x36c
I0305 20:21:08.458342       1 deleg.go:130] start loft-cluster/vcluster-ci/ci port forwarder on port 10014
2021-03-05 20:21:08.459028 I | http: panic serving 10.64.1.107:36058: runtime error: invalid memory address or nil pointer dereference
goroutine 7939 [running]:
net/http.(*conn).serve.func1(0xc0026ae3c0)
        /usr/local/go/src/net/http/server.go:1801 +0x147
panic(0x217fa00, 0x37cffe0)
        /usr/local/go/src/runtime/panic.go:975 +0x47a
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/portforward.(*cache).startForwading(0xc000c5e7c0, 0xc005c4d800, 0x1b, 0xc000600d80, 0x28cb920, 0xc0026b4c60, 0x0, 0x271e, 0x0, 0x453514, ...)
        /loft/pkg/apigateway/virtualcluster/portforward/cache.go:138 +0x19c
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/portforward.(*cache).GetPort(0xc000c5e7c0, 0x2893c60, 0xc0027696b0, 0xc005c4d7e1, 0xc, 0xc005c4d800, 0x1b, 0x0, 0x0, 0x0)
        /loft/pkg/apigateway/virtualcluster/portforward/cache.go:121 +0x350
github.com/loft-sh/loft/pkg/apigateway/virtualcluster.(*portForwarder).GetPort(0xc000ee1ea0, 0x2893c60, 0xc0027696b0, 0xc005c4d7e1, 0xc, 0xc005c4d7ee, 0xb, 0xc005c4d7fa, 0x2, 0xc0027696b0, ...)
        /loft/pkg/apigateway/virtualcluster/cache.go:45 +0x115
github.com/loft-sh/loft/pkg/apigateway/virtualcluster.Handler.func1(0x287cd20, 0xc002353cf0, 0xc0026c2000)
        /loft/pkg/apigateway/virtualcluster/handler.go:21 +0x109
net/http.HandlerFunc.ServeHTTP(0xc000ee1f00, 0x287cd20, 0xc002353cf0, 0xc0026c2000)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.ServeWithSleepMode(0x283eb60, 0xc000ee1f00, 0x287cd20, 0xc002353cf0, 0xc0026c2000, 0xc005c4d7e1, 0xc, 0xc005c4d7ee, 0xb, 0x289e520, ...)
        /loft/pkg/apigateway/filters/sleepmode.go:80 +0x153
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/filters.WithSleepMode.func1(0x287cd20, 0xc002353cf0, 0xc0026c2000)
        /loft/pkg/apigateway/virtualcluster/filters/sleepmode.go:29 +0x1a5
net/http.HandlerFunc.ServeHTTP(0xc00093c270, 0x287cd20, 0xc002353cf0, 0xc0026c2000)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.WithMetrics.func1(0x288ec60, 0xc00253e2a0, 0xc0026c2000)
        /loft/pkg/apigateway/filters/metrics.go:201 +0x257
net/http.HandlerFunc.ServeHTTP(0xc000ee1f20, 0x288ec60, 0xc00253e2a0, 0xc0026c2000)
        /usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x288ec60, 0xc00253e2a0, 0xc000a2bf00)
        /loft/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x269
net/http.HandlerFunc.ServeHTTP(0xc00093c2a0, 0x288ec60, 0xc00253e2a0, 0xc000a2bf00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/virtualcluster/filters.WithVirtualClusterRequestInfo.func1(0x288ec60, 0xc00253e2a0, 0xc000a2be00)
        /loft/pkg/apigateway/virtualcluster/filters/requestinfo.go:90 +0x602
net/http.HandlerFunc.ServeHTTP(0xc000ee1f60, 0x288ec60, 0xc00253e2a0, 0xc000a2be00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/util/serverhelper.StripLeaveSlash.func1(0x288ec60, 0xc00253e2a0, 0xc000a2be00)
        /loft/pkg/util/serverhelper/helper.go:26 +0xd5
net/http.HandlerFunc.ServeHTTP(0xc00093c300, 0x288ec60, 0xc00253e2a0, 0xc000a2be00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
net/http.(*ServeMux).ServeHTTP(0xc000c5e1c0, 0x288ec60, 0xc00253e2a0, 0xc000a2be00)
        /usr/local/go/src/net/http/server.go:2417 +0x1ad
github.com/loft-sh/loft/pkg/apigateway/filters.WithOriginalRequestPath.func1(0x288ec60, 0xc00253e2a0, 0xc000a2bd00)
        /loft/pkg/apigateway/filters/originalrequestpath.go:10 +0x1c7
net/http.HandlerFunc.ServeHTTP(0xc000ef9ac0, 0x288ec60, 0xc00253e2a0, 0xc000a2bd00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1(0x288ec60, 0xc00253e2a0, 0xc000a2bd00)
        /loft/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0xa8
net/http.HandlerFunc.ServeHTTP(0xc000ef9ae0, 0x288ec60, 0xc00253e2a0, 0xc000a2bd00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
github.com/loft-sh/loft/pkg/apigateway/filters.WithCORS.func1(0x288ec60, 0xc00253e2a0, 0xc000a2bd00)
        /loft/pkg/apigateway/filters/cors.go:12 +0x2ee
net/http.HandlerFunc.ServeHTTP(0xc000ef9b20, 0x288ec60, 0xc00253e2a0, 0xc000a2bd00)
        /usr/local/go/src/net/http/server.go:2042 +0x44
net/http.serverHandler.ServeHTTP(0xc000de30a0, 0x288ec60, 0xc00253e2a0, 0xc000a2bd00)
        /usr/local/go/src/net/http/server.go:2843 +0xa3
net/http.(*conn).serve(0xc0026ae3c0, 0x2893ba0, 0xc000c71e80)
        /usr/local/go/src/net/http/server.go:1925 +0x8ad
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2969 +0x36c

feature request: add additional helm repos to apps install

Currently, you can only install helm charts from the public Helm Hub repo. While I really like this feature, I have a few enterprise applications that are not in Helm Hub, for example HashiCorp Vault. It would be beneficial to be able to add additional helm repos to install charts from.

Expectation:

  • The ability to add additional helm repos to monitor and install apps from

please release ARM64 images

Hi,

Thanks so much for making loft, it's very exciting!

I'd love to be able to run this on an ARM64 only cluster

Putting a space to sleep is not guaranteed and there is no feedback about it

Context
Actively monitoring a vCluster using k9s.

Issue
When trying to put the space in which the vCluster lives to sleep, the request is silently accepted (State saved in cluster loft-cluster, or exit 0 on the CLI), but the space is woken up right after (hit refresh in the browser and the status is Active; same with loft list spaces).

Expected
The space to be in the Sleeping status, or at least some feedback in the UI/CLI.

Thanks!

Edit
Worth noting that sleeping does work as expected when not using k9s...

ARM64 support

We have clusters running on ARM64. When we try to run loft, we get the error standard_init_linux.go:228: exec user process caused: exec format error, which means the loft image was built for x86 only.

Any plans to provide an ARM64 image?

Can't run 1.7.0 loft-darwin-amd64

I get the macOS error "loft-darwin-amd64 is damaged and can't be opened. You should move it to the trash." I didn't have this problem with 1.7.0-beta1 or 1.6.2.
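A common workaround for this macOS Gatekeeper message on downloaded binaries, offered here as a hedged suggestion rather than a confirmed cause, is to clear the quarantine attribute:

# remove the quarantine flag macOS attaches to downloaded files, then mark executable
xattr -d com.apple.quarantine ./loft-darwin-amd64
chmod +x ./loft-darwin-amd64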

Better nodeSelector support

I have a cluster with mixed nodes running both Linux and Windows. I hit an issue where the loft and kiosk pods are scheduled on the Windows nodes.

While hacking the loft pod into deploying only on Linux nodes is quite easy, it's unclear how to make the kiosk pods do the same.
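Until the charts expose a nodeSelector value for every component, one hedged workaround is to patch the workloads onto Linux nodes using the standard kubernetes.io/os label; the deployment and namespace names below are assumptions about what the charts install, so check kubectl get deploy -A first:

# assumes deployments named "loft" (namespace loft) and "kiosk" (namespace kiosk)
kubectl -n loft patch deployment loft --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/os":"linux"}}}}}'
kubectl -n kiosk patch deployment kiosk --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/os":"linux"}}}}}'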

CLI "loft use" Help text wrongly uses "Create ....." explanation

➜ .loft loft use

The lines below should read "Activates a kube context for the given ...":

#######################################################
###################### loft use #######################
#######################################################

Usage:
loft use [command]

Available Commands:
cluster Creates a kube context for the given cluster
space Creates a kube context for the given space
vcluster Creates a kube context for the given virtual cluster

apps installed from custom helm repo is not getting version updates

When an app is installed from a custom helm repo URL, it does not get chart version updates.

for example:

helm search repo spotinst
NAME                                           	CHART VERSION	APP VERSION	DESCRIPTION
spotinst/spotinst-kubernetes-cluster-controller	1.0.76       	1.0.67     	A Helm chart for Spotinst Kubernetes cluster co...

You can see here that the latest chart version is 1.0.76; however, Loft is not showing it as outdated nor offering it as a selection in the dropdown menu.

Environment variables referencing other environment variables are not expanded while using loft

Hello,

As per Kubernetes documentation, it is possible to define interdependent environment variables

https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/

Based on this, we have a helm chart in which the service port is exposed as a generic env var using something like the following:

MY_SERVICE_PORT: $(MY_CHART_MY_SERVICE_PORT)

This works fine when I install via helm directly on the cluster.

However, when I use vcluster, the variable expansion/substitution does not happen and the value of MY_SERVICE_PORT is the literal string $(MY_CHART_MY_SERVICE_PORT).

Using vCluster (not working; MY_SERVICE_PORT is not set properly):

$ env | grep PROXY_API_SERVICE_PORT
MY_SERVICE_PORT=$(MY_CHART_MY_SERVICE_PORT)
MY_CHART_MY_SERVICE_PORT=8001

Direct installation on the cluster (MY_SERVICE_PORT is set properly):

$ env | grep PROXY_API_SERVICE_PORT
MY_SERVICE_PORT=8001
MY_CHART_MY_SERVICE_PORT=8001
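For context, the Kubernetes behaviour being relied on looks roughly like the sketch below: $(VAR) references in env values are expanded when the container environment is built, and only variables defined earlier in the same env list can be referenced. Applying this to a plain cluster prints MY_SERVICE_PORT=8001, matching the "direct installation" case above.

apiVersion: v1
kind: Pod
metadata:
  name: dependent-env-demo
spec:
  containers:
    - name: demo
      image: busybox:1.28
      command: ["sh", "-c", "env | grep MY_ && sleep 3600"]
      env:
        - name: MY_CHART_MY_SERVICE_PORT        # must be defined first
          value: "8001"
        - name: MY_SERVICE_PORT
          value: "$(MY_CHART_MY_SERVICE_PORT)"  # expanded to 8001 at container start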

loft template to terminate port-forwarding after specificed time

Issue
While the dev command is super useful, it falls short when a user has left a devspace dev session open, as the port forwarding keeps the space active and never lets it sleep or be deleted.

Expected
Add a template option that puts a time limit on port forwarding, so that a space can sleep appropriately and eventually be deleted.

feature request: ability to filter clusters/vclusters

With many clusters (and possibly vclusters), the list of spaces can be somewhat overwhelming when you only want to look at a specific cluster. It would be beneficial to hide the unwanted clusters from the space view, maybe with a dropdown of checkboxes to select which clusters to include.

Error installing loft locally

Tried both Minikube and Kind on Mac.

[info]   This will install loft without an externally reachable URL and instead use port-forwarding to connect to loft

[info]   Executing command: helm install loft loft --repository-config  --repo https://charts.devspace.sh/ --kube-context minikube --namespace loft --set certIssuer.create=false --set ingress.enabled=false --set cluster.connect.local=true --set admin.password=... --set admin.email=.... --wait

[fatal]  Error during helm command: Error: validation: chart.metadata is required
 (exit status 1)

Versions:

helm v3.1.2
Kind v0.8.1 go1.14.2 darwin/amd64
Minikube v1.11.0 on Darwin 10.15.7 that installs Kubernetes v1.18.3 on Docker 19.03.8
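The "chart.metadata is required" error usually indicates helm loaded an empty or invalid chart for the given reference, possibly related to the empty --repository-config value visible in the logged command. A workaround sketch, reusing the repo URL and values from the log above (assuming the chart is still published there), is to add the repo explicitly and install from it:

kubectl create namespace loft
helm repo add loft https://charts.devspace.sh/
helm repo update
helm install loft loft/loft --namespace loft \
  --set certIssuer.create=false --set ingress.enabled=false \
  --set cluster.connect.local=true \
  --set admin.password=... --set admin.email=... --wait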

installing sumologic in loft times out with an error

When installing the SumoLogic helm chart in the loft UI, I get a timeout error even though the helm install completes successfully. The error that is displayed is:

Error: Timeout: request did not complete within requested timeout 34s (Timeout)

It appears that the SumoLogic helm chart first runs a Job, then installs all the other pods, and then removes the Job at the end. It's unclear whether that initial Job is what is taking too long.

Connection to the server localhost:9898 was refused

Version: 1.10
System: Linux (amd64)

Issue:
Loft fails to connect to the local daemon and instead throws the error:

"connection to the server localhost:9898 was refused"

Steps to reproduce

  1. Install

curl -s -L "https://github.com/loft-sh/loft/releases/latest" | sed -nE 's!.*"([^"]*loft-linux-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o loft && chmod +x loft;
sudo mv loft /usr/local/bin;

  2. Deploy Loft
  • loft start
  • Select remote
  • Select port-forwarding
  3. Login (in a second terminal)

loft login --insecure https://localhost:9898

  4. Add some stuff
  • loft create vcluster my-dev-stack-1
  5. Close the session
  • Hit Ctrl-C in the terminal running the port-forwarding

Result:

  • running loft start again aborts with the error above
  • login times out
  • Kubectl must be reset

This behavior deviates starkly from version 1 and must stop. Either document how to terminate an active port-forwarding session without breaking loft, or revert to the previous behavior: when the active port-forwarding terminal was closed, the kubectl config was restored to its initial state so kubectl and loft could be run again without issues.
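As an interim workaround sketch (the context name is whatever was active before running loft start), the kubeconfig can be switched back by hand after the port-forwarding session dies:

# list contexts and switch back to the one used before `loft start`
kubectl config get-contexts
kubectl config use-context <previous-context-name>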

loft space create CLI timeout creates space, but doesn't give the user access to it

1.7.0 For both the CLI and the Helm Chart

I'm having an issue as a user with the loft-space-user-default cluster role: I'm able to successfully create a space using loft create space <spacename>, but it takes longer than the default 30-second timeout, so the command errors:
[fatal] create space: Timeout: request did not complete within requested timeout 34s

I can see in the UI with administrator privileges that the space exists and that the non-privileged user is listed as the owner of the space, but when using loft list spaces that particular space doesn't show up.

Loft seg-faults after update

Updated loft:

sudo loft upgrade
[sudo] password for withinboredom:
[done] √ Successfully updated to version 2.0.3
[info]   Release note:


### Changes
- **ui**: New dock functionality to view multiple logs and terminal instances
![image](https://user-images.githubusercontent.com/10119814/145376556-fa883e52-2d15-4e8e-b791-99570ed7266e.png)
- **ui**: Fixed an issue where virtual cluster templates would route to space templates
- **api**: Integrated SAML directly into Loft
- **api**: Fixed an issue where Loft would disable access keys automatically
- **api**: Fixed an issue where Loft would hide failed clusters
- **api**: Update k8s dependencies to v1.23

Now segfaults on any command:

loft create vcluster test --template development-cluster
Segmentation fault
withinboredom:~/code/swytch-placer$ loft create vcluster --template development-cluster test
Segmentation fault
withinboredom:~/code/swytch-placer$ loft create vcluster --help
Segmentation fault
withinboredom:~/code/swytch-placer$ loft --help
Segmentation fault

Here's the result from dmesg:

[60324.445231] potentially unexpected fatal signal 11.
[60324.445234] CPU: 4 PID: 9049 Comm: loft Not tainted 5.10.60.1-microsoft-standard-WSL2 #1
[60324.445239] RIP: 0033:0x7f8c7311f2fb
[60324.445242] Code: Unable to access opcode bytes at RIP 0x7f8c7311f2d1.
[60324.445243] RSP: 002b:00007fff83e8d348 EFLAGS: 00000246 ORIG_RAX: 000000000000003b
[60324.445246] RAX: fffffffffffffff2 RBX: 0000561e44d85d60 RCX: 00007f8c7311f2fb
[60324.445248] RDX: 0000561e450bfe80 RSI: 0000561e44d7b8a0 RDI: 0000561e44e078a0
[60324.445249] RBP: 0000561e44e078a0 R08: 0000561e44d7b8a0 R09: 0000000000000000
[60324.445251] R10: 0000000000000008 R11: 0000000000000246 R12: 00000000ffffffff
[60324.445253] R13: 0000561e44d7b8a0 R14: 0000561e450bfe80 R15: 0000561e44d7ccb0
[60324.445254] FS:  0000000000000000 GS:  0000000000000000

I also pulled the latest release directly (https://github.com/loft-sh/loft/releases/download/v2.0.3/loft-linux-amd64) and it seg-faults.

2.0.3-beta.15 doesn't exhibit this behavior. Might be a bad build?

Documentation: loft requirement RBAC to be enabled

RBAC should be listed as a loft.sh requirement.
I had a microk8s cluster where RBAC was not enabled by default, and the UI errors were not meaningful.

log on kiosk:
E0121 18:49:34.187716 1 controller.go:498] unable to retrieve the complete list of server APIs: management.loft.sh/v1: forbidden: User "system:serviceaccount:loft:serviceaccount" cannot get path "/apis/management.loft.sh/v1": RBAC: ClusterRole.rbac.authorization.k8s.io "cluster-admin" not found

After enabling RBAC, which creates the cluster-admin ClusterRole, everything worked as expected.
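For MicroK8s specifically, the fix boils down to enabling the RBAC addon and verifying that the cluster-admin ClusterRole now exists:

microk8s enable rbac
microk8s kubectl get clusterrole cluster-admin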

feature request: option to delete space after sleeping too long

It would be nice to be able to set a maximum sleep duration for a space before it is deleted.

This feature would allow easier maintenance for cases where users create a space and then leave it or forget about it. It would be nice if you could say, for example, "after 75% sleep time, delete the space", or something similar.

Loft cli installation instructions error

The loft CLI installation instructions for Linux do not seem to work correctly, perhaps because https://github.com/loft-sh/loft/releases/latest resolves to https://github.com/loft-sh/loft/releases/tag/v1.2.0-beta.0, which does not have the expected files.

I think marking v1.2.0 as the latest release will resolve this issue.

curl -s -L "https://github.com/loft-sh/loft/releases/latest" | sed -nE 's!.*"([^"]*loft-darwin-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o loft && chmod +x loft;
sudo mv loft /usr/local/bin;
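Until the release marking is fixed, a workaround sketch is to download a pinned release tag directly instead of resolving /releases/latest (the tag below is only an example; pick whichever stable release you need):

# pin an explicit release tag instead of resolving /releases/latest
curl -L -o loft "https://github.com/loft-sh/loft/releases/download/v1.2.0/loft-linux-amd64" && chmod +x loft
sudo mv loft /usr/local/bin/loft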

docker image with devspace + loft plugin

There is a devspace docker image available on Docker Hub; however, it does not include the loft plugin, which we use for our automation.

It would be very helpful to have a docker image with the loft plugin preinstalled, so we can just use the latest image and not have to roll our own for every release.
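Until an official image exists, here is a rough sketch of rolling your own image bundling the devspace and loft CLIs (the base image, release tags, and asset URLs are assumptions and may need adjusting; installing the devspace loft plugin itself is omitted because the exact plugin source depends on your setup):

# sketch only -- base image, tags, and asset URLs are assumptions
cat > Dockerfile <<'EOF'
FROM alpine:3.18
RUN apk add --no-cache curl ca-certificates
RUN curl -L -o /usr/local/bin/devspace "https://github.com/devspace-sh/devspace/releases/latest/download/devspace-linux-amd64" \
 && curl -L -o /usr/local/bin/loft "https://github.com/loft-sh/loft/releases/download/v2.0.3/loft-linux-amd64" \
 && chmod +x /usr/local/bin/devspace /usr/local/bin/loft
EOF
docker build -t devspace-loft:local .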

Upgrading to 0.3.7 fails

When I tried to upgrade to the latest version of loft (0.3.7), I got the following error:

Error: UPGRADE FAILED: template: loft/templates/deployment.yaml:32:22: executing "loft/templates/deployment.yaml" at <.Values.livenessProbe.enabled>: nil pointer evaluating interface {}.enabled

The following command was used:
helm upgrade -n loft loft loft --repo https://charts.devspace.sh --reuse-values --version 0.3.7
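The nil-pointer error suggests the 0.3.7 template reads .Values.livenessProbe while --reuse-values carries over an older values set that lacks that key. A workaround sketch (the value name is taken from the error message, not from the chart docs) is to supply it explicitly:

# value name inferred from the error message above; adjust if the chart expects more
helm upgrade -n loft loft loft --repo https://charts.devspace.sh --reuse-values --version 0.3.7 \
  --set livenessProbe.enabled=true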
