
Bulk port forwarding Kubernetes services for local development.

Home Page: https://imti.co/kubernetes-port-forwarding/

License: Apache License 2.0

kubernetes devtools developer-tools networking port-forwarding port-forward k8s kubernetes-namespace devops-tools devops

kubefwd's Introduction

English|中文

Kubernetes port forwarding for local development.

NOTE: Accepting pull requests for bug fixes, tests, and documentation only.

kubefwd - kubernetes bulk port forwarding

Build Status GitHub license Go Report Card GitHub release

kubefwd (Kube Forward)

Read Kubernetes Port Forwarding for Local Development for background and a detailed guide to kubefwd. Follow Craig Johnston on Twitter for project updates.

kubefwd is a command line utility built to port forward multiple services within one or more namespaces on one or more Kubernetes clusters. kubefwd uses the same port exposed by the service and forwards it from a loopback IP address on your local workstation. kubefwd temporarily adds domain entries to your /etc/hosts file for the service names it forwards.

When working on our local workstations, my team and I often build applications that access services through their service names and ports within a Kubernetes namespace. kubefwd allows us to develop locally with services available as they would be in the cluster.
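For example, forwarding the namespace the-project could add entries such as the following to /etc/hosts (an illustrative sketch; the service names are made up and the loopback IPs kubefwd assigns may differ between runs):

127.1.27.1 my-service my-service.the-project my-service.the-project.svc.cluster.local
127.1.27.2 my-db my-db.the-project my-db.the-project.svc.cluster.local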

kubefwd - Kubernetes port forward

kubefwd - Kubernetes Port Forward Diagram

OS

Tested directly on macOS and in Linux-based Docker containers.

macOS Install / Update

kubefwd assumes you have kubectl installed and configured with access to a Kubernetes cluster. kubefwd uses the current kubectl context. kubefwd does not invoke kubectl itself; however, it reads the same configuration to access your Kubernetes cluster.

Ensure you have a context by running:

kubectl config current-context

If you are running macOS and use Homebrew, you can install kubefwd directly from the txn2 tap:

brew install txn2/tap/kubefwd

To upgrade:

brew upgrade kubefwd

Windows Install / Update

scoop install kubefwd

To upgrade:

scoop update kubefwd

Docker

Forward all services from the namespace the-project to a Docker container named the-project:

docker run -it --rm --privileged --name the-project \
    -v "$(echo $HOME)/.kube/":/root/.kube/ \
    txn2/kubefwd services -n the-project

Execute a curl call to an Elasticsearch service in your Kubernetes cluster:

docker exec the-project curl -s elasticsearch:9200

Alternative Installs (tar.gz, RPM, deb)

Check out the releases section on GitHub for alternative binaries.

Contribute

Fork kubefwd and build a custom version. Accepting pull requests for bug fixes, tests, stability and compatibility enhancements, and documentation only.

Usage

Forward all services for the namespace the-project. Kubefwd finds the first Pod associated with each Kubernetes service found in the Namespace and port forwards it based on the Service spec to a local IP address and port. A domain name is added to your /etc/hosts file pointing to the local IP.

Update

Forwarding headless Services is now supported; kubefwd forwards all Pods backing a headless Service. Namespace-level service monitoring is also supported: when a Service is created or deleted in a watched namespace, kubefwd automatically starts or stops forwarding it. Pod-level monitoring is supported as well: when a forwarded Pod is deleted (for example, during a Deployment update), forwarding for the affected Service is automatically restarted.

sudo kubefwd svc -n the-project

Forward all services in the namespace the-project labeled system: wx:

sudo kubefwd svc -l system=wx -n the-project

Forward a single service named my-service in the namespace the-project:

sudo kubefwd svc -n the-project -f metadata.name=my-service

Forward more than one service using the in clause:

sudo kubefwd svc -l "app in (app1, app2)"

Help

$ kubefwd svc --help

INFO[00:00:48]  _          _           __             _     
INFO[00:00:48] | | ___   _| |__   ___ / _|_      ____| |    
INFO[00:00:48] | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |    
INFO[00:00:48] |   <| |_| | |_) |  __/  _|\ V  V / (_| |    
INFO[00:00:48] |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|    
INFO[00:00:48]                                              
INFO[00:00:48] Version 0.0.0                                
INFO[00:00:48] https://github.com/txn2/kubefwd              
INFO[00:00:48]                                              
Forward multiple Kubernetes services from one or more namespaces. Filter services with selector.

Usage:
  kubefwd services [flags]

Aliases:
  services, svcs, svc

Examples:
  kubefwd svc -n the-project
  kubefwd svc -n the-project -l app=wx,component=api
  kubefwd svc -n default -l "app in (ws, api)"
  kubefwd svc -n default -n the-project
  kubefwd svc -n default -d internal.example.com
  kubefwd svc -n the-project -x prod-cluster
  kubefwd svc -n the-project -m 80:8080 -m 443:1443
  kubefwd svc -n the-project -z path/to/conf.yml
  kubefwd svc -n the-project -r svc.ns:127.3.3.1
  kubefwd svc --all-namespaces

Flags:
  -A, --all-namespaces          Enable --all-namespaces option like kubectl.
  -x, --context strings         specify a context to override the current context
  -d, --domain string           Append a pseudo domain name to generated host names.
  -f, --field-selector string   Field selector to filter on; supports '=', '==', and '!=' (e.g. -f metadata.name=service-name).
  -z, --fwd-conf string         Define an IP reservation configuration
  -h, --help                    help for services
  -c, --kubeconfig string       absolute path to a kubectl config file
  -m, --mapping strings         Specify a port mapping. Specify multiple mapping by duplicating this argument.
  -n, --namespace strings       Specify a namespace. Specify multiple namespaces by duplicating this argument.
  -r, --reserve strings         Specify an IP reservation. Specify multiple reservations by duplicating this argument.
  -l, --selector string         Selector (label query) to filter on; supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2).
  -v, --verbose                 Verbose output.

License

Apache License 2.0

Sponsor

Open source utility by Craig Johnston (imti blog), sponsored by Deasil Works, Inc.

Please check out my book Advanced Platform Development with Kubernetes: Enabling Data Management, the Internet of Things, Blockchain, and Machine Learning.

Book Cover - Advanced Platform Development with Kubernetes: Enabling Data Management, the Internet of Things, Blockchain, and Machine Learning

Source code from the book Advanced Platform Development with Kubernetes: Enabling Data Management, the Internet of Things, Blockchain, and Machine Learning by Craig Johnston (@cjimti) ISBN 978-1-4842-5610-7 Apress; 1st ed. edition (September, 2020)

Read my blog post Advanced Platform Development with Kubernetes for more info and background on the book.

Follow me on Twitter: @cjimti (Craig Johnston)

Please Help the Children of Ukraine

UNICEF is on the ground helping Ukraine's children. Please "like" this project by donating at https://www.unicefusa.org/.

kubefwd's People

Contributors

abirdcfly, ajones, alrs, aude, ben-st, benmathews, bittner, bradsheppard, calmkart, canoztokmak, cedrickring, chenrui333, cjimti, daveelsensohn, dependabot[bot], dobesv, flupec, gaby, hosswald, indrayam, jakereps, jmasud, loganlinn, mhindery, miles-, n-oden, ndj888, pschou, skisel, svavassori


kubefwd's Issues

kubefwd should respect my current namespace

Using kubens (kubens the-project), you quickly get used to setting your current namespace once and no longer appending -n <yournamespace> to each and every kubectl command.

It would be helpful if kubefwd respected my current namespace too: instead of defaulting to the default namespace when invoked without -n, it should default to my current namespace (the-project).
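As an interim workaround (assuming a namespace is actually set on the current context), the value can be read with kubectl and passed to kubefwd explicitly:

kubectl config view --minify --output 'jsonpath={..namespace}'
sudo kubefwd svc -n "$(kubectl config view --minify --output 'jsonpath={..namespace}')"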

Is Docker for Mac supported?

Hi,

Firstly, great tool!

Quick question, I followed this example for getting Kafka running: https://imti.co/kafka-kubernetes/. I'm using Docker For Mac (2.0.0.3). When running kubefwd, I get the following output:

[screenshot of kubefwd forwarding output]

I am successfully able to create a new Kafka topic:
/kafka-topics.sh --zookeeper kafka-zookeeper:2181 --topic test --create --partitions 1 --replication-factor 1

But when trying to post a message to the topic using the following command, I receive a timeout error:

Tried with:
/kafka-console-producer.sh --topic test --broker-list kafka-headless:9092
/kafka-console-producer.sh --topic test --broker-list kafka:9092

Error:
[2019-04-01 15:50:49,755] WARN [Producer clientId=console-producer] Connection to node 0 (/10.1.0.117:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

My hosts file looks as follows:

127.1.27.1 kafka kafka.the-project kafka.the-project.svc.cluster.local
127.1.27.2 kafka-headless kafka-headless.the-project kafka-headless.the-project.svc.cluster.local
127.1.27.3 kafka-zookeeper kafka-zookeeper.the-project kafka-zookeeper.the-project.svc.cluster.local
127.1.27.4 kafka-zookeeper-headless kafka-zookeeper-headless.the-project kafka-zookeeper-headless.the-project.svc.cluster.local

If I post from within a container, then I have no issues.

Am I missing something?

openshift incompatibility -connection refused

I can't connect with kubefwd where I can via oc and kubectl:

kubectl get services -n project -> works
sudo kubefwd services -l -n project -> dial tcp xxx.xxxx.xxxx: connect: connection refused

Maybe it's also an HTTP proxy issue?

Support helm charts

Helm is increasingly used to deploy Kubernetes resources that together represent an 'application'.

It would be cool if kubefwd became Helm aware (or had a twin, helmfwd!), whereby it would enumerate the services that are part of a Helm chart and run kubefwd on each.

(not used kubefwd yet... next... but an initial idea)

unable to do port forwarding: socat not found.

vagrant@ubuntu:~$ sudo /usr/local/bin/kubefwd services


 _          _           __             _
| | ___   _| |__   ___ / _|_      ____| |
| |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
|   <| |_| | |_) |  __/  _|\ V  V / (_| |
|_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|

Press [Ctrl-C] to stop forwarding.
Loading hosts file /etc/hosts
Original hosts backup already exists at /etc/hosts.original
Fwd 127.1.27.1:8080 as hello-minikube:8080 to pod hello-minikube-6c47c66d8-pnh66:8080
Fwd 127.1.27.2:443 as kubernetes:443 to pod hello-minikube-6c47c66d8-pnh66:8443
E1109 09:48:49.372871 10016 portforward.go:352] an error occurred forwarding 8080 -> 8080: error forwarding port 8080 to pod 1db44e20e4a1a128fa733e2f5bbc1656cdb3194fbeb4436f678aed52f70f0b3a, uid : unable to do port forwarding: socat not found.
E1109 09:48:57.673534 10016 portforward.go:352] an error occurred forwarding 8080 -> 8080: error forwarding port 8080 to pod 1db44e20e4a1a128fa733e2f5bbc1656cdb3194fbeb4436f678aed52f70f0b3a, uid : unable to do port forwarding: socat not found.
E1109 09:48:59.132190 10016 portforward.go:352] an error occurred forwarding 8080 -> 8080: error forwarding port 8080 to pod 1db44e20e4a1a128fa733e2f5bbc1656cdb3194fbeb4436f678aed52f70f0b3a, uid : unable to do port forwarding: socat not found.

there are easier ways to do this

Hey, neat tool and nice idea, but I think that for some aspects of it, like testing frontends, using Traefik might simplify things a lot: it just looks at the Host header and forwards to a common ingestion point (like all reverse proxies do). So instead of port forwarding, you give it a service name, port, and subdomain in a manifest, and then add the subdomain as 127.0.0.1 in /etc/hosts. One example is here: https://github.com/sokoow/kube-desktop/blob/master/ingress/dashboard-ingress.yaml

Let me know what you think.

Support for filtering by more than one label

Hi! First of all, thanks for releasing kubefwd. This is going to save us huge amounts of time.

Is it possible to filter by more than one label, similar to the multiple namespaces feature? I would like to port-forward only a few services in our staging namespace using the app label. Something like this:

$ sudo kubefwd services -n staging -l app=website -l app=api -l app=search

Would this be difficult to add? Would you accept a PR for this?
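One possible interim approach, assuming the services share the app label key as in the example above, is the set-based in selector syntax that kubefwd already documents:

sudo kubefwd services -n staging -l "app in (website, api, search)"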

kubefwd exits when there is a completed pod

When there are completed pods (which belong to a Job), kubefwd exits as follows:

$ sudo kubefwd svc --namespace default
2019/01/07 15:30:50  _          _           __             _
2019/01/07 15:30:50 | | ___   _| |__   ___ / _|_      ____| |
2019/01/07 15:30:50 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/01/07 15:30:50 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/01/07 15:30:50 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/01/07 15:30:50
2019/01/07 15:30:50 Version 1.4.10
2019/01/07 15:30:50 https://github.com/txn2/kubefwd
2019/01/07 15:30:50
2019/01/07 15:30:50 Press [Ctrl-C] to stop forwarding.
2019/01/07 15:30:50 'cat /etc/hosts' to see all host entries.
2019/01/07 15:30:50 Loaded hosts file /etc/hosts
2019/01/07 15:30:50 Hostfile management: Original hosts backup already exists at /etc/hosts.original
2019/01/07 15:30:50 Forwarding: kubernetes:443 to pod my-job-p7sqj:443
2019/01/07 15:30:51 ERROR: error upgrading connection: unable to upgrade connection: pod not found ("my-job-p7sqj_default")
2019/01/07 15:30:51 Stopped forwarding kubernetes in default.
2019/01/07 15:30:51 Done..

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-job-p7sqj 0/1 Completed 0 11m

Kafka unable to connect

@cjimti Saw your comments on kafka but thought it'd be best to start a new thread.

For some reason I can't seem to connect to Kafka using kubefwd. Did you use helm? If so, which chart? I'm using https://github.com/helm/charts/tree/master/incubator/kafka

➜  helm git:(master) ✗ sudo kubefwd svc -l "app in (kafka)"
Password:
2019/07/02 00:24:54  _          _           __             _
2019/07/02 00:24:54 | | ___   _| |__   ___ / _|_      ____| |
2019/07/02 00:24:54 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/07/02 00:24:54 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/07/02 00:24:54 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/07/02 00:24:54
2019/07/02 00:24:54 Version 1.8.0
2019/07/02 00:24:54 https://github.com/txn2/kubefwd
2019/07/02 00:24:54
2019/07/02 00:24:54 Press [Ctrl-C] to stop forwarding.
2019/07/02 00:24:54 'cat /etc/hosts' to see all host entries.
2019/07/02 00:24:54 Loaded hosts file /etc/hosts
2019/07/02 00:24:54 Hostfile management: Original hosts backup already exists at /Users/ken/hosts.original
2019/07/02 00:24:54 Forwarding: kafka:9092 to pod kafka-0:9092
2019/07/02 00:24:54 Forwarding: kafka-headless:9092 to pod kafka-0:9092

When I try to connect using this script in Node.js, it fails.

const { Kafka } = require('kafkajs')

const kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['kafka:9092']
})

const producer = kafka.producer()
const consumer = kafka.consumer({ groupId: 'test-group1' })

const run = async () => {
  // Producing
  await producer.connect()

    await producer.send({
      topic: 'test-topic',
      messages: [
        { value: `Hello KafkaJS user!` },
      ],
    })

  // Consuming
  await consumer.connect()
  await consumer.subscribe({ topic: 'test-topic', fromBeginning: true })

  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log({
        partition,
        offset: message.offset,
        value: message.value.toString(),
      })
    },
  })
}

run().catch(console.error)

It fails with:

{"level":"ERROR","timestamp":"2019-07-02T06:28:24.746Z","logger":"kafkajs","message":"[Producer] Connection error: getaddrinfo ENOTFOUND kafka-1.kafka-headless.default kafka-1.kafka-headless.default:9092","retryCount":0,"retryTime":247}
{ KafkaJSNumberOfRetriesExceeded
  Caused by: KafkaJSConnectionError: Connection error: getaddrinfo ENOTFOUND kafka-1.kafka-headless.default kafka-1.kafka-headless.default:9092
    at Socket.onError

Yet if I run this same script using telepresence, then it works as expected. Is there anything special I need to do in order to connect using kubefwd?

Duplicate host entries in /etc/hosts

Hello,

We've been experiencing this issue quite often.

Duplicate host entries get assigned randomly every time kubefwd is executed.

Password:
2019/02/09 00:15:54  _          _           __             _
2019/02/09 00:15:54 | | ___   _| |__   ___ / _|_      ____| |
2019/02/09 00:15:54 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/02/09 00:15:54 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/02/09 00:15:54 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/02/09 00:15:54 
2019/02/09 00:15:54 Version 1.4.10
2019/02/09 00:15:54 https://github.com/txn2/kubefwd
2019/02/09 00:15:54 
2019/02/09 00:15:54 Press [Ctrl-C] to stop forwarding.
2019/02/09 00:15:54 'cat /etc/hosts' to see all host entries.
2019/02/09 00:15:54 Hosfile error: Duplicate hostname entry for some_entry -> some_ip
2019/02/09 00:15:54 Errors loading hostfile

Support multiple running kubefwd processes

Hello,
I wondered whether you have considered supporting several independently running kubefwd processes.
I have a lot of different namespaces / projects in a kube cluster, and it would be handy to be able to independently start/stop kubefwd for a few resources.

I've seen that the IpC variable is hardcoded; would you accept a PR where it is made configurable (along with a change to the way /etc/hosts is handled)?

Only a single pod bound per service?

Hi,

let's say I install Kafka like the following:

helm install --name wd-kafka --namespace kafka -f "kafka-helm.yml" incubator/kafka

...kafka-helm.yml having the following content: replicas: 3.

Then, accordingly, I'll get a service with 3 pods:

[image: kubectl output showing the 3 wd-kafka pods]

But now when I run sudo kubefwd services -n default -n kafka I will get only the following:

Fwd 127.1.28.1:9092 as wd-kafka.kafka.svc.cluster.local:9092 to pod wd-kafka-0:kafka

This seems fishy - it seems as if the other two pods were ignored.

I think all this has to do with the low reliability Kafka-over-Kubefwd is having for me.

Can't the following happen...?

  • pod 1 goes down, but the other two are operational
  • kubefwd still points to the faulty pod.

We briefly talked about reconnects, over email. But if I'm not mistaken the issue is broader than that (although re-connects might reduce the pain): port forwarding operates at pod level, rather than at service level.

WDYT?

Thanks - Victor

IP addresses change suddenly

When I was connecting to my cluster yesterday, my Grafana dashboard had the IP 127.1.27.7.
When I connected this morning, it had 127.1.27.8.

One idea to avoid this might be some algorithm that derives the IP address from the name of the service, e.g. creating an MD5 hash of the name, deriving a numeric value from it, and taking it modulo 255 to obtain the last octet of the IP address.
The consequence would be that the IP addresses are spread over the full range.
But at least the IP would remain the same, as long as the name of the service does not change.
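A minimal Go sketch of the hashing idea described above (illustrative only, not kubefwd's actual IP allocation code; collisions between service names would still need to be handled):

package main

import (
	"crypto/md5"
	"fmt"
)

// stableLoopbackIP derives a deterministic 127.1.27.x address from a
// service name, so the address only changes if the name changes.
func stableLoopbackIP(serviceName string) string {
	sum := md5.Sum([]byte(serviceName))
	lastOctet := int(sum[0])%254 + 1 // keep the octet within 1-254
	return fmt.Sprintf("127.1.27.%d", lastOctet)
}

func main() {
	fmt.Println(stableLoopbackIP("grafana")) // same output on every run
}

Note that the help output above also documents an explicit IP reservation flag (-r svc.ns:127.3.3.1) and a reservation file (-z), which address the same pain point directly.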

Simple typo issue

Running kubefwd 1.4.3

When I shutdown kubefwd by hitting Ctrl+C, I get output like this...

^CStoped forwarding 127.1.27.9:7002 and removing spin-clouddriver-rw from hosts.
Stoped forwarding 127.1.27.5:6383 and removing play-others-redis from hosts.
Stoped forwarding 127.1.27.7:7002 and removing spin-clouddriver-ro from hosts.
...

Stoped should be spelled Stopped.

Opened a PR #18 for this

Handling incoming connections

kubefwd is amazing because it lets you work on a single service without having to deploy the whole platform on your laptop. That said, it's really rare to develop a service that only calls other services and is never called itself. How can I handle incoming connections to "my service" running locally?

Removes localhost entry and error "Something is already running on port [PORT]"

Since upgrading to 1.7.3 kubefwd removes 127.0.0.1 localhost from /etc/hosts

Typically my work flow is that I will forward my additional cluster services ie:
sudo kubefwd svc -l "app in (elasticsearch, rabbitmq, redis, mysql)"

But then I would run a microservice locally to develop on which would run on localhost:3000.

However if it removes 127.0.0.1 localhost then I will always get this error message
Something is already running on port 3000.

If I manually edit /etc/hosts and add 127.0.0.1 localhost then the error message goes away.

Any thoughts on this? Any workarounds?

Provide a base port option

When I want to forward to services in multiple clusters or namespaces, which use the same ports, e.g. port 80, kubefwd reports the following error:

w.ForwardPorts Error: unable to listen on any of the requested ports: [{80 80}]
Skipping failure.

An option to specify a port offset "--baseport=30000" and map all ports to consecutive port numbers would be great to allow working with different clusters at the same time.

  • podport1 = baseport
  • podport2 = baseport + 1
  • podport3 = baseport + 2
  • ...
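A small Go sketch of the arithmetic the proposal describes (purely illustrative; kubefwd does not currently expose a --baseport flag):

package main

import "fmt"

// assignLocalPorts maps each pod port to a consecutive local port
// starting at basePort, as the --baseport proposal suggests.
func assignLocalPorts(basePort int, podPorts []int) map[int]int {
	local := map[int]int{}
	for i, p := range podPorts {
		local[p] = basePort + i
	}
	return local
}

func main() {
	fmt.Println(assignLocalPorts(30000, []int{80, 443, 9200}))
	// map[80:30000 443:30001 9200:30002]
}

The per-service port mapping flag documented in the help output above (-m 80:8080) covers part of this use case today.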

Original /etc/hosts not restored after an error

Hi there,

I guess that after an error, the original /etc/hosts should be restored, for cleaning things up. It's the behavior I can observe if hitting Ctrl-C when Kubefwd is up and running in a healthy state.

However I found two cases where this didn't happen:

1

There was an unexpected error due to some timeout (can't provide more details sorry - no idea what exactly happened). Kubefwd didn't gracefully handle the issue and was seemingly waiting forever.

I hit Ctrl-C, and /etc/hosts wasn't restored.

2

In this other case, kubefwd did catch the issue and gracefully shut itself down. But /etc/hosts wasn't restored.

~ $ sudo kubefwd services -n default -n kafka

 _          _           __             _
| | ___   _| |__   ___ / _|_      ____| |
| |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
|   <| |_| | |_) |  __/  _|\ V  V / (_| |
|_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|

Press [Ctrl-C] to stop forwarding.
Loading hosts file /etc/hosts
Backing up your original hosts file /etc/hosts to /etc/hosts.original
Fwd 127.1.27.1:5432 as db-postgresql:5432 to pod db-postgresql-0:postgresql
Fwd 127.1.27.2:5432 as db-postgresql-headless:5432 to pod db-postgresql-0:postgresql
Fwd 127.1.27.3:443 as kubernetes:443 to pod db-postgresql-0:8443
Fwd 127.1.28.1:9092 as wd-kafka.kafka.svc.cluster.local:9092 to pod wd-kafka-0:kafka
Fwd 127.1.28.2:9092 as wd-kafka-headless.kafka.svc.cluster.local:9092 to pod wd-kafka-0:9092
Fwd 127.1.28.3:2181 as wd-kafka-zookeeper.kafka.svc.cluster.local:2181 to pod wd-kafka-zookeeper-0:client
Fwd 127.1.28.4:2181 as wd-kafka-zookeeper-headless.kafka.svc.cluster.local:2181 to pod wd-kafka-zookeeper-0:2181
Fwd 127.1.28.4:3888 as wd-kafka-zookeeper-headless.kafka.svc.cluster.local:3888 to pod wd-kafka-zookeeper-0:3888
Fwd 127.1.28.4:2888 as wd-kafka-zookeeper-headless.kafka.svc.cluster.local:2888 to pod wd-kafka-zookeeper-0:2888
E1105 17:41:08.849920   34632 portforward.go:190] lost connection to pod
E1105 17:41:08.849954   34632 portforward.go:190] lost connection to pod
E1105 17:41:08.849971   34632 portforward.go:190] lost connection to pod
E1105 17:41:08.849985   34632 portforward.go:190] lost connection to pod
E1105 17:41:08.850454   34632 portforward.go:190] lost connection to pod
E1105 17:41:08.850513   34632 portforward.go:190] lost connection to pod
E1105 17:41:08.850541   34632 portforward.go:190] lost connection to pod
E1105 17:41:08.850703   34632 portforward.go:190] lost connection to pod
E1105 17:41:08.850739   34632 portforward.go:190] lost connection to pod

Done..

Hope the report is useful!

Victor

handle headless service of statefulset pods

Hey guys, thanks for this cool tool!

I have an issue where a headless service exposing a StatefulSet is not getting mapped well.
It would be great if you could provide special-case handling for this scenario (it seems that everyone is struggling with this one).

➜  ~ sudo kubefwd svc -l app=kafka
2019/03/31 17:51:04  _          _           __             _
2019/03/31 17:51:04 | | ___   _| |__   ___ / _|_      ____| |
2019/03/31 17:51:04 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/03/31 17:51:04 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/03/31 17:51:04 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/03/31 17:51:04
2019/03/31 17:51:04 Version 1.8.0
2019/03/31 17:51:04 https://github.com/txn2/kubefwd
2019/03/31 17:51:04
2019/03/31 17:51:04 Press [Ctrl-C] to stop forwarding.
2019/03/31 17:51:04 'cat /etc/hosts' to see all host entries.
2019/03/31 17:51:04 Loaded hosts file /etc/hosts
2019/03/31 17:51:04 Hostfile management: Original hosts backup already exists at /Users/ram/hosts.original
2019/03/31 17:51:07 Forwarding: kafka-main-headless:9092 to pod kafka-main-0:9092

results in:

127.1.27.2  kafka-main-headless kafka-main-headless.default kafka-main-headless.default.svc.cluster.local

but since there are 3 pods, what's really needed is:

127.1.27.2  kafka-main-0.kafka-main-headless.default
127.1.27.3  kafka-main-1.kafka-main-headless.default
127.1.27.4  kafka-main-2.kafka-main-headless.default

pods:

kafka-main-0  2/2  Running   0  12d   100.123.155.228   ip-10-20-64-220.ec2.internal   <none>
kafka-main-1  2/2  Running   0  12d   100.113.247.159   ip-10-20-65-209.ec2.internal   <none>
kafka-main-2  2/2  Running   0  12d   100.124.146.8     ip-10-20-71-138.ec2.internal   <none>

The issue is that there are pod names under the headless service, and these don't get mapped.
It seems not too hard to follow the labels, get the right pods (name and IP), and map them correctly.

Even if just a kubefwd pod command were available, there would be something to work with.

StatefulSet+Headless Service+Selectors

In #52 we changed the behavior so that it matches the "Without Selector" section here, but broke it for services that do have a selector.

For services with a selector there should be a DNS entry with the name of the service pointing to all pods that match (i.e. multiple A records), which in our case translates to additional entries in /etc/hosts, of course.

Binding issues when port is in use

Hi,

kubefwd reports ERROR: unable to listen on any of the requested ports: [{80 80}] if the port is already in use and bound to 0.0.0.0:80.

This is the output of netstat -a before running kubefwd:

Active Connections

  Proto  Local Address          Foreign Address        State
  TCP    0.0.0.0:80             DESKTOP-5VE6KMP:0      LISTENING
...

The listening service is the nginx ingress service. Running kubefwd services now returns the unable to listen error.

If I drop the nginx ingress service, run kubefwd, and then create the service again, I get this output from netstat -a:

Active Connections

  Proto  Local Address          Foreign Address        State
  TCP    0.0.0.0:80             DESKTOP-5VE6KMP:0      LISTENING
...
  TCP    127.1.27.1:80          DESKTOP-5VE6KMP:0      LISTENING
...

This is the desired result, and it would be awesome if kubefwd were somehow able to bind to the custom IPs regardless of whether the port is already publicly bound.

kubefwd cannot find root's kubeconfig

I'm not sure whether this is more to do with my environment, but when I use Kubefwd it exits as it is looking for /root/.kube/config which obviously won't exist:

$ sudo kubefwd services --namespace shw
[sudo] password for shw: 

 _          _           __             _
| | ___   _| |__   ___ / _|_      ____| |
| |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
|   <| |_| | |_) |  __/  _|\ V  V / (_| |
|_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|

Press [Ctrl-C] to stop forwarding.
Loading hosts file /etc/hosts
Backing up your original hosts file /etc/hosts to /etc/hosts.original
panic: stat /root/.kube/config: no such file or directory

goroutine 1 [running]:
github.com/txn2/kubefwd/pkg/utils.K8sConfig(0x1bc4f60, 0x0)
	/Users/cjimti/go/src/github.com/txn2/kubefwd/pkg/utils/utils.go:83 +0x19b
github.com/txn2/kubefwd/cmd/kubefwd/services.glob..func1(0x1bc4f60, 0xc0002e8720, 0x0, 0x2)
	/Users/cjimti/go/src/github.com/txn2/kubefwd/cmd/kubefwd/services/services.go:78 +0x138
github.com/txn2/kubefwd/vendor/github.com/spf13/cobra.(*Command).execute(0x1bc4f60, 0xc0002e8700, 0x2, 0x2, 0x1bc4f60, 0xc0002e8700)
	/Users/cjimti/go/src/github.com/txn2/kubefwd/vendor/github.com/spf13/cobra/command.go:766 +0x2cc
github.com/txn2/kubefwd/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0002ecc80, 0xc0002ecc80, 0xc0002ecf00, 0x1bc4f60)
	/Users/cjimti/go/src/github.com/txn2/kubefwd/vendor/github.com/spf13/cobra/command.go:852 +0x2fd
github.com/txn2/kubefwd/vendor/github.com/spf13/cobra.(*Command).Execute(0xc0002ecc80, 0x1, 0x1)
	/Users/cjimti/go/src/github.com/txn2/kubefwd/vendor/github.com/spf13/cobra/command.go:800 +0x2b
main.main()
	/Users/cjimti/go/src/github.com/txn2/kubefwd/cmd/kubefwd/kubefwd.go:64 +0x67

If I need to make changes to my sudoers config then that's not a problem.
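A common workaround, assuming your kubeconfig lives in your own home directory, is to point kubefwd at it explicitly with the documented -c/--kubeconfig flag, since sudo resolves the home directory to /root ($HOME below is expanded by your shell before sudo runs):

sudo kubefwd services --namespace shw -c "$HOME/.kube/config"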

Possibly warn or error on invalid namespace provided?

Recently ran into an issue where I fat-fingered a namespace, in a list of many, so the endpoint didn't exist when the code tried to hit it (which I missed because the number of services being forwarded pushed the logs beyond my terminal window).

After some digging I found that it appears k8s corev1 clientset doesn't alert the user, or error when non-existent namespaces are provided (with valid reason I assume, as one of the functions is to Create that namespace). Would it be worth adding validation on kubefwd to either warn or error if a user provides a namespace that results in no listed services?

It looks like it could be added by doing a check on the length of the services.Items slice.
https://github.com/txn2/kubefwd/blob/master/cmd/kubefwd/services/services.go#L243

I'd gladly throw in a PR for this if need be, but understand, since it was user-error, if it's not worth adding.
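A minimal, self-contained sketch of the suggested check using client-go (hypothetical code, not the actual services.go implementation; the List signature shown matches older client-go releases, while newer client-go also takes a context.Context argument):

package main

import (
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	namespace := "the-project" // hypothetical namespace to validate

	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientSet, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// An empty service list most likely means a mistyped or empty
	// namespace and is worth a warning before forwarding begins.
	svcs, err := clientSet.CoreV1().Services(namespace).List(metav1.ListOptions{})
	if err != nil {
		log.Fatalf("unable to list services in %s: %s", namespace, err)
	}
	if len(svcs.Items) == 0 {
		log.Printf("warning: no services found in namespace %q; check for typos", namespace)
	}
}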

multiple selectors not working

I can't seem to get kubefwd working with multiple selectors. Here is my output

➜  kube git:(master) ✗ sudo kubefwd svc -n default -l ms=emailer,ms=account
Password:
2019/02/13 12:58:34  _          _           __             _
2019/02/13 12:58:34 | | ___   _| |__   ___ / _|_      ____| |
2019/02/13 12:58:34 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/02/13 12:58:34 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/02/13 12:58:34 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/02/13 12:58:34
2019/02/13 12:58:34 Version 1.4.10
2019/02/13 12:58:34 https://github.com/txn2/kubefwd
2019/02/13 12:58:34
2019/02/13 12:58:34 Press [Ctrl-C] to stop forwarding.
2019/02/13 12:58:34 'cat /etc/hosts' to see all host entries.
2019/02/13 12:58:34 Loaded hosts file /etc/hosts
2019/02/13 12:58:34 Hostfile management: Original hosts backup already exists at /etc/hosts.original
2019/02/13 12:58:34 Done...
➜  kube git:(master) ✗

using a single selector works great

➜  kube git:(master) ✗ sudo kubefwd svc -n default -l ms=emailer
2019/02/13 12:59:46  _          _           __             _
2019/02/13 12:59:46 | | ___   _| |__   ___ / _|_      ____| |
2019/02/13 12:59:46 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/02/13 12:59:46 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/02/13 12:59:46 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/02/13 12:59:46
2019/02/13 12:59:46 Version 1.4.10
2019/02/13 12:59:46 https://github.com/txn2/kubefwd
2019/02/13 12:59:46
2019/02/13 12:59:46 Press [Ctrl-C] to stop forwarding.
2019/02/13 12:59:46 'cat /etc/hosts' to see all host entries.
2019/02/13 12:59:46 Loaded hosts file /etc/hosts
2019/02/13 12:59:46 Hostfile management: Original hosts backup already exists at /etc/hosts.original
2019/02/13 12:59:46 Forwarding: emailer:3001 to pod emailer-7f975c9855-5lnl2:3001
2019/02/13 12:59:46 Forwarding: emailer:80 to pod emailer-7f975c9855-5lnl2:8080

Any thoughts?

Does not work with EKS

I might have missed something somewhere, but this does not seem to work with EKS in AWS. EKS requires a special credentials binary that generates a token, and it appears that kubefwd is ignoring that requirement and not triggering it when it tries to connect to the cluster. This is all I see in the output:

2019/05/27 16:37:50  _          _           __             _
2019/05/27 16:37:50 | | ___   _| |__   ___ / _|_      ____| |
2019/05/27 16:37:50 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/05/27 16:37:50 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/05/27 16:37:50 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/05/27 16:37:50
2019/05/27 16:37:50 Version 1.8.2
2019/05/27 16:37:50 https://github.com/txn2/kubefwd
2019/05/27 16:37:50
2019/05/27 16:37:50 Press [Ctrl-C] to stop forwarding.
2019/05/27 16:37:50 'cat /etc/hosts' to see all host entries.
2019/05/27 16:37:50 Loaded hosts file /etc/hosts
2019/05/27 16:37:50 Hostfile management: Original hosts backup already exists at /Users/dkhenkin/hosts.original
2019/05/27 16:37:50 Error forwarding service: Unauthorized
2019/05/27 16:37:50 Done...

Multiple localhost entries in /etc/hosts

I had the issue several times that the /etc/hosts file had been modified by kubefwd in a way that prevented it from starting again:

Can not load /etc/hosts
2018/11/22 11:26:27 Duplicate hostname entry for localhost -> ::1

The /etc/hosts file contained the following entry (at one point up to 4 times):

::1 localhost localhost ... localhost

I had the impression that this happened when starting it accidentally a second time for the same cluster, but I could not reproduce the issue in a new test.

Reconnect when service or pod is restarted

During development I often restart the pods to apply a new configuration for example. Unfortunately I have to stop kubefwd and start it again to re-establish the connection.
It would be great if there were some cron-like option to check for changes to the services or pods and restart the connection. Or maybe there is a hook in Kubernetes through which you can be notified of a change in the underlying pod the connection goes to, so the connection could be restarted without having to stop and start kubefwd completely.

For example a "--reconnect" option:

sudo kubefwd services --reconnect -n default -c~/.bluemix/plugins/container-service/clusters/my-cluster/kube-config-fra02-mycluster.yml

Proposal: A way to control idle connection timeouts

This is an AWESOME tool to provide devs an easy and secure way of using our development environment on our k8s cluster.

The issue

Idle connection timeout is very short, and we can't change streaming-connection-idle-timeout setting in our managed kubernetes cluster.

Proposal

Some kind of flag to either control the port-forwarding tunnel idle timeout (not sure how feasible this would be, though), or one that tells the tool to reconnect the service upon disconnecting.

I'm not a fan of either of those two options but perhaps this is a good conversation starter to find a solution.

Go Build errors on FreeBSD

I'm trying to build kubefwd on FreeBSD and it fails with the following compile time errors:

go build kubefwd/kubefwd.go
# vendor/k8s.io/client-go/plugin/pkg/client/auth/azure
vendor/k8s.io/client-go/plugin/pkg/client/auth/azure/azure.go:246:13: cannot use expiresIn (type string) as type json.Number in field value
vendor/k8s.io/client-go/plugin/pkg/client/auth/azure/azure.go:247:13: cannot use expiresOn (type string) as type json.Number in field value
vendor/k8s.io/client-go/plugin/pkg/client/auth/azure/azure.go:248:13: cannot use expiresOn (type string) as type json.Number in field value
vendor/k8s.io/client-go/plugin/pkg/client/auth/azure/azure.go:265:23: cannot use token.token.ExpiresIn (type json.Number) as type string in assignment
vendor/k8s.io/client-go/plugin/pkg/client/auth/azure/azure.go:266:23: cannot use token.token.ExpiresOn (type json.Number) as type string in assignment

Any ideas? Thank you!

All output is written to stderr

Witnessed behaviour:

All output of kubefwd is written to stderr (including the kubefwd banner).

Expected behaviour:

Only errors or warnings should be written to stderr.
The 'normal' output (i.e., output at log level INFO or DEBUG) should be written to stdout.

Pod names are used instead of service names

tl;dr
Since version 1.8.2 kubefwd uses the pod names instead of the service names as hostnames


I used to use kubefwd 1.8.0 a lot for local development and embedded the service names in configs; e.g. the redis service of my app acme for the feature featxy was forwarded according to the service name featxy-acme-redis. Since upgrading to 1.8.4 (actually since 1.8.2 and later), /etc/hosts is populated with the pod names instead (e.g. featxy-acme-redis-57cbcdbd98), which forces me to change all the configs whenever the pod gets a new identifier. Is this a bug or a feature?

Error running kubefwd as snap installation

↳ sudo kubefwd svc -n default
2019/03/13 11:36:28  _          _           __             _
2019/03/13 11:36:28 | | ___   _| |__   ___ / _|_      ____| |
2019/03/13 11:36:28 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/03/13 11:36:28 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/03/13 11:36:28 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/03/13 11:36:28
2019/03/13 11:36:28 Version 1.7.3
2019/03/13 11:36:28 https://github.com/txn2/kubefwd
2019/03/13 11:36:28
2019/03/13 11:36:28 Press [Ctrl-C] to stop forwarding.
2019/03/13 11:36:28 'cat /etc/hosts' to see all host entries.
2019/03/13 11:36:28 Loaded hosts file /etc/hosts
2019/03/13 11:36:28 Hostfile management: Backing up your original hosts file /etc/hosts to /root/snap/kubefwd/5/hosts.original
2019/03/13 11:36:28 Error reading configuration configuration: open /root/snap/kubefwd/5/.kube/config: no such file or directory

/etc/hosts not correctly parsed

$ sudo kubefwd services -n development
[...]
Loading hosts file /etc/hosts
Can not load /etc/hosts
2018/11/13 16:52:31 Duplicate hostname entry for trc.taboola.com -> 0.0.0.0

$ grep taboola.com /etc/hosts
#0.0.0.0  cdn.downloaddeft.com trc.taboola.com images.taboola.com b.scorecardresearch.com
#0.0.0.0  aa-gb.mgid.com rs.gwallet.com rp.gwallet.com ads.yahoo.com cdn.taboola.com
#0.0.0.0  trc.taboola.com images.taboola.com cdn.taboola.com c2.taboola.com

Notice how the lines are commented out. Kubefwd shouldn't be parsing this as anything.

The fact that it's doing so is a bit disconcerting — anything that messes with /etc/hosts had better be really solid so one doesn't lose stuff.

kubefwd does not work for ClusterIP without any selectors

We have a Service of ClusterIP type without any selectors (backed by a manual Endpoints object), which is defined as follows:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: prod
spec:
  ports:
  - name: http
    protocol: TCP
    port: 9200
    targetPort: 9200
---
kind: Endpoints
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: prod
subsets:
  - addresses:
      - ip: 10.132.0.6
    ports:
      - name: http
        port: 9200
        protocol: TCP

When trying to access it through kubefwd, it fails with:

2019/01/07 18:41:40 Runtime error: an error occurred forwarding 9200 -> 9200: error forwarding port 9200 to pod d3dd8d6c2ccd2f3576689afe4868cc30a6a4394cff74e97a8ea1dbabb407da0b, uid : exit status 1: 2019/01/07 17:41:40 socat[39057] E connect(5, AF=2 127.0.0.1:9200, 16): Connection refused

Forwarding to all the other services backed by pods works ok.

Accessing the service from within the cluster also works just fine:

root@some_other_pod_in_the_cluster:/usr/local/tomcat# curl -v elasticsearch:9200/
*   Trying 10.15.254.34...
* TCP_NODELAY set
* Connected to elasticsearch (10.15.254.34) port 9200 (#0)
> GET / HTTP/1.1
> Host: elasticsearch:9200
> User-Agent: curl/7.52.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 493
...

// The 10.15.254.34 IP is indeed the one allocated to the elasticsearch service.

I hope I provided enough info. Thanks for your time.

Best.

Disconnect after sending a few requests

I am connecting to my OpenShift cluster using the command below:

sudo kubefwd services -n <nameSpace>

After sending a few requests, I get the error lost connection to pod.

[image: error output showing "lost connection to pod"]

Use localhost + random ports as an alternative

Adding local DNS lookups to the hosts file is convenient (kind of the core idea of this great tool), but I wonder whether it makes sense to also support localhost with random ports via an additional command line option, similar to the usual way Docker exposes container ports with its -P.

With that, we wouldn't have to modify hosts, create loopback aliases, etc., and hence would no longer need sudo. I'm not going to break the original design, just to give people another option, to make it more widely used.

Does it make sense?

Subset of /etc/hosts entries not being cleaned up

First of all, thanks for kubefwd! This project is exactly what I've been looking for.

I'm testing out kubefwd and I'm including all of the services in my cluster (~80 services). I've noticed that whenever I stop kubefwd (Ctrl+C), there are a number of /etc/hosts entries that do not get removed. It seems to be inconsistent which entries are left behind and how many there are.

One of the entries is for ::1 localhost.localdomain which causes kubefwd to fail to start. I read through #23 which states that 1.4.10 attempts to correct this issue. Let me know if I should open a separate issue for this.

$ kubefwd version
2019/01/17 12:29:33 Version 1.4.10

/etc/hosts file after stopping kubefwd:

$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain
127.0.1.1 host host.localdomain
127.1.27.25 serviceA
127.1.27.51 serviceB.default serviceB.default.svc.cluster.local
127.1.27.73 serviceC.defaultserviceC.default.svc.cluster.local
127.1.27.75 serviceD.default
127.1.27.77 serviceE
::1 localhost.localdomain localhost.localdomain localhost.localdomain

Kubefwd stopped working

I have been using kubefwd just fine for the last few weeks. Now, all of a sudden, when I launch it, the app says it's changing the hosts file and then stops. Here is the output from my console.

2019/02/07 11:56:12  _          _           __             _
2019/02/07 11:56:12 | | ___   _| |__   ___ / _|_      ____| |
2019/02/07 11:56:12 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/02/07 11:56:12 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/02/07 11:56:12 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/02/07 11:56:12
2019/02/07 11:56:12 Version 1.4.10
2019/02/07 11:56:12 https://github.com/txn2/kubefwd
2019/02/07 11:56:12
2019/02/07 11:56:12 Press [Ctrl-C] to stop forwarding.
2019/02/07 11:56:12 'cat /etc/hosts' to see all host entries.
2019/02/07 11:56:12 Loaded hosts file C:\Windows\System32\drivers\etc\hosts
2019/02/07 11:56:12 Hostfile management: Original hosts backup already exists at C:\Windows\System32\drivers\etc\hosts.original
2019/02/07 11:56:12 Done...

Any ideas?

Cannot run kubefwd

When running kubefwd I get the following error

panic: No Auth Provider found for name "gcp"

goroutine 1 [running]:
github.com/txn2/kubefwd/cmd/kubefwd/services.glob..func1(0x17db920, 0xc4202ee440, 0x0, 0x2)
	/Users/cjimti/go/src/github.com/txn2/kubefwd/cmd/kubefwd/services/services.go:74 +0x1545
github.com/txn2/kubefwd/vendor/github.com/spf13/cobra.(*Command).execute(0x17db920, 0xc4202ee420, 0x2, 0x2, 0x17db920, 0xc4202ee420)
	/Users/cjimti/go/src/github.com/txn2/kubefwd/vendor/github.com/spf13/cobra/command.go:766 +0x2c1
github.com/txn2/kubefwd/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc4202f2500, 0xc4202f2500, 0xc4202f2780, 0x17db920)
	/Users/cjimti/go/src/github.com/txn2/kubefwd/vendor/github.com/spf13/cobra/command.go:852 +0x30a
github.com/txn2/kubefwd/vendor/github.com/spf13/cobra.(*Command).Execute(0xc4202f2500, 0x1, 0x1)
	/Users/cjimti/go/src/github.com/txn2/kubefwd/vendor/github.com/spf13/cobra/command.go:800 +0x2b
main.main()
	/Users/cjimti/go/src/github.com/txn2/kubefwd/cmd/kubefwd/kubefwd.go:64 +0x67

kubefwd communication behind a proxy

Hello,

I am trying to set up local port forwarding to a Kubernetes cluster. I am behind a proxy. Is there a way to allow communication over a proxy? Here are the log details:


kforward
2018/12/24 15:39:24  _          _           __             _
2018/12/24 15:39:24 | | ___   _| |__   ___ / _|_      ____| |
2018/12/24 15:39:24 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2018/12/24 15:39:24 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2018/12/24 15:39:24 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2018/12/24 15:39:24
2018/12/24 15:39:24 Version 1.4.10
2018/12/24 15:39:24 https://github.com/txn2/kubefwd
2018/12/24 15:39:24
2018/12/24 15:39:24 Press [Ctrl-C] to stop forwarding.
2018/12/24 15:39:24 'cat /etc/hosts' to see all host entries.
2018/12/24 15:39:24 Loaded hosts file /etc/hosts
2018/12/24 15:39:24 Hostfile management: Original hosts backup already exists at /etc/hosts.original
2018/12/24 15:39:24 Error forwarding service: Get https://gav-cu-ine-dev-aks-91bc5af0.hcp.centralus.azmk8s.io:443/api/v1/namespaces/default/services: dial tcp: lookup gav-cu-ine-dev-aks-91bc5af0.hcp.centralus.azmk8s.io on 10.220.220.220:53: no such host
2018/12/24 15:39:24 Done...

Thanks
Sohil
Senior Staff Software Engineer, GE Aviation

support forward to service or pods optionaly

It would be great if kubefwd supported forwarding to native Kubernetes Services and not only to Pods.
This could be done through a command-line option, like:
sudo kubefwd svc
sudo kubefwd pod

(Thanks for this cool tool that really helps developers smooth out their test workflows.)

Intermittent timeouts on random services

I am running sudo kubefwd svc -n staging on my local machine to test against my cluster. All my services in staging forward correctly. Example:

2019/05/13 10:32:01 Forwarding: card:9010 to pod card-7bbfdb6cc8-l88pg:9010
2019/05/13 10:32:02 Forwarding: checkout:9001 to pod checkout-56b746867b-4q8bl:80

However, after a while I see the following errors -

2019/05/13 10:32:34 ERROR: error upgrading connection: error dialing backend: dial tcp <ip-redacted>:10250: i/o timeout
2019/05/13 10:32:34 Stopped forwarding checkout in staging.

This happens for different services each time I run the command, even though the pods are running and have not restarted. I can workaround this by running kubefwd again and again until the service I want to test against is stable, but it's not ideal.

Support shorthand service names

One thing I ran into with some of my app configuration was that on kubedns you are able to shorthand a service name, but kubefwd doesn't appear to add that variant to the hosts file at the moment. To expand, I mean instead of service.namespace.svc.cluster.local you can put service.namespace and it will resolve. Could this be included in the aliases added to the hosts file? I haven't dug into the code yet, but I'd volunteer to add the functionality if need be 👍.

Support watching a namespace

It would be nice if we could use kubefwd to watch a namespace and update as the services in a namespace change. It might be a reasonable default behavior as well.

Right now, the use of os signals to kill the port forwarding makes it a little harder to plumb this through. Ideally the port forwarding is cancellable without killing the whole process.
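A minimal Go sketch of the cancellable-forward idea (illustrative only, not kubefwd's implementation): each forward runs until its context is cancelled, so a namespace watcher could stop or restart individual forwards without killing the whole process.

package main

import (
	"context"
	"fmt"
	"time"
)

// forwardService stands in for a single service port-forward; it runs
// until its context is cancelled.
func forwardService(ctx context.Context, name string) {
	for {
		select {
		case <-ctx.Done():
			fmt.Printf("stopped forwarding %s\n", name)
			return
		case <-time.After(time.Second):
			// placeholder for the actual forwarding work
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go forwardService(ctx, "my-service")

	// A namespace watcher would call cancel() when the service disappears.
	time.Sleep(3 * time.Second)
	cancel()
	time.Sleep(100 * time.Millisecond) // give the goroutine time to exit
}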

Non-ready pods are selected for forwarding

When I redeploy an application and stop and restart kubefwd as soon as the new pod is ready (while the old one is in the Terminating state), the terminating pod is still selected. It would be good to check the readiness of all containers in a pod. Or perhaps use Endpoints as the source, because that will only show ready pods.

Reinitialise on SIGHUP

It would be great if I could send a SIGHUP to let kubefwd restart its main loop and reinitialise itself. Right now I manually have to go to the terminal window, Ctrl-C it, and rerun the command. With proper handling of the signal it would be possible to do a single-line deployment, e.g. helm upgrade && sudo pkill -HUP kubefwd. Currently, on SIGHUP, kubefwd just quits. It could partially solve the problem in #21.
