
alibaba / kt-connect


A toolkit for integrating with your Kubernetes dev environment more efficiently

Home Page: https://alibaba.github.io/kt-connect/#/

License: GNU General Public License v3.0

Go 95.71% Dockerfile 0.24% Shell 3.30% Makefile 0.76%
kubernetes istio vpn connect exchage mesh developer-tools

kt-connect's Introduction

KT-Connect


English | 简体中文

KtConnect ("Kt" is short for "Kubernetes Toolkit") is a utility tool to help you work with Kubernetes dev environment more efficiently.

Arch

✅ Features

  • Connect: Directly access a remote Kubernetes cluster. KtConnect uses an SSH-based VPN or a SOCKS proxy to reach the remote cluster network.
  • Exchange: Redirect requests for a workload in the cluster to a locally running app.
  • Mesh: Create a mesh version of a service on your local host, and redirect only the specified workload requests to your local app.
  • Preview: Expose a locally running app to the Kubernetes cluster as an ordinary service; all requests to that service are redirected to the local app.

🚀 QuickStart

You can download and install ktctl from the Downloads And Install page.

Read the Quick Start Guide for more about this tool.

💡 Ask For Help

Please feel free to raise an issue if anything goes wrong, or contact us via DingTalk (钉钉):

kt-connect's People

Contributors

besscroft, cryice, cui-liqiang, cuiliqiang1, dependabot[bot], dominicqi, dserodio, fudali113, linfan, lishuaiii, mojo-zd, pvtyuan, shihaiyang-world, sp0cket, sunmios, wahyd4, wzshiming, yunlzheng, zerda, zeusro


kt-connect's Issues

an error occurred forwarding 2222

Describe the bug

an error occurred forwarding 2222
Log
please add -d to debug log

6:29PM INF Client address 10.252.154.90
6:29PM INF Shadow is ready.
Forwarding from 127.0.0.1:2222 -> 22
Forwarding from [::1]:2222 -> 22
Handling connection for 2222
E0814 18:29:15.441716   22741 portforward.go:400] an error occurred forwarding 2222 -> 22: error forwarding port 22 to pod d36cd49708faf80b96ef3ff70e844d0fc439293276dc367927973711bc31aa2b, uid : unable to do port forwarding: socat not found.
ssh_exchange_identification: Connection closed by remote host
client: fatal: failed to establish ssh session (1)
E0814 18:30:05.450612   22741 portforward.go:233] lost connection to pod

Environment (please complete the following information):

  • OS: MacOS
  • Kubernetes: v1.12.4
  • KT Version: 0.0.5


error: invalid resource name "deployments/kt-connect-daemon-gbqts": [may not contain '/']

Software version: the latest Linux client

The error is as follows (the kt-connect-daemon-gbqts deployment is created normally inside Kubernetes):
[root@localhost .kube]# ktctl connect
2019/07/23 11:16:04 Client address 192.168.2.40
2019/07/23 11:16:04 Deploying proxy deployment kt-connect-daemon-gbqts in namespace default
2019/07/23 11:16:04 Pod status is Pending
2019/07/23 11:16:06 Pod status is Pending
2019/07/23 11:16:08 Pod status is Running
2019/07/23 11:16:08 Success deploy proxy deployment kt-connect-daemon-gbqts in namespace default
error: invalid resource name "deployments/kt-connect-daemon-gbqts": [may not contain '/']
2019/07/23 11:16:08 port-forward exited
2019/07/23 11:16:10 port-forward start at pid: 16136
Daemon Start At 16127

The macOS 64-bit client works fine; the linux64 client reports the error above.

Only one subnet of the cluster Service ClusterIPs is proxied, so access by ClusterIP fails

Describe the bug
Only one subnet of the Service ClusterIP range is proxied, so ClusterIPs outside it are unreachable.
The code in pkg/kt/util/helper.go is as follows:

func getServiceCird(clientset *kubernetes.Clientset) (cidr string, err error) {
	serviceList, err := clientset.CoreV1().Services("").List(metav1.ListOptions{})   // list all services
	if err != nil {
		log.Printf("Fails to get service info of cluster")
		return "", err
	}

	cluserIps := []string{}
	for _, service := range serviceList.Items {
		if service.Spec.ClusterIP != "" && service.Spec.ClusterIP != "None" {
			cluserIps = append(cluserIps, service.Spec.ClusterIP)
		}
	}

	sample := cluserIps[0]    // why only take the first one?
	cidr = strings.Join(append(strings.Split(sample, ".")[:2], []string{"0", "0"}...), ".") + "/16"
	return
}

Log
Access via the service's domain name fails.

My understanding is that this should use the service range configured for the cluster, e.g. service-cluster-ip-range=10.96.0.0/12.

The --cidr flag of the connect command only applies to Pods; it has no effect on Services.
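
Below is a minimal sketch (an illustration, not the project's actual fix) of how the collection above could cover every ClusterIP instead of only the first one: group the ClusterIPs by their first two octets and return one /16 per group. The function name getServiceCidrs is hypothetical; the no-context client-go call style matches the snippet above.

package util

import (
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func getServiceCidrs(clientset *kubernetes.Clientset) (cidrs []string, err error) {
	serviceList, err := clientset.CoreV1().Services("").List(metav1.ListOptions{})
	if err != nil {
		log.Printf("Fails to get service info of cluster")
		return nil, err
	}

	seen := map[string]bool{}
	for _, service := range serviceList.Items {
		ip := service.Spec.ClusterIP
		if ip == "" || ip == "None" {
			continue
		}
		parts := strings.Split(ip, ".")
		if len(parts) != 4 {
			continue // skip IPv6 or malformed addresses in this sketch
		}
		// One /16 per distinct pair of leading octets, e.g. 10.96.0.0/16.
		cidr := strings.Join([]string{parts[0], parts[1], "0", "0"}, ".") + "/16"
		if !seen[cidr] {
			seen[cidr] = true
			cidrs = append(cidrs, cidr)
		}
	}
	return cidrs, nil
}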

The dashboard image cannot be downloaded when deploying

Describe the bug
When deploying the dashboard UI by following the official page, the image cannot be pulled: registry.cn-shanghai.aliyuncs.com/kube-helm/kt-dashboard:stable
Is there another source for this image?

The kt-connect-shadow:stable image does not include the netstat program

Describe the bug
sudo ktctl connect does not work.

Log
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "assembler.py", line 36, in <module>
File "shuttle.server", line 229, in main
File "shuttle.server", line 84, in list_routes
File "shuttle.server", line 63, in _list_routes
File "/usr/lib/python3.5/subprocess.py", line 947, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.5/subprocess.py", line 1551, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'netstat'

Environment (please complete the following information):

  • OS: [e.g. Ubuntu 16.04]
  • Kubernetes [e.g. 1.13.3]
  • KT Version [e.g. 0.0.5]
  • Image: registry.cn-hangzhou.aliyuncs.com/rdc-incubator/kt-connect-shadow:stable 7d9bc54eaded

Workaround
The registry.cn-hangzhou.aliyuncs.com/rdc-incubator/kt-connect-shadow:stable image needs netstat installed.

Pushing a git tag should trigger a release publish in Travis CI


Fail to get Pod CIDR if Node.Spec.PodCIDR is empty

Describe the bug

Log

sudo ktctl -d connect
Password:
5:16PM INF Daemon Start At 98170
5:16PM INF Client address 10.2.131.182
5:16PM DBG Deploying proxy deployment kt-connect-daemon-kcljf in namespace default

5:16PM DBG Shadow Pods not ready......
5:16PM DBG Shadow Pod status is Pending
5:16PM DBG Shadow Pod status is Running
5:16PM INF Shadow is ready.
5:16PM DBG Success deploy proxy deployment kt-connect-daemon-kcljf in namespace default

5:16PM DBG Child, os.Args = [ktctl -d connect]
5:16PM DBG Child, cmd.Args = [kubectl --kubeconfig=/Users/jiangchangqiang/.kube/config -n default port-forward kt-connect-daemon-kcljf-65c5bdff7-26rcn 2222:22]
Forwarding from 127.0.0.1:2222 -> 22
Forwarding from [::1]:2222 -> 22
5:16PM DBG port-forward start at pid: 98171
5:17PM DBG Child, os.Args = [ktctl -d connect]
5:17PM DBG Child, cmd.Args = [sshuttle --dns --to-ns 10.1.97.12 -v -e ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -i /Users/jiangchangqiang/.kt_id_rsa -r root@127.0.0.1:2222 -x 127.0.0.1 10.152.0.0/16]
Starting sshuttle proxy.
firewall manager: Starting firewall with Python version 3.7.4
firewall manager: ready method name pf.
IPv6 enabled: True
UDP enabled: False
DNS enabled: True
User enabled: False
TCP redirector listening on ('::1', 12300, 0, 0).
TCP redirector listening on ('127.0.0.1', 12300).
DNS listening on ('::1', 12299, 0, 0).
DNS listening on ('127.0.0.1', 12299).
Starting client with Python version 3.7.4
c : connecting to server...
Handling connection for 2222
Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
Starting server with Python version 3.5.2
 s: latency control setting = True
 s: auto-nets:False
c : Connected.
firewall manager: setting up.
>> pfctl -s Interfaces -i lo -v
>> pfctl -s all
>> pfctl -a sshuttle6-12300 -f /dev/stdin
>> pfctl -E
>> pfctl -s Interfaces -i lo -v
>> pfctl -s all
>> pfctl -a sshuttle-12300 -f /dev/stdin
>> pfctl -E
5:17PM DBG vpn(sshuttle) start at pid: 98172
5:17PM DBG KT proxy start successful
c : DNS request from ('10.2.131.182', 63308) to None: 37 bytes
c : DNS request from ('10.2.131.182', 52899) to None: 37 bytes

Environment (please complete the following information):

  • OS: macos
  • Kubernetes microk8s
  • KT Version 0.0.6

Additional context

apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: 2019-11-14T05:40:02Z
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/os: linux
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: k8s-kt-connect
      kubernetes.io/os: linux
      microk8s.io/cluster: "true"
    name: k8s-kt-connect
    namespace: ""
    resourceVersion: "19641"
    selfLink: /api/v1/nodes/k8s-kt-connect
    uid: a1a0355f-2e9d-4c5c-b7b9-6fd16af9f930
  spec: {}
  status:
    addresses:
    - address: 192.168.64.15
      type: InternalIP
    - address: k8s-kt-connect
      type: Hostname
    allocatable:
      cpu: "4"
      ephemeral-storage: 59747096Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 8066160Ki
      pods: "110"
    capacity:
      cpu: "4"
      ephemeral-storage: 60795672Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 8168560Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: 2019-11-14T10:01:36Z
      lastTransitionTime: 2019-11-14T05:39:58Z
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: 2019-11-14T10:01:36Z
      lastTransitionTime: 2019-11-14T05:39:58Z
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: 2019-11-14T10:01:36Z
      lastTransitionTime: 2019-11-14T05:39:58Z
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: 2019-11-14T10:01:36Z
      lastTransitionTime: 2019-11-14T08:05:37Z
      message: kubelet is posting ready status. AppArmor enabled
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - docker.io/library/tomcat@sha256:9b9b01f50a953d3fe24e78404c66cae3372b446d5b162f42c1c64da7e2ec3f51
      - docker.io/library/tomcat:7
      sizeBytes: 229793902
    - names:
      - registry.cn-hangzhou.aliyuncs.com/rdc-incubator/kt-connect-shadow@sha256:f9b1407c06b2952e51f029a464e7e7a6e100cfcf2098e0bb71d3e7f62e836461
      - registry.cn-hangzhou.aliyuncs.com/rdc-incubator/kt-connect-shadow:stable
      sizeBytes: 107847146
    - names:
      - docker.io/coredns/coredns@sha256:e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c
      - docker.io/coredns/coredns:1.5.0
      sizeBytes: 13341835
    - names:
      - k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea
      - k8s.gcr.io/pause:3.1
      sizeBytes: 317164
    nodeInfo:
      architecture: amd64
      bootID: 3551237c-ac2d-422a-9b6f-f24f3fdd8bc9
      containerRuntimeVersion: containerd://1.2.5
      kernelVersion: 4.15.0-69-generic
      kubeProxyVersion: v1.16.2
      kubeletVersion: v1.16.2
      machineID: 608f475aa93446739925b1c231ffccd2
      operatingSystem: linux
      osImage: Ubuntu 18.04.3 LTS
      systemUUID: B44D3E8A-0000-0000-A8AE-4A7E182F900E
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

The kt-controller:stable image cannot be pulled

Describe the bug
The registry.cn-shanghai.aliyuncs.com/kube-helm/kt-controller:stable image cannot be pulled.
Log

docker pull registry.cn-shanghai.aliyuncs.com/kube-helm/kt-controller:stable
 Error response from daemon: pull access denied for registry.cn-shanghai.aliyuncs.com/kube-helm/kt-controller, repository does not exist or may require 'docker login'

Environment (please complete the following information):

  • OS: [e.g. Ubuntu 14.04]
  • Kubernetes [e.g. 1.10.1]
  • KT Version [e.g. 0.0.4]


connect already running, but no pod found

Describe the bug
When I run ktctl -n commerce connect, I get the error below, but no pod is running in this namespace and the kt-dashboard shows 0 connections.
How can I clean up this process?
Log
please add -d to debug log
$ ktctl -n commerce connect
panic: Connect already running. exit this

goroutine 1 [running]:
gitlab.alibaba-inc.com/rdc-incubator/kt/pkg/kt/action.(*Action).Connect(0xc420159700, 0x8ae, 0x10adf00, 0x0, 0x0)
/Users/yunlong/go/src/gitlab.alibaba-inc.com/rdc-incubator/kt/pkg/kt/action/ConectAction.go:19 +0x62f
main.main.func1(0xc4202a0160, 0xc42028e500, 0xc4202a0160)
/Users/yunlong/go/src/gitlab.alibaba-inc.com/rdc-incubator/kt/cmd/standalone/main.go:106 +0x114
gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/urfave/cli.HandleAction(0xf4cc20, 0xc42026e3c0, 0xc4202a0160, 0x0, 0xc42028e5a0)
/Users/yunlong/go/src/gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/urfave/cli/app.go:502 +0xbe
gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/urfave/cli.Command.Run(0x10aecdd, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0, 0x10c254a, 0x20, 0x0, ...)
/Users/yunlong/go/src/gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/urfave/cli/command.go:165 +0x47d
gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/urfave/cli.(*App).Run(0xc420094fc0, 0xc4200a8000, 0x4, 0x4, 0x0, 0x0)
/Users/yunlong/go/src/gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/urfave/cli/app.go:259 +0x6e8
main.main()
/Users/yunlong/go/src/gitlab.alibaba-inc.com/rdc-incubator/kt/cmd/standalone/main.go:158 +0xcb9

Environment (please complete the following information):

  • OS: [e.g. Ubuntu 14.04]
  • Kubernetes [e.g. 1.10.1]
  • KT Version [e.g. 0.0.4]

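The "Connect already running" panic here (and in the later macOS report that mentions /Users/chengchuanxin/.ktctl/pid) suggests a leftover pid file. Below is a minimal sketch, assuming a pid file path like ~/.ktctl/pid, of how the client could check whether the recorded process is still alive before refusing to start; connectAlreadyRunning and the surrounding main are illustrative, not the project's actual code.

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// connectAlreadyRunning reports whether the pid recorded in pidFile refers
// to a live process. A stale file (process already gone) should not block a
// new `ktctl connect`; the caller can remove it instead of panicking.
func connectAlreadyRunning(pidFile string) bool {
	data, err := ioutil.ReadFile(pidFile)
	if err != nil {
		return false // no pid file, nothing is running
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return false // unreadable pid file, treat as stale
	}
	proc, err := os.FindProcess(pid)
	if err != nil {
		return false
	}
	// Signal 0 performs an existence check without sending a real signal
	// (works on Linux and macOS; Windows would need a different check).
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	pidFile := os.ExpandEnv("$HOME/.ktctl/pid") // path seen in the later panic message
	if connectAlreadyRunning(pidFile) {
		fmt.Println("connect already running, exit")
		return
	}
	fmt.Println("no live connect process, safe to start")
}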

With ktctl connect, no address other than kt-dashboard.default.svc.cluster.local is reachable

Why can I only access http://kt-dashboard.default.svc.cluster.local after using ktctl? Whether I run ktctl -n jenkins connect or ktctl -n repo-nexus connect, the kt-dashboard address is always reachable, but http://jenkins.jenkins.svc.cluster.local:8080 in the jenkins namespace and http://repo-nexus.repo-nexus.svc.cluster.local:8081 in the repo-nexus namespace cannot be reached.

Log
please add -d to debug log

Environment

  • OS: CentOS 7.5
  • Kubernetes: 1.15.3
  • KT Version: 0.0.4

server exit with nil pointer

Describe the bug

Log
please add -d to debug log
server:

Format domain b._dns-sd._udp.6.1.255.10.in-addr.arpa. to b._dns-sd._udp.6.1.255.10.in-addr.arpa.
Received DNS query for b._dns-sd._udp.6.1.255.10.in-addr.arpa.:
Exchange message for domain b._dns-sd._udp.6.1.255.10.in-addr.arpa. to dns server 10.233.0.3:53
Format domain db._dns-sd._udp.6.1.255.10.in-addr.arpa. to db._dns-sd._udp.6.1.255.10.in-addr.arpa.
Received DNS query for db._dns-sd._udp.6.1.255.10.in-addr.arpa.:
Exchange message for domain db._dns-sd._udp.6.1.255.10.in-addr.arpa. to dns server 10.233.0.3:53
Format domain lb._dns-sd._udp.6.1.255.10.in-addr.arpa. to lb._dns-sd._udp.6.1.255.10.in-addr.arpa.
Received DNS query for lb._dns-sd._udp.6.1.255.10.in-addr.arpa.:
Exchange message for domain lb._dns-sd._udp.6.1.255.10.in-addr.arpa. to dns server 10.233.0.3:53
 *** invalid answer name b._dns-sd._udp.0.144.31.172.in-addr.arpa. after 12 query for b._dns-sd._udp.0.144.31.172.in-addr.arpa.
 *** invalid answer name db._dns-sd._udp.0.144.31.172.in-addr.arpa. after 12 query for db._dns-sd._udp.0.144.31.172.in-addr.arpa.
 *** invalid answer name b._dns-sd._udp.6.1.255.10.in-addr.arpa. after 12 query for b._dns-sd._udp.6.1.255.10.in-addr.arpa.
 *** invalid answer name lb._dns-sd._udp.6.1.255.10.in-addr.arpa. after 12 query for lb._dns-sd._udp.6.1.255.10.in-addr.arpa.
 *** invalid answer name db._dns-sd._udp.6.1.255.10.in-addr.arpa. after 12 query for db._dns-sd._udp.6.1.255.10.in-addr.arpa.
 *** invalid answer name lb._dns-sd._udp.0.144.31.172.in-addr.arpa. after 12 query for lb._dns-sd._udp.0.144.31.172.in-addr.arpa.
 *** invalid answer name db._dns-sd._udp.54.196.168.192.in-addr.arpa. after 12 query for db._dns-sd._udp.54.196.168.192.in-addr.arpa.
*** error: read udp 10.233.102.169:36633->10.233.0.3:53: i/o timeout
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x603096]

goroutine 51 [running]:
gitlab.alibaba-inc.com/rdc-incubator/kt/pkg/proxy/daemon.(*handler).ServeDNS(0x7ed488, 0x6ab120, 0xc420191800, 0xc420177830)
	/Users/yunlong/go/src/gitlab.alibaba-inc.com/rdc-incubator/kt/pkg/proxy/daemon/dnsserver.go:66 +0x416
gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/miekg/dns.(*Server).serveDNS(0xc4200f4000, 0xc420191800)
	/Users/yunlong/go/src/gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/miekg/dns/server.go:687 +0x2cb
gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/miekg/dns.(*Server).serve(0xc4200f4000, 0xc420191800)
	/Users/yunlong/go/src/gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/miekg/dns/server.go:572 +0x2a7
gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/miekg/dns.(*Server).worker(0xc4200f4000, 0xc420191800)
	/Users/yunlong/go/src/gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/miekg/dns/server.go:244 +0x4d
created by gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/miekg/dns.(*Server).spawnWorker
	/Users/yunlong/go/src/gitlab.alibaba-inc.com/rdc-incubator/kt/vendor/github.com/miekg/dns/server.go:284 +0x85

client:

2019/07/29 14:27:06 Client address 172.31.144.16
2019/07/29 14:27:06 Deploying proxy deployment kt-connect-daemon-csnsv in namespace default
2019/07/29 14:27:06 Pod status is Pending
2019/07/29 14:27:08 Pod status is Pending
2019/07/29 14:27:10 Pod status is Running
2019/07/29 14:27:10 Success deploy proxy deployment kt-connect-daemon-csnsv in namespace default
2019/07/29 14:27:10 Child, os.Args = [ktctl -d connect]
2019/07/29 14:27:10 Child, cmd.Args = [kubectl --kubeconfig=/Users/runzexia/.kube/config -n default port-forward deployments/kt-connect-daemon-csnsv 2222:22]
Forwarding from 127.0.0.1:2222 -> 22
Forwarding from [::1]:2222 -> 22
2019/07/29 14:27:12 port-forward start at pid: 96311
2019/07/29 14:27:17 Child, os.Args = [ktctl -d connect]
2019/07/29 14:27:17 Child, cmd.Args = [sshuttle --dns --to-ns 10.233.102.169 -e ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -i /tmp/kt_id_rsa -r root@127.0.0.1:2222 -x 127.0.0.1 10.233.64.0/24 10.233.65.0/24 10.233.0.0/16]
Handling connection for 2222
Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
server: WARNING: Neither ip nor netstat were found on the server.
client: Connected.
2019/07/29 14:27:19 vpn(sshuttle) start at pid: 96312
2019/07/29 14:27:19 KT proxy start successful
Daemon Start At 96309
Connection to 127.0.0.1 closed by remote host.
client: fatal: server died with error code 255
2019/07/29 14:32:17 vpn(sshuttle) exited
E0729 14:33:17.503274   96311 portforward.go:224] lost connection to pod
2019/07/29 14:33:17 port-forward exited
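
The server panic in dnsserver.go occurs right after the "read udp ... i/o timeout" error, which points to the handler using a nil response when the upstream exchange fails. Below is a minimal sketch, using the github.com/miekg/dns package that the stack trace shows is already vendored, of a handler that answers SERVFAIL instead of dereferencing a nil message; forwardHandler and its field are illustrative, not the project's actual code.

package daemon

import (
	"log"

	"github.com/miekg/dns"
)

// forwardHandler relays DNS queries to an upstream server and answers
// SERVFAIL when the exchange fails, instead of using a nil response.
type forwardHandler struct {
	upstream string // e.g. "10.233.0.3:53" from the log above
}

func (h *forwardHandler) ServeDNS(w dns.ResponseWriter, req *dns.Msg) {
	client := &dns.Client{Net: "udp"}
	resp, _, err := client.Exchange(req, h.upstream)
	if err != nil || resp == nil {
		// On timeout (or any failure) reply with SERVFAIL rather than
		// touching resp, which would reproduce the nil pointer panic.
		log.Printf("exchange to %s failed: %v", h.upstream, err)
		failure := new(dns.Msg)
		failure.SetRcode(req, dns.RcodeServerFailure)
		_ = w.WriteMsg(failure)
		return
	}
	_ = w.WriteMsg(resp)
}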

[Feature] Use secret to replace constant SSH key

Is your feature request related to a problem? Please describe.
Using a constant SSH key is really dangerous.

Describe the solution you'd like

Generate an SSH key when running ktctl init.

ktctl init should also support a local SSH config file.

Then create the secret (Kubernetes resource).

Describe alternatives you've considered
None

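Below is a minimal sketch of what the requested behaviour could look like, assuming client-go in the same no-context style used elsewhere in this repo: generate a fresh RSA key pair at ktctl init time and store it in a Secret instead of shipping a constant key. The function name ktInitSSHSecret, the secret name kt-connect-ssh-key and the data keys are illustrative assumptions, not the project's actual implementation.

package action

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"

	"golang.org/x/crypto/ssh"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ktInitSSHSecret generates a per-init RSA key pair and stores it as a
// Kubernetes Secret, so the shadow pod no longer needs a constant key.
func ktInitSSHSecret(clientset *kubernetes.Clientset, namespace string) (*v1.Secret, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	privatePEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return nil, err
	}
	secret := &v1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "kt-connect-ssh-key"},
		Data: map[string][]byte{
			"id_rsa":          privatePEM,
			"authorized_keys": ssh.MarshalAuthorizedKey(pub),
		},
	}
	// Older client-go (as used in this repo) takes no context argument.
	return clientset.CoreV1().Secrets(namespace).Create(secret)
}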

Local Docker containers cannot connect to the VPN network created by kt. Can this be solved?

Is your feature request related to a problem? Please describe.
Our local development is also based on Docker images, to ensure everyone has the same PHP runtime environment. But in a Linux VM (this problem does not exist on Mac) the Docker network cannot connect to the VPN network, even with host network mode enabled. Does anyone have ideas for handling this scenario?

kt-connect can't work

Describe the bug
After installing kt-connect, I cannot curl any podIP:port or clusterIP:port from the local Linux machine.
On the local machine:
[root@localhost ~]# ktctl --kubeconfig /root/.kube/admin.conf connect
2019/10/29 22:02:06 Client address 10.0.0.100
2019/10/29 22:02:07 Deploying proxy deployment kt-connect-daemon-hgswr in namespace default
2019/10/29 22:02:07 Pod status is Pending
2019/10/29 22:02:09 Pod status is Pending
2019/10/29 22:02:11 Pod status is Running
2019/10/29 22:02:11 Success deploy proxy deployment kt-connect-daemon-hgswr in namespace default
Daemon Start At 18437

[root@localhost ~]# ps -ef |grep sshuttle
root 18583 18448 0 14:51 pts/1 00:00:00 grep --color=auto sshuttle
[root@localhost ~]#

Log
please add -d to debug log
I can't find any log in /var/log/messages

Environment (please complete the following information):

  • OS: [CentOS 7.5]
  • Kubernetes [e.g. 1.15.1]
  • KT Version [e.g. 0.0.4]


Building version 0.0.6 from source fails

Describe the bug
Downloaded the source package kt-connect-kt-0.0.6.zip.
Building kt-connect-kt-0.0.6 on a CentOS 7.5 machine fails as follows:
[root@k8s-master-15-81 ~]# go version
go version go1.12.9 linux/amd64
[root@k8s-master-15-81 ~]#
[root@k8s-master-15-81 tools]# ll
total 1228
drwxr-xr-x 11 root root 4096 Oct 29 09:31 kt-connect-kt-0.0.6
-rw-r--r-- 1 root root 1251777 Oct 29 09:20 kt-connect-kt-0.0.6.zip
[root@k8s-master-15-81 tools]# cd kt-connect-kt-0.0.6/
[root@k8s-master-15-81 kt-connect-kt-0.0.6]# ll
total 80
drwxr-xr-x 2 root root 4096 Oct 29 09:32 bin
drwxr-xr-x 5 root root 4096 Oct 9 19:43 cmd
drwxr-xr-x 3 root root 4096 Oct 9 19:43 config
drwxr-xr-x 7 root root 4096 Oct 9 19:43 docker
drwxr-xr-x 6 root root 4096 Oct 9 19:43 docs
-rw-r--r-- 1 root root 1314 Oct 9 19:43 go.mod
-rw-r--r-- 1 root root 21123 Oct 9 19:43 go.sum
-rw-r--r-- 1 root root 1076 Oct 9 19:43 LICENSE
-rw-r--r-- 1 root root 1100 Oct 9 19:43 package.json
drwxr-xr-x 5 root root 4096 Oct 9 19:43 pkg
drwxr-xr-x 2 root root 4096 Oct 9 19:43 public
-rw-r--r-- 1 root root 5570 Oct 9 19:43 README.md
-rwxr-xr-x 1 root root 299 Oct 9 19:43 release.sh
drwxr-xr-x 7 root root 4096 Oct 9 19:43 src
[root@k8s-master-15-81 kt-connect-kt-0.0.6]# go build -o "output/ktctl/ktctl" ./cmd/ktctl
go: finding github.com/deckarep/golang-set v1.7.1
go: finding github.com/jessevdk/go-assets v0.0.0-20160921144138-4f4301a06e15
go: finding github.com/imdario/mergo v0.3.7
go: finding github.com/thinkerou/favicon v0.1.0
go: finding github.com/rs/zerolog v0.0.0-20190704061603-77a169535877
go: finding github.com/gorilla/websocket v1.4.1
go: finding github.com/golang/protobuf v1.3.2
go: finding github.com/campoy/embedmd v0.0.0-20171205015432-c59ce00e0296
go: finding github.com/gin-gonic/autotls v0.0.0-20180426091246-be87bd5ef97b
go: finding github.com/urfave/cli v0.0.0-20190203184040-693af58b4d51
go: finding github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e
go: finding github.com/zenazn/goji v0.9.0
go: finding github.com/miekg/dns v0.0.0-20190106042521-5beb9624161b
go: k8s.io/[email protected]: unrecognized import path "k8s.io/apimachinery" (https fetch: Get https://k8s.io/apimachinery?go-get=1: dial tcp 35.201.71.162:443: i/o timeout)
go: k8s.io/[email protected]: unrecognized import path "k8s.io/utils" (https fetch: Get https://k8s.io/utils?go-get=1: dial tcp 35.201.71.162:443: i/o timeout)
go: k8s.io/[email protected]: unrecognized import path "k8s.io/client-go" (https fetch: Get https://k8s.io/client-go?go-get=1: dial tcp 35.201.71.162:443: i/o timeout)
go: finding github.com/pkg/errors v0.8.1
go: finding github.com/kubernetes/dashboard v1.10.1
go: golang.org/x/[email protected]: unrecognized import path "golang.org/x/lint" (https fetch: Get https://golang.org/x/lint?go-get=1: dial tcp 216.239.37.1:443: i/o timeout)
go: finding github.com/dustin/go-broadcast v0.0.0-20171205050544-f664265f5a66
go: finding github.com/manucorporat/stats v0.0.0-20180402194714-3ba42d56d227
go: golang.org/x/[email protected]: unrecognized import path "golang.org/x/time" (https fetch: Get https://golang.org/x/time?go-get=1: dial tcp 216.239.37.1:443: i/o timeout)
go: finding github.com/rs/xid v1.2.1
go: google.golang.org/[email protected]: unrecognized import path "google.golang.org/grpc" (https fetch: Get https://google.golang.org/grpc?go-get=1: dial tcp 216.239.37.1:443: i/o timeout)
go: finding github.com/gin-gonic/gin v1.4.0
go: k8s.io/[email protected]: unrecognized import path "k8s.io/api" (https fetch: Get https://k8s.io/api?go-get=1: dial tcp 35.201.71.162:443: i/o timeout)
go: golang.org/x/[email protected]: unrecognized import path "golang.org/x/tools" (https fetch: Get https://golang.org/x/tools?go-get=1: dial tcp 216.239.37.1:443: i/o timeout)
go: error loading module requirements
[root@k8s-master-15-81 kt-connect-kt-0.0.6]#

Could you provide a pre-built binary package directly?
The link below only provides version 0.0.4:
https://rdc-incubators.oss-cn-beijing.aliyuncs.com/stable/ktctl_linux_amd64.tar.gz
Could you provide a download link for the latest binary?

Log
please add -d to debug log

Environment (please complete the following information):

  • OS: [e.g. CentOS 7.5]
  • Kubernetes [e.g. 1.15.1]
  • KT Version [e.g. 0.0.6]


ktctl_darwin_amd64 is not recognized as a Unix executable file by macOS

Describe the bug
ktctl_darwin_amd64 is not recognized as a Unix executable file by macOS.

Log
It cannot be executed.

Environment (please complete the following information):

  • OS: Darwin MacBook-Pro.local 18.7.0 Darwin Kernel Version 18.7.0: Thu Jun 20 18:42:21 PDT 2019;
  • KT Version: 0.0.5

Additional context
After downloading, the 0.0.5 ktctl_darwin_amd64 is not recognized by the system as a Unix executable file, but as a document type.

Running exec kubectl_darwin_amd64 causes the terminal window to exit.

I cannot connect to the cluster

Describe the bug
Cannot connect to the cluster.

Log
chengchuanxin@Acceleration ~/.kube> ktctl connect
panic: Connect already running /Users/chengchuanxin/.ktctl/pid. exit this

goroutine 1 [running]:
github.com/alibaba/kt-connect/pkg/kt/action.(*Action).Connect(0xc0001017b0, 0x8ae, 0x3900600, 0x0, 0x0)
/Users/yunlong/go/src/github.com/alibaba/kt-connect/pkg/kt/action/conect_action.go:21 +0x967
main.main.func1(0xc0002f02c0, 0xc0002d8400, 0xc0002f02c0)
/Users/yunlong/go/src/github.com/alibaba/kt-connect/cmd/ktctl/main.go:115 +0x129
github.com/urfave/cli.HandleAction(0x1c9fb80, 0xc00000e0a0, 0xc0002f02c0, 0x0, 0xc0002d84e0)
/Users/yunlong/go/src/github.com/urfave/cli/app.go:502 +0xbe
github.com/urfave/cli.Command.Run(0x1e1290f, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1e26af8, 0x20, 0x0, ...)
/Users/yunlong/go/src/github.com/urfave/cli/command.go:165 +0x487
github.com/urfave/cli.(*App).Run(0xc0002fe000, 0xc0000d2000, 0x2, 0x2, 0x0, 0x0)
/Users/yunlong/go/src/github.com/urfave/cli/app.go:259 +0x6e3
main.main()
/Users/yunlong/go/src/github.com/alibaba/kt-connect/cmd/ktctl/main.go:173 +0xc1b

Environment (please complete the following information):

  • OS: [macOS 10.13.6]
  • Kubernetes [1.10.1]
  • KT Version [0.0.5]

kt-connect cannot achieve joint debugging with Connect and Exchange

Describe the bug
Following the documentation at https://rdc-incubator.github.io/kt-docs/#/zh-cn/quickstart:

  1. Start connect and exchange on the local Linux CentOS 7.5 machine

[root@k8s-registry-91 ~]# ktctl --kubeconfig /root/.kube/admin.conf connect
2019/10/29 17:39:21 Client address 192.168.15.91
2019/10/29 17:39:21 Deploying proxy deployment kt-connect-daemon-lqgpt in namespace default
2019/10/29 17:39:21 Pods not ready......
2019/10/29 17:39:23 Pod status is Pending
2019/10/29 17:39:25 Pod status is Running
2019/10/29 17:39:25 Success deploy proxy deployment kt-connect-daemon-lqgpt in namespace default
Daemon Start At 15177
[root@k8s-registry-91 ~]# ktctl --kubeconfig /root/.kube/admin.conf exchange tomcat --expose 8080
2019/10/29 17:38:08 'KT Connect' not runing, you can only access local app from cluster
2019/10/29 17:38:08 * tomcat (0 replicas)
2019/10/29 17:38:08 Scale deployment tomcat to zero
2019/10/29 17:38:08 Client address 192.168.15.91
2019/10/29 17:38:08 Deploying proxy deployment tomcat-kt-ssyos in namespace default
2019/10/29 17:38:08 Pods not ready......
2019/10/29 17:38:10 Pod status is Pending
2019/10/29 17:38:12 Pod status is Running
2019/10/29 17:38:12 Success deploy proxy deployment tomcat-kt-ssyos in namespace default

  2. Check in the Kubernetes cluster whether the related kt pods have started

[root@k8s-master-15-81 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
coffee-bbd45c6-mpp7n 1/1 Running 1 29h
coffee-bbd45c6-p5hgk 1/1 Running 10 36d
kt-connect-daemon-lqgpt-6d444896cb-pqzkx 1/1 Running 0 10m
nginx-7c45b84548-nbwgd 1/1 Running 0 44m
tea-5857f7786b-9btbj 1/1 Running 10 36d
tea-5857f7786b-9h2dv 1/1 Running 1 29h
tea-5857f7786b-v84hx 1/1 Running 2 36d
tomcat-kt-ssyos-75cfd6484b-tpkth 1/1 Running 0 11m
[root@k8s-master-15-81 ~]#

Testing with the demo from the documentation:

  3. Connect test:
    3.1 Access a Pod IP directly from the local machine:
    curl http://Pod:Port works for some pods but not others
    3.2 Access a ClusterIP from the local machine: unreachable
    curl http://ClusterIP:Port fails for every ClusterIP:Port
    3.3 Access via the Service domain name: nothing is reachable
    curl http://XXX-service.default.svc.cluster.local:8080

4. Exchange test
Test inside the cluster:
[root@k8s-master-15-81 ~]# kubectl exec -it face-api-5dcb48f6b8-8jsbf bash -n vsit
[root@face-api-5dcb48f6b8-8jsbf tomcat]# curl http://tomcat.default.svc.cluster.local:8080
curl: (7) Failed connect to tomcat.default.svc.cluster.local:8080; Connection refused
[root@face-api-5dcb48f6b8-8jsbf tomcat]# exit
command terminated with exit code 7
[root@k8s-master-15-81 ~]# curl http://tomcat.default.svc.cluster.local:8080
curl: (6) Could not resolve host: tomcat.default.svc.cluster.local; Unknown error
[root@k8s-master-15-81 ~]#

Test from the local client:
[root@k8s-registry-91 ~]# curl http://tomcat.default.svc.cluster.local:8080
curl: (6) Could not resolve host: tomcat.default.svc.cluster.local; Unknown error
[root@k8s-registry-91 ~]#

All tests failed, and no related error logs appear in /var/log/messages.

Log
please add -d to debug log

Environment (please complete the following information):

  • OS: [e.g. CentOS 7.5]
  • Kubernetes [e.g. 1.10.1]
  • KT Version: 0.0.4, 0.0.5 and 0.0.6


Where is the UI project? Is it open source?

Is your feature request related to a problem? Please describe.
We need to develop some additional UI features. Is the UI project open source? If so, we could submit PRs for some generic features, such as authentication.

Can't Access Server via internal DNS address

Describe the bug
Access via IP ok
Can't Access Server via internal DNS address

Error:
vm:~$ curl http://tomcat.default.svc.cluster.local:8080
curl: (6) Could not resolve host: tomcat.default.svc.cluster.local

DNS request from ('192.168.160.128', 55699) to None: 43 bytes

Log
vm:~/.kube$ ktctl connect namespace test
3:50PM INF Daemon Start At 14194
3:50PM INF Client address 192.168.160.128
3:50PM INF Shadow is ready.
Forwarding from 127.0.0.1:2222 -> 22
Forwarding from [::1]:2222 -> 22
Starting sshuttle proxy.
firewall manager: Starting firewall with Python version 2.7.12
firewall manager: ready method name nat.
IPv6 enabled: False
UDP enabled: False
DNS enabled: True
User enabled: False
TCP redirector listening on ('127.0.0.1', 12300).
DNS listening on ('127.0.0.1', 12299).
Starting client with Python version 2.7.12
c : connecting to server...
Handling connection for 2222
Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
c : Connected.
Starting server with Python version 3.5.2
s: latency control setting = True
s: auto-nets:False
firewall manager: setting up.

iptables -t nat -N sshuttle-12300
iptables -t nat -F sshuttle-12300
iptables -t nat -I OUTPUT 1 -j sshuttle-12300
iptables -t nat -I PREROUTING 1 -j sshuttle-12300
iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.1/32 -p tcp
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.244.0.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.100.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.96.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.108.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.102.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.99.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.104.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.98.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.97.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.103.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.105.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.101.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.107.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.111.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.109.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.106.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.110.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 114.114.114.114/32 -p udp --dport 53 --to-ports 12299 -m ttl ! --ttl 42
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 192.168.0.2/32 -p udp --dport 53 --to-ports 12299 -m ttl ! --ttl 42
c : DNS request from ('192.168.160.128', 41908) to None: 41 bytes
c : DNS request from ('192.168.160.128', 41908) to None: 41 bytes
c : DNS request from ('192.168.160.128', 51766) to None: 53 bytes
c : DNS request from ('192.168.160.128', 51766) to None: 53 bytes
c : DNS request from ('192.168.160.128', 47463) to None: 37 bytes
c : DNS request from ('192.168.160.128', 47463) to None: 37 bytes
c : DNS request from ('192.168.160.128', 41140) to None: 49 bytes
c : DNS request from ('192.168.160.128', 41140) to None: 49 bytes
c : DNS request from ('192.168.160.128', 35257) to None: 50 bytes
c : DNS request from ('192.168.160.128', 35257) to None: 50 bytes
c : DNS request from ('192.168.160.128', 37780) to None: 62 bytes
c : DNS request from ('192.168.160.128', 37780) to None: 62 bytes
c : Accept TCP: 192.168.160.128:42278 -> 10.111.211.232:80.
s: SW#6:10.111.211.232:80: uwrite: got EPIPE
c : Accept TCP: 192.168.160.128:41788 -> 10.102.89.241:80.
c : SW#8:192.168.160.128:42278: deleting (3 remain)
c : SW'unknown':Mux#13: deleting (2 remain)
c : Accept TCP: 192.168.160.128:41844 -> 10.102.89.241:80.
c : SW#11:192.168.160.128:41788: deleting (3 remain)
c : SW'unknown':Mux#14: deleting (2 remain)
c : DNS request from ('192.168.160.128', 50877) to None: 29 bytes
c : SW#8:192.168.160.128:41844: deleting (1 remain)
c : SW'unknown':Mux#15: deleting (0 remain)
c : DNS request from ('192.168.160.128', 39003) to None: 29 bytes
c : DNS request from ('192.168.160.128', 51747) to None: 43 bytes
c : DNS request from ('192.168.160.128', 35325) to None: 43 bytes

Environment (please complete the following information):

  • OS: Ubuntu 18.04.3 LTS
  • Kubernetes: 1.12.5
  • KT Version: 0.0.6


Image cannot be pulled

Describe the bug
When running connect, the pod cannot be created; the events show that the Aliyun image registry requires docker login.

Log
please add -d to debug log

 Normal   Scheduled       42m                 default-scheduler      Successfully assigned default/kt-connect-daemon-ypcpe-6cbfd9b47c-9bv75 to 172.21.40.15
  Normal   SandboxChanged  42m                 kubelet, 172.21.40.15  Pod sandbox changed, it will be killed and re-created.
  Warning  Failed          41m (x3 over 42m)   kubelet, 172.21.40.15  Failed to pull image "registry.cn-shanghai.aliyuncs.com/kube-helm/kube-proxy:1560842881": rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-shanghai.aliyuncs.com/kube-helm/kube-proxy, repository does not exist or may require 'docker login'
  Warning  Failed          41m (x3 over 42m)   kubelet, 172.21.40.15  Error: ErrImagePull
  Warning  Failed          41m (x7 over 42m)   kubelet, 172.21.40.15  Error: ImagePullBackOff
  Normal   Pulling         40m (x4 over 42m)   kubelet, 172.21.40.15  pulling image "registry.cn-shanghai.aliyuncs.com/kube-helm/kube-proxy:1560842881"
  Normal   BackOff         2m (x175 over 42m)  kubelet, 172.21.40.15  Back-off pulling image "registry.cn-shanghai.aliyuncs.com/kube-helm/kube-proxy:1560842881"

Environment (please complete the following information):

  • KT Version = 0.0.4
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.4-tke.3", GitCommit:"ab6e1c10a35382c2ec70036e0e51c201eb3fc3f8", GitTreeState:"clean", BuildDate:"2019-06-18T12:12:29Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}


After running ktctl connect, the default image cannot be downloaded

Describe the bug
When running ktctl connect, the Kubernetes cluster tries to pull the image registry.cn-shanghai.aliyuncs.com/kube-helm/kube-proxy:1560842881, but this image cannot be downloaded. To build the image myself, should I use this file in the project: docker/proxy/Dockerfile (v0.0.5 release)?

After building my own image, the ktctl --image connect command runs, but its output (below) does not match the example, and cluster-internal IPs are unreachable:
8:37PM INF Daemon Start At 25606
8:37PM INF Client address 10.49.xx.xx
8:37PM INF Shadow is ready.

Log
please add -d to debug log

Environment (please complete the following information):

  • OS: [e.g. Ubuntu 14.04]
  • Kubernetes [e.g. 1.10.1]
  • KT Version [e.g. 0.0.4]


Iptables rules are not applied on WSL Ubuntu

Describe the bug

Iptables rules are not applied on WSL Ubuntu.

Log

Forwarding from 127.0.0.1:2222 -> 22
Forwarding from [::1]:2222 -> 22
2:47AM DBG port-forward start at pid: 5538
2:47AM DBG Child, os.Args = [ktctl -d connect]
2:47AM DBG Child, cmd.Args = [sshuttle --dns --to-ns 172.16.0.72 -v -e ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -i /root/.kt_id_rsa -r [email protected]:2222 -x 127.0.0.1 172.16.1.0/25 172.16.0.128/25 172.16.0.0/25 172.19.0.0/16]
Starting sshuttle proxy.
firewall manager: Starting firewall with Python version 2.7.6
firewall manager: ready method name nat.
IPv6 enabled: False
UDP enabled: False
DNS enabled: True
User enabled: False
TCP redirector listening on ('127.0.0.1', 12300).
DNS listening on ('127.0.0.1', 12299).
Starting client with Python version 2.7.6
c : connecting to server...
Handling connection for 2222
Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
c : Connected.
firewall manager: setting up.
Starting server with Python version 3.5.2
 s: latency control setting = True
 s: auto-nets:False
>> iptables -t nat -N sshuttle-12300
>> iptables -t nat -F sshuttle-12300
>> iptables -t nat -I OUTPUT 1 -j sshuttle-12300
>> iptables -t nat -I PREROUTING 1 -j sshuttle-12300
>> iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.1/32 -p tcp
>> iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 172.16.1.0/25 -p tcp --to-ports 12300 -m ttl ! --ttl 42
>> iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 172.16.0.128/25 -p tcp --to-ports 12300 -m ttl ! --ttl 42
>> iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 172.16.0.0/25 -p tcp --to-ports 12300 -m ttl ! --ttl 42
>> iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 172.19.0.0/16 -p tcp --to-ports 12300 -m ttl ! --ttl 42
>> iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.0.2.3/32 -p udp --dport 53 --to-ports 12299 -m ttl ! --ttl 42
2:47AM DBG vpn(sshuttle) start at pid: 5547
2:47AM DBG KT proxy start successful
$ iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
sshuttle-12300  all  --  anywhere             anywhere

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
sshuttle-12300  all  --  anywhere             anywhere

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination

Chain sshuttle-12300 (0 references)
target     prot opt source               destination
RETURN    all  --  anywhere             anywhere

Environment (please complete the following information):

  • OS: [e.g. Windows 10 WSL ubuntu]
  • Kubernetes [1.10.1]
  • KT Version [0.0.6]


panic: No Auth Provider found for name "gcp"

go version
go version go1.12.6 darwin/amd64
ktctl version
Kubernetes Develope Tools version 0.0.4

kubectx points to a GCP cluster
(screenshot)
Already authorized through the gcloud account

(screenshot)
The error above appears when using kt

Running ktctl mesh on Tencent Cloud fails: missing oidc auth plugin

Describe the bug
Local Tencent Cloud environment.
An error occurs when running ktctl mesh.

(screenshot)

Log
panic: No Auth Provider found for name "oidc"

goroutine 1 [running]:
github.com/alibaba/kt-connect/pkg/kt/action.(*Action).Exchange(0xc0002e1710, 0x7ffeefbff82f, 0x6, 0x7ffeefbff83f, 0x4, 0xc000040145, 0x11, 0xc0000419c0, 0x1c)
/go/src/github.com/alibaba/kt-connect/pkg/kt/action/exchange_action.go:40 +0x5a2
main.main.func2(0xc0002038c0, 0xc0001eae00, 0xc0002038c0)
/go/src/github.com/alibaba/kt-connect/cmd/ktctl/main.go:141 +0x16d
github.com/urfave/cli.HandleAction(0x1c45f80, 0xc0001ea840, 0xc0002038c0, 0x0, 0xc00023ac60)
/go/pkg/mod/github.com/urfave/[email protected]/app.go:502 +0xbe
github.com/urfave/cli.Command.Run(0x1db7237, 0x8, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1dd1864, 0x27, 0x0, ...)
/go/pkg/mod/github.com/urfave/[email protected]/command.go:165 +0x459
github.com/urfave/cli.(*App).Run(0xc0000ef340, 0xc000038050, 0x5, 0x5, 0x0, 0x0)
/go/pkg/mod/github.com/urfave/[email protected]/app.go:259 +0x6bb
main.main()
/go/src/github.com/alibaba/kt-connect/cmd/ktctl/main.go:173 +0xc45

Environment (please complete the following information):

  • OS: [e.g. Mac OS]
  • Kubernetes [e.g. 1.14.1]
  • KT Version [e.g. 0.0.8]

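Both this report and the "gcp" one above fail with "No Auth Provider found", which in client-go based tools usually means the optional auth provider plugins were not compiled in. Below is a minimal sketch, assuming the standard client-go plugin packages, of the blank imports a build could add so that kubeconfigs using the gcp or oidc auth providers are recognised; whether ktctl actually adopted this is not confirmed by the issues above.

package main

import (
	"fmt"

	// Blank imports register client-go's optional auth provider plugins.
	// Without them, loading a kubeconfig whose user entry uses
	// auth-provider "gcp" or "oidc" panics with
	// "No Auth Provider found for name ...".
	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
	_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
)

func main() {
	// The real ktctl entrypoint would go here; this file only demonstrates
	// the imports needed for the gcp/oidc auth providers to be recognised.
	fmt.Println("auth provider plugins registered")
}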

Startup problem

Describe the bug
The output of the sudo ktctl connect command does not match the documentation, and trying to curl a pod in the remote cluster also fails (installed on macOS following the docs).

Log
1:40PM INF Daemon Start At 72299
1:40PM INF Client address 192.168.4.19
1:40PM INF Deploying shadow deployment kt-connect-daemon-agfnp in namespace default

1:41PM INF Shadow is ready.
Forwarding from 127.0.0.1:2222 -> 22
Forwarding from [::1]:2222 -> 22
Starting sshuttle proxy.
firewall manager: Starting firewall with Python version 3.8.1
firewall manager: ready method name pf.
IPv6 enabled: True
UDP enabled: False
DNS enabled: True
User enabled: False
TCP redirector listening on ('::1', 12300, 0, 0).
TCP redirector listening on ('127.0.0.1', 12300).
DNS listening on ('::1', 12299, 0, 0).
DNS listening on ('127.0.0.1', 12299).
Starting client with Python version 3.8.1
c : connecting to server...
Handling connection for 2222
Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
Starting server with Python version 3.5.2
s: latency control setting = True
s: auto-nets:False
c : Connected.
firewall manager: setting up.

pfctl -s Interfaces -i lo -v
pfctl -s all
pfctl -a sshuttle6-12300 -f /dev/stdin
pfctl -E
pfctl -s Interfaces -i lo -v
pfctl -s all
pfctl -a sshuttle-12300 -f /dev/stdin
pfctl -E
c : DNS request from ('192.168.4.19', 64619) to None: 40 bytes
c : DNS request from ('192.168.4.19', 54159) to None: 60 bytes
c : DNS request from ('192.168.4.19', 63031) to None: 36 bytes

After running ktctl connect, DNS resolution does not go through the cluster

Describe the bug
2019/08/23 15:35:11 Client address 10.0.0.12
2019/08/23 15:35:11 Deploying proxy deployment kt-connect-daemon-tztcp in namespace default
2019/08/23 15:35:12 Pods not ready......
2019/08/23 15:35:14 Pod status is Pending
2019/08/23 15:35:16 Pod status is Running
2019/08/23 15:35:16 Success deploy proxy deployment kt-connect-daemon-tztcp in namespace default
Forwarding from 127.0.0.1:2222 -> 22
2019/08/23 15:35:18 port-forward start at pid: 3019
Daemon Start At 2777

Access from a pod inside the cluster works:
root@php-test-7cddb54698-j6tn9:~# curl php-test.default.svc.cluster.local:18306 | grep head
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4036 100 4036 0 0 622k 0 --:--:-- --:--:-- --:--:-- 656k

article, aside, dialog, figcaption, figure, footer, header, hgroup, main, nav, section {

Running on the local machine:
[root@VM_0_12_centos ~]# curl php-test.default.svc.cluster.local:18306

curl: (6) Could not resolve host: php-test.default.svc.cluster.local; Unknown error

Environment (please complete the following information):

  • OS: [e.g. Linux VM_0_12_centos 3.10.0-862.9.1.el7.x86_64 #1 SMP Mon Jul 16 16:29:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux]
  • Kubernetes [v1.12.4-tke.4]
  • KT Version [e.g. 0.0.4]

Additional context
After connecting, DNS resolution does not go through the cluster.

After running ktctl connect, sshuttle fails to start

Describe the bug
sshuttle fails to start; ps -ef shows no sshuttle process
➜ ~ ssh -i .kt_id_rsa root@127.0.0.1 -p 2222
Connection closed by 127.0.0.1 port 2222
➜ ~ curl 127.0.0.1:2222
SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.8
Protocol mismatch.
➜ ~ ssh root@127.0.0.1 -p 2222
root@127.0.0.1's password:
Permission denied, please try again.
root@127.0.0.1's password:
Permission denied, please try again.
➜ ~ sshuttle --dns --to-ns 10.129.2.114 -e "ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -i /Users/xx/.kt_id_rsa" -r root@127.0.0.1:2222 -x 127.0.0.1 172.30.0.0/16
[local sudo] Password:
Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
Connection closed by 127.0.0.1 port 2222
client: fatal: failed to establish ssh session (1)

Neither the root password nor the public key allows ssh login into the pod.
The registry.cn-hangzhou.aliyuncs.com/rdc-incubator/kt-connect-shadow:stable image is probably the problem.

Log
➜ ~ sudo ktctl -d -n kt connect
10:57PM INF Daemon Start At 59974
10:57PM INF Client address 192.168.1.129
10:57PM DBG Deploying proxy deployment kt-connect-daemon-fahkw in namespace kt

10:57PM DBG Shadow Pods not ready......
10:57PM DBG Shadow Pod status is Pending
10:57PM DBG Shadow Pod status is Pending
10:57PM DBG Shadow Pod status is Pending
10:57PM DBG Shadow Pod status is Running
10:57PM INF Shadow is ready.
10:57PM DBG Success deploy proxy deployment kt-connect-daemon-fahkw in namespace kt

10:57PM DBG Child, os.Args = [ktctl -d -n kt connect]
10:57PM DBG Child, cmd.Args = [kubectl --kubeconfig=/Users/xx/.kube/config -n kt port-forward kt-connect-daemon-fahkw-5497fd9d6d-f79kk 2222:22]
Forwarding from 127.0.0.1:2222 -> 22
Forwarding from [::1]:2222 -> 22
10:57PM DBG port-forward start at pid: 59980

10:57PM DBG Child, os.Args = [ktctl -d -n kt connect]
10:57PM DBG Child, cmd.Args = [sshuttle --dns --to-ns 10.129.2.114 -e ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -i /Users/xx/.kt_id_rsa -r root@127.0.0.1:2222 -x 127.0.0.1 172.30.0.0/16]
Handling connection for 2222
Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
Connection closed by 127.0.0.1 port 2222
client: fatal: failed to establish ssh session (1)
10:57PM DBG vpn(sshuttle) exited
10:57PM DBG vpn(sshuttle) start at pid: 59984

Environment (please complete the following information):

  • OS: [macos]
  • Kubernetes [1.14.6]
  • KT Version [0.0.6]
