telepresenceio / telepresence

Local development against a remote Kubernetes or OpenShift cluster

Home Page: https://www.telepresence.io

License: Other

Shell 0.37% Makefile 1.44% Ruby 0.20% Go 97.13% Smarty 0.15% Batchfile 0.09% PowerShell 0.04% HCL 0.58%
kubernetes local-development docker proxy tunnel vpn minikube

telepresence's Introduction

Telepresence: fast, efficient local development for Kubernetes microservices


Telepresence gives developers infinite scale development environments for Kubernetes.

Docs:
  • OSS: https://www.getambassador.io/docs/telepresence-oss/
  • Licensed: https://www.getambassador.io/docs/telepresence

Slack:
  • OSS: discuss in the CNCF Slack, #telepresence-oss channel
  • Licensed: a8r.io/slack

With Telepresence:

  • You run one service locally, using your favorite IDE and other tools
  • You run the rest of your application in the cloud, where there is unlimited memory and compute

This gives developers:

  • A fast local dev loop, with no waiting for a container build / push / deploy
  • Ability to use their favorite local tools (IDE, debugger, etc.)
  • Ability to run large-scale applications that can't run locally

Quick Start

A few quick ways to start using Telepresence

  • Telepresence Quick Start: Quick Start
  • Install Telepresence: Install
  • Contributor's Guide: Guide
  • Meetings: Check out our community meeting schedule for opportunities to interact with Telepresence developers

Walkthrough

Install an interceptable service:

Start with an empty cluster:

$ kubectl create deploy hello --image=registry.k8s.io/echoserver:1.4
deployment.apps/hello created
$ kubectl expose deploy hello --port 80 --target-port 8080
service/hello exposed
$ kubectl get ns,svc,deploy,po
NAME                        STATUS   AGE
namespace/kube-system       Active   53m
namespace/default           Active   53m
namespace/kube-public       Active   53m
namespace/kube-node-lease   Active   53m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP   53m
service/hello        ClusterIP   10.43.73.112   <none>        80/TCP    2m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello   1/1     1            1           2m

NAME                        READY   STATUS    RESTARTS   AGE
pod/hello-9954f98bf-6p2k9   1/1     Running   0          2m15s

Check telepresence version

$ telepresence version
OSS Client : v2.17.0
Root Daemon: not running
User Daemon: not running

Setup Traffic Manager in the cluster

Install Traffic Manager in your cluster. By default, it will reside in the ambassador namespace:

$ telepresence helm install

Traffic Manager installed successfully

Establish a connection to the cluster (outbound traffic)

Let telepresence connect:

$ telepresence connect
Launching Telepresence Root Daemon
Launching Telepresence User Daemon
Connected to context default, namespace default (https://35.232.104.64)

A session is now active, and outbound connections will be routed to the cluster. In effect, your laptop is logically "inside" a namespace in the cluster.

Since telepresence connected to the default namespace, all services in that namespace can now be reached directly by their name. You can of course also use namespaced names, e.g. curl hello.default.

$ curl hello
CLIENT VALUES:
client_address=10.244.0.87
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://hello:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=hello
user-agent=curl/8.0.1
BODY:
-no body in request-

Intercept the service, redirecting traffic for it to your laptop (inbound traffic)

Add an intercept for the hello deployment on port 9000. Here, we also start a service listening on that port:

$ telepresence intercept hello --port 9000 -- python3 -m http.server 9000
Using Deployment hello
intercepted
    Intercept name         : hello
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:9000
    Service Port Identifier: 80
    Volume Mount Point     : /tmp/telfs-524630891
    Intercepting           : all TCP connections
Serving HTTP on 0.0.0.0 port 9000 (http://0.0.0.0:9000/) ...

The python3 -m http.server process is now listening on port 9000 and will run until terminated with <ctrl>-C. Access it from a browser using http://hello/, or use curl from another terminal. With curl, it returns an HTML listing of the directory where the server was started. Something like:

$ curl hello
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
<li><a href="file1.txt">file1.txt</a></li>
<li><a href="file2.txt">file2.txt</a></li>
</ul>
<hr>
</body>
</html>

Observe that the python service reports that it's being accessed:

127.0.0.1 - - [16/Jun/2022 11:39:20] "GET / HTTP/1.1" 200 -

Clean-up and close daemon processes

End the service with <ctrl>-C and then try curl hello or http://hello again. The intercept is gone, and the echo service responds as normal.

Now end the session too. Your desktop no longer has access to the cluster internals.

$ telepresence quit
Disconnected
$ curl hello
curl: (6) Could not resolve host: hello

The telepresence daemons are still running in the background, which is harmless. You'll need to stop them before you upgrade telepresence. That's done by passing the option -s (stop all local telepresence daemons) to the quit command.

$ telepresence quit -s
Telepresence Daemons quitting...done

What got installed in the cluster?

Telepresence installs the Traffic Manager in your cluster if it is not already present. This deployment remains unless you uninstall it.

Telepresence injects the Traffic Agent as an additional container into the pods of the workload you intercept, and will optionally install an init-container to route traffic through the agent (the init-container is only injected when the service is headless or uses a numerical targetPort). The modifications persist unless you uninstall them.
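As a sketch of what that modification involves, the injection annotation that shows up in the pod description later in this walkthrough can also be written into a patch file by hand. This is a hypothetical example, not the official procedure; verify the annotation name against your Telepresence version before relying on it.

```shell
# Hedged sketch: build a patch carrying the injection annotation observed
# in the intercepted pod's description. Applying it requires a live
# cluster, so that step is left commented out.
cat > /tmp/tel-agent-patch.yaml <<'EOF'
spec:
  template:
    metadata:
      annotations:
        telepresence.getambassador.io/inject-traffic-agent: enabled
EOF
# kubectl patch deploy hello --patch-file /tmp/tel-agent-patch.yaml
grep -q 'inject-traffic-agent: enabled' /tmp/tel-agent-patch.yaml && echo "patch ready"
```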

At first glance, we can see that the deployment is installed ...

$ kubectl get svc,deploy,pod
service/kubernetes   ClusterIP   10.43.0.1       <none>        443/TCP                      7d22h
service/hello        ClusterIP   10.43.145.57    <none>        80/TCP                       13m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello   1/1     1            1           13m

NAME                         READY   STATUS    RESTARTS        AGE
pod/hello-774455b6f5-6x6vs   2/2     Running   0               10m

... and that the traffic-manager is installed in the "ambassador" namespace.

$ kubectl -n ambassador get svc,deploy,pod
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/traffic-manager   ClusterIP   None           <none>        8081/TCP   17m
service/agent-injector    ClusterIP   10.43.72.154   <none>        443/TCP    17m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traffic-manager   1/1     1            1           17m

NAME                                  READY   STATUS    RESTARTS   AGE
pod/traffic-manager-dcd4cc64f-6v5bp   1/1     Running   0          17m

The traffic-agent is installed too, in the hello pod, here together with an init-container because the service uses a numerical targetPort.

$ kubectl describe pod hello-774455b6f5-6x6vs
Name:             hello-75b7c6d484-9r4xd
Namespace:        default
Priority:         0
Service Account:  default
Node:             kind-control-plane/192.168.96.2
Start Time:       Sun, 07 Jan 2024 01:01:33 +0100
Labels:           app=hello
                  pod-template-hash=75b7c6d484
                  telepresence.io/workloadEnabled=true
                  telepresence.io/workloadName=hello
Annotations:      telepresence.getambassador.io/inject-traffic-agent: enabled
                  telepresence.getambassador.io/restartedAt: 2024-01-07T00:01:33Z
Status:           Running
IP:               10.244.0.89
IPs:
  IP:           10.244.0.89
Controlled By:  ReplicaSet/hello-75b7c6d484
Init Containers:
  tel-agent-init:
    Container ID:  containerd://4acdf45992980e2796f0eb79fb41afb1a57808d108eb14a355cb390ccc764571
    Image:         docker.io/datawire/tel2:2.17.0
    Image ID:      docker.io/datawire/tel2@sha256:e18aed6e7bd3c15cb5a99161c164e0303d20156af68ef138faca98dc2c5754a7
    Port:          <none>
    Host Port:     <none>
    Args:
      agent-init
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 07 Jan 2024 01:01:34 +0100
      Finished:     Sun, 07 Jan 2024 01:01:34 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/traffic-agent from traffic-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svf4h (ro)
Containers:
  echoserver:
    Container ID:   containerd://577e140545f3106c90078e687e0db3661db815062084bb0c9f6b2d0b4f949308
    Image:          registry.k8s.io/echoserver:1.4
    Image ID:       sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sun, 07 Jan 2024 01:01:34 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svf4h (ro)
  traffic-agent:
    Container ID:  containerd://17558b4711903f4cb580c5afafa169d314a7deaf33faa749f59d3a2f8eed80a9
    Image:         docker.io/datawire/tel2:2.17.0
    Image ID:      docker.io/datawire/tel2@sha256:e18aed6e7bd3c15cb5a99161c164e0303d20156af68ef138faca98dc2c5754a7
    Port:          9900/TCP
    Host Port:     0/TCP
    Args:
      agent
    State:          Running
      Started:      Sun, 07 Jan 2024 01:01:34 +0100
    Ready:          True
    Restart Count:  0
    Readiness:      exec [/bin/stat /tmp/agent/ready] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      _TEL_AGENT_POD_IP:       (v1:status.podIP)
      _TEL_AGENT_NAME:        hello-75b7c6d484-9r4xd (v1:metadata.name)
      A_TELEPRESENCE_MOUNTS:  /var/run/secrets/kubernetes.io/serviceaccount
    Mounts:
      /etc/traffic-agent from traffic-config (rw)
      /tel_app_exports from export-volume (rw)
      /tel_app_mounts/echoserver/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svf4h (ro)
      /tel_pod_info from traffic-annotations (rw)
      /tmp from tel-agent-tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svf4h (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-svf4h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  traffic-annotations:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.annotations -> annotations
  traffic-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      telepresence-agents
    Optional:  false
  export-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  tel-agent-tmp:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:   <unset>
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  7m40s  default-scheduler  Successfully assigned default/hello-75b7c6d484-9r4xd to kind-control-plane
  Normal  Pulled     7m40s  kubelet            Container image "docker.io/datawire/tel2:2.17.0" already present on machine
  Normal  Created    7m40s  kubelet            Created container tel-agent-init
  Normal  Started    7m39s  kubelet            Started container tel-agent-init
  Normal  Pulled     7m39s  kubelet            Container image "registry.k8s.io/echoserver:1.4" already present on machine
  Normal  Created    7m39s  kubelet            Created container echoserver
  Normal  Started    7m39s  kubelet            Started container echoserver
  Normal  Pulled     7m39s  kubelet            Container image "docker.io/datawire/tel2:2.17.0" already present on machine
  Normal  Created    7m39s  kubelet            Created container traffic-agent
  Normal  Started    7m39s  kubelet            Started container traffic-agent

Telepresence keeps track of all possible intercepts for agent-equipped containers in the configmap telepresence-agents.

$ kubectl describe configmap telepresence-agents
Name:         telepresence-agents
Namespace:    default
Labels:       app.kubernetes.io/created-by=traffic-manager
              app.kubernetes.io/name=telepresence-agents
              app.kubernetes.io/version=2.17.0
Annotations:  <none>

Data
====
hello:
----
agentImage: localhost:5000/tel2:2.17.0
agentName: hello
containers:
- Mounts: null
  envPrefix: A_
  intercepts:
  - agentPort: 9900
    containerPort: 8080
    protocol: TCP
    serviceName: hello
    servicePort: 80
    serviceUID: 68a4ecd7-0a12-44e2-9293-dc16fb205621
    targetPortNumeric: true
  mountPoint: /tel_app_mounts/echoserver
  name: echoserver
logLevel: debug
managerHost: traffic-manager.ambassador
managerPort: 8081
namespace: default
pullPolicy: IfNotPresent
tracingPort: 15766
workloadKind: Deployment
workloadName: hello


BinaryData
====

Events:  <none>

Uninstalling

You can uninstall the traffic-agent from specific deployments or from all deployments, or you can uninstall everything, in which case the traffic-manager and all traffic-agents are removed.

$ telepresence helm uninstall

will remove everything that was automatically installed by telepresence from the cluster.

$ telepresence uninstall --agent hello

will remove the traffic-agent and the configmap entry.

Troubleshooting

The telepresence background processes, the daemon and the connector, both produce log files that can be very helpful when problems are encountered. The files are named daemon.log and connector.log. The location of the logs differs depending on the platform:

  • macOS ~/Library/Logs/telepresence
  • Linux ~/.cache/telepresence/logs
  • Windows "%USERPROFILE%\AppData\Local\logs"
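The paths above can be resolved in a small shell helper; this is a sketch for macOS and Linux (Windows handling via %USERPROFILE% is omitted), not part of the telepresence CLI itself.

```shell
# Sketch: pick the log directory for the current platform, using the
# locations listed above.
case "$(uname -s)" in
  Darwin) TP_LOGS="$HOME/Library/Logs/telepresence" ;;
  *)      TP_LOGS="$HOME/.cache/telepresence/logs" ;;
esac
echo "logs in: $TP_LOGS"
# Once the daemons have run, follow both logs with:
#   tail -f "$TP_LOGS/daemon.log" "$TP_LOGS/connector.log"
```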

How it works

When Telepresence 2 connects to a Kubernetes cluster, it

  1. Ensures Traffic Manager is installed in the cluster.
  2. Looks for the relevant subnets in the kubernetes cluster.
  3. Creates a Virtual Network Interface (VIF).
  4. Assigns the cluster's subnets to the VIF.
  5. Binds itself to the VIF and starts routing traffic to the traffic-manager, or to a traffic-agent if one is present.
  6. Starts listening for and serving DNS requests, passing a selected portion of them to the traffic-manager or traffic-agent.
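The result of these steps can be inspected from the client side. This sketch assumes Linux, iproute2, and the default interface name tel0; it only reports when the interface is absent rather than failing.

```shell
# Sketch: look at the virtual network interface after `telepresence connect`.
if command -v ip >/dev/null 2>&1 && ip link show tel0 >/dev/null 2>&1; then
  ip addr show tel0        # the cluster subnets assigned to the VIF
  ip route show dev tel0   # routes sending cluster traffic into the tunnel
else
  echo "tel0 not present; run 'telepresence connect' first"
fi
```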

When a locally running application makes a network request to a service in the cluster, Telepresence resolves the name to an address within the cluster. The operating system then sees that the TUN device (tel0) owns the subnet containing that address and sends the packets to it. Telepresence picks the packets up on the other side of tel0 and injects them into the cluster through a gRPC connection with the Traffic Manager.

For a more in-depth overview, check out our blog post: Implementing Telepresence Networking with a TUN device


Visit the troubleshooting section in the Telepresence documentation for more advice: Troubleshooting

telepresence's People

Contributors

0x6a77, agustinmiquelez, aosoriodw, ark3, arturogonzalez58, brianfleming, brucehorn, concaf, dependabot[bot], efunk, ikanadev, inercia, inoahnothing, jacoblbeck, josecv, julianarana, kai-tillman, khussey, knlambert, lukeshu, mattmcclure-dw, mkantzer, njayp, plombardi89, raphaelreyna, sarabrajsingh, shepz, ssaraswati, thallgren, w-h37


telepresence's Issues

Don't require kubernetes credentials

Currently telepresence requires kubectl and corresponding credentials to access the remote Kubernetes cluster. It would be nice if there were another access-control mechanism that didn't require giving users full Kubernetes access.

Interested in this feature? Add a "thumbs up" or comment below.

Better distribution mechanism

Right now we have a file you download.

Some issues:

  1. We don't notice new versions exist.
  2. No update mechanism.
  3. No way to use 3rd party Python libraries.

Options (non-exclusive), each of which might cover a different subset of the above:

  • Native packages (brew, deb, rpm)
  • arx/nix-bundle/appimage/PyInstaller
  • pex
  • Rewrite in Go

Native packages seem best. Some thoughts on tooling:

Continuous integration, not just local tests

  • Kubernetes cluster in the cloud.
  • Build docker images for git hash, not latest version, except when doing actual release.
  • Tests don't use any shared resources (need to add more randomness, etc.)
  • CI runs tests using custom generated images.

Hang at "Starting proxy..."

In the guestbook example directory, I ran

$ telepresence --deployment telepresence-deployment --expose 8080 --run-shell
Starting proxy...

... and it is sitting there.

Log file says:

Running: (['docker', 'run', '--rm', '--name', 'telepresence-1491325324-79-46205', '-t', '-v', '/Users/ark3:/opt:ro', '-v', '/Users/ark3:/Users/ark3:ro', '-p', ':9050', '-v', '/tmp/tmpQhUaSJ:/output', 'datawire/telepresence-local:0.23', '501', 'telepresence-deployment', '8080', '10.12.108.229', 'null'],)
Unable to find image 'datawire/telepresence-local:0.23' locally
0.23: Pulling from datawire/telepresence-local
0a8490d0dfd3: Pulling fs layer
51f9c8ec365f: Pulling fs layer
00f17031bf5f: Pulling fs layer
00f17031bf5f: Verifying Checksum
00f17031bf5f: Download complete
0a8490d0dfd3: Verifying Checksum
0a8490d0dfd3: Download complete
0a8490d0dfd3: Pull complete
51f9c8ec365f: Verifying Checksum
51f9c8ec365f: Download complete
51f9c8ec365f: Pull complete
00f17031bf5f: Pull complete
Digest: sha256:96c6330bacca0c60c521ac2a4f8ed829574db665dac9c05824a444960069c9dd
Status: Downloaded newer image for datawire/telepresence-local:0.23
Unable to connect to the server: dial tcp 192.168.64.5:8443: i/o timeout
Traceback (most recent call last):
  File "/usr/local/bin/entrypoint.py", line 348, in <module>
    loads(argv[5])
  File "/usr/local/bin/entrypoint.py", line 320, in main
    remote_info = get_remote_info(deployment_name, namespace)
  File "/usr/local/bin/entrypoint.py", line 153, in get_remote_info
    "--export",
  File "/usr/lib/python3.5/subprocess.py", line 626, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.5/subprocess.py", line 708, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['kubectl', 'get', 'deployment', '-o', 'json', 'telepresence-deployment', '--export']' returned non-zero exit status 1

Fails with "SSH isn't starting" with SSH option "Compression yes"

What were you trying to do?

Start a telepresence shell. I have the SSH option "Compression yes" set in ~/.ssh/config.

What did you expect to happen?

The shell would start.

What happened instead?

There is a long pause after "Starting proxy...", then the traceback below is printed. If I remove the Compression setting from my SSH config, the command succeeds.

Automatically included information

Command line: ['/Users/ssorensen/bin/telepresence', '--new-deployment', 'quickstart', '--run-shell']
Version: 0.26
Python version: 3.6.0 (default, Mar 27 2017, 20:59:04) [GCC 4.2.1 Compatible Clang 3.7.1 (tags/RELEASE_371/final)]
kubectl version: Client Version: v1.5.3-dirty
OS: Darwin 200956 16.5.0 Darwin Kernel Version 16.5.0: Fri Mar 3 16:52:33 PST 2017; root:xnu-3789.51.2~3/RELEASE_X86_64 x86_64 i386 MacBookPro12,1 Darwin
Traceback:

Traceback (most recent call last):
  File "/Users/ssorensen/bin/telepresence", line 678, in call_f
    return f(*args, **kwargs)
  File "/Users/ssorensen/bin/telepresence", line 761, in go
    subprocesses, env, socks_port = start_proxy(runner, args)
  File "/Users/ssorensen/bin/telepresence", line 507, in start_proxy
    args.expose,
  File "/Users/ssorensen/bin/telepresence", line 433, in connect
    wait_for_ssh(runner, ssh_port)
  File "/Users/ssorensen/bin/telepresence", line 379, in wait_for_ssh
    raise RuntimeError("SSH isn't starting.")
RuntimeError: SSH isn't starting.

Logs:

le=/dev/null', 'root@localhost', '/bin/true'],)
Running: (['ssh', '-q', '-p', '60325', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', 'root@localhost', '/bin/true'],)
Running: (['ssh', '-q', '-p', '60325', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', 'root@localhost', '/bin/true'],)
Running: (['ssh', '-q', '-p', '60325', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', 'root@localhost', '/bin/true'],)
Running: (['ssh', '-q', '-p', '60325', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', 'root@localhost', '/bin/true'],)
Running: (['ssh', '-q', '-p', '60325', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', 'root@localhost', '/bin/true'],)
Running: (['ssh', '-q', '-p', '60325', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', 'root@localhost', '/bin/true'],)
Running: (['ssh', '-q', '-p', '60325', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', 'root@localhost', '/bin/true'],)

Hard-coded destination ports don't work with --docker-run

A Docker image may hardcode that it needs to connect to a Service listening on port 6379. Telepresence might proxy that via port 2001, though.

A solution, which can also be used to solve #6: give each destination its own loopback IP, e.g. 127.0.0.2. You can then listen on the same port as the destination without worrying about conflicts.
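The idea can be sketched quickly: on Linux, the whole 127.0.0.0/8 range is loopback, so two listeners can share one port on different loopback addresses. Port 6379 here stands in for a hard-coded Redis destination; this is an illustration, not the telepresence implementation.

```shell
# Sketch: two listeners on the same port, different loopback IPs.
# On macOS, 127.0.0.2/127.0.0.3 must first be added as interface aliases.
python3 - <<'EOF'
import socket
a = socket.socket()
a.bind(("127.0.0.2", 6379))
a.listen()
b = socket.socket()
b.bind(("127.0.0.3", 6379))
b.listen()
print("both listeners bound on port 6379")
EOF
```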

@richarddli reported this.

Docker versions <1.13 don't work (conflicting `--rm` and `--detach` options to `docker run`)

What were you trying to do?

Following the instructions on https://datawire.github.io/telepresence/ I was trying to run a local container with telepresence and have it hooked in to an existing deployment which I had added the telepresence container to on my Kubernetes cluster.

I got to the ./telepresence --deployment ... command in "2. Run the local Telepresence client on your machine" and then encountered an exception.

What did you expect to happen?

To have my container started up locally and plumbed into the networking of the pod for the selected deployment.

What happened instead?

Traceback for docker run error, no running container.

Automatically included information

Command line: ['./telepresence', '--deployment', 's4-infrastructure', '--docker-run', 'leastauthority/grid-router']
Version: 0.7
Python version: 2.7.12+ (default, Sep 17 2016, 12:08:02) [GCC 6.2.0 20160914]
OS: Linux baryon 4.8.0-37-generic #39-Ubuntu SMP Thu Jan 26 02:27:07 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Traceback:

Traceback (most recent call last):
  File "./telepresence", line 197, in call_f
    return f(*args, **kwargs)
  File "./telepresence", line 227, in go
    tempdir, container_id = start_proxy(args)
  File "./telepresence", line 139, in start_proxy
    container_id = unicode(check_output(docker_cmd).strip(), "utf-8")
  File "/usr/lib/python2.7/subprocess.py", line 574, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
CalledProcessError: Command '['docker', 'run', '--detach', '--rm', '-v', '/home/exarkun:/opt:ro', '-v', '/home/exarkun:/home/exarkun:ro', '-v', '/tmp/tmpjWD__K:/output', 'datawire/telepresence-local:0.7', '1000', 's4-infrastructure', '', '']' returned non-zero exit status 1

Documentation improvement suggestions

  1. A better way to query a Pod than using grep is kubectl get pod --selector=run=helloworld; recommend changing kubectl get pod | grep helloworld, which can be confusing when using a shared cluster.
  2. Switch to / # wget -qO- http://quickstart.default.svc.cluster.local:8080/file.txt, since wget is bundled with busybox and users don't need to install curl.
  3. Table of contents (probably with https://github.com/dafi/tocmd-generator until we have a better documentation system)

Detect non-local Docker and complain

If the Docker server is not on the local machine there all kinds of fun ways telepresence can break. Complain if we're using non-local Docker server, i.e. DOCKER_HOST env variable is set.
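A minimal check along these lines, assuming DOCKER_HOST being set is the signal for a non-local server as the issue suggests:

```shell
# Sketch: detect a likely non-local Docker server before doing anything else.
if [ -n "${DOCKER_HOST:-}" ]; then
  echo "warning: DOCKER_HOST=${DOCKER_HOST} points at a remote Docker server" >&2
else
  echo "Docker server appears to be local"
fi
```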

Support KUBECONFIG environment variable

One way of specifying an alternate context for kubectl is to set the environment variable KUBECONFIG to point to a non-default configuration file. Once done, this allows the user to use kubectl as always to talk to the desired cluster.

Telepresence does not respect this environment variable, leading to confusing situations where the telepresence log file shows a kubectl command failing whereas the same command typed at the shell completes successfully. Other (worse) failure modes are possible too.
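For illustration, this is how a kubectl user selects an alternate config; telepresence would need to run its kubectl subprocesses with the same environment. The config file name is hypothetical.

```shell
# Sketch: point kubectl at an alternate cluster config via the environment.
export KUBECONFIG="$HOME/.kube/staging-config"   # hypothetical file
echo "using KUBECONFIG=$KUBECONFIG"
# kubectl config current-context   # would read the file above
```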

--new-deployment can find the wrong Deployment/Pod

--new-deployment searches for the Deployment and Pod by name... but those may be left over from a previous run, e.g. if it's still shutting down.

It would be better to set some unique per-run metadata and use that to find them, rather than name.
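One way to sketch that: tag each run's resources with a unique label and query by selector instead of by name. The label name here is hypothetical, chosen only for illustration.

```shell
# Sketch: a unique per-run label makes lookups unambiguous even when an
# old Deployment of the same name lingers during shutdown.
RUN_ID="tp-$(date +%s)-$$"
echo "label: telepresence-run-id=$RUN_ID"
# kubectl run quickstart --image=... --labels="telepresence-run-id=$RUN_ID"
# kubectl get deploy,pod -l "telepresence-run-id=$RUN_ID"
```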

Don't require running code in container

Currently telepresence runs local code in a container. It would be even nicer if it didn't require that, and code could be run directly on the developer's machine.

Interested in this feature? Add a "thumbs up" or comment below.

Implementation options:

Improve error handling

Based on comments in #18, it seems error handling needs some work:

  1. Capture stderr and perhaps stdout from relevant subprocesses, so it doesn't get lost and can be reported automatically along with the traceback.
  2. We should record the Docker version and kubectl version in the bug report.

Remove need for Docker

At this point Telepresence doesn't strictly require Docker, and on Linux using it means we need sudo. We should be able to remove the dependency by moving the code in entrypoint.py into the top-level CLI (cli/telepresence).

  • Document new requirements.
  • Remove sshpass requirement.
  • Assign random ports for local SSH.
  • Assign random ports for proxied SOCKS.
  • Test on Mac.
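The random-port items above can be sketched with the usual bind-to-port-0 trick, where the kernel picks a free ephemeral port; this is an illustration, not the telepresence code.

```shell
# Sketch: let the kernel choose a free local port by binding port 0,
# then hand the result to the ssh/SOCKS subprocess instead of a fixed port.
PORT=$(python3 -c 'import socket; s = socket.socket(); s.bind(("127.0.0.1", 0)); print(s.getsockname()[1])')
echo "free local port: $PORT"
```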

Document development commands

  1. Run tests (minikube start; make minikube-test)
  2. Run partial tests (TELEPRESENCE_TESTS="-k tocluster tests/test_run.py" make minikube-test)
  3. Run command line with images built above (TELEPRESENCE_VERSION=$(make version) cli/telepresence ...)

Windows support

It would be good to support Windows.

Options:

  • Windows Subsystem for Linux.
  • torsocks equivalent.
  • Other implementation strategies, e.g. #76.

Secrets, config maps, and other minimal volume support

Telepresence would benefit from just enough volume support to allow reading things like secrets, configmaps, Downward API, and the like. Basically, read-only access.

Transparent options:

Manual options:

  • A special env var $MOUNTS_ROOT is set to / in production and to a temporary directory under Telepresence. Users must manually use it as the basis for paths.
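A sketch of what application code would do under that manual option, with $MOUNTS_ROOT as described above (the variable name is the proposal's, not an existing telepresence feature):

```shell
# Sketch: resolve a secret path relative to $MOUNTS_ROOT, defaulting to /
# as it would in the cluster; under Telepresence it would be a temp dir.
MOUNTS_ROOT="${MOUNTS_ROOT:-/}"
TOKEN_PATH="${MOUNTS_ROOT%/}/var/run/secrets/kubernetes.io/serviceaccount/token"
echo "would read token from: $TOKEN_PATH"
```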

doubt-creating torsocks error

I tried using the uncontainerized process support in 0.14 and the first line of output when my app starts up now is:

[Mar 21 11:17:26] ERROR torsocks[19467]: Unable to resolve. Status reply: 4 (in socks5_recv_resolve_reply() at socks5.c:666)

I don't know if this is a critical error preventing my app from networking properly, or just a warning about something non-critical that failed and that I can safely ignore.

strange diverging behavior with --docker-run

I don't really know what's going on here:

$ X="-it --rm leastauthority/subscription-converger /app/env/bin/python -c xxx"
$ docker run $X
Traceback (most recent call last):
  File "<string>", line 1, in <module>
NameError: name 'xxx' is not defined
$ ../telepresence --deployment s4-infrastructure --docker-run $X
Starting proxy...
Starting local container...
ImportError: No module named site
Shutting proxy down...
$ 

Support for running local service in a Docker container

Telepresence (in 0.22 or later) cannot proxy local services running in a Docker container.

This was implemented in versions < 0.22, but was not very robust. Supporting both a local process and a local container in the same program also impacts the design in ways that aren't ideal. The feature was therefore removed, but should be reintroduced if users want it, perhaps as a separate program.

Requirements:

  • Ability to hard code destination port of a Service in the code written by the developer. E.g. "connect to port 6719 of service myredis" should work if the destination Service is on port 6719. Previously this was not possible, since we proxied ports to random other ports, so e.g. you would have to connect to port 2000.

Implementation ideas:

  • Document how to run Telepresence inside a Docker container. This is the easiest option, but requires more user work.
  • OpenVPN or some other VPN.
  • Similar to original design, except instead of having single container proxy everything, have a different container proxying each Service. That would allow them to have different IP/port combinations, much like real Services, which is necessary if we want to meet requirements above.

no telepresence shell prompt indicator in gnome-terminal / bash

exarkun@baryon:~/Work/LeastAuthority/leastauthority.com$ ../telepresence --deployment s4-infrastructure --run-shell
Starting proxy...
exarkun@baryon:~/Work/LeastAuthority/leastauthority.com$ logout
Shutting proxy down...
exarkun@baryon:~/Work/LeastAuthority/leastauthority.com$ 

It does appear in my gnome-terminal / tmux / bash sessions.

telepresence CLI doesn't like Python 3 (or rather, not automatically tested on Python 3)

What were you trying to do?

I was trying to start a shiny new telepresence shell on my Mac...

What did you expect to happen?

I was expecting a shiny new telepresence shell to be running on my Mac...

What happened instead?

...but it didn't work. :(

Automatically included information

Command line: ['/Users/flynn/bin/telepresence', '--new-deployment', 'skunkworks', '--expose', '5000', '--run-shell']
Version: 0.14
Python version: 3.6.0 (default, Jan 25 2017, 07:30:03) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)]
kubectl version: (error: Command '['kubectl', 'version', '--short', '--client']' returned non-zero exit status 1.)
Docker version: `Client:
Version: 17.03.0-ce
API version: 1.26
Go version: go1.7.5
Git commit: 60ccb22
Built: Thu Feb 23 10:40:59 2017
OS/Arch: darwin/amd64

Server:
Version: 17.03.0-ce
API version: 1.26 (minimum version 1.12)
Go version: go1.7.5
Git commit: 3a232c8
Built: Tue Feb 28 07:52:04 2017
OS/Arch: linux/amd64
Experimental: true
OS: Darwin sasami-2.local 16.4.0 Darwin Kernel Version 16.4.0: Thu Dec 22 22:53:21 PST 2016; root:xnu-3789.41.3~3/RELEASE_X86_64 x86_64`
Traceback:

Traceback (most recent call last):
  File "/Users/flynn/bin/telepresence", line 401, in call_f
    return f(*args, **kwargs)
  File "/Users/flynn/bin/telepresence", line 505, in go
    container_id,
  File "/Users/flynn/bin/telepresence", line 292, in run_local_command
    tor_conffile.write(TORSOCKS_CONFIG.format(socks_port))
  File "/Users/flynn/.pyenv/versions/3.6.0/lib/python3.6/tempfile.py", line 483, in func_wrapper
    return func(*args, **kwargs)
TypeError: a bytes-like object is required, not 'str'
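
The TypeError comes from `tempfile.NamedTemporaryFile` defaulting to binary mode ("w+b") on Python 3, so writing a `str` fails. A minimal reproduction and the likely fix (the config string here is a stand-in, not the real torsocks template):

```python
import tempfile

conf = "TorAddress 127.0.0.1\nTorPort {0}\n".format(9050)  # stand-in config text

# Default mode is "w+b": writing str raises TypeError on Python 3.
with tempfile.NamedTemporaryFile() as f:
    try:
        f.write(conf)
    except TypeError as exc:
        print("binary mode:", exc)

# Fix: open the temp file in text mode so str writes succeed.
with tempfile.NamedTemporaryFile(mode="w") as f:
    f.write(conf)
    print("text mode: ok")
```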

Logs:

Running: (['kubectl', 'delete', '--ignore-not-found', 'service,deployment', 'skunkworks'],)
Running: (['kubectl', 'run', '--generator', 'deployment/v1beta1', 'skunkworks', '--image=datawire/telepresence-k8s:0.14', '--port=5000', '--expose'],)
Running: (['docker', 'run', '--rm', '--name', 'telepresence-1490041216-377813-51396', '-t', '-v', '/Users/flynn:/opt:ro', '-v', '/Users/flynn:/Users/flynn:ro', '-p', ':9050', '-v', '/tmp/tmpg6z9hlk9:/output', 'datawire/telepresence-local:0.14', '502', 'skunkworks', '5000', '', '10.12.8.60'],)
Unable to find image 'datawire/telepresence-local:0.14' locally
0.14: Pulling from datawire/telepresence-local
0a8490d0dfd3: Already exists
51f9c8ec365f: Already exists
5d12fe2d195a: Pulling fs layer
74f8f9ff85ef: Pulling fs layer
5d12fe2d195a: Verifying Checksum
5d12fe2d195a: Download complete
74f8f9ff85ef: Download complete
5d12fe2d195a: Pull complete
74f8f9ff85ef: Pull complete
Digest: sha256:449e1c8ade09e57b8c97d16dcd9cdd19bb2a14da2e8d2da253125afef2042ccd
Status: Downloaded newer image for datawire/telepresence-local:0.14
Expected metadata for pods: {'labels': {'run': 'skunkworks'}, 'creationTimestamp': None}
Checking {'pod-template-hash': '3445746570', 'service': 'edge-envoy'} (phase Running)...
Labels don't match.
Checking {'pod-template-hash': '498312721', 'service': 'envoy-sds'} (phase Running)...
Labels don't match.
Checking {'pod-template-hash': '592758786', 'service': 'gruesvc'} (phase Running)...
Labels don't match.
Checking {'pod-template-hash': '1385931004', 'service': 'postgres'} (phase Running)...
Labels don't match.
Checking {'pod-template-hash': '642107786', 'run': 'skunkworks'} (phase Pending)...
Looks like we've found our pod!
Forwarding from 127.0.0.1:22 -> 22
Forwarding from [::1]:22 -> 22
Handling connection for 22
Handling connection for 22
Handling connection for 22
Forwarding from 127.0.0.1:2000 -> 2000
Forwarding from [::1]:2000 -> 2000
Forwarding from 127.0.0.1:2002 -> 2002
Forwarding from [::1]:2002 -> 2002
Forwarding from 127.0.0.1:2006 -> 2006
Forwarding from [::1]:2006 -> 2006
Forwarding from 127.0.0.1:2004 -> 2004
Forwarding from [::1]:2004 -> 2004
Forwarding from 127.0.0.1:2005 -> 2005
Forwarding from [::1]:2005 -> 2005
Forwarding from 127.0.0.1:2001 -> 2001
Forwarding from [::1]:2001 -> 2001
Forwarding fr
