tomplus / kubernetes_asyncio

Python asynchronous client library for Kubernetes http://kubernetes.io/

License: Apache License 2.0

Languages: Python 99.89%, Shell 0.11%
Topics: aio, aiohttp, async, asynchronous, asyncio, kubernetes, python

kubernetes_asyncio's Introduction

Kubernetes Python Client


Asynchronous (AsyncIO) client library for the Kubernetes API.

This library is built in the same way as the official https://github.com/kubernetes-client/python client, but it uses the asynchronous version of the OpenAPI generator. My motivation is described here: kubernetes-client/python#324

Installation

From PyPI directly:

pip install kubernetes_asyncio

It requires Python 3.6+

Example

To list all pods:

import asyncio
from kubernetes_asyncio import client, config
from kubernetes_asyncio.client.api_client import ApiClient


async def main():
    # Configs can be set in Configuration class directly or using helper
    # utility. If no argument provided, the config will be loaded from
    # default location.
    await config.load_kube_config()

    # use the context manager to close http sessions automatically
    async with ApiClient() as api:

        v1 = client.CoreV1Api(api)
        print("Listing pods with their IPs:")
        ret = await v1.list_pod_for_all_namespaces()

        for i in ret.items:
            print(i.status.pod_ip, i.metadata.namespace, i.metadata.name)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    loop.close()

More complex examples, such as watching multiple resources asynchronously or tailing logs from pods, can be found in the examples/ folder.

Documentation

https://kubernetes-asyncio.readthedocs.io/

Compatibility

This library is generated in the same way as the official Kubernetes Python library. It uses swagger-codegen and the same concepts, such as streaming, watching, and reading configuration. Because this library is still at an early stage, some differences remain:

authentication method:
  • synchronous library kubernetes-client/python: gcp-token, azure-token, user-token, oidc-token, user-password, in-cluster
  • this library: gcp-token (only via gcloud command), user-token, oidc-token, user-password, in-cluster

streaming data via websocket from pods:
  • synchronous library kubernetes-client/python: bidirectional
  • this library: read-only is already implemented

Microsoft Windows

If this library is used against a Kubernetes cluster that relies on a client-go credentials plugin, note that the default asyncio event loop on Windows is the SelectorEventLoop. This event loop does NOT support pipes and subprocesses, so exec_provider.py::ExecProvider fails. To avoid these failures, the ProactorEventLoop has to be selected. The ProactorEventLoop can be enabled via WindowsProactorEventLoopPolicy.

The application's code needs to contain the following:

import asyncio

asyncio.set_event_loop_policy(
    asyncio.WindowsProactorEventLoopPolicy()
)

Versions

This library is versioned in the same way as the synchronous library. The versioning scheme changed with release v18.20.0: the first two numbers now match the Kubernetes version (v1.18.20), and the last number is reserved for changes in the library that are not directly connected with K8s.

Development

Install the development packages:

pip install -r requirements.txt
pip install -r test-requirements.txt

You can run the style checks and tests with:

flake8 kubernetes_asyncio/
isort --diff kubernetes_asyncio/
py.test

kubernetes_asyncio's People

Contributors

ak0nshin, bobh66, bpicolo, consideratio, dependabot[bot], epdhowwd, evemorgen, glassofwhiskey, harryharpel, hrichardlee, hubo1016, icamposrivera, jacobhenner, jean-daniel, jnschaeffer, juliantaylor, kexirong, lejmr, mcreng, mickours, multani, olitheolix, pidgeybe, shtlrs, tkauf15k, tomplus, videosystemstech, weltonrodrigo, wolph


kubernetes_asyncio's Issues

Watch event can cause aiohttp ValueError: Line is too long

As reported in aio-libs/aiohttp#4453, aiohttp's default input stream buffer size is insufficient for some Kubernetes watch event lines. When such a line is transmitted to the client, watch.py raises a ValueError: Line is too long.

In aiohttp 3.7.0 (aio-libs/aiohttp#5065), support for configuring the buffer size was added. read_bufsize can be supplied here:

self.pool_manager = aiohttp.ClientSession(
    connector=connector
)

I intend to open a PR for this tomorrow - I just need to figure out what the correct value for that parameter is.
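
For reference, a minimal sketch of passing that parameter through (assuming aiohttp >= 3.7; the buffer size below is only an example value, not the "correct value" still being worked out):

import aiohttp

async def make_session(connector: aiohttp.BaseConnector) -> aiohttp.ClientSession:
    # read_bufsize raises the per-line buffer above aiohttp's 2**16 default,
    # so long watch-event lines no longer trigger "Line is too long".
    return aiohttp.ClientSession(connector=connector, read_bufsize=2**21)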

Documentation on regenerating client

I haven't quite found good docs on how to regenerate the client. Would you be able to point me to them, and/or put links in a CONTRIBUTING.md file?

RESTClientObject maxsize

Hi,

I finally realized what was making my program that uses kubernetes_asyncio progress so slowly: maxsize in RESTClientObject is set to only 4. In some scenarios I use the same client for many watch operations, and they end up competing for the 4 available connections. Could we either remove the limit (the default is 100 in aiohttp) or make it possible to set it in the kubernetes Configuration class? The pool-based clients can set the pool size; it seems we could reuse that with a more generic name.
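
For illustration, the kind of limit being asked about is aiohttp's own connector limit (this is aiohttp's API, not something the kubernetes Configuration class currently exposes):

import aiohttp

async def make_connector() -> aiohttp.TCPConnector:
    # aiohttp's default connection limit is 100 and is set on the connector;
    # RESTClientObject currently hard-codes a much smaller value (4).
    return aiohttp.TCPConnector(limit=100)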

Watching with specified resource_version

Currently when you start a stream() (on a Watch()) with a specific resource_version, the retry after a timeout fails if no new events arrived.

This has been solved in kubernetes-client with:
kubernetes-client/python-base@2d69e89

Having this added to kubernetes_asyncio would be really useful, although a workaround is also pretty simple (create the Watch() object and set resource_version there too before calling stream()). But this is a bit of a hack ;)

I tried to backport this myself, but got hopelessly lost in the test case.
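
For clarity, the workaround mentioned above might look like this (a sketch; the resource_version attribute name is taken from watch.py, and list_namespace stands in for any list function):

from kubernetes_asyncio import client, watch

async def watch_namespaces_from(v1: client.CoreV1Api, resource_version: str):
    # Set resource_version on the Watch object as well as passing it to
    # stream(), so the retry after a timeout resumes from the right point.
    w = watch.Watch()
    w.resource_version = resource_version
    async for event in w.stream(v1.list_namespace, resource_version=resource_version):
        print(event['type'], event['object'].metadata.name)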

Generic client

Is there any way of using a generic client with kubernetes_asyncio, similar to, for example, what Kubernetes Federation v2 has in its client code?

https://github.com/kubernetes-sigs/federation-v2/blob/9c6ce26ea02d103572b69aa09acd1d51ffe29447/pkg/kubefed2/enable/enable.go#L243

https://github.com/kubernetes-sigs/federation-v2/blob/master/pkg/client/generic/genericclient.go

https://github.com/kubernetes-sigs/federation-v2/blob/master/pkg/apis/core/v1alpha1/federatedtypeconfig_types.go

If you have any example I would be grateful. Could I just do something similar to how the generated code uses api_client.call_api?

Use of ssl_context= keyword raises DeprecationWarnings

I get the following deprecation warning when using this library.

lib/python3.6/site-packages/kubernetes_asyncio/client/rest.py:73: DeprecationWarning: ssl_context is deprecated, use ssl=context instead
    ssl_context=ssl_context

Swapping out ssl_context= for ssl= works fine when I apply the change locally. However, I'm not sure how best to enact the change in this repository, given that parts (all?) of it are generated.

cc @yuvipanda

Add OIDC auth support

Seems this exists in python-base now, though their version is not entirely functional (story of my life with every OpenID implementation in existence 😞).

I am currently working on a PR for this.

field_selector for custom objects missing

Noticed that the field_selector parameter is missing for listing custom objects (list_cluster_custom_object and list_namespaced_custom_object in custom_objects_api). See kubernetes/kubernetes#51046 for when it was added to Kubernetes.

It should support field selectors for name and namespace (potentially more later).

Don't know if it is missing from all OpenAPI generated clients.

It works from kubectl:
kubectl -v=7 get customobjname --field-selector "metadata.name=myname" -o yaml
lists CRDs of customobjname kind.
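
The requested call might look like this (hypothetical sketch: field_selector is exactly the parameter that is missing today, and the group/version/plural values are placeholders):

from kubernetes_asyncio import client

async def list_by_name(api: client.CustomObjectsApi):
    # field_selector is the parameter being requested; it is not currently
    # accepted by list_cluster_custom_object.
    return await api.list_cluster_custom_object(
        group="example.com",
        version="v1",
        plural="customobjnames",
        field_selector="metadata.name=myname",
    )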

ERROR:asyncio:Unclosed client session

Hi,

If I use example4.py but there are no changes to observe (or I set timeout_seconds=10 to force an early close), I see exceptions/errors after the initial list of events:

ERROR:asyncio:Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x109d3ba90>
ERROR:asyncio:Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x109d37550>
ERROR:asyncio:Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x109d667c0>, 10.727993494)]']
connector: <aiohttp.connector.TCPConnector object at 0x109d37590>

I'm using python 3.7.5 with:

kubernetes==11.0.0
kubernetes-asyncio==11.2.0

I saw a recent issue where I think the context manager was changed to close() in 11.2.0, but it seems aiohttp.ClientSession isn't actually closing properly?
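
For reference, the context-manager pattern from the README closes the underlying aiohttp session when the block exits (a sketch; whether example4.py's watch can be restructured this way is exactly the open question here):

from kubernetes_asyncio import client, config
from kubernetes_asyncio.client.api_client import ApiClient

async def list_pods():
    await config.load_kube_config()
    # Leaving the context manager closes the ApiClient and its aiohttp
    # ClientSession, which avoids the "Unclosed client session" errors above.
    async with ApiClient() as api:
        v1 = client.CoreV1Api(api)
        return await v1.list_pod_for_all_namespaces()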

Custom object json-patch

I noticed that most patch calls support all 3 patch types (json-patch, merge-patch, strategic-merge-patch), but the patch calls for custom objects (including for status and scale sub-resources) only support merge-patch.

For example compare consumes for the custom object from the swagger.json file:

  "patch": {
    "responses": {
      "200": {
        "description": "OK",
        "schema": {
          "type": "object"
        }
      },
      "401": {
        "description": "Unauthorized"
      }
    },
    "schemes": [
      "https"
    ],
    "description": "patch the specified cluster scoped custom object",
    "parameters": [
      {
        "schema": {
          "type": "object"
        },
        "description": "The JSON schema of the Resource to patch.",
        "required": true,
        "name": "body",
        "in": "body"
      }
    ],
    "produces": [
      "application/json"
    ],
    "x-codegen-request-body-name": "body",
    "tags": [
      "custom_objects"
    ],
    "consumes": [
      "application/merge-patch+json"
    ],
    "operationId": "patchClusterCustomObject"
  },

and consumes from Pod:

  "patch": {
    "description": "partially update the specified Pod",
    "consumes": [
      "application/json-patch+json",
      "application/merge-patch+json",
      "application/strategic-merge-patch+json"
    ],
    "produces": [
      "application/json",
      "application/yaml",
      "application/vnd.kubernetes.protobuf"
    ],
   ...

This prevents using json-patch instead of merge-patch (kubectl handles both). The code in rest.py also does something odd: it only uses the first content type, and if the content type is json-patch but the body is not a list it switches to merge-patch. So it would be good to have application/json-patch+json listed first for custom objects as well.
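
For comparison, the two patch styles in question, shown as plain request bodies independent of this library:

# application/json-patch+json: a list of operations (what kubectl can send,
# but which the custom-objects patch calls currently refuse).
json_patch = [
    {"op": "replace", "path": "/spec/replicas", "value": 3},
]

# application/merge-patch+json: a partial object, the only type the
# custom-objects patch calls accept today according to the swagger above.
merge_patch = {
    "spec": {"replicas": 3},
}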

Asyncio support

Thank you @tomplus for working on this project. I was going through Python clients for k8s and this suits my needs perfectly. However, I wanted to ask how this project is maintained and whether it is safe to assume that we can expect upstream support? (It would be great, btw, if upstream k8s made this official as well.)

Our case is that we would like to use it, but we would also like to know whether we should expect some support from upstream (we are willing to help as much as we can).

Looking forward to hearing from you, thank you.

stream doesn't support multichannel communication

The current WebSocket implementation doesn't support multichannel communication.

Right now it's impossible to "talk" to a pod (send a command, read its output, send another command, and so on) within the same connection.

Don't try to close aiohttp connection pool inside `__del__`

Hi! Thanks for a great library.

There is a flaw though: one really should not perform async cleanup inside __del__. This code may cause exceptions at shutdown: https://github.com/tomplus/kubernetes_asyncio/blob/master/kubernetes_asyncio/client/rest.py#L84

This is because __del__ may be called after the event loop is closed.

The preferred way is to use an async context manager or to add an API to explicitly shut down the client. An example may be found here: https://stackoverflow.com/questions/54770360/how-can-i-wait-for-an-objects-del-to-finish-before-the-async-loop-closes
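
A minimal sketch of that shape (not the library's actual API): an explicit awaitable close() plus an async context manager, instead of cleanup in __del__:

import aiohttp

class RestClientSketch:
    def __init__(self, session: aiohttp.ClientSession) -> None:
        self.pool_manager = session

    async def close(self) -> None:
        # Explicit, awaitable cleanup; it runs while the event loop is still
        # alive, unlike cleanup attempted from __del__.
        await self.pool_manager.close()

    async def __aenter__(self) -> "RestClientSketch":
        return self

    async def __aexit__(self, *exc) -> None:
        await self.close()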

example1.py ssl_context error

Hi! I'm testing out this library and ran into an error with example1.py

Python3

pip list

$ pip list
Package             Version
------------------- ----------
aiofiles            0.4.0
aiohttp             4.0.0a1
aniso8601           6.0.0
apispec             3.2.0
apistar             0.7.2
appnope             0.1.0
astroid             2.3.3
async-timeout       3.0.1
attrs               19.3.0
backcall            0.1.0
cachetools          4.0.0
certifi             2019.11.28
chardet             3.0.4
Click               7.0
decorator           4.4.1
dnspython           1.16.0
docopt              0.6.2
google-auth         1.10.0
graphene            2.1.7
graphql-core        2.2
graphql-relay       2.0.0
graphql-server-core 1.1.1
h11                 0.9.0
httptools           0.0.13
idna                2.8
ipython             7.11.0
ipython-genutils    0.2.0
isort               4.3.21
itsdangerous        1.1.0
jedi                0.15.2
Jinja2              2.10.3
kubernetes          10.0.1
kubernetes-asyncio  10.0.1
lazy-object-proxy   1.4.3
MarkupSafe          1.1.1
marshmallow         3.3.0
mccabe              0.6.1
more-itertools      8.0.2
multidict           4.7.2
oauthlib            3.1.0
packaging           19.2
parse               1.14.0
parso               0.5.2
pexpect             4.7.0
pickleshare         0.7.5
pip                 19.3.1
pluggy              0.13.1
prometheus-client   0.7.1
promise             2.3
prompt-toolkit      3.0.2
ptyprocess          0.6.0
py                  1.8.1
pyasn1              0.4.8
pyasn1-modules      0.2.7
Pygments            2.5.2
pylint              2.4.4
pyparsing           2.4.6
pytest              5.3.2
python-dateutil     2.8.1
python-multipart    0.0.5
PyYAML              5.3b1
redis               3.2.1
requests            2.22.0
requests-oauthlib   1.3.0
requests-toolbelt   0.9.1
responder           1.3.1
rfc3986             1.3.2
rsa                 4.0
Rx                  3.0.1
setuptools          42.0.2
six                 1.13.0
starlette           0.10.7
traitlets           4.3.3
typesystem          0.2.4
typing-extensions   3.7.4.1
urllib3             1.25.7
uvicorn             0.11.1
uvloop              0.14.0
wcwidth             0.1.7
websocket-client    0.57.0
websockets          8.1
wheel               0.33.6
whitenoise          5.0.1
wrapt               1.11.2
yarl                1.4.2

K8s client / server

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T14:25:20Z", GoVersion:"go1.12.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

python example/example1.py

Traceback (most recent call last):
  File "1.py", line 22, in <module>
    loop.run_until_complete(main())
  File "/Users/zane/.pyenv/versions/3.8.0/lib/python3.8/asyncio/base_events.py", line 608, in run_until_complete
    return future.result()
  File "1.py", line 12, in main
    v1 = client.CoreV1Api()
  File "/Users/zane/.local/share/virtualenvs/dinghy-ping-ePAjWOtt/lib/python3.8/site-packages/kubernetes_asyncio/client/api/core_v1_api.py", line 32, in __init__
    api_client = ApiClient()
  File "/Users/zane/.local/share/virtualenvs/dinghy-ping-ePAjWOtt/lib/python3.8/site-packages/kubernetes_asyncio/client/api_client.py", line 72, in __init__
    self.rest_client = rest.RESTClientObject(configuration)
  File "/Users/zane/.local/share/virtualenvs/dinghy-ping-ePAjWOtt/lib/python3.8/site-packages/kubernetes_asyncio/client/rest.py", line 67, in __init__
    connector = aiohttp.TCPConnector(
TypeError: __init__() got an unexpected keyword argument 'ssl_context'
Exception ignored in: <function RESTClientObject.__del__ at 0x10e5d4430>
Traceback (most recent call last):
  File "/Users/zane/.local/share/virtualenvs/dinghy-ping-ePAjWOtt/lib/python3.8/site-packages/kubernetes_asyncio/client/rest.py", line 84, in __del__
AttributeError: 'RESTClientObject' object has no attribute 'pool_manager'

Running a non-asyncio example script with the python-client library works fine:

$ cat non-async-example.py
from kubernetes import client, config

# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()

v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

Curious if anything stands out as an issue here.

Consider moving this repo to https://github.com/aio-libs

This repository is difficult to find even if you know it exists. I think this is a pity because async support is a useful feature for a Kubernetes client library.

Therefore, would it make sense to move this repo into the aio-libs fold to give it more visibility and traction? Maybe also rename it to aiokubernetes for consistency with how many other popular async libraries are named?

watcher.stream call failed in pods

If I execute this code in a pod of my cluster:

import asyncio

from kubernetes_asyncio import client, config, watch


async def test():
    config.load_incluster_config()
    k8s_client = client.CoreV1Api()

    watcher = watch.Watch()
    async for event in watcher.stream(
            k8s_client.list_persistent_volume_claim_for_all_namespaces,
            timeout_seconds=5):
        pass


asyncio.run(test())

I see exception:

Traceback (most recent call last):
  File "t.py", line 22, in <module>
    asyncio.run(test())
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 43, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "t.py", line 16, in test
    async for event in watcher.stream(
  File "/usr/local/lib/python3.8/site-packages/kubernetes_asyncio/watch/watch.py", line 114, in __anext__
    return await self.next()
  File "/usr/local/lib/python3.8/site-packages/kubernetes_asyncio/watch/watch.py", line 152, in next
    return self.unmarshal_event(line, self.return_type)
  File "/usr/local/lib/python3.8/site-packages/kubernetes_asyncio/watch/watch.py", line 90, in unmarshal_event
    js['object'] = self._api_client.deserialize(
  File "/usr/local/lib/python3.8/site-packages/kubernetes_asyncio/client/api_client.py", line 264, in deserialize
    return self.__deserialize(data, response_type)
  File "/usr/local/lib/python3.8/site-packages/kubernetes_asyncio/client/api_client.py", line 292, in __deserialize
    klass = getattr(kubernetes_asyncio.client.models, klass)
AttributeError: module 'kubernetes_asyncio.client.models' has no attribute ''

But if I load the config by calling await config.load_kube_config() and run the script from the cluster host, everything works.

Traceback in debug mode due to pending aiohttp connections

From what I have found out so far, this is because RESTClientObject relies on its __del__ method being called before the event loop terminates. This is flaky, since Python makes no guarantee to call that method at all.

A quick way to reproduce the problem on my Python 3.6 machine is to add this test to test_watch.py and run it in debug mode (export PYTHONASYNCIODEBUG=1):

    def test_foo(self):
        resource = kubernetes_asyncio.client.CoreV1Api().list_namespace
        assert False

This produces a lengthy stack trace, but the salient part is this:

Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7f140e947d30>
source_traceback: Object created at (most recent call last):

...

  File "/home/oliver/pocs/kubernetes_asyncio/kubernetes_asyncio/watch/watch_test.py", line 26, in test_foo
    resource = kubernetes_asyncio.client.CoreV1Api().list_namespace
  File "/home/oliver/pocs/kubernetes_asyncio/kubernetes_asyncio/client/api/core_v1_api.py", line 33, in __init__
    api_client = ApiClient()
  File "/home/oliver/pocs/kubernetes_asyncio/kubernetes_asyncio/client/api_client.py", line 70, in __init__
    self.rest_client = rest.RESTClientObject(configuration)
  File "/home/oliver/pocs/kubernetes_asyncio/kubernetes_asyncio/client/rest.py", line 81, in __init__
    connector=connector

The problem does not materialise if I change

resource = kubernetes_asyncio.client.CoreV1Api().list_namespace

to

kubernetes_asyncio.client.CoreV1Api().list_namespace

because __del__ will be called immediately, presumably because Python garbage collects the object immediately since nothing ever references it.

As a first attempt to address the problem I added an explicit shutdown method to RESTClientObject and to all the classes that instantiate it, up to Watch. Users must then explicitly call Watch().stream(...).shutdown() or use the context manager I created for it. This feels inelegant and is a maintenance burden. Ideally, we could register a callback in RESTClientObject that triggers whenever the event loop shuts down, because that is when we need to close pending connections. Any ideas?

I should probably point out that users will not see the stack trace unless they run in debug mode, but I would still like a clean solution for this because I am petty 😄

ApiClient.deserialize should be a function, not a method

As far as I can tell, it does not use any attributes of the ApiClient.

The advantage of having it as a function would be that it is available without needing to construct an ApiClient (as the Watch currently does). Normally, constructing an ApiClient would not be a problem. However, if one is in a non-async context, one cannot properly clean the client up, and that causes a warning ("asyncio: ERROR: Unclosed client session").

I would be happy to provide a PR if you agree.

RuntimeError: Event loop is closed in __del__ of rest client

When running a REST client with
asyncio.wait(tasks, return_when="FIRST_EXCEPTION")
I get an unexpected exception in the __del__ method of the REST client that makes my program fail.

The exception is the pool manager closing on a closed loop. Here is a curated stack trace:

Exception ignored in: <bound method RESTClientObject.__del__ of <kubernetes_asyncio.client.rest.RESTClientObject object at 0x7f0edd67b198>>
File "kubernetes_asyncio/client/rest.py", line 84, in __del__
   asyncio.ensure_future(self.pool_manager.close())                                                                                                                                          
RuntimeError: Event loop is closed

Seems related to #25, but I'm not sure how I can work around this; any insight?

'ClientResponse' object has no attribute 'getheader'

This issue is seen when running my project with version 12.0.0 of kubernetes_asyncio; it worked with version 11.3.0.

resp = await v1.read_namespaced_pod_log(podname, namespace, container=container, **params)
  File "..\.venv\lib\site-packages\kubernetes_asyncio\client\api_client.py", line 191, in __call_api
    content_type = response_data.getheader('content-type')
AttributeError: 'ClientResponse' object has no attribute 'getheader'

content_type = response_data.getheader('content-type')

To fix it, I changed it to:

content_type = response_data.headers.get('content-type')

I use:
aiohttp==3.6.3
urllib3==1.25.11

Thanks for your efforts.

watch stream with timeout_seconds=0 does not work

Example code:

import logging
import traceback

from kubernetes_asyncio import client, watch
from kubernetes_asyncio.client.api_client import ApiClient


async def watch_job_event(namespace: str, callback):
    """Watch job events."""
    async with ApiClient() as api:
        batchv1 = client.BatchV1Api(api)
        w = watch.Watch()
        logging.info("begin to watch job events")
        async with w.stream(batchv1.list_namespaced_job, namespace, timeout_seconds=0) as stream:
            async for event in stream:
                evt, obj = event['type'], event['object']
                try:
                    await callback(evt, obj)
                except Exception:
                    logging.error(f"watch job event callback got exception: {traceback.format_exc()}")

Since I set timeout_seconds=0 here, I expect no timeout, but I get aiohttp's TimeoutError after 5 minutes. I checked the code and found that in watch.py line 147, when an asyncio.TimeoutError occurs, the watch only continues if "timeout_seconds" is not in the function's keywords. I would expect the same behavior when timeout_seconds is set to 0. :)

windows_options are not supported

We are trying to create Pods with Dask-Kubernetes (https://github.com/dask/dask-kubernetes), which uses kubernetes_asyncio to talk to k8s. We noticed that V1PodSecurityContext does not yet support windows_options. As a result, our Pods can't use Windows-specific security options.

  • Is there an upcoming release where this can be addressed?
  • Is there a workaround we can use short of re-generating through OpenAPI generator and building our local version?

TIA.

Relevant code.

Official client library:
https://github.com/kubernetes-client/python/blob/02ef5be4ecead787961037b236ae498944040b43/kubernetes/client/models/v1_pod_security_context.py#L41

kubernetes_asyncio:

ApiClient creates new threadpool with number of active threads even though it isn't used

In https://github.com/tomplus/kubernetes_asyncio/blob/master/kubernetes_asyncio/client/api_client.py#L69, a new ThreadPool is created per ApiClient. A lot of other objects (such as Watch) create an ApiClient per use, leading to lots of ThreadPools being created. ThreadPools aren't free - on my machine, merely instantiating a ThreadPool object (similar to what's happening here) starts 8 threads.

Very soon this becomes too many threads, and CPU usage / time for everything shoots up way high - bringing everything else to a halt.

In this client, the ThreadPool shouldn't be used at all, so we should remove it.

remove dependencies on synchronous libraries

Some libraries from requirements.txt are synchronous and should be removed.

urllib3>=1.19.1,!=1.21,<1.23  # MIT
google-auth>=1.0.1  # Apache-2.0
requests<=2.18.4 # Apache-2.0
requests-oauthlib # ISC
  • urllib3 is used in e2e tests to check the apiserver and can easily be replaced by aiohttp (#14)
  • requests and the *auth* libraries are used to authorize to GKE (#8)

Example in the README will not run because of "pods is forbidden"

I'm running the example in the README for the repo, and I cannot get it to work:

HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:anonymous\" cannot list pods at the cluster scope","reason":"Forbidden","details":{"kind":"pods"},"code":403}

If the client is supposed to read the default kube config (which I think it does), it somehow cannot authenticate correctly. Is this a bug? Or is my usage incorrect?

Incompatible Urllib warning during installation

When I install the requirements file with pip install -r requirements.txt I get this error:

requests 2.18.4 has requirement urllib3<1.23,>=1.21.1, but you'll have urllib3 1.23 which is incompatible.

Everything still works and all tests pass but I wonder if you know how to solve this one.

Reconnect watcher if Kubernetes sends an empty response

In watch.py, next() is set to stop iteration if the k8s server provides an empty response (i.e. when the server timeout expires). I believe this is undesirable.

line = line.decode('utf8')
# Stop the iterator if K8s sends an empty response. This happens when
# eg the supplied timeout has expired.
if line == '':
    raise StopAsyncIteration

Instead, the logic should be similar to the logic in the previous except block, which handles asyncio.TimeoutError:

except asyncio.TimeoutError:
    if 'timeout_seconds' not in self.func.keywords:
        self.resp.close()
        self.resp = None
        self.func.keywords['resource_version'] = self.resource_version
        continue

From the perspective of kubernetes_asyncio, it should not matter if the client or if the server has timed out; the watcher should attempt to reconnect unless the user-specified timeout (timeout_seconds) has elapsed.

I believe this is how the standard Python client handles it:

https://github.com/kubernetes-client/python-base/blob/d30f1e6fd4e2725aae04fa2f4982a4cfec7c682b/watch/watch.py#L141-L157

When iter_resp_lines(resp) returns a value which evaluates to False (e.g. the k8s api returns an empty response), iteration stops. Unless _stop has been set to True, the client will attempt to reconnect.

Raise exception when watch returns an error

In kubernetes_asyncio/watch/watch.py, there is a comment about raising an exception when a watch returns an error. Specifically:

# Something went wrong. A typical example would be that the user
# supplied a resource version that was too old. In that case K8s would
# not send a conventional ADDED/DELETED/... event but an error. Turn
# this error into a Python exception to save the user the hassle.
if js['type'].lower() == 'error':
    return js

However, this doesn't seem to have been implemented.
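
A sketch of what the missing behaviour could look like (assuming ApiException from kubernetes_asyncio.client.rest; the fields read from the error object follow the Kubernetes Status schema):

from kubernetes_asyncio.client.rest import ApiException

def raise_on_watch_error(js: dict) -> dict:
    # Turn an ERROR watch event into a Python exception instead of handing
    # the raw event back to the caller.
    if js['type'].lower() == 'error':
        status = js.get('object') or {}
        raise ApiException(status=status.get('code', 0), reason=status.get('message'))
    return js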

Unclear error message when the kubeconfig is not found

When KUBECONFIG is not set and the ~/.kube/config file does not exist, the load_kube_config() function fails with an unclear error message: TypeError: argument of type 'NoneType' is not iterable.

Here is the full error stack trace:

    await kubernetes_asyncio.config.load_kube_config()
  File "/nix/store/9g9h6h33wwpjcwqjxcpg74jpzi3rdmm6-python3-3.7.9-env/lib/python3.7/site-packages/kubernetes_asyncio/config/kube_config.py", line 553, in load_kube_config
    persist_config=persist_config)
  File "/nix/store/9g9h6h33wwpjcwqjxcpg74jpzi3rdmm6-python3-3.7.9-env/lib/python3.7/site-packages/kubernetes_asyncio/config/kube_config.py", line 521, in _get_kube_config_loader_for_yaml_file
    **kwargs)
  File "/nix/store/9g9h6h33wwpjcwqjxcpg74jpzi3rdmm6-python3-3.7.9-env/lib/python3.7/site-packages/kubernetes_asyncio/config/kube_config.py", line 144, in __init__
    self.set_active_context(active_context)
  File "/nix/store/9g9h6h33wwpjcwqjxcpg74jpzi3rdmm6-python3-3.7.9-env/lib/python3.7/site-packages/kubernetes_asyncio/config/kube_config.py", line 154, in set_active_context
    context_name = self._config['current-context']
  File "/nix/store/9g9h6h33wwpjcwqjxcpg74jpzi3rdmm6-python3-3.7.9-env/lib/python3.7/site-packages/kubernetes_asyncio/config/kube_config.py", line 405, in __getitem__
    v = self.safe_get(key)
  File "/nix/store/9g9h6h33wwpjcwqjxcpg74jpzi3rdmm6-python3-3.7.9-env/lib/python3.7/site-packages/kubernetes_asyncio/config/kube_config.py", line 401, in safe_get
    key in self.value):
TypeError: argument of type 'NoneType' is not iterable

resource_version set incorrectly while watching list_* functions

Watch objects maintain a resource_version field to allow the watch to resume at the last observed point after a timeout. However, there is a bug preventing this from working as intended. Instead of tracking the resource_version of the list (from ListMeta), the field is updated with the resource_version of the resource object included in each event:

# decode and save resource_version to continue watching
if hasattr(js['object'], 'metadata'):
    self.resource_version = js['object'].metadata.resource_version

When the library later attempts to use an object resource_version as a parameter to the list_* function the watcher is operating on, the list function will return a 410 Gone, since that resource_version refers to an object and not a list.

I believe there are two actions to take here:

  1. Enable watch bookmarks and update the value of resource_version using those bookmark responses instead of the resource_version values from the ordinary event objects (see the sketch after this list).
  2. Incorporate automatic retries for 410 errors. I'll open a separate issue for this.
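
A sketch of what the first action might look like (allow_watch_bookmarks is the upstream Kubernetes list parameter; whether the generated list_* functions and the Watch already expose everything needed is an assumption here):

from kubernetes_asyncio import client, watch

async def watch_with_bookmarks(v1: client.CoreV1Api):
    # BOOKMARK events carry a list-level resourceVersion that is safe to
    # resume from, unlike the per-object values tracked today.
    w = watch.Watch()
    async for event in w.stream(v1.list_namespace, allow_watch_bookmarks=True):
        if event['type'] == 'BOOKMARK':
            w.resource_version = event['object'].metadata.resource_version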

problems with long-living watch

I was trying to follow one of the basic examples, but replacing the list_namespaces function with read_node. The major difference between the two is that read_node requires a name argument.
The code looks like this:

import asyncio

from kubernetes_asyncio import client, config, watch


async def main():
    # Configs can be set in Configuration class directly or using helper
    # utility. If no argument provided, the config will be loaded from
    # default location.
    await config.load_kube_config()

    v1 = client.CoreV1Api()
    count = 20
    w = watch.Watch()
    my_node_name = 'some-node-that-actually-exists'
    async for event in w.stream(v1.read_node, my_node_name):
        print("{} Event: {} {}\n\labels={}\n\n".format(count, event['type'],
                event['object'].metadata.name, event['object'].metadata.labels))
        count -= 1
        if not count:
            w.stop()

    print("Ended.")


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    loop.close()

The run output is:

Traceback (most recent call last):
  File "async_test.py", line 28, in <module>
    loop.run_until_complete(main())
  File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 468, in run_until_complete
    return future.result()
  File "async_test.py", line 16, in main
    async for event in w.stream(v1.read_node, my_node_name):
  File "/Users/erares01/.virtualenvs/splatt-master/lib/python3.6/site-packages/kubernetes_asyncio/watch/watch.py", line 95, in __anext__
    return await self.next()
  File "/Users/erares01/.virtualenvs/splatt-master/lib/python3.6/site-packages/kubernetes_asyncio/watch/watch.py", line 101, in next
    self.resp = await self.func()
  File "/Users/erares01/.virtualenvs/splatt-master/lib/python3.6/site-packages/kubernetes_asyncio/client/api/core_v1_api.py", line 20285, in read_node
    (data) = self.read_node_with_http_info(name, **kwargs)  # noqa: E501
  File "/Users/erares01/.virtualenvs/splatt-master/lib/python3.6/site-packages/kubernetes_asyncio/client/api/core_v1_api.py", line 20318, in read_node_with_http_info
    " to method read_node" % key
TypeError: Got an unexpected keyword argument 'watch' to method read_node
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x1101f16a0>
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x11020b240>

Is this a bug or am I doing something wrong?

`client.Configuration.proxy` doesn't work.

I was trying to use this library with an http proxy.

However, if I set the 'proxy' configuration (as in the script below) and run the program, ka.client.CoreV1Api() raises a TypeError.

This is the script (almost the same as the README.md sample):

import asyncio
import kubernetes_asyncio as ka


async def main():
    await ka.config.load_kube_config()

    # below 3 lines are main change from `README.md` sample.
    conf = ka.client.Configuration()
    conf.proxy = 'http://your_http_proxy'
    ka.client.Configuration.set_default(conf)

    v1 = ka.client.CoreV1Api() # This raises exception.


    print("Listing pods with their IPs:")
    ret = await v1.list_pod_for_all_namespaces()

    for i in ret.items:
        print(i.status.pod_ip, i.metadata.namespace, i.metadata.name)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    loop.close()

The output is:

Exception ignored in: <bound method ClientSession.__del__ of <aiohttp.client.ClientSession object at 0x7fa97bb3fc88>>
Traceback (most recent call last):
  File "/home/ta-ono/.virtualenvs/dev-ka/lib/python3.6/site-packages/aiohttp/client.py", line 302, in __del__
    if not self.closed:
  File "/home/ta-ono/.virtualenvs/dev-ka/lib/python3.6/site-packages/aiohttp/client.py", line 916, in closed
    return self._connector is None or self._connector.closed
AttributeError: 'ClientSession' object has no attribute '_connector'
Traceback (most recent call last):
  File "test.py", line 26, in <module>
    loop.run_until_complete(main())
  File "/usr/lib64/python3.6/asyncio/base_events.py", line 484, in run_until_complete
    return future.result()
  File "test.py", line 16, in main
    v1 = ka.client.CoreV1Api()
  File "/home/ta-ono/programs/kubernetes_asyncio/kubernetes_asyncio/client/api/core_v1_api.py", line 32, in __init__
    api_client = ApiClient()
  File "/home/ta-ono/programs/kubernetes_asyncio/kubernetes_asyncio/client/api_client.py", line 72, in __init__
    self.rest_client = rest.RESTClientObject(configuration)
  File "/home/ta-ono/programs/kubernetes_asyncio/kubernetes_asyncio/client/rest.py", line 76, in __init__
    proxy=configuration.proxy
TypeError: __init__() got an unexpected keyword argument 'proxy'
Exception ignored in: <bound method RESTClientObject.__del__ of <kubernetes_asyncio.client.rest.RESTClientObject object at 0x7fa97bb3f470>>
Traceback (most recent call last):
  File "/home/ta-ono/programs/kubernetes_asyncio/kubernetes_asyncio/client/rest.py", line 84, in __del__
AttributeError: 'RESTClientObject' object has no attribute 'pool_manager'

I think the proxy argument should be passed to aiohttp.ClientSession.request() instead of to the aiohttp.ClientSession constructor.

I could run the above script by changing kubernetes_asyncio.client.rest.RESTClientObject.
The change is:

(dev-ka) [ta-ono@kubernetes_asyncio]$ git diff
diff --git a/kubernetes_asyncio/client/rest.py b/kubernetes_asyncio/client/rest.py
index 69bba34a..5a71b894 100644
--- a/kubernetes_asyncio/client/rest.py
+++ b/kubernetes_asyncio/client/rest.py
@@ -69,16 +69,12 @@ class RESTClientObject(object):
             ssl_context=ssl_context
         )

+        self.proxy = configuration.proxy
+
         # https pool manager
-        if configuration.proxy:
-            self.pool_manager = aiohttp.ClientSession(
-                connector=connector,
-                proxy=configuration.proxy
-            )
-        else:
-            self.pool_manager = aiohttp.ClientSession(
-                connector=connector
-            )
+        self.pool_manager = aiohttp.ClientSession(
+            connector=connector
+        )

     def __del__(self):
         asyncio.ensure_future(self.pool_manager.close())
@@ -168,6 +164,7 @@ class RESTClientObject(object):
                          declared content type."""
                 raise ApiException(status=0, reason=msg)

+        args['proxy'] = self.proxy
         r = await self.pool_manager.request(**args)
         if _preload_content:

However, this change doesn't pass pytest via an http proxy, so I'm not sure it is correct.

Thanks.
