kr8s-org / kr8s

A batteries-included Python client library for Kubernetes that feels familiar for folks who already know how to use kubectl

Home Page: https://kr8s.org

License: BSD 3-Clause "New" or "Revised" License


kr8s's People

Contributors: beanagrammer, benedikt-bartscher, bpartridge, calin-iorgulescu, droctothorpe, florianvazelle, geoffreyperrin, jacobtomlinson, kr8s-bot, leelavg, marcelofa, marcodlk, max-muoto, pre-commit-ci[bot], saghen, willgleich


kr8s's Issues

TypeError: 'NoneType' object is not iterable when loading kubeconfig

Which project are you reporting a bug for?

kr8s

What happened?

When loading my kubeconfig, I get a TypeError:

In [1]: import kr8s

In [2]: api = kr8s.api()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[2], line 1
----> 1 api = kr8s.api()

File ~/src/pc/pc-onboarding-pipelines/.direnv/python-3.10.10/lib/python3.10/site-packages/kr8s/_io.py:46, in run_sync.<locals>.wrapped(*args, **kwargs)
     44 with anyio.from_thread.start_blocking_portal() as portal:
     45     if inspect.iscoroutinefunction(coro):
---> 46         return portal.call(wrapped)
     47     if inspect.isasyncgenfunction(coro):
     48         return iter_over_async(wrapped)

File ~/src/pc/pc-onboarding-pipelines/.direnv/python-3.10.10/lib/python3.10/site-packages/anyio/from_thread.py:277, in BlockingPortal.call(self, func, *args)
    264 def call(
    265     self, func: Callable[..., Awaitable[T_Retval] | T_Retval], *args: object
    266 ) -> T_Retval:
    267     """
    268     Call the given function in the event loop thread.
    269
   (...)
    275
    276     """
--> 277     return cast(T_Retval, self.start_task_soon(func, *args).result())

File ~/mambaforge2/lib/python3.10/concurrent/futures/_base.py:458, in Future.result(self, timeout)
    456     raise CancelledError()
    457 elif self._state == FINISHED:
--> 458     return self.__get_result()
    459 else:
    460     raise TimeoutError()

File ~/mambaforge2/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
    401 if self._exception:
    402     try:
--> 403         raise self._exception
    404     finally:
    405         # Break a reference cycle with the exception in self._exception
    406         self = None

File ~/src/pc/pc-onboarding-pipelines/.direnv/python-3.10.10/lib/python3.10/site-packages/anyio/from_thread.py:217, in BlockingPortal._call_func(self, func, args, kwargs, future)
    214             else:
    215                 future.add_done_callback(callback)
--> 217             retval = await retval
    218 except self._cancelled_exc_class:
    219     future.cancel()

File ~/src/pc/pc-onboarding-pipelines/.direnv/python-3.10.10/lib/python3.10/site-packages/kr8s/asyncio/_api.py:44, in api(url, kubeconfig, serviceaccount, namespace, _asyncio)
     41         return await list(_cls._instances[thread_id].values())[0]
     42     return await _cls(**kwargs, bypass_factory=True)
---> 44 return await _f(
     45     url=url,
     46     kubeconfig=kubeconfig,
     47     serviceaccount=serviceaccount,
     48     namespace=namespace,
     49 )

File ~/src/pc/pc-onboarding-pipelines/.direnv/python-3.10.10/lib/python3.10/site-packages/kr8s/asyncio/_api.py:42, in api.<locals>._f(**kwargs)
     36 if (
     37     all(k is None for k in kwargs.values())
     38     and thread_id in _cls._instances
     39     and list(_cls._instances[thread_id].values())
     40 ):
     41     return await list(_cls._instances[thread_id].values())[0]
---> 42 return await _cls(**kwargs, bypass_factory=True)

File ~/src/pc/pc-onboarding-pipelines/.direnv/python-3.10.10/lib/python3.10/site-packages/kr8s/_api.py:59, in Api.__await__.<locals>.f()
     58 async def f():
---> 59     await self.auth
     60     return self

File ~/src/pc/pc-onboarding-pipelines/.direnv/python-3.10.10/lib/python3.10/site-packages/kr8s/_auth.py:48, in KubeAuth.__await__.<locals>.f()
     47 async def f():
---> 48     await self.reauthenticate()
     49     return self

File ~/src/pc/pc-onboarding-pipelines/.direnv/python-3.10.10/lib/python3.10/site-packages/kr8s/_auth.py:59, in KubeAuth.reauthenticate(self)
     57     await self._load_service_account()
     58 if self._kubeconfig is not False and not self.server:
---> 59     await self._load_kubeconfig()
     60 if not self.server:
     61     raise ValueError("Unable to find valid credentials")

File ~/src/pc/pc-onboarding-pipelines/.direnv/python-3.10.10/lib/python3.10/site-packages/kr8s/_auth.py:122, in KubeAuth._load_kubeconfig(self)
    119 args = self._user["exec"].get("args", [])
    120 env = os.environ.copy()
    121 env.update(
--> 122     **{e["name"]: e["value"] for e in self._user["exec"].get("env", [])}
    123 )
    124 data = json.loads(await check_output(command, *args, env=env))["status"]
    125 if "token" in data:

TypeError: 'NoneType' object is not iterable

Here's that section of my kubeconfig:

     exec:
       apiVersion: client.authentication.k8s.io/v1beta1
       args:
       - get-token
       - --login
       - azurecli
       - --server-id
       - 6dae42f8-4368-4678-94ff-3960e28e3630
       command: kubelogin
       env: null

Notice that env is null. I think that line should be updated to something like (self._user["exec"].get("env") or []), to handle both a missing env value and an env value that is present but null.
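The proposed one-liner can be sketched in isolation; env_entries is a hypothetical helper name used only for illustration:

```python
# Minimal sketch of the proposed fix: dict.get("env", []) only covers a
# *missing* key, while dict.get("env") or [] also covers an explicit null
# (which YAML's `env: null` becomes after parsing).
def env_entries(exec_section: dict) -> list:
    """Return the exec env list, treating both a missing and a null
    "env" key as an empty list."""
    return exec_section.get("env") or []

# "env" present but null, as in the kubeconfig above
assert env_entries({"command": "kubelogin", "env": None}) == []
# "env" missing entirely
assert env_entries({"command": "kubelogin"}) == []
# "env" populated normally
assert env_entries({"env": [{"name": "FOO", "value": "bar"}]}) == [
    {"name": "FOO", "value": "bar"}
]
```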

Anything else?

For now, you can work around this by deleting the null env from your kubeconfig.

When creating resources in a loop, Kubernetes sometimes returns 409s.

Which project are you reporting a bug for?

kr8s

What happened?

I think there's some kind of out-of-order issue happening, but I'm not really sure where to start tracking something like this down.

I'm deploying about 20 applications in a for loop, a simplified example more or less looks like this:

from random import choice

from loguru import logger  # logger assumed; the snippet uses loguru-style {} formatting

from starbug.models.kubernetes.infrastructure.namespace import Namespace
from starbug.models.kubernetes.infrastructure.postgres import Postgres
from starbug.models.kubernetes.infrastructure.rabbitmq import RabbitMQ
from starbug.models.kubernetes.infrastructure.redis import Redis

def main() -> None:
    word1 = choice(["walking", "running", "jumping", "skipping", "hopping"])
    word2 = choice(["red", "blue", "green", "yellow", "orange", "purple", "pink"])
    word3 = choice(["cat", "dog", "bird", "fish", "rabbit", "hamster", "mouse"])

    modules = []

    namespace_name = f"ait-{word1}-{word2}-{word3}"
    modules.append(Namespace(name=namespace_name).complete())
    modules.append(Postgres(namespace=namespace_name).complete())
    modules.append(RabbitMQ(namespace=namespace_name).complete())
    modules.append(Redis(namespace=namespace_name).complete())

    for module in modules:
        for component in module:
            logger.info("Deploying: {}, {}", component.name, component.kind)
            component.create()

Each time the main() function gets called, a new namespace gets created and then the subcomponents of each component get installed one at a time. Here is how we define an application, Redis for example:

"""Define a Redis Instance."""
from kr8s.objects import Deployment, Service, ServiceAccount


class Redis:
    """Define a Redis Instance."""

    def __init__(self, namespace: str, image: str | None = None) -> None:
        """Initialize the Redis class."""
        self.namespace = namespace
        self.image = image or "docker.io/redis:6"
        self.name = "redis"
        self.labels = {"app": "redis"}
        self.serviceaccount = ServiceAccount({
            "apiVersion": "v1",
            "kind": "ServiceAccount",
            "metadata": {
                "name": self.name,
                "namespace": self.namespace,
            },
        })
        self.service = Service({
            "apiVersion": "v1",
            "kind": "Service",
            "metadata": {
                "name": self.name,
                "namespace": self.namespace,
                "labels": self.labels,
            },
            "spec": {
                "ports": [{"port": 6379, "targetPort": 6379}],
                "selector": self.labels,
            },
        })
        self.deployment = Deployment({
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {
                "name": self.name,
                "namespace": self.namespace,
                "labels": self.labels,
            },
            "spec": {
                "replicas": 1,
                "selector": {
                    "matchLabels": self.labels,
                },
                "template": {
                    "metadata": {
                        "labels": self.labels,
                        "annotations": {
                            "kubectl.kubernetes.io/default-container": self.name,
                        },
                    },
                    "spec": {
                        "serviceAccountName": self.name,
                        "containers": [
                            {
                                "name": self.name,
                                "image": self.image,
                                "ports": [{"containerPort": 6379}],
                            },
                        ],
                    },
                },
            },
        })

    def complete(self) -> tuple[ServiceAccount, Service, Deployment]:
        """Return all deployable objects as a tuple."""
        return (self.serviceaccount, self.service, self.deployment)

This works really well, most of the time. Unfortunately, Kubernetes sometimes returns a 409 stating that an object already exists. That should be impossible, since we created the namespace these components are being installed into only a few milliseconds earlier. I modified the main() function with some janky log messages and a pause to try to determine why this is happening.

    for module in modules:
        for component in module:
            try:
                logger.info("Deploying: {}, {}", component.name, component.kind)
                component.create()
            except HTTPStatusError as error:
                if error.response.status_code == 409:  # noqa: PLR2004
                    logger.warning("Component Conflict, retrying: {}, {}", component.name, component.kind)
                    logger.info(error.response.text)
                    input("Press Enter to continue...")

    Namespace(name=namespace_name).namespace.delete()

Running this in a while True loop, I'll eventually get a message like:

2023-10-12 17:32:16.811 | INFO     | __main__:main:65 - Deploying: eos-migrator, Job
2023-10-12 17:32:16.887 | WARNING  | __main__:main:70 - Component Conflict, retrying: eos-migrator, Job
2023-10-12 17:32:16.888 | INFO     | __main__:main:71 - {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"jobs.batch \"eos-migrator\" already exists","reason":"AlreadyExists","details":{"name":"eos-migrator","group":"batch","kind":"jobs"},"code":409}

And if I look in that namespace, I can confirm that the job it's complaining about does indeed exist. So it's almost as if the library performed some operation out of order and is now raising a failure.

I'm not sure what additional information I can provide to help track down the root cause of this, but it seems just running a large number of deployments in a for loop (synchronously) is enough to trigger this eventually.
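Until the root cause is found, one way to make such a loop robust is to treat a 409 AlreadyExists as idempotent success, since the loop itself just asked for the object to exist. This is a workaround sketch, not kr8s behaviour; HTTPStatusError below is a stand-in class mimicking the status_code attribute of httpx.HTTPStatusError from the snippets above:

```python
class HTTPStatusError(Exception):
    """Stand-in for httpx.HTTPStatusError, carrying only a status code."""

    def __init__(self, status_code: int):
        super().__init__(f"HTTP {status_code}")
        self.status_code = status_code


def create_idempotent(create, log=print) -> bool:
    """Call create(); return True if the object was created, False if it
    already existed. Any other HTTP error is re-raised."""
    try:
        create()
        return True
    except HTTPStatusError as error:
        if error.status_code != 409:
            raise
        # The object already exists; since we just asked the API server to
        # create it, treat this as success rather than a hard failure.
        log("already exists, continuing")
        return False


def failing_create():
    raise HTTPStatusError(409)


assert create_idempotent(lambda: None) is True
assert create_idempotent(failing_create) is False
```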

Anything else?

No response

Get Pods for Deployment

Which project are you requesting an enhancement for?

kr8s

What do you need?

It would be really helpful if a Deployment object had a method you could call to get all of the Pods associated with it.

i.e.:

import kr8s

deployment = kr8s.objects.Deployment.get("foo")
pods = deployment.pods()  # Uses selector to get a list of associated Pods
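Until such a method exists, a possible interim workaround is to derive the selector from the Deployment's own spec.selector.matchLabels and pass it to a label-selector query (assuming, as kubectl's -l flag does, that the get call accepts a label selector). match_label_selector is a hypothetical helper name:

```python
def match_label_selector(deployment_raw: dict) -> str:
    """Build a "key=value,key2=value2" label selector string from a
    Deployment's spec.selector.matchLabels."""
    labels = deployment_raw["spec"]["selector"]["matchLabels"]
    return ",".join(f"{k}={v}" for k, v in sorted(labels.items()))


# e.g. pods = kr8s.get("pods", label_selector=match_label_selector(deployment.raw))
assert match_label_selector(
    {"spec": {"selector": {"matchLabels": {"app": "foo", "tier": "web"}}}}
) == "app=foo,tier=web"
```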

Possible copy-paste error in license/docs attribution?

Which project are you reporting a bug for?

kr8s

What happened?

Hi Jacob 👋, nice project! While poking through the docs and source, I noticed two locations where the author attribution looks like a potential copy-paste error (ported over from dask-kubernetes).

Not sure if these are correct or not, but figured I'd raise an issue to check.

(edit: looking through more of the source, I'm now guessing this attribution was intentional, possibly to cite code copied out of dask-kubernetes? If so, apologies for the noise)

Anything else?

No response

Expose API client methods without a client object

Which project are you requesting an enhancement for?

kr8s

What do you need?

I've been thinking about how we allow the object API to work without having to create an instance of the Api class first.

from kr8s.objects import Pod

spec = ...
pod = Pod(spec)
pod.create()

Under the hood we check whether Pod has been passed an API client instance, and if not we create one at runtime. The goal here is to make things as simple as kubectl in terms of configuration and authentication, with sensible defaults used when configuration is left implicit.

However, for the client API we still need to create an instance and then call methods on it.

import kr8s

api = kr8s.api()
pods = api.get("pods")

I wonder if we should extend the same simplicity here too. We could add some utility methods that match the names and call signatures of the methods on the client object (but also optionally take a client instance) and do the same lookup.

import kr8s

pods = kr8s.get("pods")

I would expect the implementation to look something like this (but with all the asyncio magic and wrapping).

from typing import List

import kr8s


def get(*args, api=None, **kwargs) -> List[object]:
    if api is None:
        api = kr8s.api()
    return api.get(*args, **kwargs)

self-signed certificate in certificate chain

Which project are you reporting a bug for?

kr8s

What happened?

I'm running into the following error:

SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1129)

It's preventing me from using dask-kubernetes, unfortunately.

I'll post the full traceback below.

The code required to reproduce the error is:

import kr8s

pods = kr8s.get("pods")

I validated with kr8s v0.8.18 and the tip of main.

Here's what my kube config looks like (sanitized) in case the presence or absence of specific keys is helpful:

apiVersion:
clusters:
- cluster:
    certificate-authority: certs/obfuscated/k8s-ca.crt
    server: 
  name: 
contexts:
- context:
    cluster:
    namespace:
    user:
  name: 
current-context: 
kind: Config
preferences: {}
users:
- name: 
  user:
    auth-provider:
      config:
        client-id:
        client-secret: 
        id-token:
        idp-issuer-url: 
        refresh-token: 
      name: oidc

When I change this line to return True, the problem goes away.

SSLCertVerificationError                  Traceback (most recent call last)
/Users/obfuscated/code/obfuscated/obfuscated/ignore/lab.ipynb Cell 3 line 3
      1 import kr8s
----> 3 pods = kr8s.get("pods")
      4 print(pods)

File ~/code/kr8s/kr8s/_io.py:46, in run_sync.<locals>.wrapped(*args, **kwargs)
     44 with anyio.from_thread.start_blocking_portal() as portal:
     45     if inspect.iscoroutinefunction(coro):
---> 46         return portal.call(wrapped)
     47     if inspect.isasyncgenfunction(coro):
     48         return iter_over_async(wrapped)

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/anyio/from_thread.py:277, in BlockingPortal.call(self, func, *args)
    264 def call(
    265     self, func: Callable[..., Awaitable[T_Retval] | T_Retval], *args: object
    266 ) -> T_Retval:
    267     """
    268     Call the given function in the event loop thread.
    269 
   (...)
    275 
    276     """
--> 277     return cast(T_Retval, self.start_task_soon(func, *args).result())

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/concurrent/futures/_base.py:446, in Future.result(self, timeout)
    444     raise CancelledError()
    445 elif self._state == FINISHED:
--> 446     return self.__get_result()
    447 else:
    448     raise TimeoutError()

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/concurrent/futures/_base.py:391, in Future.__get_result(self)
    389 if self._exception:
    390     try:
--> 391         raise self._exception
    392     finally:
    393         # Break a reference cycle with the exception in self._exception
    394         self = None

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/anyio/from_thread.py:217, in BlockingPortal._call_func(self, func, args, kwargs, future)
    214             else:
    215                 future.add_done_callback(callback)
--> 217             retval = await retval
    218 except self._cancelled_exc_class:
    219     future.cancel()

File ~/code/kr8s/kr8s/asyncio/_helpers.py:23, in get(kind, namespace, label_selector, field_selector, as_object, api, _asyncio, *names, **kwargs)
     21 if api is None:
     22     api = await _api(_asyncio=_asyncio)
---> 23 return await api._get(
     24     kind,
     25     *names,
     26     namespace=namespace,
     27     label_selector=label_selector,
     28     field_selector=field_selector,
     29     as_object=as_object,
     30     **kwargs,
     31 )

File ~/code/kr8s/kr8s/_api.py:332, in Api._get(self, kind, namespace, label_selector, field_selector, as_object, *names, **kwargs)
    328     group, version = as_object.version.split("/")
    329     headers[
    330         "Accept"
    331     ] = f"application/json;as={as_object.kind};v={version};g={group}"
--> 332 async with self._get_kind(
    333     kind,
    334     namespace=namespace,
    335     label_selector=label_selector,
    336     field_selector=field_selector,
    337     headers=headers or None,
    338     **kwargs,
    339 ) as (obj_cls, response):
    340     resourcelist = response.json()
    341     if (
    342         as_object
    343         and "kind" in resourcelist
    344         and resourcelist["kind"] == as_object.kind
    345     ):

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/contextlib.py:181, in _AsyncGeneratorContextManager.__aenter__(self)
    179 del self.args, self.kwds, self.func
    180 try:
--> 181     return await self.gen.__anext__()
    182 except StopAsyncIteration:
    183     raise RuntimeError("generator didn't yield") from None

File ~/code/kr8s/kr8s/_api.py:261, in Api._get_kind(self, kind, namespace, label_selector, field_selector, params, watch, **kwargs)
    259 params = params or None
    260 obj_cls = get_class(kind, _asyncio=self._asyncio)
--> 261 async with self.call_api(
    262     method="GET",
    263     url=kind,
    264     version=obj_cls.version,
    265     namespace=namespace if obj_cls.namespaced else None,
    266     params=params,
    267     **kwargs,
    268 ) as response:
    269     yield obj_cls, response

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/contextlib.py:181, in _AsyncGeneratorContextManager.__aenter__(self)
    179 del self.args, self.kwds, self.func
    180 try:
--> 181     return await self.gen.__anext__()
    182 except StopAsyncIteration:
    183     raise RuntimeError("generator didn't yield") from None

File ~/code/kr8s/kr8s/_api.py:132, in Api.call_api(self, method, version, base, namespace, url, raise_for_status, stream, **kwargs)
    130         yield response
    131 else:
--> 132     response = await self._session.request(**kwargs)
    133     if raise_for_status:
    134         response.raise_for_status()

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/httpx/_client.py:1530, in AsyncClient.request(self, method, url, content, data, files, json, params, headers, cookies, auth, follow_redirects, timeout, extensions)
   1501 """
   1502 Build and send a request.
   1503 
   (...)
   1515 [0]: /advanced/#merging-of-configuration
   1516 """
   1517 request = self.build_request(
   1518     method=method,
   1519     url=url,
   (...)
   1528     extensions=extensions,
   1529 )
-> 1530 return await self.send(request, auth=auth, follow_redirects=follow_redirects)

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/httpx/_client.py:1617, in AsyncClient.send(self, request, stream, auth, follow_redirects)
   1609 follow_redirects = (
   1610     self.follow_redirects
   1611     if isinstance(follow_redirects, UseClientDefault)
   1612     else follow_redirects
   1613 )
   1615 auth = self._build_request_auth(request, auth)
-> 1617 response = await self._send_handling_auth(
   1618     request,
   1619     auth=auth,
   1620     follow_redirects=follow_redirects,
   1621     history=[],
   1622 )
   1623 try:
   1624     if not stream:

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/httpx/_client.py:1645, in AsyncClient._send_handling_auth(self, request, auth, follow_redirects, history)
   1642 request = await auth_flow.__anext__()
   1644 while True:
-> 1645     response = await self._send_handling_redirects(
   1646         request,
   1647         follow_redirects=follow_redirects,
   1648         history=history,
   1649     )
   1650     try:
   1651         try:

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/httpx/_client.py:1682, in AsyncClient._send_handling_redirects(self, request, follow_redirects, history)
   1679 for hook in self._event_hooks["request"]:
   1680     await hook(request)
-> 1682 response = await self._send_single_request(request)
   1683 try:
   1684     for hook in self._event_hooks["response"]:

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/httpx/_client.py:1719, in AsyncClient._send_single_request(self, request)
   1714     raise RuntimeError(
   1715         "Attempted to send an sync request with an AsyncClient instance."
   1716     )
   1718 with request_context(request=request):
-> 1719     response = await transport.handle_async_request(request)
   1721 assert isinstance(response.stream, AsyncByteStream)
   1722 response.request = request

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/httpx/_transports/default.py:353, in AsyncHTTPTransport.handle_async_request(self, request)
    340 req = httpcore.Request(
    341     method=request.method,
    342     url=httpcore.URL(
   (...)
    350     extensions=request.extensions,
    351 )
    352 with map_httpcore_exceptions():
--> 353     resp = await self._pool.handle_async_request(req)
    355 assert isinstance(resp.stream, typing.AsyncIterable)
    357 return Response(
    358     status_code=resp.status,
    359     headers=resp.headers,
    360     stream=AsyncResponseStream(resp.stream),
    361     extensions=resp.extensions,
    362 )

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/httpcore/_async/connection_pool.py:262, in AsyncConnectionPool.handle_async_request(self, request)
    260     with AsyncShieldCancellation():
    261         await self.response_closed(status)
--> 262     raise exc
    263 else:
    264     break

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/httpcore/_async/connection_pool.py:245, in AsyncConnectionPool.handle_async_request(self, request)
    242         raise exc
    244 try:
--> 245     response = await connection.handle_async_request(request)
    246 except ConnectionNotAvailable:
    247     # The ConnectionNotAvailable exception is a special case, that
    248     # indicates we need to retry the request on a new connection.
   (...)
    252     # might end up as an HTTP/2 connection, but which actually ends
    253     # up as HTTP/1.1.
    254     async with self._pool_lock:
    255         # Maintain our position in the request queue, but reset the
    256         # status so that the request becomes queued again.

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/httpcore/_async/http_proxy.py:299, in AsyncTunnelHTTPConnection.handle_async_request(self, request)
    293 kwargs = {
    294     "ssl_context": ssl_context,
    295     "server_hostname": self._remote_origin.host.decode("ascii"),
    296     "timeout": timeout,
    297 }
    298 async with Trace("start_tls", logger, request, kwargs) as trace:
--> 299     stream = await stream.start_tls(**kwargs)
    300     trace.return_value = stream
    302 # Determine if we should be using HTTP/1.1 or HTTP/2

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/httpcore/_backends/anyio.py:78, in AnyIOStream.start_tls(self, ssl_context, server_hostname, timeout)
     76     except Exception as exc:  # pragma: nocover
     77         await self.aclose()
---> 78         raise exc
     79 return AnyIOStream(ssl_stream)

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/httpcore/_backends/anyio.py:69, in AnyIOStream.start_tls(self, ssl_context, server_hostname, timeout)
     67 try:
     68     with anyio.fail_after(timeout):
---> 69         ssl_stream = await anyio.streams.tls.TLSStream.wrap(
     70             self._stream,
     71             ssl_context=ssl_context,
     72             hostname=server_hostname,
     73             standard_compatible=False,
     74             server_side=False,
     75         )
     76 except Exception as exc:  # pragma: nocover
     77     await self.aclose()

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/anyio/streams/tls.py:123, in TLSStream.wrap(cls, transport_stream, server_side, hostname, ssl_context, standard_compatible)
    113 ssl_object = ssl_context.wrap_bio(
    114     bio_in, bio_out, server_side=server_side, server_hostname=hostname
    115 )
    116 wrapper = cls(
    117     transport_stream=transport_stream,
    118     standard_compatible=standard_compatible,
   (...)
    121     _write_bio=bio_out,
    122 )
--> 123 await wrapper._call_sslobject_method(ssl_object.do_handshake)
    124 return wrapper

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/site-packages/anyio/streams/tls.py:131, in TLSStream._call_sslobject_method(self, func, *args)
    129 while True:
    130     try:
--> 131         result = func(*args)
    132     except ssl.SSLWantReadError:
    133         try:
    134             # Flush any pending writes first

File /opt/homebrew/Caskroom/miniconda/base/envs/3.9/lib/python3.9/ssl.py:945, in SSLObject.do_handshake(self)
    943 def do_handshake(self):
    944     """Start the SSL/TLS handshake."""
--> 945     self._sslobj.do_handshake()

SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1129)

Anything else?

No response

Refresh expired OIDC tokens

In #126 I added support for authenticating with an OIDC token. However, I did not implement automatically refreshing that token.

This issue tracks adding token refreshing.

Migrate port forwarding from `aiohttp` to `httpx`

In #103 kr8s._api.Api.call_api was refactored to use httpx instead of aiohttp in order to support using kr8s with trio.

It proved difficult to migrate the websocket behaviour that was used in port forwards so a new method called kr8s._api.Api.open_websocket was introduced which continues to use aiohttp to open a websocket. This means that port forwarding currently does not work when using trio.

It would be good to migrate this over to use some anyio compatible websocket implementation such as httpx-ws or anysocks so that trio users have the same set of features as asyncio users.

kr8s/kr8s/_api.py

Lines 179 to 185 in 26f6c8b

async with aiohttp.ClientSession(
    base_url=self.auth.server,
    headers=headers,
    auth=userauth,
) as session:
    async with session.ws_connect(**kwargs) as response:
        yield response

We would also need to replace the asyncio server that the port forward starts with an anyio server.

kr8s/kr8s/_portforward.py

Lines 127 to 129 in 26f6c8b

self.server = await asyncio.start_server(
    self._sync_sockets, port=self.local_port, host="0.0.0.0"
)


Secret and ConfigMap should expose "data" attribute

Which project are you requesting an enhancement for?

kr8s

What do you need?

The "Secret" and "ConfigMap" objects are currently missing the "data" attribute.
It's only possible to access the fields via secret.raw["data"].items() or configmap.raw["data"].items().
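The current workaround can be sketched as a small helper around the raw mapping; decode_secret_data is a hypothetical name, and the base64 decoding applies to Secret values only (ConfigMap data values are plain strings):

```python
import base64


def decode_secret_data(raw: dict) -> dict:
    """Decode the base64-encoded values of a Secret's raw "data" field."""
    return {
        key: base64.b64decode(value).decode()
        for key, value in raw.get("data", {}).items()
    }


# e.g. decode_secret_data(secret.raw) for a kr8s Secret object
assert decode_secret_data(
    {"data": {"password": base64.b64encode(b"hunter2").decode()}}
) == {"password": "hunter2"}
```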

Add CI for kubectl-ng

Currently we only test kr8s in GitHub Actions. It would be good to add another workflow to test kubectl-ng.

Add method to allow resources to adopt other resources

Which project are you requesting an enhancement for?

kr8s

What do you need?

Kubernetes resources can have owner references that point to a parent resource. When the parent is deleted the child resource gets garbage collected by the Kubernetes controller.

An example of this among the built-in resources: Pods created by a Deployment are deleted when the Deployment is deleted, via an owner reference.

However, when building operators and custom controllers it can be useful to be able to set these references.

We should add a method to the APIObject class to simplify patching in an owner reference.

import kr8s

child = kr8s.objects.Foo(...)
parent = kr8s.objects.Bar(...)

child.set_owner(parent)  # patches the child resource with an owner reference to the parent
# or
parent.adopt(child)  # patches the child resource with an owner reference to the parent
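The patch such a method would apply is well defined by the Kubernetes API: an entry in the child's metadata.ownerReferences pointing at the parent's kind, name, and uid. A sketch of building that patch body (owner_reference_patch is a hypothetical helper, not kr8s API):

```python
def owner_reference_patch(parent_raw: dict) -> dict:
    """Build a merge patch adding the parent as an owner of the patched
    resource, so the child is garbage collected with the parent."""
    return {
        "metadata": {
            "ownerReferences": [
                {
                    "apiVersion": parent_raw["apiVersion"],
                    "kind": parent_raw["kind"],
                    "name": parent_raw["metadata"]["name"],
                    "uid": parent_raw["metadata"]["uid"],
                    # Prevent the child from being deleted out from under
                    # the parent's controller.
                    "blockOwnerDeletion": True,
                    "controller": True,
                }
            ]
        }
    }


parent = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "bar", "uid": "1234-abcd"},
}
patch = owner_reference_patch(parent)
assert patch["metadata"]["ownerReferences"][0]["name"] == "bar"
```

A set_owner/adopt method could then apply this with something like child.patch(owner_reference_patch(parent.raw)), assuming the existing patch support on APIObject.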

Stream a pod's logs.

Which project are you requesting an enhancement for?

kr8s

What do you need?

Hello.
Is it possible to get pod logs dynamically?
Like:

for line in kubernetes.client.CoreV1Api.read_namespaced_pod_log(
        name="podname",
        namespace="podnamespace",
        follow=True,
        _preload_content=False,
        _request_timeout=timedelta(hours=24),
        ).stream():
    if pod_status_phase == "Completed":
        break
    else:
        print(line.decode(), end="")

kr8s's pod.logs() currently resolves to a single str object instead.
Thanks,
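The core mechanics of what's being requested is re-chunking an HTTP byte stream into decoded lines as they arrive. A generic sketch of that line-streaming step (stream_lines is an illustrative helper, not kr8s code, shown here over an in-memory list of chunks):

```python
def stream_lines(chunks):
    """Re-chunk a byte stream into decoded lines, the way a follow-mode
    log reader would, yielding each line as soon as it is complete."""
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            yield line.decode()
    if buffer:
        # Flush any trailing partial line once the stream ends.
        yield buffer.decode()


assert list(stream_lines([b"hel", b"lo\nwor", b"ld\n"])) == ["hello", "world"]
```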

Unable to access AKS Clusters

Which project are you reporting a bug for?

kr8s

What happened?

After installing kr8s on Python 3.11.5 and attempting to use the example from the documentation:

import kr8s

pods = kr8s.get("pods")

I get the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/kr8s/_io.py", line 46, in wrapped
    return portal.call(wrapped)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/anyio/from_thread.py", line 261, in call
    return cast(T_Retval, self.start_task_soon(func, *args).result())
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cpressland/.pyenv/versions/3.11.5/lib/python3.11/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/Users/cpressland/.pyenv/versions/3.11.5/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/anyio/from_thread.py", line 198, in _call_func
    retval = await retval
             ^^^^^^^^^^^^
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/kr8s/asyncio/_helpers.py", line 22, in get
    api = await _api(_asyncio=_asyncio)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/kr8s/asyncio/_api.py", line 44, in api
    return await _f(
           ^^^^^^^^^
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/kr8s/asyncio/_api.py", line 42, in _f
    return await _cls(**kwargs, bypass_factory=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/kr8s/_api.py", line 59, in f
    await self.auth
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/kr8s/_auth.py", line 48, in f
    await self.reauthenticate()
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/kr8s/_auth.py", line 59, in reauthenticate
    await self._load_kubeconfig()
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/kr8s/_auth.py", line 122, in _load_kubeconfig
    **{e["name"]: e["value"] for e in self._user["exec"].get("env", [])}
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not iterable

The issue might be coming from the fact that kubectl auth is provided by kubelogin.

Anything else?

Using the kubectl proxy solution documented here is only a partial workaround, as follow-up commands raise a number of other exceptions:

>>> import kr8s
>>> client = kr8s.api(url="http://127.0.0.1:8000")
>>> pods = kr8s.get("pods")
>>> pods[0].get()
TypeError: 'NoneType' object is not iterable
>>> pods[0].get()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)

Handle self signed certificates without a CA

Which project are you reporting a bug for?

kr8s

What happened?

In dask/dask-kubernetes#803 we see a case where the Kubernetes API server is using a self-signed certificate but the Kubernetes config only uses a token and does not have any CA or cert chain configured so we can't verify the identity of the server.

apiVersion: v1
clusters:
- cluster:
    server: https://rancher.server/k8s/clusters/...
  name: dev
contexts:
- context:
    cluster: dev
    namespace: dask-operator
    user: dev
  name: dev
current-context: dev
kind: Config
preferences: {}
users:
- name: dev
  user:
    token: kubeconfig-u-...

This results in httpx raising an SSLCertVerificationError exception.

In a case where we have a token and no certificate information we should set SSL verification to False.
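A sketch of the proposed fallback (`choose_verify` is a hypothetical helper; the kubeconfig key names used are the standard ones):

```python
def choose_verify(user: dict, cluster: dict):
    """Decide the SSL verification setting from kubeconfig user/cluster dicts."""
    cert_keys_cluster = ("certificate-authority", "certificate-authority-data")
    cert_keys_user = ("client-certificate", "client-certificate-data")
    has_certs = any(k in cluster for k in cert_keys_cluster) or any(
        k in user for k in cert_keys_user
    )
    if "token" in user and not has_certs:
        # Token-only config with no CA or client cert: there is no way to
        # verify the server, so skip verification rather than fail.
        return False
    return True  # fall back to default system verification


# A config shaped like the report above: a token and nothing else.
assert choose_verify({"token": "token-value"}, {"server": "https://example"}) is False
```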

Anything else?

cc @dbalabka

Use sync API in event loop

Currently the sync API cannot be used in an asyncio event loop. We have an xfailing test which was added in #40 to demonstrate this.

@pytest.mark.xfail(reason="Cannot run nested event loops", raises=RuntimeError)
async def test_version_sync_in_async():
    kubernetes = kr8s.api()
    version = kubernetes.version()
    assert "major" in version

This situation will come up a lot in Jupyter as there is always an event loop running but many users may expect to be able to use the sync API and will not consider asyncio at all.

We need to be able to support the sync API (which uses asyncio.run under the hood) in an event loop.
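One way to support this (a sketch of the general technique, not kr8s's code) is to run all coroutines on a dedicated background thread with its own event loop, so the caller's running loop is never re-entered:

```python
import asyncio
import threading


class _Portal:
    """Run coroutines on a dedicated background event loop, so sync wrappers
    work even when the calling thread already has a running loop (e.g. Jupyter)."""

    def __init__(self):
        self.loop = asyncio.new_event_loop()
        threading.Thread(target=self.loop.run_forever, daemon=True).start()

    def run(self, coro):
        # Submit to the background loop and block the *calling* thread only.
        return asyncio.run_coroutine_threadsafe(coro, self.loop).result()


portal = _Portal()


async def version():
    return {"major": "1"}


# Works from plain sync code...
assert "major" in portal.run(version())


# ...and from inside an already-running event loop:
async def main():
    return portal.run(version())


assert "major" in asyncio.run(main())
```

Blocking the calling thread while it holds a running loop is still a trade-off, but the coroutine completes because it executes on the background loop.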

kubeconfig insecure-skip-tls-verify option is ignored when using user-token-based authentication

Which project are you reporting a bug for?

kr8s

What happened?

I think this is an authentication bug, but it may be my misunderstanding. Regardless, kr8s behaves differently to kubectl.

In my development environment, my kubeconfig file sets the insecure-skip-tls-verify flag and uses a pregenerated token:

- name: clustername
  cluster:
    insecure-skip-tls-verify: true
    server: https://myserverurl:6443

With this set, I can retrieve pods for example with
kubectl --kubeconfig=kubeconfig get pods --namespace=mynamespace --context=mycontext

but with kr8s I get a traceback. Here are the last few lines:

  File "/usr/local/lib/python3.11/site-packages/httpcore/_async/connection.py", line 159, in _connect
    stream = await stream.start_tls(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpcore/_backends/anyio.py", line 78, in start_tls
    raise exc
  File "/usr/local/lib/python3.11/site-packages/httpcore/_backends/anyio.py", line 69, in start_tls
    ssl_stream = await anyio.streams.tls.TLSStream.wrap(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/anyio/streams/tls.py", line 125, in wrap
    await wrapper._call_sslobject_method(ssl_object.do_handshake)
  File "/usr/local/lib/python3.11/site-packages/anyio/streams/tls.py", line 133, in _call_sslobject_method
    result = func(*args)
             ^^^^^^^^^^^
  File "/usr/local/lib/python3.11/ssl.py", line 979, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)

If I force line 80 of _api.py

verify=await self.auth.ssl_context(),

to return False, which is what I expect the call to ssl_context() to return, things work as expected.

I think the ssl_context() method

kr8s/kr8s/_auth.py

Lines 67 to 74 in bd6c31d

async with self.__auth_lock:
    if (
        not self.client_key_file
        and not self.client_cert_file
        and not self.server_ca_file
    ):
        # If no cert information is provided, fall back to default verification
        return True

should be checking self.token, but I don't know the correct logic or I'm misunderstanding.
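Whatever the token/cert logic ends up being, honouring the flag first would match kubectl; a sketch (`ssl_verify` is a hypothetical helper):

```python
def ssl_verify(cluster: dict):
    """Proposed ordering: check the cluster's insecure-skip-tls-verify flag
    before any decision based on cert or token presence."""
    if cluster.get("insecure-skip-tls-verify"):
        return False  # user explicitly opted out of verification
    return True  # fall through to cert-based / default verification


assert ssl_verify({"insecure-skip-tls-verify": True, "server": "https://myserverurl:6443"}) is False
```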

Anything else?

No response

Sync iterators may create multiple threads

When we call a sync iterator this is handled by kr8s._io.run_sync, which creates a new thread and event loop with anyio.from_thread.start_blocking_portal. It then inspects the callable and, if it is an async generator function, calls kr8s._io.iter_over_async.

kr8s/kr8s/_io.py

Lines 44 to 48 in 4223da8

with anyio.from_thread.start_blocking_portal() as portal:
    if inspect.iscoroutinefunction(coro):
        return portal.call(wrapped)
    if inspect.isasyncgenfunction(coro):
        return iter_over_async(wrapped)

However, it looks like iter_over_async also calls anyio.from_thread.start_blocking_portal.

kr8s/kr8s/_io.py

Lines 57 to 72 in 4223da8

def iter_over_async(agen: AsyncGenerator) -> Generator:
    ait = agen().__aiter__()

    async def get_next() -> Tuple[bool, Any]:
        try:
            obj = await ait.__anext__()
            return False, obj
        except StopAsyncIteration:
            return True, None

    with anyio.from_thread.start_blocking_portal() as portal:
        while True:
            done, obj = portal.call(get_next)
            if done:
                break
            yield obj

It would be good to investigate whether the same portal is reused or if a second thread/loop is created.
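For illustration, the same idea without a second portal can be sketched in plain asyncio, driving the whole iteration from one loop (this is a sketch of the technique, not kr8s's implementation, which uses anyio portals):

```python
import asyncio


def iter_over_async(agen_func):
    """Drive an async generator from sync code using a single event loop for
    the whole iteration, rather than a fresh portal/thread per step."""
    loop = asyncio.new_event_loop()
    try:
        ait = agen_func().__aiter__()
        while True:
            try:
                yield loop.run_until_complete(ait.__anext__())
            except StopAsyncIteration:
                break
    finally:
        loop.close()


async def numbers():
    for i in range(3):
        yield i


assert list(iter_over_async(numbers)) == [0, 1, 2]
```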

Make API construction and utilities async

Currently, the creation of the API client is synchronous and all of the authentication it does uses blocking IO (see #50).

from kr8s.asyncio import api

# Constructing the client doesn't use await despite making IO calls
client = api()

# Reauthenticating also doesn't await
client.auth.reauthenticate()

For the async API client I would expect these to be async. But for the sync API they should work as they do currently.
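One common pattern for this (a sketch of the pattern, not kr8s's actual API) is an async classmethod factory, so that construction can await authentication IO:

```python
import asyncio


class Api:
    """Sketch of an API client whose construction awaits authentication."""

    def __init__(self):
        self.authenticated = False

    async def _authenticate(self):
        await asyncio.sleep(0)  # stand-in for async kubeconfig/exec-plugin IO
        self.authenticated = True

    @classmethod
    async def create(cls):
        self = cls()
        await self._authenticate()
        return self


async def main():
    client = await Api.create()
    return client.authenticated


assert asyncio.run(main()) is True
```

The sync client can then wrap the same factory with its sync machinery, preserving the current blocking behaviour.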

Utility functions to generate common resources

Which project are you requesting an enhancement for?

kr8s

What do you need?

This comment dask/dask-kubernetes#805 (comment) got me thinking. Could/should we provide some utility functions that generate resources using sensible defaults?

The goal: for the 90% of resources that follow a standard convention, can we auto-generate the boilerplate?

E.g. could we have something like this:

from kr8s import gen_pod
pod = gen_pod(name="nginx", image="nginx:stable", ports=[80])

# which is equivalent to

from kr8s.objects import Pod
pod = Pod({
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "nginx"
  },
  "spec": {
    "containers": [
      {
        "name": "nginx",
        "image": "nginx:stable",
        "ports": [
          {
            "containerPort": 80,
            "name": "http"
          }
        ]
      }
    ]
  }
})
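
A minimal sketch of such a generator (`gen_pod` here is hypothetical, and defaults like the generated port name are placeholders):

```python
def gen_pod(name: str, image: str, ports=()) -> dict:
    """Generate a conventional single-container Pod spec with sensible defaults."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [
                {
                    "name": name,
                    "image": image,
                    "ports": [
                        {"containerPort": p, "name": f"port-{p}"} for p in ports
                    ],
                }
            ]
        },
    }


spec = gen_pod(name="nginx", image="nginx:stable", ports=[80])
```

The returned dict could then be passed straight to the corresponding kr8s.objects class.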

Raise more useful API errors

Which project are you requesting an enhancement for?

kr8s

What do you need?

When we get an error from Kubernetes and we've exhausted various mitigations such as authentication we simply raise the exception up to the user.

raise

However, the response.json() payload contains more useful information about what caused the error. We should include this in the exception before raising it to the user.
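
A sketch of the idea (`ServerError` and `raise_with_detail` are hypothetical; `FakeResponse` stands in for an httpx response):

```python
class ServerError(Exception):
    """Carries the Kubernetes Status message alongside the HTTP status."""


def raise_with_detail(response):
    """Raise an error enriched with the API server's Status message, instead
    of a bare status-code exception."""
    if response.status_code >= 400:
        try:
            detail = response.json().get("message", "")
        except ValueError:
            detail = ""
        raise ServerError(f"{response.status_code}: {detail}")


class FakeResponse:
    status_code = 404

    def json(self):
        return {"kind": "Status", "message": 'pods "foo" not found'}


try:
    raise_with_detail(FakeResponse())
except ServerError as e:
    assert "not found" in str(e)
```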

Re-authenticate if credentials expire

Credentials provided by an exec plugin have a limited lifetime. When making API calls we should catch authentication errors and try refreshing the credentials and retrying.
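
The retry logic could be sketched generically as a wrapper that refreshes credentials once on an auth failure (`with_reauth` is hypothetical, and `PermissionError` stands in for an HTTP 401/403 error):

```python
def with_reauth(call, reauthenticate, retries=1):
    """Retry a request after refreshing credentials on an auth failure."""
    for attempt in range(retries + 1):
        try:
            return call()
        except PermissionError:  # stand-in for an HTTP 401/403 response
            if attempt == retries:
                raise
            reauthenticate()


# Simulate expired exec-plugin credentials that succeed after a refresh.
state = {"fresh": False}


def reauth():
    state["fresh"] = True


def request():
    if not state["fresh"]:
        raise PermissionError("credentials expired")
    return "ok"


assert with_reauth(request, reauth) == "ok"
```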

Allow casting objects to dicts

Which project are you requesting an enhancement for?

kr8s

What do you need?

Casting an object to a dict should be equivalent to accessing object.raw.
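
Python's dict() constructor accepts any object implementing keys() and __getitem__, so delegating both to .raw would make the cast work; a minimal sketch:

```python
class APIObject:
    """Sketch: keys() plus __getitem__ makes dict(obj) return the raw resource."""

    def __init__(self, raw: dict):
        self.raw = raw

    def keys(self):
        return self.raw.keys()

    def __getitem__(self, key):
        return self.raw[key]


pod = APIObject({"kind": "Pod", "metadata": {"name": "nginx"}})
assert dict(pod) == pod.raw
```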

Patch Secrets with new data, instead of appending.

Which project are you requesting an enhancement for?

kr8s

What do you need?

Currently kr8s exposes a .patch() mechanism for Secret objects which allows us to add new keys:

>>> from kr8s.objects import Secret
>>> a = Secret("my-secret")
>>> a.refresh()
>>> a.raw["data"]
{"aaa": "ZXhhbXBsZQ=="}
>>> a.patch({"data": {"bbb": "bW9yZV9leGFtcGxl"}})
>>> a.raw["data"]
{"aaa": "ZXhhbXBsZQ==", "bbb": "bW9yZV9leGFtcGxl"}

I'd like the ability to set a replace=True arg to the .patch() call to have the secret data set to the contents of the patch call. Example:

>>> from kr8s.objects import Secret
>>> a = Secret("my-secret")
>>> a.refresh()
>>> a.raw["data"]
{"aaa": "ZXhhbXBsZQ=="}
>>> a.patch({"data": {"bbb": "bW9yZV9leGFtcGxl"}}, replace=True)
>>> a.raw["data"]
{"bbb": "bW9yZV9leGFtcGxl"}

Alternatively, supporting JSON 6902 style patching would also be a valid solution here, similar to how kubectl handles this:

$ kubectl patch secret my-secret \
    --type='json' \
    -p='[{"op": "replace", "path": "/data", "value":{"bbb": "bW9yZV9leGFtcGxl"}}]'

This could be implemented as a new function:

>>> a.patch6902([{"op": "replace", "path": "/data", "value":{"bbb": "bW9yZV9leGFtcGxl"}}])
>>> a.raw["data"]
{"bbb": "bW9yZV9leGFtcGxl"}
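
For illustration, the "replace" case of RFC 6902 is straightforward to apply to a raw resource dict (`apply_json6902` is a hypothetical helper, not a kr8s API; a real implementation would handle all ops and path escaping):

```python
def apply_json6902(doc: dict, ops: list) -> dict:
    """Apply a minimal subset of RFC 6902: only the 'replace' op."""
    for op in ops:
        if op["op"] != "replace":
            raise NotImplementedError(op["op"])
        *parents, leaf = op["path"].lstrip("/").split("/")
        target = doc
        for key in parents:
            target = target[key]
        target[leaf] = op["value"]
    return doc


secret = {"data": {"aaa": "ZXhhbXBsZQ=="}}
apply_json6902(
    secret,
    [{"op": "replace", "path": "/data", "value": {"bbb": "bW9yZV9leGFtcGxl"}}],
)
assert secret["data"] == {"bbb": "bW9yZV9leGFtcGxl"}
```

Server-side, Kubernetes already supports this via the application/json-patch+json content type, so the library-side change is mainly plumbing the patch type through.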

Add label selector support to APIObject.get

Which project are you requesting an enhancement for?

kr8s

What do you need?

Currently APIObject.get(...) only supports getting a resource by name. It would be great if it were also possible to get one by a label or field selector.

import kr8s

pod = kr8s.objects.Pod.get(label_selector="foo=bar")

Add support for other async frameworks

Which project are you requesting an enhancement for?

kr8s

What do you need?

Rather than directly using asyncio primitives, which preclude the use of other async frameworks, it would be great if kr8s could utilise anyio instead so that users are free to use the async framework of their choice in their applications.


Support sync API

So far everything in kr8s uses asyncio. We should also provide a sync API.

We need a logo

We need a logo 😄

Brief

Project overview

Kr8s is a Python client library for Kubernetes. It's a collection of code to be used by other software developers.

Naming

Our project is called kr8s (pronounced the same as "crates").

Kubernetes is often shortened to the numeronym k8s (the middle eight letters are replaced with the number 8), however some people pronounce this as k-eight-s or k-ate-s.

The project is written in Python so projects often have a snake theme and a Krait is a type of sea snake.

The name kr8s is a combination of "k8s" and "Kraits".

Design requirements

  • This is a spare time side project so shouldn't be too serious.
  • I want this project to feel fun and lighthearted so a logo that is cartoony would be great.
  • The logo should include a Krait snake
  • It should have something nautical to tie it into Kubernetes.
  • It also needs to be easily usable as a social media logo.

Inspiration

The logo should include a Krait snake.

A Krait

A cartoon Krait

Many other projects in the software library ecosystem have fun and cartoony logos. Here are a few examples that I like.

Golang

k9s

PHP

Docker

Other relevant imagery

Other logos in the space are either snake themed, or nautical themed. We should try and fit within this.

Python

Helm

Kubernetes

Istio

New thread for every sync call

To provide a sync API we wrap the async API in a run_sync decorator.

This works fine but it seems to create a whole new thread and event loop every time a function is called. This results in the sync API being much slower than the async API.

It would be great to see if we can reuse the same anyio blocking portal for every function call.

with anyio.from_thread.start_blocking_portal() as portal:

"This method cannot be called from the event loop thread" when calling a function inside a Pod spec

Which project are you reporting a bug for?

kr8s

What happened?

Following the fixes applied in #183 a new bug has appeared.

Traceback:

Traceback (most recent call last):
  File "/Users/cpressland/dev/starbug/loop.py", line 27, in <module>
    "value": get_secret_value("azure-storage", "account_name"),
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cpressland/dev/starbug/loop.py", line 8, in get_secret_value
    secret = Secret.get(name=name, namespace=namespace)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/kr8s/_io.py", line 75, in wrapped
    return portal.call(wrapped)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/kr8s/_io.py", line 50, in call
    return self._portal.call(func, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/anyio/from_thread.py", line 277, in call
    return cast(T_Retval, self.start_task_soon(func, *args).result())
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/anyio/from_thread.py", line 217, in _call_func
    retval = await retval
             ^^^^^^^^^^^^
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/kr8s/_objects.py", line 170, in get
    api = kr8s.api()
          ^^^^^^^^^^
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/kr8s/_io.py", line 75, in wrapped
    return portal.call(wrapped)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/kr8s/_io.py", line 50, in call
    return self._portal.call(func, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/anyio/from_thread.py", line 277, in call
    return cast(T_Retval, self.start_task_soon(func, *args).result())
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/anyio/from_thread.py", line 362, in start_task_soon
    self._check_running()
  File "/Users/cpressland/dev/starbug/.venv/lib/python3.11/site-packages/anyio/from_thread.py", line 174, in _check_running
    raise RuntimeError(
RuntimeError: This method cannot be called from the event loop thread

Sample code:

from base64 import b64decode

from kr8s.objects import Pod, Secret


def get_secret_value(name: str, key: str, namespace: str = "default") -> str:
    """Get the Value of a Secret."""
    secret = Secret.get(name=name, namespace=namespace)
    return b64decode(secret.raw["data"][key]).decode("utf-8")


my_pod = Pod({
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "my-pod",
        "namespace": "default",
    },
    "spec": {
        "containers": [
            {
                "name": "my-container",
                "image": "docker.io/ubuntu:latest",
                "env": [
                    {
                        "name": "MY_ENV_VAR",
                        "value": get_secret_value("azure-storage", "account_name"),
                    },
                ],
            },
        ],
    },
})

my_pod.create()

Anything else?

When downgrading back to version 0.9.0 this code works just fine.

Support dot notation in object scalable_spec to support nested fields

Some objects are scalable. The APIObject.scale() method works by checking if the class var scalable is True and if so it patches the key in the spec specified by the class var scalable_spec.

For example the Job object can be scaled.

class Job(APIObject):
    """A Kubernetes Job."""

    version = "batch/v1"
    endpoint = "jobs"
    kind = "Job"
    plural = "jobs"
    singular = "job"
    namespaced = True

    scalable = True
    scalable_spec = "parallelism"

Calling Job.scale(n) calls Job.patch({"spec": { "parallelism": n}}).

Some resources may have a nested scaling key though. What if we needed it to call Job.patch({"spec": {"scale": {"parallelism": n}}})?

We should be able to set scalable_spec = "scale.parallelism" to do this.
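
Building the nested patch body from a dotted path is a small transformation; a sketch (`nested_patch` is a hypothetical helper):

```python
def nested_patch(dotted: str, value) -> dict:
    """Turn a dotted scalable_spec like 'scale.parallelism' into a nested
    spec patch body."""
    body = value
    for key in reversed(dotted.split(".")):
        body = {key: body}
    return {"spec": body}


assert nested_patch("parallelism", 3) == {"spec": {"parallelism": 3}}
assert nested_patch("scale.parallelism", 3) == {"spec": {"scale": {"parallelism": 3}}}
```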

Multi context/cluster querying

Which project are you requesting an enhancement for?

kr8s

What do you need?

It would be really nice to have multi-cluster querying integrated in this library. For example I'd like to be able to craft compact yet complex queries to fetch rich datasets quickly.
Here are a few ideas of what it could look like:

  • kr8s.clusters(["c1a", "c1b", "c2a"]).get('nodes')
  • kr8s.clusters("^c1.*$").get('nodes')
  • kr8s.all_clusters().get('nodes')

Aiohttp session not being closed when using nest-asyncio

Since introducing nest-asyncio in #42 the aiohttp session close finaliser isn't working.

This should close the session.

asyncio_atexit.register(self._session.close)

But it isn't working.

Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7fedfd56afb0>
Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x7fedfd5cdd80>, 1258910.377873378)]']
connector: <aiohttp.connector.TCPConnector object at 0x7fedfd5bd720>

Looks like this is an incompatibility between how nest-asyncio and asyncio-atexit are patching asyncio, see minrk/asyncio-atexit#3.

Old poetry-dynamic-versioning-plugin

Which project are you reporting a bug for?

kubectl-ng

What happened?

It probably doesn't impact your workflow too much, but this repo is using a very old version of poetry-dynamic-versioning in your release CI. You should probably update Poetry to something more modern and use poetry-dynamic-versioning[plugin].

However, looking through the repo, it seems like you don't even use PDV, so you could probably just remove it anyway.

Anything else?

No response

Selectors should take dicts

Which project are you requesting an enhancement for?

kr8s

What do you need?

Label and field selectors should take dicts and convert them under the hood.
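
The conversion itself is trivial; a sketch of a helper (`selector_to_string` is hypothetical) that accepts either form:

```python
def selector_to_string(selector) -> str:
    """Accept either a dict or an already-formatted selector string."""
    if isinstance(selector, dict):
        return ",".join(f"{k}={v}" for k, v in selector.items())
    return selector


assert selector_to_string({"app": "web", "tier": "frontend"}) == "app=web,tier=frontend"
assert selector_to_string("app=web") == "app=web"
```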

Create new classes automatically in `kr8s.get`

Which project are you requesting an enhancement for?

kr8s

What do you need?

In #198 I made a bunch of improvements around how we can specify resource names in kr8s.get.

import kr8s

# All of the following are equivalent
ingresses = kr8s.get("ing")
ingresses = kr8s.get("ingress")
ingresses = kr8s.get("ingress.networking.k8s.io")
ingresses = kr8s.get("ingress.v1.networking.k8s.io")

However, we still don't support getting objects that we don't have a base class representation for.

>>> kr8s.get("mutatingwebhookconfigurations.admissionregistration.k8s.io/v1")
...

File ~/Projects/kr8s-org/kr8s/kr8s/_objects.py:1267, in get_class(kind, version, _asyncio)
   1259         if (
   1260             not version
   1261             and "." in group
   1262             and cls_group == group.split(".", 1)[1]
   1263             and cls_version == group.split(".", 1)[0]
   1264         ):
   1265             return cls
-> 1267 raise KeyError(f"No object registered for {kind}{'.' + group if group else ''}")

KeyError: 'No object registered for mutatingwebhookconfigurations.admissionregistration.k8s.io'

However, if we first create a container class for it with the class factory we can do it.

import kr8s

MutatingWebhookConfiguration = kr8s.objects.new_class("mutatingwebhookconfiguration", "admissionregistration.k8s.io/v1", namespaced=False, asyncio=False)

mwcs = kr8s.get("mutatingwebhookconfigurations.admissionregistration.k8s.io/v1")

Alternatives

In the kr8s.objects.object_from_spec(...) method there is a kwarg called allow_unknown_type which if set to True will call new_class automatically if it is passed a spec that it isn't aware of.

We could do this in kr8s.get too so that we can get any API object that we haven't seen before. Given that kr8s is striving to provide a batteries included and close to kubectl experience we may want to make this dynamic classing the default behaviour and switch the kwarg to enable folks to explicitly opt out.

A couple of things to take into consideration:

  • I had to explicitly set namespaced=False to make this work because I chose a non-namespaced resource. We would need to update new_class to look this up automatically via the Kubernetes API.
  • I explicitly set asyncio=False because I wrote all the examples here with the sync API. We would potentially want to create both a sync and async class when calling new_class or have kr8s.objects.new_class create sync objects by default and kr8s.asyncio.objects.new_class continue to create async objects.

Sync API returns async objects

Which project are you reporting a bug for?

kr8s

What happened?

It looks like methods such as get in the sync API are returning async objects when they should be returning sync ones.

Python 3.10.11 | packaged by conda-forge | (main, May 10 2023, 18:58:44) [GCC 11.3.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.10.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import kr8s

In [2]: api = kr8s.api()

In [3]: api._asyncio
Out[3]: False

In [4]: pods = api.get("pods", namespace=kr8s.ALL)

In [5]: pods[0]._asyncio
Out[5]: True

In [6]: pods[0].ready()
Out[6]: <coroutine object Pod.ready at 0x7f3786def4c0>

Anything else?

No response

Replace auth IO with asyncio

Currently, the kr8s._auth submodule uses blocking IO via pathlib and subprocess when authenticating.

Ideally, this should use asyncio (probably via aiopath and asyncio.subprocess) to avoid blocking the loop while reading/writing credentials and waiting for external processes to authenticate.

Support operating on specific API versions

Which project are you requesting an enhancement for?

kr8s

What do you need?

It's not clear from the documentation how to do this, and I'm also not seeing in the code where I would hook into this functionality:

Many different API namespaces implement similarly named resources. For example, Istio has gateway.v1beta1.networking.istio.io or gateway.networking.istio.io (i.e. the Gateway object). Kubernetes also has its own Gateway in gateway.v1alpha1.networking.x-k8s.io (or gateway.networking.x-k8s.io). In kubectl, for example, it will pick one of these kinds if there are multiple kinds with the same name. So if you do kubectl get gateways, it may return the Kubernetes gateways only or the Istio gateways only (I don't actually know how it picks). To pick a specific API version with kubectl, you can do kubectl get gateway.v1beta1.networking.istio.io or kubectl get gateway.networking.istio.io.

I think kr8s.get (and I guess other CRUD implementations) should also support this concept. For example if you have a Kubernetes Gateway named foo and an Istio Gateway named foo, this is perfectly legal to do since they are different API versions.
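
Parsing such kubectl-style specs into (kind, version, group) could be sketched like this (`parse_kind` is hypothetical; treating a segment as a version when it starts with "v" followed by a digit is an assumption):

```python
def parse_kind(spec: str):
    """Split a kubectl-style resource spec into (kind, version, group).

    'gateway'                             -> ('gateway', None, None)
    'gateway.networking.istio.io'         -> ('gateway', None, 'networking.istio.io')
    'gateway.v1beta1.networking.istio.io' -> ('gateway', 'v1beta1', 'networking.istio.io')
    """
    parts = spec.split(".")
    if len(parts) == 1:
        return parts[0], None, None
    kind, rest = parts[0], parts[1:]
    # Heuristic: a segment like 'v1', 'v1beta1' right after the kind is a version.
    if rest[0].startswith("v") and rest[0][1:2].isdigit():
        return kind, rest[0], ".".join(rest[1:]) or None
    return kind, None, ".".join(rest)


assert parse_kind("gateway.v1beta1.networking.istio.io") == (
    "gateway", "v1beta1", "networking.istio.io")
```

With the parsed triple in hand, kr8s.get could match against the full group/version rather than just the kind name.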
