
k8s's Introduction

k8s-gen

Code generator for Jsonnet Kubernetes libraries.

This repository contains the generator code and the relevant bits to generate the Jsonnet libraries. It can generate libraries directly from an OpenAPI v2 (Swagger) spec or from CustomResourceDefinitions.

The CI job is set up to generate and update the content of a corresponding GitHub repository for each library. These repositories are managed through Terraform.

Usage

Create or update a lib

Create a folder in libs/:

mkdir libs/<name>

Set up config.jsonnet; this example renders a lib from CRDs:

# libs/<name>/config.jsonnet
local config = import 'jsonnet/config.jsonnet';

config.new(
  name='<name>',
  specs=[
    {
      # output directory, usually the version of the upstream application/CRD
      output: '<version>',

      # crds: endpoints of the CRD manifests; omit if an OpenAPI spec is given.
      # Only resources of kind CustomResourceDefinition are applied; the default
      # policy is to simply pass the CRDs in here.
      crds: ['https://url.to.crd.manifest/<version>/manifests/crd-all.gen.yaml'],

      # openapi spec v2 endpoint
      # No need to define this if `crds` is defined
      openapi: 'http://localhost:8001/openapi/v2',

      # prefix: regex that should match the reverse of the CRDs' spec.group,
      # for example `group: networking.istio.io`
      # becomes '^io\\.istio\\.networking\\..*'
      prefix: '^<prefix>\\.<name>\\..*',

      # localName used in the docs for the example(s)
      localName: '<name>',
    },
  ]
)

Dry-run the generation process:

$ make build        # Build the image
$ make libs/<name>  # Generate the library

Set up CI and Terraform code:

$ make configure

That's it: commit the changes to a branch and open a PR to get things rolling. CI should take care of the rest.

Customizing

Because the generator only creates minimal yet functional code, more sophisticated utilities such as constructors (deployment.new(name, replicas, containers), etc.) are not generated.

For that, there are two extension mechanisms:

custom patches

The custom/ directory contains a set of .libsonnet files that are automatically merged with the generated result in main.libsonnet, so they become part of the exported API.

For example the patches in libs/k8s:


libs/k8s/
├── config.jsonnet                   # Config to generate the k8s jsonnet libraries
├── README.md.tmpl                   # Template for the index of the generated docs
└── custom
    └── core
        ├── apps.libsonnet           # Constructors for `core/v1`, ported from `ksonnet-gen` and `kausal.libsonnet`
        ├── autoscaling.libsonnet    # Extends `autoscaling/v2beta2`
        ├── batch.libsonnet          # Constructors for `batch/v1beta1`, `batch/v2alpha1`, ported from `kausal.libsonnet`
        ├── core.libsonnet           # Constructors for `apps/v1`, `apps/v1beta1`, ported from `ksonnet-gen` and `kausal.libsonnet`
        ├── list.libsonnet           # Adds `core.v1.List`
        ├── mapContainers.libsonnet  # Adds `mapContainers` functions for fields that support them
        ├── rbac.libsonnet           # Adds helper functions to rbac objects
        └── volumeMounts.libsonnet   # Adds helper functions to mount volumes
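
For illustration, a minimal sketch of what such a patch can look like (the someObject name and helper are hypothetical); because patches are merged with the generated code, +: is used to extend the generated tree rather than replace it:

# libs/<name>/custom/example.libsonnet (hypothetical)
{
  someObject+: {
    # convenience constructor layered on top of the generated helpers
    new(name):
      self.metadata.withName(name),
  },
}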

The patch directory must also be referenced in config.jsonnet:

# libs/k8s/config.jsonnet
local config = import 'jsonnet/config.jsonnet';

config.new(
  name='k8s',
  specs=[
    {
        ...
        patchDir: 'custom/core',
    },
  ]
)

Extensions

Extensions serve a similar purpose to custom/ patches, but they are not applied automatically. They still ship as part of the final artifact; the user has to add them explicitly.

Extensions can be applied like so:

(import "github.com/jsonnet-libs/k8s-libsonnet/1.21/main.libsonnet")
+ (import "github.com/jsonnet-libs/k8s-libsonnet/extensions/<name>.libsonnet")

The extensions directory must likewise be referenced in config.jsonnet:

# libs/k8s/config.jsonnet
local config = import 'jsonnet/config.jsonnet';

config.new(
  name='k8s',
  specs=[
    {
        ...
        extensionsDir: 'extensions/core',
    },
  ]
)
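
An extension file looks just like a custom patch; a hypothetical sketch:

# libs/k8s/extensions/core/example.libsonnet (hypothetical)
{
  someObject+: {
    # opt-in helper, only available when the extension is imported explicitly
    withTeamLabel(team):
      self.metadata.withLabelsMixin({ team: team }),
  },
}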

k8s's People

Contributors

adinhodovic, beyondevil, bronzedeer, deanbrunt, derektamsen, duologic, grafanalf, hamishforbes, hjet, iainlane, jakubhajek, jrribeiro, jtdoepke, julienduchesne, jvrplmlmn, kugo12, ledouxpl, mikael-lindstrom, mmusnjak, mriedmann, nickgooding, nogweii, pharaujo, sh0rez, tyler-gs, vtomasr5, vvinnich, xuanyue202, xvzf, yuriy-yarosh


k8s's Issues

(k8s-libsonnet) inconsistent inclusion of labelSelector

"labelSelector" (definitions/io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector) is a widely referenced struct throughout the Kubernetes API (a naive text search turns up 30 hits across the swagger.json). However, despite being referenced in many places, the actual label selector object only appears in 3 places in the generated libsonnet:

  • core.v1.topologySpreadConstraint
  • core.v1.podAffinityTerm
  • core.v1.weightedPodAffinityTerm

Even one of the most frequently used selectors, core.v1.service.spec.selector, has no labelSelector object anywhere near it.

There is the similar core.v1.scopeSelector, which supports only matchExpressions and not matchLabels, but it is included in the correct place in the generated libsonnet. scopeSelector exists in swagger.json as definitions/io.k8s.api.core.v1.ScopeSelector. My first instinct was to think that the apimachinery group just needs to be included in the output via the custom mechanism, similar to the list object. In that case, however, I fail to understand why the 3 aforementioned objects correctly include the labelSelector definition (but no transitive definitions!) while all others fail. This might point to a bug in the generator code.
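
For reference, a hypothetical sketch of what a consistently generated labelSelector object could look like, following the with* pattern of the generated code:

labelSelector: {
  withMatchLabels(matchLabels): { matchLabels: matchLabels },
  withMatchExpressions(matchExpressions): {
    matchExpressions:
      if std.isArray(matchExpressions) then matchExpressions else [matchExpressions],
  },
},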

Generator cannot properly handle keys/values with operator characters (i.e. $)

In the process of trying to create a jsonnet-lib for the Azure Service Operator v2, I discovered that they name one of their attributes $propertyBag, which makes the generator trip over itself. Here is an example of the output (trimmed for visibility):

66:11-12 Expected token OPERATOR but got "$": {
    <trimmed>
    '#workspaceCapping':: d.obj(help="\"Storage version of v1api20210601.WorkspaceCapping The daily volume cap for ingestion.\""),
    workspaceCapping: {
      '#with$PropertyBag':: d.fn(help="\"PropertyBag is an unordered set of stashed information that used for properties not directly supported by storage resources, allowing for full fidelity round trip conversions\"", args=[d.arg(name="$propertyBag", type=d.T.object)]),
      with$PropertyBag($propertyBag): { spec+: { workspaceCapping+: { $propertyBag: $propertyBag } } },
      '#with$PropertyBagMixin':: d.fn(help="\"PropertyBag is an unordered set of stashed information that used for properties not directly supported by storage resources, allowing for full fidelity round trip conversions\"\n\n**Note:** This function appends passed data to existing values", args=[d.arg(name="$propertyBag", type=d.T.object)]),
      with$PropertyBagMixin($propertyBag): { spec+: { workspaceCapping+: { $propertyBag+: $propertyBag } } },
      '#withDailyQuotaGb':: d.fn(help="", args=[d.arg(name="dailyQuotaGb", type=d.T.number)]),
      withDailyQuotaGb(dailyQuotaGb): { spec+: { workspaceCapping+: { dailyQuotaGb: dailyQuotaGb } } }
    }
  },
  '#mixin': "ignore",
  mixin: self
}
make: *** [Makefile:55: libs/azure-service-operator] Error 1

There should be a way for k8s-gen to be aware of invalid characters and fix them however necessary (wrapping keys in quotes, stripping them from variable names, etc.).
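
For instance, quoting the field key and stripping the $ from the parameter name would yield valid Jsonnet; a hedged sketch of what the output could look like, not the generator's actual behavior:

withPropertyBag(propertyBag): { spec+: { workspaceCapping+: { '$propertyBag': propertyBag } } },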

New CRD generation only includes resource kinds

Now that #31 has landed(!!) this library just got even better.

That said, when it comes to things like Istio there are lots of missing bits that aren't really captured in the CRD spec. In particular, let's take VirtualService:

...virtualService.new("foo")

virtualService..withHttp(...) takes a list of HTTP routes (or a single entry), but Http has its own spec here, with a deep list of references such as:

  "istio.networking.v1beta1.HTTPRouteDestination": {
    "description": "Each routing rule is associated with one or more service versions (see glossary in beginning of document). Weights associated with the version determine the proportion of traffic it receives. For example, the following rule will route 25% of traffic for the \"reviews\" service to instances with the \"v2\" tag and the remaining traffic (i.e., 75%) to \"v1\".",
    "type": "object",
    "properties": {
      "destination": {
        "$ref": "#/components/schemas/istio.networking.v1beta1.Destination"
      },
      "weight": {
        "description": "The proportion of traffic to be forwarded to the service version. (0-100). Sum of weights across destinations SHOULD BE == 100. If there is only one destination in a rule, the weight value is assumed to be 100.",
        "type": "integer",
        "format": "int32"
      },
      "headers": {
        "$ref": "#/components/schemas/istio.networking.v1beta1.Headers"
      }
    }
  },

Not attempting to bikeshed or nitpick here; genuinely curious whether we could extend this in the future. From an end-user perspective, virtualService.new... without any other typings/hints is a little "meh".

autoscaling/v2beta2 CrossVersionObjectReference is missing apiVersion

In order to reference other objects with a HorizontalPodAutoscaler, it is necessary to specify the apiVersion, kind, and name of the referenced object. Currently only the functions withKind and withName are generated.
See here:
https://jsonnet-libs.github.io/k8s-alpha/1.18/autoscaling/v2beta2/crossVersionObjectReference/
Reference:
https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#crossversionobjectreference-v2beta2-autoscaling

This also affects the HorizontalPodAutoscaler's spec.scaleTargetRef which is a CrossVersionObjectReference:
https://jsonnet-libs.github.io/k8s-alpha/1.18/autoscaling/v2beta2/horizontalPodAutoscaler/#obj-specscaletargetref
Here is the API doc for HorizontalPodAutoscaler v2beta2:
https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#horizontalpodautoscalerspec-v2beta2-autoscaling
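
A hedged sketch of the missing helper, following the pattern of the generated withKind/withName functions:

# hypothetical: the function that is currently missing
crossVersionObjectReference+: {
  withApiVersion(apiVersion): { apiVersion: apiVersion },
},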

New generator to support CloudFormation resources

Hello,

I have been working with jsonnet libraries for roughly a year now, and we are quite happy managing our k8s resources this way (I work with @xvzf).

We are on our way to supporting more resources through jsonnet, and I have started a generator for the AWS CloudFormation resource types. It already generates all the resource types and their property helpers based on the spec file published by AWS, but this is obviously a proof of concept and it is missing a lot of features.

I tried to follow the coding approach of the original k8s generator as much as I could, but as I am not generating the same kind of library, there will be a significant drift in model and rendering (from the k8s generator), so I started a fork rather than a classic PR.

I wonder if there is a way we can work together on this topic, and maybe have this project adopted into your organisation when it reaches the necessary stability.

Hope you will find this useful!

Support for CustomResourceDefinitions

Would it be feasible for this generator to support generation of Jsonnet mixins given a CustomResourceDefinition that contains an OpenAPI schema?

Ideally, it would take the CRD manifest as input (JSON or YAML), but I suppose it could also fetch the OpenAPI schema from a running K8s API server where the CRD has been installed.

Version discovery and automated updates

It would be nice to use the GitHub API and go-git to query the existing project repos for newer releases and version packages, and to emit a separate versions.libsonnet file for more manageable version control.
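
A sketch of what such a file might contain (contents hypothetical):

# versions.libsonnet (hypothetical)
{
  'k8s-libsonnet': '1.26',
  'istio-libsonnet': '1.18',
}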

Then create a separate Makefile build script to update all the version files on demand, and automate all the further updates.

I could make a PR with updated lib versions, for now.

Document incompatibilities with ksonnet-lib

  • Constructors now require name field -> make this optional?
  • Object oriented programming style no longer supported -> migration path
  • mixin subkey deprecated
  • type fields removed

IDE Completion / Interface Processes?

I have been using k8s-alpha and ksonnet for a while; they can be annoying to work with given the complexity of the objects and their structure. Do you guys have any recommendations for how to deal with this library from an IDE perspective? Can you possibly talk about how developers in your company interface with this library (and any tooling you use to aid in field discovery)?

Split by kind

To ease reading of the library, it would be neat to split by kind:

k8s
|– main.libsonnet # imports all <group>/main.libsonnet
|– core
   |– main.libsonnet # imports all <kind>.libsonnet
   |– v1-configmap.libsonnet
|– apps
   |– main.libsonnet
   |– v1-deployment.libsonnet
   |– v1beta1-deployment.libsonnet

If needed, we could introduce another directory level per version:

k8s
|– apps
   |– main.libsonnet
   |– v1
      |– main.libsonnet
      |– deployment.libsonnet
   |– v1beta1
      |– main.libsonnet
      |– deployment.libsonnet

Opinions @woodsaj @malcolmholmes?

Problems importing CRDs with Helm variables in metadata - Linkerd

Hi,

I'm trying to generate jsonnet-libs for the Linkerd2 CRDs. They are published as part of a Helm template, and they use some Helm-specific things in the metadata section of the CRDs. This, of course, causes the generation to fail.

Is there any way we could either "reset" the metadata section (basically just set it to {}) or somehow ignore these kinds of values?

Example CRD is here: https://raw.githubusercontent.com/linkerd/linkerd2/release/stable-2.11/charts/linkerd2/templates/policy-crd.yaml

k8s-libsonnet: Text block's first line must start with whitespace

Hi,

I'm trying to run the k8s-libsonnet sample from the documentation: https://jsonnet-libs.github.io/k8s-libsonnet/
When executing `tk show environments/qa` to render the YAML to be applied, the following error occurs:

$ tk show environments/qa/
Error: evaluating jsonnet: RUNTIME ERROR: C:\DEV\projects\eip-1427\vendor\github.com\jsonnet-libs\k8s-libsonnet\1.26\_custom\core.libsonnet:42:29 Text block's first line must start with whitespace
        C:\DEV\projects\eip-1427\vendor\github.com\jsonnet-libs\k8s-libsonnet\1.26\main.libsonnet:1:145-176     $
        C:\DEV\projects\eip-1427\environments\qa\main.jsonnet:2:14-80   object <anonymous>
        C:\DEV\projects\eip-1427\environments\qa\main.jsonnet:3:8-9
        Field "foo"
        During manifestation

Any help is appreciated!

Allow overriding podLabels' default "name" key

Some prefer to use the k8s standard "app.kubernetes.io/name" or just "app". The name key seems to be hard-coded here for deployments, for example:

local labels = podLabels { name: name };

There should either be a way to specify the name of this label in some global config, or a way to override it.
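
A hedged sketch of one possible shape, making the label key a parameter (the signature and default are hypothetical):

# hypothetical: make the hard-coded label key configurable
local withPodLabels(name, podLabels={}, podLabelKey='name') =
  podLabels { [podLabelKey]: name };

withPodLabels('myapp', podLabelKey='app.kubernetes.io/name')
# => { 'app.kubernetes.io/name': 'myapp' }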

Using multiple specs ( for multiple versions ) in a single CRD based library fail

A config.jsonnet like this:

local config = import 'jsonnet/config.jsonnet';

config.new(
  name='jaeger-operator',
  specs=[
    {
      output: '1.24',
      openapi: 'http://localhost:8001/openapi/v2',
      prefix: '^io\\.jaegertracing\\..*',
      crds: ['https://raw.githubusercontent.com/jaegertracing/jaeger-operator/v1.24.0/deploy/crds/jaegertracing.io_jaegers_crd.yaml'],
      localName: 'jaeger-operator',
    },
    {
      output: '1.25',
      openapi: 'http://localhost:8001/openapi/v2',
      prefix: '^io\\.jaegertracing\\..*',
      crds: ['https://raw.githubusercontent.com/jaegertracing/jaeger-operator/v1.25.0/deploy/crds/jaegertracing.io_jaegers_crd.yaml'],
      localName: 'jaeger-operator',
    },
  ]
)

when rendered, causes the last spec's CRDs to be loaded and used to render both the 1.24 and the 1.25 outputs.

volume-mount helpers only target containers and exclude initContainers

Right now, e.g. k.apps.v1.deployment.emptyVolumeMount only targets containers - but not init containers.

I think it'd be great to add a flag to the helper functions, allowing them to also patch initContainers.

Since container and initContainer names are in the same group for collision detection, I'd propose changing the function signature like this:

- emptyVolumeMount(name, path, volumeMountMixin, volumeMixin, containers)
+ emptyVolumeMount(name, path, volumeMountMixin, volumeMixin, containers, includeInitContainers)

The containers argument can then be used to steer patching for both regular and init containers. The default value for includeInitContainers could be set to false so we don't introduce a breaking change for existing code-bases.

Another option would be to not pass the includeInitContainers flag, but rather go for initContainers=[...]. Personally I'd prefer the first proposal.
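
Under the first proposal, usage would look something like this (the flag is hypothetical):

deployment.emptyVolumeMount('scratch', '/tmp', includeInitContainers=true)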

This PR is intended to get some feedback on this and discuss potential bottlenecks. @sh0rez @Duologic @julienduchesne, do you see any concern with this?

Missing List Object

ksonnet includes a generic list object in k.libsonnet that can be used to combine multiple objects into a single object to send to a Kubernetes API server. The implementation of this list object is here.

The List type can be found here for kubernetes.
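
A minimal sketch of such a helper (names assumed, following the ksonnet shape):

list: {
  new(items): {
    apiVersion: 'v1',
    kind: 'List',
    items: items,
  },
},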

Jsonnet + VSCODE + autocomple

Hi to all.

Can somebody help with autocomplete in VS Code? I installed all the plugins (and autocomplete works in the kubernetes-prometheus folder), but I can't get autocomplete to work for a newly imported library.

Kubernetes 1.22+ breaks docsonnet

This is a follow-up to #132; the issue is still present.

Unless I add the block suggested here, I get:

Error: evaluating jsonnet: RUNTIME ERROR: max stack frames exceeded.
        Field "#mixin"
        Field "mixin"
        Field "mixin"
        Field "mixin"
        Field "mixin"
        Field "mixin"
        Field "mixin"
        Field "mixin"
        Field "mixin"
        Field "mixin"
        ...
        Field "nodeAffinity"
        Field "affinity"
        Field "spec"
        Field "template"
        Field "spec"
        Field "daemonSet"
        Field "v1beta1"
        Field "extensions"
        Field "test"
        During manifestation

How to dynamically create volume for a deployment

I am trying to dynamically create volumes for a deployment based on a config file with many volume declarations.

Is there a way to achieve this?

The only way that works is the declarative way, as in this example code:

local k8s = import "github.com/jsonnet-libs/k8s-libsonnet/1.25/main.libsonnet";
local config = import "config.libsonnet";  # the config file below

local deployment =
  k8s.apps.v1.deployment.new(
    name=config.name,
    containers=[
      k8s.core.v1.container.new(name=config.name, image=config.image),
    ]
  )
  + k8s.apps.v1.deployment.metadata.withNamespace(config.namespace)
  # => the line in question:
  + k8s.apps.v1.deployment.emptyVolumeMount("data", config.volume.data.path);

config file:

{
    name: "mydeployment",
    image: "debian",
    namespace: "mynamespace",
    volume:{
        local volume = self,
        local rootMountPath = "/root",
        data:{
            path: rootMountPath + "/data1",
            type: "emptyDir"
        },
        data2:{
            path: rootMountPath + "/data2",
            type: "emptyDir"
        },
        data3:{
            path: rootMountPath + "/data3",
            type: "emptyDir"
        },
    },
}

I tried different approaches:

1- Looping over the deployment configuration, but I got stuck on the super keyword from mapContainers.

2- Converting the deployment into a function taking a volume:

local deployment(volume) = k8s.apps.v1.deployment.new(...

local deployVolume = [deployment(v) for v in std.objectFieldsAll(config.volume)];

The result is as expected: there are as many deployments as volumes declared.
But how do I merge the configs recursively?
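
For what it's worth, one possible direction (an untested sketch, reusing the k8s import, config, and deployment from above) is to fold the volume entries into a single deployment instead of merging whole deployments:

local withAllVolumes = std.foldl(
  # each step adds one emptyVolumeMount mixin for a volume entry
  function(acc, field)
    acc + k8s.apps.v1.deployment.emptyVolumeMount(field, config.volume[field].path),
  std.objectFields(config.volume),
  {}
);

deployment + withAllVolumes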

Another odd behavior when trying something like:

deployVolume[0]+deployVolume[1]

There is a missing VolumeMount in the result!!

 },
                  "volumeMounts": [
                     {
                        "mountPath": "/data2",
                        "name": "data2"
                     }
                  ]
               }
            ],
            "volumes": [
               {
                  "emptyDir": { },
                  "name": "data1"
               },
               {
                  "emptyDir": { },
                  "name": "data2"
               }
            ]
         }

The purpose of this is to create a volume factory.

Hope you can help me! :-)

Strimzi CRD error `got -: {`

Hi,

I am trying to generate a libsonnet library for strimzi kafka-operator 0.23.0.

I added a libs/strimzi/config.jsonnet like this:

local config = import 'jsonnet/config.jsonnet';

config.new(
  name='strimzi',
  specs=[
    {
      output: '0.23.0',
      openapi: 'http://localhost:8001/openapi/v2',
      prefix: '^io\\.strimzi\\..*',
      crds: ['https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.23.0/strimzi-crds-0.23.0.yaml'],
      localName: 'strimzi',
    },
  ]
)

When I run make run INPUT_DIR=libs/strimzi I get:

140:11-12 Expected one of :, ::, :::, +:, +::, +:::, got: -: {  

I checked the swagger output and I can't see any -:, so I'm not sure what is causing this.

When looking in the rendered dir:

tree gen/github.com/jsonnet-libs/strimzi-lib
gen/github.com/jsonnet-libs/strimzi-lib
├── 0.23.0/
│   ├── _gen/
│   │   └── kafka/
│   │       ├── v1beta1/
│   │       │   ├── kafkaUser.libsonnet
│   │       │   └── main.libsonnet
│   │       └── v1beta2/
│   │           └── kafkaTopic.libsonnet
│   └── gen.libsonnet
├── docs/
│   ├── stylesheets/
│   │   └── extra.css
│   └── README.md
├── .github/
│   └── workflows/
│       └── main.yml
├── LICENSE
├── mkdocs.yml
├── README.md
└── requirements.txt

So it would seem it breaks after v1beta2/kafkaTopic.

Kubernetes 1.22+ breaks docsonnet

EDIT: seems to be since Kubernetes 1.22 release: https://jsonnet-libs.github.io/k8s-libsonnet/1.22/ and the deprecation of extensions/v1beta1

This commit 5e702db broke the docsonnet keys:

When merging the libraries with the root object (which we do), the output is no longer empty; see the example below:

(import 'github.com/jsonnet-libs/k8s-libsonnet/1.23/main.libsonnet')
+ {}

results in

tk eval output
{
  "extensions": {
    "v1beta1": {
      "daemonSet": {
        "#new": {
          "function": {
            "args": [
              {
                "default": null,
                "name": "name",
                "type": "string"
              },
              {
                "default": null,
                "name": "containers",
                "type": "array"
              },
              {
                "default": {},
                "name": "podLabels",
                "type": "object"
              }
            ]
          }
        }
      },
      "deployment": {
        "#new": {
          "function": {
            "args": [
              {
                "default": null,
                "name": "name",
                "type": "string"
              },
              {
                "default": 1,
                "name": "replicas",
                "type": "number"
              },
              {
                "default": null,
                "name": "containers",
                "type": "array"
              },
              {
                "default": {},
                "name": "podLabels",
                "type": "object"
              }
            ]
          }
        }
      },
      "statefulSet": {
        "#new": {
          "function": {
            "args": [
              {
                "default": null,
                "name": "name",
                "type": "string"
              },
              {
                "default": 1,
                "name": "replicas",
                "type": "number"
              },
              {
                "default": null,
                "name": "containers",
                "type": "array"
              },
              {
                "default": [],
                "name": "volumeClaims",
                "type": "array"
              },
              {
                "default": {},
                "name": "podLabels",
                "type": "object"
              }
            ]
          }
        }
      }
    }
  }
}

mapContainers doesn't work on pods

I'm currently using k8s 1.21, and it seems that I can't use mapContainers on pods.

I also can't use the nice volumeMount functions, because they're not defined for pods.

bazel rules

Would this group be open to me adding a Bazel build file, some rules, and macros?

Currently, I am pulling in k8s-alpha via a Bazel http_archive and creating a jsonnet_library out of it. But it would be lovely to just be able to grab this repository and build it directly.

With fluent/fluentbit-operator, generator fails with: Expected token IDENTIFIER but got "null" while parsing parameter

I am trying to generate a jsonnet library from https://github.com/fluent/fluentbit-operator, and the generation fails around a seemingly valid OpenAPI property named "null".

Here is an excerpt from https://raw.githubusercontent.com/fluent/fluentbit-operator/master/manifests/setup/setup.yaml used to generate the lib:

...
              matchRegex:
                description: A regular expression to match against the tags of incoming
                  records. Use this option if you want to use the full regex syntax.
                type: string
              "null":
                description: Null defines Null Output configuration.
                type: object
...

The process seems to proceed as expected, but the k8s-gen fails with

671:14-18 Expected token IDENTIFIER but got "null" while parsing parameter:

Going to the specified line shows the following (notice withNull(null):, { null: null } and { null+: null }, which are invalid):

...
    '#withMatchRegex':: d.fn(help="\"A regular expression to match against the tags of incoming records. Use this option if you want to use the full regex syntax.\"", args=[d.arg(name="matchRegex", type=d.T.string)]),
    withMatchRegex(matchRegex): { spec+: { matchRegex: matchRegex } },
    '#withNull':: d.fn(help="\"Null defines Null Output configuration.\"", args=[d.arg(name="null", type=d.T.object)]),
    withNull(null): { spec+: { null: null } },
    '#withNullMixin':: d.fn(help="\"Null defines Null Output configuration.\"\n\n**Note:** This function appends passed data to existing values", args=[d.arg(name="null", type=d.T.object)]),
    withNullMixin(null): { spec+: { null+: null } },
    '#withRetry_limit':: d.fn(help="\"RetryLimit represents configuration for the scheduler which can be set independently on each output section. This option allows to disable retries or impose a limit to try N times and then discard the data after reaching that limit.\"", args=[d.arg(name="retry_limit", type=d.T.string)]),
...

Obviously null is a keyword in Jsonnet, but I am not sure what the generated code should look like. I tried modifying the pkg/model/modifiers.go fnArg sanitizing function to mimic the pattern and return "Null", but then I get another error:

671:32-36 Unexpected: "null" while parsing field definition:

... 

    withNull(Null): { spec+: { null: Null } },

More info on the fluent Null output type: https://docs.fluentbit.io/manual/pipeline/outputs/null

I believe the field name should also be quoted (in addition to sanitizing the parameter), but I was not able to find where that is generated.

I think this is what we want:

   withNull(Null) : { spec+: { 'null': Null } }

Namespace should be required for all namespaced resource types

I would love to see this library offer better help for avoiding omitted namespaces on objects, which otherwise go into the default namespace unintentionally. This is an opinionated stance, but I strongly hold the opinion that it is too easy to forget a namespace, and too tricky to declare it when you want to.

My desire in brief: each namespaced type should have a constructor that requires a namespace to be declared. If all types have a new(name,namespace) generated, users physically cannot forget a namespace. If they want to put something in default, they must do so explicitly.

So you would have deployment.new(name, namespace), but clusterRole.new(name) only.

Now, implementation is tricky. The swagger definitions this generator relies on do not have that info. I am not sure of the absolute authoritative source, but it is in the depths of the k8s source, possibly hidden in comments. I have 3 ideas:

  1. Petition kubernetes project to include the Namespaced metadata in the swagger definitions. Even if possible, this is not a fast or easy solution for us now.
  2. Generate our own list of namespaced/unnamespaced resource types with kubectl and preserve it in config.yaml. Use this in the config to make the appropriate constructors.
  3. Assume all resources with ObjectMetadata are namespaced and generate the full new(name,namespace) constructor. Make it smart enough to omit the namespace if you explicitly set the namespace to default.

Option 3 is by far the easiest to implement, but has the downside that the only way to make something like a ClusterRole work is clusterRole.new(name, 'default'), which is a bit of a lie.
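
A hedged sketch of what an option-3 constructor could look like (the object name and signature are assumed, not the library's actual code):

someNamespacedObject+: {
  # hypothetical: require a namespace, omitting the field when it is 'default'
  new(name, namespace):
    super.new(name)
    + (if namespace == 'default' then {} else self.metadata.withNamespace(namespace)),
},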

Perhaps something like "assume all types are namespaced" except for a smaller list of only the global kinds, or perhaps custom overrides for those types that continue to cause problems.

If the k8s library solved this problem, it would ease up the need for tools like tanka to even care about such things.

Thoughts?

doc-util/main.libsonnet missed

Hello!
Every file in k8s-alpha (or k8s-libsonnet) starts with local d = (import 'doc-util/main.libsonnet').

Since it is only used in hidden fields, it does not break jsonnet evaluation, but it does break jsonnet-lint, which tries to resolve everything even when it is not in use.

So the output is:

k8s-alpha/1.19/_custom/rbac.libsonnet:1:11-43 couldn't open import "doc-util/main.libsonnet": no match locally or in the Jsonnet library paths

local d = import 'doc-util/main.libsonnet';

Could you please remove it from the generated files, or tell us where we can get this file to pass to the linter?

Feature request: add prefix for downloaded library to avoid same version number conflict with other library

Currently, libraries are downloaded using jb, e.g.

jb install github.com/jsonnet-libs/k8s-libsonnet/1.21@main
jb install github.com/jsonnet-libs/istio-libsonnet/1.13@main

which places them in the vendor folder as follows:

vendor
|- 1.21
|- 1.13

This convention will one day result in a conflict when different libraries share the same version number, e.g. Kubernetes 1.21 with Istio 1.21.

To avoid such conflicts, I suggest prefixing the folders:

vendor
|- k8s-1.21
|- istio-1.21

escapeKey does not escape / (forward slash)

When generating a lib with the litmuschaos CRDs, an error was generated:

Generating '2.10.0' from 'https://raw.githubusercontent.com/litmuschaos/chaos-operator/2.10.0/deploy/crds/chaosengine_crd.yaml, ^io\.litmuschaos\..*'
Generating '2.10.0' from 'https://raw.githubusercontent.com/litmuschaos/chaos-operator/2.10.0/deploy/crds/chaoexperiment_crd.yaml, ^io\.litmuschaos\..*'
Generating '2.10.0' from 'https://raw.githubusercontent.com/litmuschaos/chaos-operator/2.10.0/deploy/crds/chaosresults_crd.yaml, ^io\.litmuschaos\..*'
206:19-20 Expected one of :, ::, :::, +:, +::, +:::, got: /: {
  local d = (import 'doc-util/main.libsonnet'),
  '#':: d.pkg(name="chaosEngine", url="", help=""),

...

make: *** [Makefile:55: libs/litmus-chaos] Error 1

due to keys containing a forward slash, e.g. cmdProbe/inputs.

I traced this back to builder.escapeKey(), which does a check using strings.ContainsAny(s, "-."). Adding the slash there fixes the issue.

I'll submit a PR for this.

Can't create a deployment without specifying replicas

Currently, if we create a deployment with this lib, we're forced to have the replicas field set to 1 by default (or to a specified integer), because of this line: https://github.com/jsonnet-libs/k8s-libsonnet/blob/main/1.21/_custom/apps.libsonnet#L37

However, when using an HPA and Flux, we're ending up with Flux constantly resetting our replica count back to this setting every time the HPA decides to scale out. See this FAQ in the Flux docs for some details.

I'm not entirely sure how to fix this in a non-breaking way, but it seems that being able to set a null value might be reasonable...
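
A hedged sketch of what that could look like, treating replicas == null as "leave the field unset" (not the library's current behavior):

# hypothetical: a null replicas value omits the field entirely
withReplicas(replicas):
  if replicas == null then {} else { spec+: { replicas: replicas } },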

Allow loading CRD data from a local file

We've used this patch on a fork for about 18 months, and it's very useful for working with libraries that don't make the CRDs available via a single, simple web endpoint.

We often have local Makefiles which pull down manifests and process them before passing them to k8s-gen to write out the libraries.

I've added the patch to a PR here #380 as it would be great to get it merged upstream if possible.

HPA `withScaleTargetRef` custom function returns error

A minimal project that shows the problem follows:

Run tk init in an empty dir, then add the test code to environments/default/main.jsonnet:

local k = import 'k.libsonnet';
{
  deployment: k.apps.v1.deployment.new('sample', containers=[]),
  hpa:
    k.autoscaling.v2beta2.horizontalPodAutoscaler.new('sample') +
    k.autoscaling.v2beta2.horizontalPodAutoscaler.spec.withScaleTargetRef($.deployment),
}

Running tk show environments/default/ returns the following error:

Error: evaluating jsonnet: RUNTIME ERROR: couldn't manifest function as JSON
	Field "withApiVersion"
	Field "scaleTargetRef"
	Field "spec"
	Field "hpa"
	During manifestation

I believe this is caused by the merge of withApiVersion here:

{ spec+: { scaleTargetRef+: withApiVersion {

I left a comment in the PR that added this customization asking what the original purpose was, but I believe taking it out should solve the issue.
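
For context, a hedged sketch of a withScaleTargetRef that sets plain fields instead of merging in a function-valued object (not the library's actual code):

withScaleTargetRef(object): {
  spec+: {
    scaleTargetRef: {
      apiVersion: object.apiVersion,
      kind: object.kind,
      name: object.metadata.name,
    },
  },
},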
