pulumi / kube2pulumi
Upgrade your Kubernetes YAML to a modern language
Home Page: https://www.pulumi.com/kube2pulumi/
License: Apache License 2.0
The following YAML is valid, but is not parsed correctly into PCL:
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
The conversion fails with the following error message:
Error: Missing newline after argument
on pcl-381721254.pp line 26:
27: apiVersion = for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
An argument definition must end with a newline.
Here's what the generated PCL looks like:
apiVersion = for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
"apps/v1"
Some valid YAML input includes sequences of "mapping values" (each one is a single k: v pair) rather than a "mapping" of several of them. An example is a Kubernetes container's envFrom specification, for which see the example below. Currently, the intermediate .pp file is being generated with commas in the wrong places, which causes the pcl2pulumi step to fail with a parse error like:
Error: Error: Missing item separator
on pcl-640577668.pp line 20:
15: envFrom = [
16: {
17: configMapRef = {
18: name = "some-map"
19: },
20: }
21: {
Expected a comma to mark the beginning of the next item.
This is a bit of an edge case in yaml2pcl, because it only manifests when the inner items are plain k: v pairs. For example, it doesn't happen for inputs like
containers:
- name: my-amazing-container
image: something
# ...
- name: my-equally-amazing-sidecar
image: something-else
# ...
because here, each item in the sequence has more than one key.
Expected: Correct Pulumi code can be generated.
Actual: It's not :-(
Run kube2pulumi typescript -f sample.yaml, where sample.yaml has the contents below. (This is just enough Kubernetes structure to show the troublesome bit, which is the two items under envFrom; it is not a realistic YAML fragment.)
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: thing
spec:
replicas: 1
template:
spec:
containers:
- name: thing
image: thing
envFrom:
- configMapRef:
name: some-map
- secretRef:
name: some-secrets
We specifically use this envFrom: structure, but there are probably other occurrences of the same pattern of types elsewhere in the wild.
The relevant comma-insertion logic is in yaml2pcl when traversing a SequenceNode: https://github.com/pulumi/kube2pulumi/blob/master/pkg/yaml2pcl/yaml2pcl.go#L413. Note that suffix will be either the empty string or ",", depending on whether we're on the last item in the list. The suffix is passed down to a recursive call to walkToPCL, which is hoped to do the right thing.
In the case when the thing in the sequence is a MappingValue, its surrounding { / } are inserted on lines 423 and 440. Since line 440 comes after the recursive call to walkToPCL, the comma (if any) ends up in the wrong place. The intermediate .pp file looks like this, with my comments added:
envFrom = [
{
configMapRef = {
name = "some-map"
}, # <-- comma added from the inner call to walkToPCL, line 408 of the code
} # <-- comma not present, because line 440 does not add it
{
secretRef = {
name = "some-secrets"
}
}
]
Amending line 440 to
_, err = fmt.Fprintf(totalPCL, "%s%s\n", "}", suffix)
appears to produce the desired output in this case, but I haven't thoroughly tested that against other inputs.
In general, the handling of commas and delimiters is a bit hard to follow. I think there are other cases where incorrect results will come about, because not all code paths follow the same pattern. For example, in
foo: ["bar", "baz"]
qux: [1, 2]
the foo items are handled properly, because the ast.StringNode case does handle the suffix, but the qux ones are not, because ast.IntegerNode ignores the suffix. While the Kubernetes API doesn't include lists of integers (as far as I know), there may well be other bugs of this kind lurking.
I'm getting the following parse error when using kube2pulumi:
> kube2pulumi go --file src/filebeat_autodiscover_2.1.yaml -o filebeats.go
Error: Error: Missing item separator
on pcl-965641736.pp line 39:
36: processors = [
37: {
38: add_cloud_metadata = null
39: }
40: {
Expected a comma to mark the beginning of the next item.
The yaml file is this one: https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.1/config/recipes/beats/filebeat_autodiscover.yaml
brew install pulumi/tap/kube2pulumi
kube2pulumi go --file src/filebeat_autodiscover_2.1.yaml -o filebeats.go
Expected: kube2pulumi successfully converts the yaml file to golang
Actual: It fails and gives a parse error.
kube2pulumi currently does not convert CustomResourceDefinitions or CustomResources. This limitation will be documented for now, but we should handle it automatically in the future.
Some options (best to worst) would be:
- Generate yaml.ConfigFile and apiextensions.CustomResource resources to handle them

Add java support to kube2pulumi
A CLI option to show the kube2pulumi version is missing:
❯ kube2pulumi help
converts input files to desired output language
Usage:
kube2pulumi [command]
Available Commands:
all
csharp
go
help Help about any command
python
typescript
Flags:
-d, --directory string file path for directory to convert
-f, --file string YAML file to convert
-h, --help help for kube2pulumi
Use "kube2pulumi [command] --help" for more information about a command.
(crd2pulumi already has this option.)
When running kube2pulumi.exe typescript -f psp.yaml, it shows:
unable to run program: Error: Argument or block definition required
on C:\Users\***\AppData\Local\Temp\pcl-532074835.pp line 0:
1: resource 00_rook_privilegedPodSecurityPolicy "kubernetes:policy/v1beta1:PodSecurityPolicy" {
An argument or block definition is required here. To set an argument, use the equals sign "=" to introduce the argument value.
The YAML file:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: 00-rook-privileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: "runtime/default"
seccomp.security.alpha.kubernetes.io/defaultProfileName: "runtime/default"
spec:
privileged: true
allowedCapabilities:
- SYS_ADMIN
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
seLinux:
rule: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- secret
- projected
- hostPath
- flexVolume
hostIPC: true
hostPID: true
hostNetwork: true
hostPorts:
- min: 6789
max: 6790
- min: 3300
max: 3300
- min: 6800
max: 7300
- min: 8443
max: 8443
- min: 9283
max: 9283
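The failure appears to stem from the metadata name 00-rook-privileged: the generated PCL resource identifier 00_rook_privilegedPodSecurityPolicy starts with a digit, which the parser rejects. A hedged sketch of a name legalizer (a hypothetical helper, not the actual kube2pulumi code; the real converter may use different rules):

```go
package main

import (
	"fmt"
	"strings"
)

// legalizeName turns a Kubernetes metadata.name into something usable as a
// PCL identifier: '-' and '.' become '_', and a leading digit gets a '_'
// prefix so the identifier doesn't start with a number.
func legalizeName(name string) string {
	s := strings.NewReplacer("-", "_", ".", "_").Replace(name)
	if s != "" && s[0] >= '0' && s[0] <= '9' {
		s = "_" + s
	}
	return s
}

func main() {
	fmt.Println(legalizeName("00-rook-privileged")) // _00_rook_privileged
}
```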
Attempting to convert this:
apiVersion: v1
kind: Secret
metadata:
name: pulumi-api-secret
type: Opaque
stringData:
accessToken: "<REDACTED: PULUMI_ACCESS_TOKEN>"
---
apiVersion: v1
kind: Secret
metadata:
name: pulumi-aws-secrets
type: Opaque
stringData:
AWS_ACCESS_KEY_ID: "<REDACTED: AWS_ACCESS_KEY_ID>"
AWS_SECRET_ACCESS_KEY: "<REDACTED: AWS_SECRET_ACCESS_KEY>"
---
apiVersion: pulumi.com/v1alpha1
kind: Stack
metadata:
name: s3-bucket-stack
spec:
accessTokenSecret: pulumi-api-secret
envSecrets:
- pulumi-aws-secrets
stack: joeduffy/s3-op-project/dev
initOnCreate: true # noop if stack already exists
projectRepo: https://github.com/joeduffy/test-s3-op-project
commit: cc5442870f1195216d6bc340c14f8ae7d28cf3e2
config:
aws:region: us-east-2
Fails with this:
Error: Missing attribute separator
on pcl-859051780.pp line 40:
40: config = {
41: aws:region = "us-east-2"
Expected a newline or comma to mark the beginning of the next attribute.
Seems to be a problem with the key being named aws:region (which YAML supports just fine, but perhaps one of our conversion steps doesn't?).
Aside: it's a bit of a shame that we surface pcl-859051780.pp here, since it's not something the user has any control over or understanding of.
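If PCL follows the usual HCL-style rule that bare attribute names must be identifiers, keys such as aws:region would need quoting when emitted. A sketch of that idea (pclKey is a hypothetical helper, not the actual converter code):

```go
package main

import (
	"fmt"
	"regexp"
)

// identRe matches a plain identifier that can appear unquoted as a key.
var identRe = regexp.MustCompile(`^[A-Za-z_][A-Za-z0-9_]*$`)

// pclKey quotes an object key when it isn't a plain identifier, so keys
// such as "aws:region" survive the YAML-to-PCL step.
func pclKey(k string) string {
	if identRe.MatchString(k) {
		return k
	}
	return fmt.Sprintf("%q", k)
}

func main() {
	fmt.Println(pclKey("aws:region")) // "aws:region"
	fmt.Println(pclKey("region"))     // region
}
```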
When running kube2pulumi, it automatically writes its output to index.ts regardless of whether the file already exists.

1. Create index.ts and add any content
2. Run kube2pulumi typescript anyvalidk8s.yaml
3. The content of index.ts is now lost

Expected: Doesn't overwrite index.ts if it exists
Actual: kube2pulumi overwrites index.ts
If you delete the entry before or after the commented one in env, it works, but otherwise it fails with:
unable to run program: Error: Invalid expression
on pcl-158852422.pp line 1543:
1544: ]
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth
namespace: default
labels:
app: auth
spec:
selector:
matchLabels:
app: auth
strategy:
rollingUpdate:
maxUnavailable: 0
replicas: 1
template:
metadata:
annotations:
labels:
app: auth
spec:
serviceAccountName: auth
initContainers:
containers:
- name: auth
image: foo
env:
- name: SESSION_KEY
valueFrom:
secretKeyRef:
key: SESSION_KEY
name: secrets
#- name: SESSION_COOKIE_SECURE
# value: "true"
- name: GITHUB_CLIENT_ID
valueFrom:
secretKeyRef:
key: GITHUB_CLIENT_ID
name: secrets
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 200m
memory: 128Mi
Given example.yaml
apiVersion: v1
kind: Pod
metadata:
namespace: foo
name: bar
spec:
containers:
- name: nginx
image: nginx:1.14-alpine
resources:
limits:
memory: 20Mi
cpu: 0.2
Running kube2pulumi typescript -f ./example.yaml results in:
2021/01/22 19:58:34 failed to bind program:
Error: binding types: type kubernetes:core/v1:ServiceSpecType must be an object, not a string
Usage:
kube2pulumi typescript [flags]
Flags:
-h, --help help for typescript
Global Flags:
-d, --directory string file path for directory to convert
-f, --file string YAML file to convert
unable to run program: binding types: type kubernetes:core/v1:ServiceSpecType must be an object, not a string
The same error can be reproduced with go, python, csharp, and all.
Context:
brew info pulumi
pulumi: stable 2.18.1 (bottled), HEAD
Cloud native development platform
https://pulumi.io/
/usr/local/Cellar/pulumi/2.18.1 (18 files, 183.5MB) *
Poured from bottle on 2021-01-22 at 19:05:25
brew info kube2pulumi
pulumi/tap/kube2pulumi: stable 0.0.5
Convert Kubernetes manifests to Pulumi code
https://pulumi.io
/usr/local/Cellar/kube2pulumi/0.0.5 (5 files, 27.4MB) *
Built from source on 2021-01-22 at 19:30:36
OS: macOS Big Sur 11.1 (20C69)
We currently don't have a changelog for this, and we probably should.
Sample YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: "2020-08-04T18:50:43Z"
generation: 1
name: argocd-server
namespace: default
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2020-08-04T18:51:31Z"
lastUpdateTime: "2020-08-04T18:51:31Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2020-08-04T18:50:43Z"
lastUpdateTime: "2020-08-04T18:51:31Z"
message: ReplicaSet "argocd-server-7778cdd5" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
PCL conversion parses the status field, which then causes an issue when generating code. status is an output field and needs to be ignored when parsing. kube2pulumi currently panics with no output in the pcl2pulumi conversion.
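The idea of the fix can be sketched as dropping the top-level status key from each decoded manifest before conversion. This is illustrative only: the real tool walks a YAML AST, and JSON decoding stands in for it here.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stripStatus removes the top-level "status" field from a decoded manifest,
// since it's an output property that shouldn't be turned into PCL input.
func stripStatus(manifest map[string]any) {
	delete(manifest, "status")
}

func main() {
	raw := `{"apiVersion":"apps/v1","kind":"Deployment","status":{"replicas":1}}`
	var m map[string]any
	if err := json.Unmarshal([]byte(raw), &m); err != nil {
		panic(err)
	}
	stripStatus(m)
	out, _ := json.Marshal(m)
	fmt.Println(string(out)) // {"apiVersion":"apps/v1","kind":"Deployment"}
}
```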
kube2pulumi seems to be attempting to resolve part of a literal string as if it were a variable reference. It crashes when it can't find the obviously non-existent variable.
❯ kube2pulumi python -f test.yaml -o output.py
Error: Error: undefined variable datasource
on pcl-1684386708.pp line 3:
4: some_field = "{\\\"uid\\\": \\\"${datasource}\\\"}"
Usage:
kube2pulumi python [flags]
Flags:
-h, --help help for python
Global Flags:
-d, --directory string file path for directory to convert
-f, --file string YAML file to convert
-o, --outputFile string The name of the output file to write to
unable to run program: Error: undefined variable datasource
on pcl-1684386708.pp line 3:
4: some_field = "{\\\"uid\\\": \\\"${datasource}\\\"}
Perhaps it is treating the literal as a format string?
I expect it to insert the literal string exactly as it is defined in the yaml input file. It should not attempt to resolve any variables inside the literal string.
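Assuming PCL follows the HCL convention that ${ introduces a template expression and $${ escapes it, the converter would need to escape literal strings on the way in. A hedged sketch (escapeInterpolation is a hypothetical helper, and the $${ escape is an assumption carried over from HCL, not confirmed for kube2pulumi):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeInterpolation escapes "${" so a literal YAML string is not parsed
// as a template expression when emitted into PCL. (Assumes the HCL-style
// "$${" escape applies to PCL.)
func escapeInterpolation(s string) string {
	return strings.ReplaceAll(s, "${", "$${")
}

func main() {
	fmt.Println(escapeInterpolation(`{"uid": "${datasource}"}`)) // {"uid": "$${datasource}"}
}
```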
apiVersion: v1
data:
some_field: "{\"uid\": \"${datasource}\"}"
kind: ConfigMap
metadata:
annotations: {}
labels:
app: my-app
name: my-app-configmap
Put something like that in a test.yaml file, then try to run kube2pulumi:
kube2pulumi python -f test.yaml -o output.py
pulumi about
CLI
Version 3.70.0
Go Version go1.20.4
Go Compiler gc
Plugins
NAME VERSION
aws 5.41.0
cloudinit 1.3.0
command 0.7.2
crds 0.0.0
docker-buildkit 0.1.21
fivetran 0.1.6
frontegg 0.2.24
kubernetes 3.29.1
purrl 0.4.0
python unknown
random 4.13.2
tls 4.10.0
Host
OS fedora
Version 37
Arch x86_64
This project is written in python: executable='/home/alex/git4/cloud/venv/bin/python3' version='3.11.3'
Current Stack: materialize/mzcloud/alexhunt
[redacted huge list of urns]
kube2pulumi version v0.0.12
This is a reduced, simplified example. The actual K8s resources we're trying to convert come from rendering the Kubecost cost-analyzer Helm chart with helm template.
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
external-dns.yml:
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
namespace: kube-system
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT-ID:role/IAM-SERVICE-ROLE-NAME
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: external-dns
rules:
- apiGroups: [""]
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
namespace: kube-system
spec:
strategy:
type: Recreate
selector:
matchLabels:
app: external-dns
template:
metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: us.gcr.io/k8s-artifacts-prod/external-dns/external-dns:v0.7.3
args:
- --source=ingress
- --source=service
# - --domain-filter=example.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
- --provider=aws
- --policy=sync
# - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
- --registry=txt
# - --txt-owner-id=my-hostedzone-identifier
# - --log-level=debug
securityContext:
fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes and AWS token files
❯ kube2pulumi csharp -f .\external-dns.yml
Error: Error: Invalid expression
on C:\Users\sfausett\AppData\Local\Temp\pcl-936730347.pp line 119:
120: ]
Expected the start of an expression, but found an invalid expression token.
Usage:
kube2pulumi csharp [flags]
Flags:
-h, --help help for csharp
Global Flags:
-d, --directory string file path for directory to convert
-f, --file string YAML file to convert
unable to run program Error: Invalid expression
on C:\Users\sfausett\AppData\Local\Temp\pcl-936730347.pp line 119:
120: ]
Expected the start of an expression, but found an invalid expression token.
After removing the comments from the deployment's spec.template.spec.containers[0].args, it works:
❯ kube2pulumi csharp -f .\external-dns.yml
Conversion successful! Generated File: Program.cs
PS: it would be friendlier to be able to specify the output file rather than overwrite my existing Program.cs.
The following YAML generates invalid PCL:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: null
name: pulumi-kubernetes-operator
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
resource pulumi-kubernetes-operator "kubernetes:rbac.authorization.k8s.io/v1:Role" {
apiVersion = "rbac.authorization.k8s.io/v1"
kind = "Role"
metadata = {
creationTimestamp = name = "pulumi-kubernetes-operator"
}
rules = [
{
apiGroups = [
""""
]
resources = [
"pods"
]
verbs = [
"get"
]
}
]
}
It looks like the null value isn't being handled properly.
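The fix presumably needs an explicit case for YAML null so it renders as a literal null rather than collapsing into the next attribute (as in the creationTimestamp = name = ... line above). A minimal sketch of the idea (pclScalar is a hypothetical helper, not the actual yaml2pcl code):

```go
package main

import "fmt"

// pclScalar renders a decoded YAML scalar as a PCL literal, with an
// explicit case for null so it can't swallow the following attribute.
func pclScalar(v any) string {
	switch x := v.(type) {
	case nil:
		return "null"
	case string:
		return fmt.Sprintf("%q", x)
	case bool, int, int64, float64:
		return fmt.Sprint(x)
	default:
		// Fall back to a quoted string for anything unexpected.
		return fmt.Sprintf("%q", fmt.Sprint(x))
	}
}

func main() {
	fmt.Println(pclScalar(nil))                          // null
	fmt.Println(pclScalar("pulumi-kubernetes-operator")) // "pulumi-kubernetes-operator"
}
```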
Inability to handle manifests with comments next to e.g. object keys
volumeMounts:
# Unprivileged containers need to mount /proc/sys/net from the host
# to have write access
- mountPath: /host/proc/sys/net
name: host-proc-sys-net
# Unprivileged containers need to mount /proc/sys/kernel from the host
# to have write access
pulumi about
kube2pulumi 0.0.15
No response
When codegen failed, both an error and a "Conversion successful!" message were printed:
Error: Missing newline after argument
on pcl-381721254.pp line 26:
27: apiVersion = for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
An argument definition must end with a newline.
Conversion successful! Generated File: /Users/levi/workspace/pulumi-k8s-test/guestbook.ts%
My application is composed of numerous Kubernetes manifests, each of which may be understood as a component of the application. For example, I have certmanager.yaml
and webapp.yaml
. I'd like to generate a pulumi.ComponentResource
for each of the components, that I can assemble into a whole application using index.ts
.
Please enhance the generator with a component mode, which wraps the generated statements in a pulumi.ComponentResource. Some specific suggestions:
Generate an Args interface for the component resource.

For example, here's a hypothetical snippet of generated code for cert-manager.yaml:
export class CertManager extends pulumi.ComponentResource {
private readonly certManagerNamespace: kubernetes.core.v1.Namespace;
private readonly cainjectorServiceAccount: kubernetes.core.v1.ServiceAccount;
private readonly managerServiceAccount: kubernetes.core.v1.ServiceAccount;
// ...
constructor(name: string, args: CertManagerArgs, opts?: pulumi.ComponentResourceOptions) {
super("CertManager", name, {}, opts);
this.certManagerNamespace = new kubernetes.core.v1.Namespace("cert_managerNamespace", {
apiVersion: "v1",
kind: "Namespace",
metadata: {
name: "cert-manager",
},
}, { parent: this });
// ...
}
}
export interface CertManagerArgs {
}
The kube2pulumi service is currently emitting diagnostics for two of the three canned examples:
Moreover, when I run the tool locally according to the README, I get errors:
$ cat some-yaml.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: pulumi-kubernetes-operator
spec:
replicas: 1
selector:
matchLabels:
name: pulumi-kubernetes-operator
template:
metadata:
labels:
name: pulumi-kubernetes-operator
spec:
serviceAccountName: pulumi-kubernetes-operator
imagePullSecrets:
- name: pulumi-kubernetes-operator
containers:
- name: pulumi-kubernetes-operator
image: pulumi/pulumi-kubernetes-operator:v0.0.2
command:
- pulumi-kubernetes-operator
args:
- "--zap-level=debug"
imagePullPolicy: Always
env:
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: OPERATOR_NAME
value: "pulumi-kubernetes-operator"
$ kube2pulumi typescript -f ./some-yaml.yaml
2020/11/09 12:16:35 failed to bind program:
Error: rpc error: code = Unimplemented desc = GetSchema is unimplemented
Usage:
kube2pulumi typescript [flags]
Flags:
-h, --help help for typescript
Global Flags:
-d, --directory string file path for directory to convert
-f, --file string YAML file to convert
unable to run program: rpc error: code = Unimplemented desc = GetSchema is unimplemented
Getting a stack overflow panic when trying to convert https://github.com/open-policy-agent/gatekeeper/blob/v3.1.0/deploy/gatekeeper.yaml.
wget https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.1.0/deploy/gatekeeper.yaml
kube2pulumi typescript -f gatekeeper.yaml
% kube2pulumi typescript -f gatekeeper.yaml
runtime: goroutine stack exceeds 1000000000-byte limit
runtime: sp=0xc021100338 stack=[0xc021100000, 0xc041100000]
fatal error: stack overflow
runtime stack:
runtime.throw(0x1e27bcb, 0xe)
/opt/hostedtoolcache/go/1.14.7/x64/src/runtime/panic.go:1116 +0x72
runtime.newstack()
/opt/hostedtoolcache/go/1.14.7/x64/src/runtime/stack.go:1035 +0x6ce
runtime.morestack()
/opt/hostedtoolcache/go/1.14.7/x64/src/runtime/asm_amd64.s:449 +0x8f
goroutine 1 [running]:
runtime.interhash(0xc0211003e8, 0x4cf320ec, 0x0)
/opt/hostedtoolcache/go/1.14.7/x64/src/runtime/alg.go:115 +0x16b fp=0xc021100348 sp=0xc021100340 pc=0x10030fb
runtime.mapassign(0x1ca44a0, 0xc0003e9f20, 0xc0211003e8, 0x8)
/opt/hostedtoolcache/go/1.14.7/x64/src/runtime/map.go:587 +0x64 fp=0xc0211003c8 sp=0xc021100348 pc=0x100e1d4
...
https://gist.github.com/clstokes/5dc1c858c064b2dadac0f663b8761616
kube2pulumi was written before Pulumi YAML was introduced. Pulumi YAML now supports first-class conversion workflows, so it should be possible to rely on those libraries rather than maintaining a separate conversion implementation.
Related: #82
See #84 (comment)
At the moment, if you run kube2pulumi, it writes out to index.ts (or the appropriate root filename for the Pulumi program), which may already be in use. Can we have a -o style flag to specify a file to write to?
It should be possible to automatically detect whether the provided path is a directory or a file and then choose the appropriate conversion option. This would simplify the UX, which currently requires manually specifying a flag.
kube2pulumi is not able to handle env var references, such as a command containing an env var:
command:
- sh
- -ec
# The statically linked Go program binary is invoked to avoid any
# dependency on utilities like sh and mount that can be missing on certain
# distros installed on the underlying host. Copy the binary to the
# same directory where we install cilium cni plugin so that exec permissions
# are available.
- |
cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount
pulumi about
kube2pulumi 0.0.15
No response
Hey!
I ran into the following problem when using kube2pulumi with a file containing containerPort definitions. kube2pulumi generates an invalid property named ContainerPort in C# for the containerPort definition. The property ContainerPort does not exist in ContainerPortArgs; the actual property in C# is named ContainerPortValue.
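In the C# SDK, the containerPort input is renamed because a property cannot share its enclosing type's name in C#, so the converter needs an analogous rename step. A sketch with an illustrative override table (csharpPropertyOverrides and csharpPropertyName are invented names; the real mapping lives in the provider schema):

```go
package main

import (
	"fmt"
	"strings"
)

// csharpPropertyOverrides lists fields whose C# property name differs from
// a plain PascalCase conversion (illustrative subset only).
var csharpPropertyOverrides = map[string]string{
	// A C# property can't be named after its enclosing ContainerPort type.
	"containerPort": "ContainerPortValue",
}

// pascal upper-cases the first letter of a single-word field name.
func pascal(s string) string {
	if s == "" {
		return s
	}
	return strings.ToUpper(s[:1]) + s[1:]
}

// csharpPropertyName resolves the C# property name for a Kubernetes field.
func csharpPropertyName(field string) string {
	if renamed, ok := csharpPropertyOverrides[field]; ok {
		return renamed
	}
	return pascal(field)
}

func main() {
	fmt.Println(csharpPropertyName("containerPort")) // ContainerPortValue
	fmt.Println(csharpPropertyName("image"))         // Image
}
```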
Problem can be reproduced with:
# container-port-error.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress-controller
spec:
replicas: 1
template:
spec:
serviceAccountName: nginx-ingress-serviceaccount
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
Running
$ kube2pulumi csharp -f container-port-error.yaml
Generates the following code
using Pulumi;
using Kubernetes = Pulumi.Kubernetes;
class MyStack : Stack
{
public MyStack()
{
var nginx_ingress_controllerDeployment = new Kubernetes.Apps.V1.Deployment("nginx_ingress_controllerDeployment", new Kubernetes.Types.Inputs.Apps.V1.DeploymentArgs
{
ApiVersion = "apps/v1",
Kind = "Deployment",
Metadata = new Kubernetes.Types.Inputs.Meta.V1.ObjectMetaArgs
{
Name = "nginx-ingress-controller",
},
Spec = new Kubernetes.Types.Inputs.Apps.V1.DeploymentSpecArgs
{
Replicas = 1,
Template = new Kubernetes.Types.Inputs.Core.V1.PodTemplateSpecArgs
{
Spec = new Kubernetes.Types.Inputs.Core.V1.PodSpecArgs
{
ServiceAccountName = "nginx-ingress-serviceaccount",
Containers =
{
new Kubernetes.Types.Inputs.Core.V1.ContainerArgs
{
Name = "nginx-ingress-controller",
Image = "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0",
Ports =
{
new Kubernetes.Types.Inputs.Core.V1.ContainerPortArgs
{
Name = "http",
// Does not exist in ContainerPortArgs, the property is named ContainerPortValue
ContainerPort = 80,
},
new Kubernetes.Types.Inputs.Core.V1.ContainerPortArgs
{
Name = "https",
// Does not exist in ContainerPortArgs, the property is named ContainerPortValue
ContainerPort = 443,
},
},
},
},
},
},
},
});
}
}
Version of kube2pulumi in use:
$ kube2pulumi.exe version
v0.0.8
The csproj contains the following version of Pulumi.Kubernetes:
<PackageReference Include="Pulumi.Kubernetes" Version="2.8.0" />
Got this error:
Error: Invalid block definition
on pcl-764300749.pp line 0:
1: resource pystolPystol.v0.8.17ClusterServiceVersion "kubernetes:operators.coreos.com/v1alpha1:ClusterServiceVersion" {
Either a quoted string block label or an opening brace ("{") is expected here.
When I tried using the kube2pulumi web app on this ClusterServiceVersion
Currently, kube2pulumi bails when it encounters a CRD. Instead, it should print a warning and continue through the file.
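A sketch of that suggested behavior, skipping CustomResourceDefinition documents with a warning instead of aborting (skipCRDs is a hypothetical helper; detecting custom *resources*, as opposed to CRDs, would additionally need a schema lookup):

```go
package main

import "fmt"

// skipCRDs filters out CustomResourceDefinition documents, calling warn for
// each one skipped, and returns the remaining manifests for conversion.
func skipCRDs(docs []map[string]any, warn func(string)) []map[string]any {
	var kept []map[string]any
	for _, d := range docs {
		if kind, _ := d["kind"].(string); kind == "CustomResourceDefinition" {
			name := "<unnamed>"
			if md, ok := d["metadata"].(map[string]any); ok {
				if n, ok := md["name"].(string); ok {
					name = n
				}
			}
			warn(fmt.Sprintf("skipping CustomResourceDefinition %q", name))
			continue
		}
		kept = append(kept, d)
	}
	return kept
}

func main() {
	docs := []map[string]any{
		{"kind": "CustomResourceDefinition", "metadata": map[string]any{"name": "stacks.pulumi.com"}},
		{"kind": "Deployment"},
	}
	kept := skipCRDs(docs, func(msg string) { fmt.Println("warning:", msg) })
	fmt.Println(len(kept), "document(s) converted")
}
```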
I'm using kube2pulumi with the following code:
curl -fSsL -o infrastructure-components.yaml https://github.com/kubernetes-sigs/cluster-api-provider-packet/releases/download/v$(CLUSTER_API_PACKET_PROVIDER_VERSION)/infrastructure-components.yaml
./kube2pulumi typescript --file infrastructure-components.yaml
The actual generated file is index.ts
Currently kube2pulumi is quite a ways behind the latest pulumi/pulumi; can these deps be updated?
kube2pulumi can't handle projected secrets with multiple sources. Using the 1.14.2 cilium manifest shown below, kube2pulumi fails with the following:
manifest:
- name: clustermesh-secrets
projected:
# note: the leading zero means this number is in octal representation: do not remove it
defaultMode: 0400
sources:
- secret:
name: cilium-clustermesh
optional: true
# note: items are not explicitly listed here, since the entries of this secret
# depend on the peers configured, and that would cause a restart of all agents
# at every addition/removal. Leaving the field empty makes each secret entry
# to be automatically projected into the volume as a file whose name is the key.
- secret:
name: clustermesh-apiserver-remote-cert
optional: true
items:
- key: tls.key
path: common-etcd-client.key
- key: tls.crt
path: common-etcd-client.crt
- key: ca.crt
path: common-etcd-client-ca.crt
error:
panic: Error: Missing item separator
on pcl.pp line 1238:
1232: sources = [
1233: {
1234: secret = {
1235: name = "cilium-clustermesh"
1236: optional = true
1237: }
1238: }
1239: {
Expected a comma to mark the beginning of the next item.
Below is a repro of the issue:
helm repo add cilium https://helm.cilium.io/
helm template cilium/cilium --version 1.14.2 > manifest.yaml
# remove most comments because kube2pulumi can't handle them
sed -i.bk 's/^[[:space:]]*#.*$//' manifest.yaml
kube2pulumi go -f manifest.yaml
pulumi about
4.3.0 pulumi-kubernetes
It could be three issues:
Looking at setting up an nginx ingress controller in Azure. There is an example yaml file here which has an empty property on line 37, which looks like this:
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
helm.sh/chart: ingress-nginx-3.23.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.44.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
data:
---
(I've included the two dividers deliberately.)
I would expect the data property to either be ignored or to be an empty object ({}).
Currently it adds the resource below (in this case the ClusterRole) to the data property, which then breaks the code.
import * as pulumi from "@pulumi/pulumi";
import * as kubernetes from "@pulumi/kubernetes";
const ingress_nginxNamespace = new kubernetes.core.v1.Namespace("ingress_nginxNamespace", {
apiVersion: "v1",
kind: "Namespace",
metadata: {
name: "ingress-nginx",
labels: {
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
},
},
});
const ingress_nginxIngress_nginxServiceAccount = new kubernetes.core.v1.ServiceAccount("ingress_nginxIngress_nginxServiceAccount", {
apiVersion: "v1",
kind: "ServiceAccount",
metadata: {
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "controller",
},
name: "ingress-nginx",
namespace: "ingress-nginx",
},
});
const ingress_nginxIngress_nginx_controllerConfigMap = new kubernetes.core.v1.ConfigMap("ingress_nginxIngress_nginx_controllerConfigMap", {
apiVersion: "v1",
kind: "ConfigMap",
metadata: {
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "controller",
},
name: "ingress-nginx-controller",
namespace: "ingress-nginx",
},
data: { // Issue is here
apiVersion: "rbac.authorization.k8s.io/v1",
kind: "ClusterRole",
metadata: {
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
},
name: "ingress-nginx",
},
rules: [
{
apiGroups: [""],
resources: [
"configmaps",
"endpoints",
"nodes",
"pods",
"secrets",
],
verbs: [
"list",
"watch",
],
},
{
apiGroups: [""],
resources: ["nodes"],
verbs: ["get"],
},
{
apiGroups: [""],
resources: ["services"],
verbs: [
"get",
"list",
"watch",
],
},
{
apiGroups: [
"extensions",
"networking.k8s.io",
],
resources: ["ingresses"],
verbs: [
"get",
"list",
"watch",
],
},
{
apiGroups: [""],
resources: ["events"],
verbs: [
"create",
"patch",
],
},
{
apiGroups: [
"extensions",
"networking.k8s.io",
],
resources: ["ingresses/status"],
verbs: ["update"],
},
{
apiGroups: ["networking.k8s.io"],
resources: ["ingressclasses"],
verbs: [
"get",
"list",
"watch",
],
},
],
},
});
const ingress_nginxClusterRoleBinding = new kubernetes.rbac.v1.ClusterRoleBinding("ingress_nginxClusterRoleBinding", {
apiVersion: "rbac.authorization.k8s.io/v1",
kind: "ClusterRoleBinding",
metadata: {
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
},
name: "ingress-nginx",
},
roleRef: {
apiGroup: "rbac.authorization.k8s.io",
kind: "ClusterRole",
name: "ingress-nginx",
},
subjects: [{
kind: "ServiceAccount",
name: "ingress-nginx",
namespace: "ingress-nginx",
}],
});
const ingress_nginxIngress_nginxRole = new kubernetes.rbac.v1.Role("ingress_nginxIngress_nginxRole", {
apiVersion: "rbac.authorization.k8s.io/v1",
kind: "Role",
metadata: {
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "controller",
},
name: "ingress-nginx",
namespace: "ingress-nginx",
},
rules: [
{
apiGroups: [""],
resources: ["namespaces"],
verbs: ["get"],
},
{
apiGroups: [""],
resources: [
"configmaps",
"pods",
"secrets",
"endpoints",
],
verbs: [
"get",
"list",
"watch",
],
},
{
apiGroups: [""],
resources: ["services"],
verbs: [
"get",
"list",
"watch",
],
},
{
apiGroups: [
"extensions",
"networking.k8s.io",
],
resources: ["ingresses"],
verbs: [
"get",
"list",
"watch",
],
},
{
apiGroups: [
"extensions",
"networking.k8s.io",
],
resources: ["ingresses/status"],
verbs: ["update"],
},
{
apiGroups: ["networking.k8s.io"],
resources: ["ingressclasses"],
verbs: [
"get",
"list",
"watch",
],
},
{
apiGroups: [""],
resources: ["configmaps"],
resourceNames: ["ingress-controller-leader-nginx"],
verbs: [
"get",
"update",
],
},
{
apiGroups: [""],
resources: ["configmaps"],
verbs: ["create"],
},
{
apiGroups: [""],
resources: ["events"],
verbs: [
"create",
"patch",
],
},
],
});
const ingress_nginxIngress_nginxRoleBinding = new kubernetes.rbac.v1.RoleBinding("ingress_nginxIngress_nginxRoleBinding", {
apiVersion: "rbac.authorization.k8s.io/v1",
kind: "RoleBinding",
metadata: {
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "controller",
},
name: "ingress-nginx",
namespace: "ingress-nginx",
},
roleRef: {
apiGroup: "rbac.authorization.k8s.io",
kind: "Role",
name: "ingress-nginx",
},
subjects: [{
kind: "ServiceAccount",
name: "ingress-nginx",
namespace: "ingress-nginx",
}],
});
const ingress_nginxIngress_nginx_controller_admissionService = new kubernetes.core.v1.Service("ingress_nginxIngress_nginx_controller_admissionService", {
apiVersion: "v1",
kind: "Service",
metadata: {
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "controller",
},
name: "ingress-nginx-controller-admission",
namespace: "ingress-nginx",
},
spec: {
type: "ClusterIP",
ports: [{
name: "https-webhook",
port: 443,
targetPort: "webhook",
}],
selector: {
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/component": "controller",
},
},
});
const ingress_nginxIngress_nginx_controllerService = new kubernetes.core.v1.Service("ingress_nginxIngress_nginx_controllerService", {
apiVersion: "v1",
kind: "Service",
metadata: {
annotations: undefined,
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "controller",
},
name: "ingress-nginx-controller",
namespace: "ingress-nginx",
},
spec: {
type: "LoadBalancer",
externalTrafficPolicy: "Local",
ports: [
{
name: "http",
port: 80,
protocol: "TCP",
targetPort: "http",
},
{
name: "https",
port: 443,
protocol: "TCP",
targetPort: "https",
},
],
selector: {
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/component": "controller",
},
},
});
const ingress_nginxIngress_nginx_controllerDeployment = new kubernetes.apps.v1.Deployment("ingress_nginxIngress_nginx_controllerDeployment", {
apiVersion: "apps/v1",
kind: "Deployment",
metadata: {
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "controller",
},
name: "ingress-nginx-controller",
namespace: "ingress-nginx",
},
spec: {
selector: {
matchLabels: {
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/component": "controller",
},
},
revisionHistoryLimit: 10,
minReadySeconds: 0,
template: {
metadata: {
labels: {
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/component": "controller",
},
},
spec: {
dnsPolicy: "ClusterFirst",
containers: [{
name: "controller",
image: "k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a",
imagePullPolicy: "IfNotPresent",
lifecycle: {
preStop: {
exec: {
command: ["/wait-shutdown"],
},
},
},
args: [
"/nginx-ingress-controller",
`--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller`,
"--election-id=ingress-controller-leader",
"--ingress-class=nginx",
`--configmap=$(POD_NAMESPACE)/ingress-nginx-controller`,
"--validating-webhook=:8443",
"--validating-webhook-certificate=/usr/local/certificates/cert",
"--validating-webhook-key=/usr/local/certificates/key",
],
securityContext: {
capabilities: {
drop: ["ALL"],
add: ["NET_BIND_SERVICE"],
},
runAsUser: 101,
allowPrivilegeEscalation: true,
},
env: [
{
name: "POD_NAME",
valueFrom: {
fieldRef: {
fieldPath: "metadata.name",
},
},
},
{
name: "POD_NAMESPACE",
valueFrom: {
fieldRef: {
fieldPath: "metadata.namespace",
},
},
},
{
name: "LD_PRELOAD",
value: "/usr/local/lib/libmimalloc.so",
},
],
livenessProbe: {
httpGet: {
path: "/healthz",
port: 10254,
scheme: "HTTP",
},
initialDelaySeconds: 10,
periodSeconds: 10,
timeoutSeconds: 1,
successThreshold: 1,
failureThreshold: 5,
},
readinessProbe: {
httpGet: {
path: "/healthz",
port: 10254,
scheme: "HTTP",
},
initialDelaySeconds: 10,
periodSeconds: 10,
timeoutSeconds: 1,
successThreshold: 1,
failureThreshold: 3,
},
ports: [
{
name: "http",
containerPort: 80,
protocol: "TCP",
},
{
name: "https",
containerPort: 443,
protocol: "TCP",
},
{
name: "webhook",
containerPort: 8443,
protocol: "TCP",
},
],
volumeMounts: [{
name: "webhook-cert",
mountPath: "/usr/local/certificates/",
readOnly: true,
}],
resources: {
requests: {
cpu: "100m",
memory: "90Mi",
},
},
}],
nodeSelector: {
"kubernetes.io/os": "linux",
},
serviceAccountName: "ingress-nginx",
terminationGracePeriodSeconds: 300,
volumes: [{
name: "webhook-cert",
secret: {
secretName: "ingress-nginx-admission",
},
}],
},
},
},
});
const ingress_nginx_admissionValidatingWebhookConfiguration = new kubernetes.admissionregistration.v1.ValidatingWebhookConfiguration("ingress_nginx_admissionValidatingWebhookConfiguration", {
apiVersion: "admissionregistration.k8s.io/v1",
kind: "ValidatingWebhookConfiguration",
metadata: {
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "admission-webhook",
},
name: "ingress-nginx-admission",
},
webhooks: [{
name: "validate.nginx.ingress.kubernetes.io",
matchPolicy: "Equivalent",
rules: [{
apiGroups: ["networking.k8s.io"],
apiVersions: ["v1beta1"],
operations: [
"CREATE",
"UPDATE",
],
resources: ["ingresses"],
}],
failurePolicy: "Fail",
sideEffects: "None",
admissionReviewVersions: [
"v1",
"v1beta1",
],
clientConfig: {
service: {
namespace: "ingress-nginx",
name: "ingress-nginx-controller-admission",
path: "/networking/v1beta1/ingresses",
},
},
}],
});
const ingress_nginxIngress_nginx_admissionServiceAccount = new kubernetes.core.v1.ServiceAccount("ingress_nginxIngress_nginx_admissionServiceAccount", {
apiVersion: "v1",
kind: "ServiceAccount",
metadata: {
name: "ingress-nginx-admission",
annotations: {
"helm.sh/hook": "pre-install,pre-upgrade,post-install,post-upgrade",
"helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded",
},
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "admission-webhook",
},
namespace: "ingress-nginx",
},
});
const ingress_nginx_admissionClusterRole = new kubernetes.rbac.v1.ClusterRole("ingress_nginx_admissionClusterRole", {
apiVersion: "rbac.authorization.k8s.io/v1",
kind: "ClusterRole",
metadata: {
name: "ingress-nginx-admission",
annotations: {
"helm.sh/hook": "pre-install,pre-upgrade,post-install,post-upgrade",
"helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded",
},
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "admission-webhook",
},
},
rules: [{
apiGroups: ["admissionregistration.k8s.io"],
resources: ["validatingwebhookconfigurations"],
verbs: [
"get",
"update",
],
}],
});
const ingress_nginx_admissionClusterRoleBinding = new kubernetes.rbac.v1.ClusterRoleBinding("ingress_nginx_admissionClusterRoleBinding", {
apiVersion: "rbac.authorization.k8s.io/v1",
kind: "ClusterRoleBinding",
metadata: {
name: "ingress-nginx-admission",
annotations: {
"helm.sh/hook": "pre-install,pre-upgrade,post-install,post-upgrade",
"helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded",
},
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "admission-webhook",
},
},
roleRef: {
apiGroup: "rbac.authorization.k8s.io",
kind: "ClusterRole",
name: "ingress-nginx-admission",
},
subjects: [{
kind: "ServiceAccount",
name: "ingress-nginx-admission",
namespace: "ingress-nginx",
}],
});
const ingress_nginxIngress_nginx_admissionRole = new kubernetes.rbac.v1.Role("ingress_nginxIngress_nginx_admissionRole", {
apiVersion: "rbac.authorization.k8s.io/v1",
kind: "Role",
metadata: {
name: "ingress-nginx-admission",
annotations: {
"helm.sh/hook": "pre-install,pre-upgrade,post-install,post-upgrade",
"helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded",
},
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "admission-webhook",
},
namespace: "ingress-nginx",
},
rules: [{
apiGroups: [""],
resources: ["secrets"],
verbs: [
"get",
"create",
],
}],
});
const ingress_nginxIngress_nginx_admissionRoleBinding = new kubernetes.rbac.v1.RoleBinding("ingress_nginxIngress_nginx_admissionRoleBinding", {
apiVersion: "rbac.authorization.k8s.io/v1",
kind: "RoleBinding",
metadata: {
name: "ingress-nginx-admission",
annotations: {
"helm.sh/hook": "pre-install,pre-upgrade,post-install,post-upgrade",
"helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded",
},
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "admission-webhook",
},
namespace: "ingress-nginx",
},
roleRef: {
apiGroup: "rbac.authorization.k8s.io",
kind: "Role",
name: "ingress-nginx-admission",
},
subjects: [{
kind: "ServiceAccount",
name: "ingress-nginx-admission",
namespace: "ingress-nginx",
}],
});
const ingress_nginxIngress_nginx_admission_createJob = new kubernetes.batch.v1.Job("ingress_nginxIngress_nginx_admission_createJob", {
apiVersion: "batch/v1",
kind: "Job",
metadata: {
name: "ingress-nginx-admission-create",
annotations: {
"helm.sh/hook": "pre-install,pre-upgrade",
"helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded",
},
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "admission-webhook",
},
namespace: "ingress-nginx",
},
spec: {
template: {
metadata: {
name: "ingress-nginx-admission-create",
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "admission-webhook",
},
},
spec: {
containers: [{
name: "create",
image: "docker.io/jettech/kube-webhook-certgen:v1.5.1",
imagePullPolicy: "IfNotPresent",
args: [
"create",
`--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc`,
`--namespace=$(POD_NAMESPACE)`,
"--secret-name=ingress-nginx-admission",
],
env: [{
name: "POD_NAMESPACE",
valueFrom: {
fieldRef: {
fieldPath: "metadata.namespace",
},
},
}],
}],
restartPolicy: "OnFailure",
serviceAccountName: "ingress-nginx-admission",
securityContext: {
runAsNonRoot: true,
runAsUser: 2000,
},
},
},
},
});
const ingress_nginxIngress_nginx_admission_patchJob = new kubernetes.batch.v1.Job("ingress_nginxIngress_nginx_admission_patchJob", {
apiVersion: "batch/v1",
kind: "Job",
metadata: {
name: "ingress-nginx-admission-patch",
annotations: {
"helm.sh/hook": "post-install,post-upgrade",
"helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded",
},
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "admission-webhook",
},
namespace: "ingress-nginx",
},
spec: {
template: {
metadata: {
name: "ingress-nginx-admission-patch",
labels: {
"helm.sh/chart": "ingress-nginx-3.23.0",
"app.kubernetes.io/name": "ingress-nginx",
"app.kubernetes.io/instance": "ingress-nginx",
"app.kubernetes.io/version": "0.44.0",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/component": "admission-webhook",
},
},
spec: {
containers: [{
name: "patch",
image: "docker.io/jettech/kube-webhook-certgen:v1.5.1",
imagePullPolicy: "IfNotPresent",
args: [
"patch",
"--webhook-name=ingress-nginx-admission",
`--namespace=$(POD_NAMESPACE)`,
"--patch-mutating=false",
"--secret-name=ingress-nginx-admission",
"--patch-failure-policy=Fail",
],
env: [{
name: "POD_NAMESPACE",
valueFrom: {
fieldRef: {
fieldPath: "metadata.namespace",
},
},
}],
}],
restartPolicy: "OnFailure",
serviceAccountName: "ingress-nginx-admission",
securityContext: {
runAsNonRoot: true,
runAsUser: 2000,
},
},
},
},
});
When running kube2pulumi on the following Prometheus Operator alerting configuration:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: pagerduty-config
data:
apiSecret: <base64 encoded key>
prometheus_operator:
enabled: true
alert_manager:
enabled: true
and then selecting python, we get:
Sorry, we were unable to convert your code. There could be a problem with the code you submitted, or it might use a feature kube2pulumi doesn't support
The converter seems to stop on:
prometheus_operator:
Expected: Python, Go, or C# code is generated.
Actual: only the TypeScript code is generated successfully.
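The root cause appears to be that prometheus_operator is not a field of core/v1 Secret, so the strongly-typed Python, Go, and C# targets have nothing to bind it to, while the TypeScript generator simply carries the extra key along verbatim. A minimal sketch of an up-front check that could surface this to the user; the top-level field list below is an assumption based on the core/v1 Secret schema:

```python
# Top-level fields of a core/v1 Secret (assumed from the Kubernetes API schema).
KNOWN_SECRET_FIELDS = {
    "apiVersion", "kind", "metadata", "data", "stringData", "type", "immutable",
}

def unknown_fields(doc: dict) -> set:
    """Return top-level keys that the Secret schema does not define."""
    return set(doc) - KNOWN_SECRET_FIELDS

doc = {
    "apiVersion": "v1",
    "kind": "Secret",
    "type": "Opaque",
    "metadata": {"name": "pagerduty-config"},
    "data": {"apiSecret": "<base64 encoded key>"},
    "prometheus_operator": {"enabled": True},
}
```

Rejecting (or at least warning on) unknown top-level keys would turn the opaque "unable to convert your code" message into an actionable one.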
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: pagerduty-config
data:
apiSecret: <base64 encoded key>
prometheus_operator:
enabled: true
alert_manager:
enabled: true
Select typescript and it will work. For example, the following code is generated:
import * as pulumi from "@pulumi/pulumi";
import * as kubernetes from "@pulumi/kubernetes";
const pagerduty_configSecret = new kubernetes.core.v1.Secret("pagerduty_configSecret", {
apiVersion: "v1",
kind: "Secret",
type: "Opaque",
metadata: {
name: "pagerduty-config",
},
data: {
apiSecret: "<base64 encoded key>",
},
prometheus_operator: {
enabled: true,
alert_manager: {
enabled: true,
},
},
});
Select convert to python and you will see the error. However, if you nest the prometheus_operator under apiSecret like the following:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: pagerduty-config
data:
apiSecret: <base64 encoded key>
prometheus_operator:
enabled: true
alert_manager:
enabled: true
extra_routes: |
- receiver: pager-duty
match_re:
severity: critical
team: serin
group_wait: 10s
extra_receivers: |
- name: pager-duty
pagerduty_configs:
- service_key: '******'
slack_configs:
- send_resolved: true
channel: '#slack-channel-name'
http_config:
proxy_url: http://proxy.config.pcp.local:3128
api_url: 'https://hooks.slack.com/services/xxxxxxxxx/yyyyyyyyyyy/zzzzzzzzzzzzzzzzzzzzzzzz'
text: |-
{{ range .Alerts }}
*Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
*Description:* {{ .Annotations.description }}
*Runbook:*
{{ if .Annotations.runbook_url }} {{ .Annotations.runbook_url }} {{ else }} https://my.com/.. {{ end }}
*Details:*
{{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
{{ end }}
{{ end }}
then it works and generates the following python code:
import pulumi
import pulumi_kubernetes as kubernetes
pagerduty_config_secret = kubernetes.core.v1.Secret("pagerduty_configSecret",
api_version="v1",
kind="Secret",
type="Opaque",
metadata=kubernetes.meta.v1.ObjectMetaArgs(
name="pagerduty-config",
),
data={
"apiSecret": "<base64 encoded key>",
"prometheus_operator": {
"enabled": True,
"alert_manager": {
"enabled": True,
},
"extra_routes": """- receiver: pager-duty
match_re:
severity: critical
team: serin
group_wait: 10s
""",
},
"extra_receivers": """- name: pager-duty
pagerduty_configs:
- service_key: '******'
slack_configs:
- send_resolved: true
channel: '#slack-channel-name'
http_config:
proxy_url: http://proxy.config.pcp.local:3128
api_url: 'https://hooks.slack.com/services/xxxxxxxxx/yyyyyyyyyyy/zzzzzzzzzzzzzzzzzzzzzzzz'
text: |-
{{ range .Alerts }}
*Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
*Description:* {{ .Annotations.description }}
*Runbook:*
{{ if .Annotations.runbook_url }} {{ .Annotations.runbook_url }} {{ else }} https://my.com/.. {{ end }}
*Details:*
{{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
{{ end }}
{{ end }}
""",
})
This will also generate valid C# code; for Go, however, it still throws an error.
pulumi about
CLI
Version 3.56.0
Go Version go1.20.1
Go Compiler gc
Plugins
NAME VERSION
go unknown
kubernetes 3.21.2
Host
OS darwin
Version 11.7.4
Arch x86_64
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
raw YAML for failing PCL (currently in separate files in same directory):
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pulumi-kubernetes-operator
subjects:
- kind: ServiceAccount
name: pulumi-kubernetes-operator
roleRef:
kind: Role
name: pulumi-kubernetes-operator
apiGroup: rbac.authorization.k8s.io
apiVersion: v1
kind: ServiceAccount
metadata:
name: pulumi-kubernetes-operator
Failing PCL for pcl2pulumi:
resource pulumi-kubernetes-operator "kubernetes:rbac.authorization.k8s.io/v1:RoleBinding" {
kind = "RoleBinding"
apiVersion = "rbac.authorization.k8s.io/v1"
metadata = {
name = "pulumi-kubernetes-operator"
}
subjects = [
{
kind = "ServiceAccount"
name = "pulumi-kubernetes-operator"
}
]
roleRef = {
kind = "Role"
name = "pulumi-kubernetes-operator"
apiGroup = "rbac.authorization.k8s.io"
}
}
resource pulumi-kubernetes-operator "kubernetes:core/v1:ServiceAccount" {
apiVersion = "v1"
kind = "ServiceAccount"
metadata = {
name = "pulumi-kubernetes-operator"
}
}
Code Segment:
package main

import (
	"fmt"

	"github.com/pulumi/kube2pulumi/pkg/pcl2pulumi"
	"github.com/pulumi/kube2pulumi/pkg/yaml2pcl"
)

func main() {
	filePath := "testdata/k8sOperator/"
	result, err := yaml2pcl.ConvertDirectory(filePath)
	if err != nil {
		fmt.Println(err)
		return
	}
	// output format options are "nodejs", "python", "dotnet", "go"
	err = pcl2pulumi.Pcl2Pulumi(result, filePath, "python")
	if err != nil {
		fmt.Println(err)
	}
}
Error: "pulumi-kubernetes-operator" already declared
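The collision arises because both documents use the metadata name pulumi-kubernetes-operator as the PCL resource name. A sketch of one possible disambiguation strategy — the kind-suffixing scheme here is an illustration, not kube2pulumi's actual behavior:

```python
def unique_resource_name(base: str, kind: str, seen: set) -> str:
    """Disambiguate colliding resource names, e.g. by appending the kind."""
    name = base
    if name in seen:
        name = f"{base}-{kind.lower()}"
    counter = 2
    while name in seen:
        name = f"{base}-{kind.lower()}-{counter}"
        counter += 1
    seen.add(name)
    return name

seen: set = set()
first = unique_resource_name("pulumi-kubernetes-operator", "RoleBinding", seen)
second = unique_resource_name("pulumi-kubernetes-operator", "ServiceAccount", seen)
```

With a pass like this, the RoleBinding keeps the original name and the ServiceAccount gets a derived one, so both resources can be declared in the same program.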
The generated code currently includes the apiVersion
and kind
in the resource args. These fields are redundant because they are already set in the resource constructors.
Example:
const test = new kubernetes.rbac.v1.Role("test", {
apiVersion: "rbac.authorization.k8s.io/v1",
kind: "Role",
metadata: {
name: "test",
},
rules: [{
apiGroups: [""],
resources: [
"pods",
"services",
"services/finalizers",
],
verbs: [
"create",
"delete",
"get",
"list",
"patch",
"update",
"watch",
],
}],
});
Should be:
const test = new kubernetes.rbac.v1.Role("test", {
metadata: {
name: "test",
},
rules: [{
apiGroups: [""],
resources: [
"pods",
"services",
"services/finalizers",
],
verbs: [
"create",
"delete",
"get",
"list",
"patch",
"update",
"watch",
],
}],
});
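The cleanup this asks for amounts to dropping the two keys from the argument map before code generation. A minimal sketch (illustrative only, not kube2pulumi's internals):

```python
def strip_redundant_fields(args: dict) -> dict:
    """Drop apiVersion/kind from resource args; the typed constructors
    (e.g. kubernetes.rbac.v1.Role) already set them."""
    return {k: v for k, v in args.items() if k not in ("apiVersion", "kind")}

args = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "test"},
}
```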
kube2pulumi cannot generate a proper code fragment for this YAML:
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes CLUSTER_DOMAIN REVERSE_CIDRS {
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . UPSTREAMNAMESERVER {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}STUBDOMAINS
The data property in the generated Pulumi code is empty:
import * as pulumi from "@pulumi/pulumi";
import * as kubernetes from "@pulumi/kubernetes";
const kube_systemCorednsConfigMap = new kubernetes.core.v1.ConfigMap("kube_systemCorednsConfigMap", {
apiVersion: "v1",
kind: "ConfigMap",
metadata: {
name: "coredns",
namespace: "kube-system",
},
data: {}, // <-- this
});
This CoreDNS fragment comes from line 42 of https://github.com/coredns/deployment/blob/master/kubernetes/coredns.yaml.sed in the coredns GitHub repo.
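The dropped Corefile value is a YAML literal block scalar (|). One way a converter could preserve multi-line values is to emit them as heredocs — assuming here that PCL accepts HCL-style heredoc syntax, which is an assumption on my part:

```python
def render_pcl_value(key: str, value: str) -> str:
    """Render a YAML scalar as a PCL attribute. Multi-line values become a
    heredoc instead of being silently dropped."""
    if "\n" not in value:
        return f'{key} = "{value}"'
    return f"{key} = <<-EOF\n{value.rstrip()}\nEOF"

corefile = ".:53 {\n    errors\n    cache 30\n}"
```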
It would be nice to have a Chocolatey package for Windows.
Redo error handling so users get better insight into what went wrong. kube2pulumi currently exposes PCL errors, which should not be surfaced to users since they have no control over the PCL produced:
Error: Missing attribute separator
on pcl-859051780.pp line 40:
40: config = {
41: aws:region = "us-east-2"
Expected a newline or comma to mark the beginning of the next attribute.
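One possible shape for friendlier errors: strip the ANSI color codes from the internal PCL diagnostic and wrap it in a message that points back at the user's YAML rather than the temporary .pp file. A sketch — the message wording is illustrative:

```python
import re

# ANSI SGR escape sequences (colors, bold, underline) embedded in diagnostics.
ANSI_ESCAPES = re.compile(r"\x1b\[[0-9;]*m")

def user_facing_error(raw: str, source_file: str) -> str:
    """Turn an internal PCL diagnostic into a message aimed at the user."""
    first_line = ANSI_ESCAPES.sub("", raw).splitlines()[0]
    return f"unable to convert {source_file}: {first_line} (internal conversion error)"
```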
vpc-resource-controller.yaml
from Amazon EKS Windows Support:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: vpc-resource-controller
rules:
- apiGroups:
- ""
resources:
- nodes
- nodes/status
- pods
- configmaps
verbs:
- update
- get
- list
- watch
- patch
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: vpc-resource-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: vpc-resource-controller
subjects:
- kind: ServiceAccount
name: vpc-resource-controller
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vpc-resource-controller
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: vpc-resource-controller
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: vpc-resource-controller
tier: backend
track: stable
template:
metadata:
labels:
app: vpc-resource-controller
tier: backend
track: stable
spec:
serviceAccount: vpc-resource-controller
containers:
- command:
- /vpc-resource-controller
args:
- -stderrthreshold=info
image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/vpc-resource-controller:0.2.3
imagePullPolicy: Always
livenessProbe:
failureThreshold: 5
httpGet:
host: 127.0.0.1
path: /healthz
port: 61779
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 5
name: vpc-resource-controller
securityContext:
privileged: true
hostNetwork: true
nodeSelector:
beta.kubernetes.io/os: linux
beta.kubernetes.io/arch: amd64
---
❯ kube2pulumi csharp -f .\vpc-resource-controller.yaml
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x48 pc=0xf382ed]
goroutine 1 [running]:
github.com/goccy/go-yaml/ast.(*filterWalker).Visit(0xc0005cf460, 0x0, 0x0, 0xc0005cf240, 0x56)
/home/runner/go/pkg/mod/github.com/goccy/[email protected]/ast/ast.go:1472 +0x2d
github.com/goccy/go-yaml/ast.Walk(0x146fb00, 0xc0005cf460, 0x0, 0x0)
/home/runner/go/pkg/mod/github.com/goccy/[email protected]/ast/ast.go:1428 +0x53
github.com/goccy/go-yaml/ast.Filter(...)
/home/runner/go/pkg/mod/github.com/goccy/[email protected]/ast/ast.go:1481
github.com/pulumi/kube2pulumi/pkg/yaml2pcl.convert(0xc0000344a0, 0x1e, 0xc00058bcc0, 0x5, 0x8, 0x0, 0xc00051bb70, 0x497c78, 0x1072ce0, 0xc0000ca700, ...)
/home/runner/work/kube2pulumi/kube2pulumi/pkg/yaml2pcl/yaml2pcl.go:83 +0x13a
github.com/pulumi/kube2pulumi/pkg/yaml2pcl.ConvertFile(0xc0000344a0, 0x1e, 0xc0000ca700, 0x98, 0x14abcc0, 0x14abcc0, 0xc9b4e2, 0x1072ce0, 0xc0000ca700)
/home/runner/work/kube2pulumi/kube2pulumi/pkg/yaml2pcl/yaml2pcl.go:33 +0x16f
github.com/pulumi/kube2pulumi/pkg/kube2pulumi.Kube2PulumiFile(0xc0000344a0, 0x1e, 0x124106d, 0x6, 0x1072ce0, 0xc0000ca700, 0x1d56020, 0x0, 0x4, 0x0, ...)
/home/runner/work/kube2pulumi/kube2pulumi/pkg/kube2pulumi/kube2pulumi.go:14 +0x46
github.com/pulumi/kube2pulumi/cmd/kube2pulumi/util.RunConversion(0x0, 0x0, 0xc0000344a0, 0x1e, 0x124106d, 0x6, 0x4, 0x123e4ef, 0x4, 0xcc9740)
/home/runner/work/kube2pulumi/kube2pulumi/cmd/kube2pulumi/util/conversion.go:26 +0xa7
github.com/pulumi/kube2pulumi/cmd/kube2pulumi/csharp.Command.func1(0xc00044cdc0, 0xc00058c880, 0x0, 0x2, 0x0, 0x0)
/home/runner/work/kube2pulumi/kube2pulumi/cmd/kube2pulumi/csharp/cli.go:18 +0xc8
github.com/spf13/cobra.(*Command).execute(0xc00044cdc0, 0xc00058c840, 0x2, 0x2, 0xc00044cdc0, 0xc00058c840)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:842 +0x45a
github.com/spf13/cobra.(*Command).ExecuteC(0xc00044c000, 0x44570e, 0x1ca44e0, 0xc00006df78)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x350
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main()
/home/runner/work/kube2pulumi/kube2pulumi/cmd/kube2pulumi/main.go:42 +0x32
After removing the trailing ---, it works:
❯ kube2pulumi csharp -f .\vpc-resource-controller.yaml
Diagnostics:
Conversion successful! Generated File: Program.cs
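The panic suggests the trailing --- yields an empty document whose AST node is nil, which the yaml2pcl walker then dereferences. A sketch of the kind of guard that would avoid this — filtering out empty documents before conversion (illustrative, not the actual kube2pulumi code path):

```python
import re

def non_empty_documents(manifest: str) -> list:
    """Split a multi-document manifest on '---' separator lines and drop
    empty documents, such as the one a trailing '---' produces."""
    return [doc.strip() for doc in re.split(r"(?m)^---\s*$", manifest) if doc.strip()]

manifest = "---\nkind: ServiceAccount\n---\nkind: Deployment\n---\n"
```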
Currently, kube2pulumi emits snake_case. Investigate the ability to emit kebab-case instead.
Upgrading kube2pulumi to use the latest Pulumi SDK v3.46.0 forces an upgrade from github.com/goccy/go-yaml v1.8.* to github.com/goccy/go-yaml v1.9.*. Unfortunately, go-yaml seems to have changed how comments are represented in the YAML AST:
pkg/yaml2pcl/yaml2pcl.go:490:43: comment.Value undefined (type *ast.CommentGroupNode has no field or method Value)
pkg/yaml2pcl/yaml2pcl.go:506:11: comment.Prev undefined (type *ast.CommentGroupNode has no field or method Prev)
pkg/yaml2pcl/yaml2pcl.go:507:11: comment.Next undefined (type *ast.CommentGroupNode has no field or method Next)
Steps to reproduce: see above.
Expected: no change in behavior.
Actual: a change in how comments are handled.
When following the README instructions, there is an error when generating the Pulumi resource:
Error: binding types: type kubernetes:core/v1:ServiceSpecType must be an object, not a string
Expected: there is no error and the resource is generated.
Environment: the Pulumi Docker image with the kube2pulumi tool installed.
The following YAML generates invalid PCL:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pulumi-kubernetes-operator
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
resource pulumi-kubernetes-operator "kubernetes:rbac.authorization.k8s.io/v1:Role" {
apiVersion = "rbac.authorization.k8s.io/v1"
kind = "Role"
metadata = {
name = "pulumi-kubernetes-operator"
}
rules = [
{
apiGroups = [
""""
]
resources = [
"pods"
]
verbs = [
"get"
]
}
]
}
The "" string under apiGroups incorrectly generates an extra set of "" in the PCL. It should instead be:
apiGroups = [
""
]
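This looks like double quoting: the raw YAML token for the empty string already contains its source quotes, and the emitter wraps it in another pair. A sketch of quote-once logic (illustrative, not kube2pulumi's internals):

```python
def quote_scalar(raw: str) -> str:
    """Quote a YAML scalar for PCL exactly once: if the raw token already
    carries its source quotes (as '""' does), strip them before re-quoting."""
    if len(raw) >= 2 and raw[0] == raw[-1] and raw[0] in ("'", '"'):
        raw = raw[1:-1]
    return f'"{raw}"'
```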
I installed Istio using the Pulumi Helm chart with Python. I am trying to convert Gateway and VirtualService YAML files to Python code using kube2pulumi, but I am unable to. I get this error:
Error: unknown resource type 'kubernetes:networking.istio.io/v1alpha3:Gateway'
Expected: the conversion works.
Actual: it does not.
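kube2pulumi only knows the resource types in the Kubernetes provider schema, and Istio's Gateway and VirtualService are CRDs, so there is no kubernetes:networking.istio.io/v1alpha3:Gateway type to target. Until CRDs are supported, a common workaround is to write the resource by hand with apiextensions.CustomResource; a sketch in Python, where the spec body is a placeholder rather than converted output:

```python
import pulumi_kubernetes as kubernetes

# Hand-written equivalent of the Istio Gateway YAML; CustomResource passes
# the spec through untyped, so it can mirror the YAML spec verbatim.
gateway = kubernetes.apiextensions.CustomResource(
    "gateway",
    api_version="networking.istio.io/v1alpha3",
    kind="Gateway",
    metadata={"name": "my-gateway"},
    spec={
        "selector": {"istio": "ingressgateway"},  # placeholder spec contents
    },
)
```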
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: mm-database
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- patch
- update
# the following three privileges are necessary only when using endpoints
- create
- list
- watch
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- create
Response:
kube2pulumi typescript -f yamls/templates/database/role.yaml
Error: Error: Invalid expression
on pcl-910958302.pp line 51:
52: ]
Expected the start of an expression, but found an invalid expression token.
When one attempts to convert a YAML file that includes a quoted string for the apiVersion key, an error is thrown of the form:
❯ kube2pulumi go
Error: Error: Invalid block definition
on pcl-224864188.pp line 1:
2: resource "nginx_deploymentDeployment" "kubernetes:"apps/v1":Deployment" {
Either a quoted string block label or an opening brace ("{") is expected here.
Usage:
kube2pulumi go [flags]
Flags:
-h, --help help for go
Global Flags:
-d, --directory string file path for directory to convert
-f, --file string YAML file to convert
unable to run program: Error: Invalid block definition
on pcl-224864188.pp line 1:
2: resource "nginx_deploymentDeployment" "kubernetes:"apps/v1":Deployment" {
Either a quoted string block label or an opening brace ("{") is expected here.
The yaml file is of the form:
# depl.yaml
apiVersion: "apps/v1"
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
env:
- NAME: "PATH"
VALUE: "/pathy/to/thing"
ports:
- containerPort: 80
---
apiVersion: "apps/v1"
kind: Deployment
metadata:
name: nginx-deployment-two
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
env:
- NAME: "PATH"
VALUE: "/pathy/to/thing"
ports:
- containerPort: 80
To reproduce:
1. Place the depl.yaml file above into a directory.
2. Run kube2pulumi go in that directory.
Expected: Pulumi code is generated from the manifest.
Actual: an error is thrown (see above).
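The fix is presumably to strip the source quotes from the scalar before building the type token, so `apiVersion: "apps/v1"` does not produce a nested-quote block label. A sketch of that normalization — the token format mirrors the PCL shown elsewhere in these reports; the helper itself is illustrative:

```python
def type_token(api_version: str, kind: str) -> str:
    """Build a PCL type token from apiVersion/kind, dropping any quotes that
    were part of the YAML source text."""
    api_version = api_version.strip().strip("'\"")
    # Core types like "v1" map to the core group in the token.
    group = api_version if "/" in api_version else f"core/{api_version}"
    return f"kubernetes:{group}:{kind}"
```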
The following YAML generates invalid PCL:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pulumi-kubernetes-operator
rules:
- apiGroups:
- foo
resources:
- pods
- services
- services/finalizers
- endpoints
- persistentvolumeclaims
- events
- configmaps
- secrets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
resource pulumi-kubernetes-operator "kubernetes:rbac.authorization.k8s.io/v1:Role" {
apiVersion = "rbac.authorization.k8s.io/v1"
kind = "Role"
metadata = {
name = "pulumi-kubernetes-operator"
}
rules = [
{
apiGroups = [
"foo"
]
resources = [
"pods"
"services"
"services/finalizers"
"endpoints"
"persistentvolumeclaims"
"events"
"configmaps"
"secrets"
]
verbs = [
"create"
"delete"
"get"
"list"
"patch"
"update"
"watch"
]
}
]
}
The items in the array need to be comma-separated, like this:
resources = [
"pods",
"services",
"services/finalizers",
"endpoints",
"persistentvolumeclaims",
"events",
"configmaps",
"secrets"
]
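In other words, the emitter needs to join sequence items with commas rather than bare newlines. A sketch of that rendering step (illustrative):

```python
def pcl_list(items: list) -> str:
    """Render a PCL list with the comma separators the parser requires."""
    if not items:
        return "[]"
    body = ",\n".join(f'  "{item}"' for item in items)
    return "[\n" + body + "\n]"
```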
Go codegen panics with the following error:
unable to convert program: invalid Go source code:
package main
import (
"github.com/pulumi/pulumi/sdk/v2/go/pulumi"
appsv1 "github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes/apps/v1"
corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes/core/v1"
metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes/meta/v1"
)
func main() {
pulumi.Run(func(ctx *pulumi.Context) error {
_, err := appsv1.NewDeployment(ctx, "argocd_serverDeployment", &appsv1.DeploymentArgs{
ApiVersion: pulumi.String("apps/v1"),
Kind: pulumi.String("Deployment"),
Metadata: &metav1.ObjectMetaArgs{
Name: pulumi.String("argocd-server"),
},
Spec: &appsv1.DeploymentSpecArgs{
Template: &corev1.PodTemplateSpecArgs{
Spec: &corev1.PodSpecArgs{
Containers: corev1.ContainerArray{
&corev1.ContainerArgs{
ReadinessProbe: &corev1.ProbeArgs{
HttpGet: &corev1.HTTPGetActionArgs{
Port: pulumi.Int(8080%!v(PANIC=Format method: not a string)),
},
},
},
},
},
},
},
})
if err != nil {
return err
}
return nil
})
}
repro details: https://gist.github.com/sashu-shankar/0858c533d00cf5e1da141cab51701f14
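The %!v(PANIC=Format method: not a string) appears to come from formatting a Kubernetes IntOrString field: the readiness probe port is forced through pulumi.Int regardless of its actual YAML type. A sketch of type-aware handling — pulumi.Int and pulumi.String are the real Go SDK constructors, but the helper itself is illustrative:

```python
def port_literal(value) -> str:
    """Emit a typed Go literal for a Kubernetes IntOrString field: numeric
    values become pulumi.Int(...), named ports become pulumi.String(...)."""
    if isinstance(value, int):
        return f"pulumi.Int({value})"
    if isinstance(value, str) and value.isdigit():
        return f"pulumi.Int({int(value)})"
    return f'pulumi.String("{value}")'
```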
Generated Go files should be gofmt'd prior to writing them to disk.