api

The canonical location of the OpenShift API definition. This repo holds the API type definitions and serialization code used by openshift/client-go. APIs in this repo ship inside OCP payloads.

Adding new FeatureGates

Add your FeatureGate to feature_gates.go. The threshold for merging a fully disabled or TechPreview FeatureGate is an open enhancement. To promote to Default on any ClusterProfile, the threshold is a 99% test pass rate on all platforms or QE sign-off.

Adding new TechPreview FeatureGate to all ClusterProfiles (Hypershift and SelfManaged)

FeatureGateMyFeatureName = newFeatureGate("MyFeatureName").
			reportProblemsToJiraComponent("my-jira-component").
			contactPerson("my-team-lead").
			productScope(ocpSpecific).
			enableIn(TechPreviewNoUpgrade).
			mustRegister()

Adding new TechPreview FeatureGate to only Hypershift

This will be enabled in TechPreview on Hypershift, but never enabled on SelfManaged.

FeatureGateMyFeatureName = newFeatureGate("MyFeatureName").
			reportProblemsToJiraComponent("my-jira-component").
			contactPerson("my-team-lead").
			productScope(ocpSpecific).
			enableForClusterProfile(Hypershift, TechPreviewNoUpgrade).
			mustRegister()

Promoting to Default, but only on Hypershift

This will be enabled in TechPreview on all ClusterProfiles and also by Default on Hypershift. It will be disabled in Default on SelfManaged.

FeatureGateMyFeatureName = newFeatureGate("MyFeatureName").
			reportProblemsToJiraComponent("my-jira-component").
			contactPerson("my-team-lead").
			productScope([ocpSpecific|kubernetes]).
			enableIn(TechPreviewNoUpgrade).
			enableForClusterProfile(Hypershift, Default).
			mustRegister()

Promoting to Default on all ClusterProfiles

FeatureGateMyFeatureName = newFeatureGate("MyFeatureName").
			reportProblemsToJiraComponent("my-jira-component").
			contactPerson("my-team-lead").
			productScope([ocpSpecific|kubernetes]).
			enableIn(Default, TechPreviewNoUpgrade).
			mustRegister()

defining API validation tests

Tests are logically associated with FeatureGates. When adding any FeatureGated functionality, a new test file is required. The test files are located in <group>/<version>/tests/<crd-name>/FeatureGate.yaml:

route/
  v1/
    tests/
      routes.route.openshift.io/
        AAA_ungated.yaml
        ExternalRouteCertificate.yaml

Here's an AAA_ungated.yaml example:

apiVersion: apiextensions.k8s.io/v1 # Hack because controller-gen complains if we don't have this.
name: Route
crdName: routes.route.openshift.io
tests:

Here's an ExternalRouteCertificate.yaml example:

apiVersion: apiextensions.k8s.io/v1 # Hack because controller-gen complains if we don't have this.
name: Route
crdName: routes.route.openshift.io
featureGate: ExternalRouteCertificate
tests:

The integration tests use the crdName and featureGate to determine which tests apply to which manifests and automatically react to changes when the FeatureGates are enabled/disabled on various FeatureSets and ClusterProfiles.
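For illustration, a populated tests stanza might look like the sketch below; this is a hedged example, so copy the exact suite fields from an existing test file in this repo rather than from here:

apiVersion: apiextensions.k8s.io/v1 # Hack because controller-gen complains if we don't have this.
name: Route
crdName: routes.route.openshift.io
tests:
  onCreate:
  - name: Should be able to create a minimal Route
    initial: |
      apiVersion: route.openshift.io/v1
      kind: Route
      spec:
        to:
          kind: Service
          name: my-service
    expected: |
      apiVersion: route.openshift.io/v1
      kind: Route
      spec:
        to:
          kind: Service
          name: my-service
          weight: 100
        wildcardPolicy: None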

gen-minimal-test.sh can still be used to stub out files if you don't want to copy/paste an existing one.

defining FeatureGate e2e tests

In order to move an API into the Default FeatureSet, it is necessary to demonstrate completeness and reliability. E2E tests are the ONLY category of test that automatically prevents regression over time: repository presubmits do NOT provide equivalent protection. To enforce this, an automated verify script runs every time a FeatureGate is added to the Default FeatureSet. The script queries our CI system (sippy/component readiness) to retrieve a list of all automated tests for a given FeatureGate and then enforces the following rules:

  1. Tests must contain either [OCPFeatureGate:<FeatureGateName>] or the standard upstream [FeatureGate:<FeatureGateName>].
  2. There must be at least five tests for each FeatureGate.
  3. Every test must be run on every TechPreview platform we have jobs for. (Ask for an exception if your feature doesn't support a variant.)
  4. Every test must run at least 14 times on every platform/variant.
  5. Every test must pass at least 95% of the time on every platform/variant.
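For example, rule 1 is satisfied by putting the tag in the test name. A minimal Ginkgo sketch (the suite wiring and the test body here are hypothetical):

package e2e

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// The [OCPFeatureGate:MyFeatureName] token in the Describe text is what the
// verify script matches when counting tests for a FeatureGate.
var _ = Describe("[OCPFeatureGate:MyFeatureName] my feature", func() {
	It("should exhibit the new behavior end to end", func() {
		// A real test exercises the feature against a cluster; this
		// placeholder only illustrates the naming convention.
		Expect(true).To(BeTrue())
	})
})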

If your FeatureGate lacks automated testing, there is an exception process that allows QE to sign off on the promotion by commenting on the PR.

defining new APIs

When defining a new API, please follow the OpenShift API conventions, and then follow the instructions below to regenerate CRDs (if necessary) and submit a pull request with your new API definitions and generated files.

Adding a new stable API (v1)

When copying, it matters which // +foo markers are two comment blocks up and which are one comment block up.

// +genclient
// +genclient:nonNamespaced
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// the next line of whitespace matters

// MyAPI is amazing, let me describe it!
//
// Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer).
// +openshift:compatibility-gen:level=1
// +openshift:file-pattern=cvoRunLevel=0000_50,operatorName=my-operator,operatorOrdering=01
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:resource:path=myapis,scope=Cluster
// +openshift:api-approved.openshift.io=https://github.com/openshift/api/pull/<this PR number>
// +openshift:capability=IfYouHaveOne
// +kubebuilder:printcolumn:name=Column Name,JSONPath=.status.something,type=string,description=how users should interpret this.
// +kubebuilder:metadata:annotations=key=value
// +kubebuilder:metadata:labels=key=value
// +kubebuilder:validation:XValidation:rule=
type MyAPI struct {
	metav1.TypeMeta `json:",inline"`

	// metadata is the standard object's metadata.
	// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// spec is the desired state of the cluster version - the operator will work
	// to ensure that the desired version is applied to the cluster.
	// +kubebuilder:validation:Required
	Spec MyAPISpec `json:"spec"`
	// status contains information about the available updates and any in-progress
	// updates.
	// +optional
	Status MyAPIStatus `json:"status"`
}
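The struct above references MyAPISpec and MyAPIStatus. For completeness, a minimal sketch of those types follows; the field names are illustrative, not prescribed:

// MyAPISpec is a minimal illustrative spec.
type MyAPISpec struct {
	// exampleField demonstrates the usual validation marker layout.
	// +kubebuilder:validation:MaxLength=256
	// +optional
	ExampleField string `json:"exampleField,omitempty"`
}

// MyAPIStatus is a minimal illustrative status. metav1 refers to
// k8s.io/apimachinery/pkg/apis/meta/v1, as in the struct above.
type MyAPIStatus struct {
	// conditions represent the latest observations of the resource's state.
	// +optional
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}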

Adding a new unstable API (v1alpha)

First, add a FeatureGate as described above.

Like above, but with an additional marker tying the type to your FeatureGate:

// +kubebuilder:validation:XValidation:rule=
// +openshift:enable:FeatureGate=MyFeatureGate
type MyAPI struct {
	...
}

Adding new fields

Here are a few other use cases for convenience, but have a look in ./example for other possibilities.

// +openshift:validation:FeatureGateAwareXValidation:featureGate=MyFeatureGate,rule="has(oldSelf.coolNewField) ? has(self.coolNewField) : true",message="coolNewField may not be removed once set"
type MyAPI struct {
    // +openshift:enable:FeatureGate=MyFeatureGate
    // +optional
    CoolNewField string `json:"coolNewField"`
}

// EvolvingDiscriminator defines the audit policy profile type.
// +openshift:validation:FeatureGateAwareEnum:featureGate="",enum="";StableValue
// +openshift:validation:FeatureGateAwareEnum:featureGate=MyFeatureGate,enum="";StableValue;TechPreviewOnlyValue
type EvolvingDiscriminator string

const (
  // "StableValue" is always present.
  StableValue EvolvingDiscriminator = "StableValue"

  // "TechPreviewOnlyValue" should only be allowed when TechPreviewNoUpgrade is set in the cluster
  TechPreviewOnlyValue EvolvingDiscriminator = "TechPreviewOnlyValue"
)

required labels

In addition to the standard lgtm and approved labels, this repository requires either:

bugzilla/valid-bug - applied if your PR references a valid bugzilla bug

OR

qe-approved, docs-approved, and px-approved - these labels can be applied by anyone in the openshift org via the /label command.

Who should apply these qe/docs/px labels?

  • A no-FF team merging a feature before code freeze needs the labels applied to its api repo PR by the appropriate teams (i.e. QE, docs, PX)
  • A traditional FF team merging a feature before FF can self-apply the labels (via /label commands); they are basically irrelevant for those teams
  • A FF team merging a feature after FF should have the PR rejected, barring an exception

Why are these labels needed?

We need a way for no-FF teams to be able to merge post-FF without requiring a BZ. For non-shared repos, that mechanism is the qe/docs/px-approved labels. We are expanding that mechanism to shared repos because the alternative would be for no-FF teams to put a dummy bugzilla/valid-bug label on their feature PRs in order to merge them after feature freeze. Since most individuals can't apply a bugzilla/valid-bug label to a PR, that introduces additional obstacles on those PRs. Conversely, anyone can apply the docs/qe/px-approved labels, so FF teams that need these labels to merge can do so without needing to involve anyone else.

Does this mean feature-freeze teams can use the no-FF process to merge code?

No, signing a team up to be a no-FF team includes some basic education on the process and includes ensuring the associated QE+Docs participants are aware the team is moving to that model. If you'd like to sign your team up, please speak with Gina Hargan who will be happy to help on-board your team.

vendoring generated manifests into other repositories

If your repository relies on vendoring and copying CRD manifests (good job!), you'll need to have an import line that depends on the package that contains the CRD manifests. For example, adding

import (
	_ "github.com/openshift/api/operatoringress/v1/zz_generated.crd-manifests"
)

to any .go file will work, but some commonly chosen files are tools/tools.go or pkg/dependencymagnet/doc.go. Once added, running go mod vendor will pick up the package containing the manifests for you to copy.
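A minimal sketch of such a file, assuming the pkg/dependencymagnet/doc.go placement (adjust the blank import to the manifests package you need):

// Package dependencymagnet exists only to add dependencies that go mod
// vendor would otherwise prune: importing the package that carries the CRD
// manifests makes its files land in vendor/ for copying.
package dependencymagnet

import (
	// blank import: we want the package's files in vendor/, not its Go API
	_ "github.com/openshift/api/operatoringress/v1/zz_generated.crd-manifests"
)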

generating CRD schemas

Since Kubernetes 1.16, every CRD created in apiextensions.k8s.io/v1 is required to have a structural OpenAPIV3 schema. The schemas provide server-side validation for fields and supply the descriptions for oc explain. Moreover, schemas ensure structural consistency of data in etcd; without one, anything can be stored in a resource, which can have security implications. As we host many of our CRDs in this repo along with their corresponding Go types, we also require them to have schemas. However, the following instructions apply to CRDs that are not hosted here as well.

These schemas are often very long and complex, and should not be written by hand. For OpenShift, we provide Makefile targets in build-machinery-go which generate the schema, built on upstream's controller-gen tool.

If you make a change to a CRD type in this repo, simply calling make update-codegen-crds should regenerate all CRDs and update the manifests. If yours is not updated, ensure that the path to its API is included in our calls to the Makefile targets; if this doesn't help, try calling make generate-with-container to execute the generators in a controlled environment.
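In practice that workflow is just:

# regenerate all CRD manifests after editing API types in this repo
make update-codegen-crds

# or run the generators in a controlled, containerized environment
make generate-with-container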

To add this generator to another repo:

  1. Vendor github.com/openshift/build-machinery-go

  2. Update your Makefile to include the following:

include $(addprefix ./vendor/github.com/openshift/build-machinery-go/make/, \
  targets/openshift/crd-schema-gen.mk \
)

$(call add-crd-gen,<TARGET_NAME>,<API_DIRECTORY>,<CRD_MANIFESTS>,<MANIFEST_OUTPUT>)

The parameters for the call are:

  1. TARGET_NAME: The name of your generated Make target. This can be anything, as long as it does not conflict with another make target. Your API name is a good choice.
  2. API_DIRECTORY: The location of your API. For example, if your Go types are located under pkg/apis/myoperator/v1/types.go, this should be ./pkg/apis/myoperator/v1.
  3. CRD_MANIFESTS: The directory your CRDs are located in. For example, if that is manifests/my_operator.crd.yaml, then it should be ./manifests.
  4. MANIFEST_OUTPUT: This should most likely be the same as CRD_MANIFESTS; it is only provided for flexibility to output generated code to a different directory.

You can include as many calls to different APIs as necessary, or if you have multiple APIs under the same directory (e.g., v1 and v2beta1) you can use one call to the parent directory pointing to your API.
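Putting it together, a hypothetical call for an operator API under ./pkg/apis/myoperator/v1 with CRD manifests in ./manifests might look like:

# TARGET_NAME=myoperator, API_DIRECTORY=./pkg/apis/myoperator/v1,
# CRD_MANIFESTS=./manifests, MANIFEST_OUTPUT=./manifests
$(call add-crd-gen,myoperator,./pkg/apis/myoperator/v1,./manifests,./manifests)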

After this, calling make update-codegen-crds should generate a new structural OpenAPIV3 schema for your CRDs.

Notes

  • This will not generate entire CRDs, only their OpenAPIV3 schemas. If you do not already have a CRD, you will get no output from the generator.
  • Ensure that your API is correctly declared for the generator to pick it up. That means, in your doc.go, include the following:
    1. // +groupName=<API_GROUP_NAME>, this should match the group in your CRD spec
    2. // +kubebuilder:validation:Optional, this tells the generator that fields should be optional unless explicitly marked with // +kubebuilder:validation:Required
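A minimal doc.go carrying both markers might look like this (the group name is a placeholder):

// Package v1 contains the v1 API for the myoperator API group.
//
// +groupName=myoperator.example.com
// +kubebuilder:validation:Optional
package v1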

For more information on the API markers to add to your Go types, see the Kubebuilder book.

Order of generation

make update-codegen-crds does roughly this:

  1. Run the empty-partial-schema tool. This creates empty CRD manifests in zz_generated.featuregated-crd-manifests for each FeatureGate.
  2. Run the schemapatch tool. This fills in the schema for each per-FeatureGate CRD manifest.
  3. Run the manifest-merge tool. This combines all the per-FeatureGate CRD manifests and manual-overrides.

empty-partial-schema

This tool is gengo-based and scans all types for a // +kubebuilder:object:root=true marker. Each matching type is navigated, and all tags that include a featureGate (// +openshift:enable:FeatureGate, // +openshift:validation:FeatureGateAwareEnum, and // +openshift:validation:FeatureGateAwareXValidation) are tracked. For each type and each FeatureGate, a CRD manifest file is created in zz_generated.featuregated-crd-manifests. The most common kubebuilder tags are re-implemented in this stage to fill in the non-schema portion of the CRD manifests. This includes things like metadata and resource, as well as some custom openshift tags.

The generator ignores the schema when doing verify, so it doesn't fail just because schemapatch still needs to run. The generator should clean up old FeatureGated manifests when the gate is removed. Ungated files are created for resources that are sometimes ungated. Annotations are injected to indicate which FeatureGate a manifest is for; these are later read by schemapatch and manifest-merge.

schemapatch

This tool is kubebuilder-based, with patches to handle FeatureGated types, members, and validation. It reads the annotations injected by empty-partial-schema to decide which FeatureGate should be considered enabled when creating the schema to inject. It has no knowledge of whether the FeatureGate is enabled or disabled in particular ClusterProfile,FeatureSet tuples, and it only needs a single pass over all the FeatureGated partial manifests.

If the schema generation isn't doing what you want, manual-override-crd-manifests allows partially overlaying bits of the CRD manifest (yamlpatch is no longer supported). The format is just "write the CRD you want and delete the stuff the generator sets properly". More specifically, it is the partial manifest that server-side apply (structured-merge-diff) would properly merge on top of the otherwise-generated CRD. One caveat: you cannot test this with a kube-apiserver, because the CRD schema uses atomic lists and we had to patch that schema to indicate map lists keyed by version.
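As an illustration only (the resource name, field, and override are invented), a partial manifest in manual-override-crd-manifests could look like:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myapis.example.openshift.io
spec:
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        properties:
          spec:
            properties:
              coolNewField:
                # only the overridden detail appears; everything the generator
                # sets properly is omitted and merged via structured-merge-diff
                maxLength: 512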

manifest-merge

This tool is gengo-based; it combines the files in zz_generated.featuregated-crd-manifests and manual-override-crd-manifests per ClusterProfile,FeatureSet tuple. It takes as input all possible ClusterProfiles and all possible FeatureSets, and maps each ClusterProfile,FeatureSet tuple to its set of enabled and disabled FeatureGates. Then, for each CRD,ClusterProfile,FeatureSet tuple, it merges the pertinent input using structured-merge-diff (SSA) logic based on the CRD schema plus a patch to make atomic fields map-lists. Pertinence is determined based on:

  1. Does this manifest have preferred ClusterProfile annotations? If so, honor them; if not, include it everywhere.
  2. Does this manifest have FeatureGate annotations? If so, match them against the enabled set for the ClusterProfile,FeatureSet tuple. Note that CustomNoUpgrade selects everything.

Once we have a CRD for each ClusterProfile,FeatureSet tuple, we choose what to serialize. This roughly follows:

  1. if all the CRDs are the same, write a single file and annotate with no FeatureSet and every ClusterProfile. Done.
  2. if all the CRDs are the same across all ClusterProfiles for each FeatureSet, create one file per FeatureSet and annotate with one FeatureSet and all ClusterProfiles. Done.
  3. if all the CRDs are the same across all FeatureSets for one ClusterProfile, create one file and annotate with no FeatureSet and one ClusterProfile. Continue to 4.
  4. for all remaining ClusterProfile,FeatureSet tuples, serialize a file with one FeatureSet and one ClusterProfile.

api's Issues

Need API guidelines/standards doc

I'm sure most of it can point to k8s, but we recently had some more generic api-type items come up, like the fact that the registry uses port 5000 for its service when it would have been preferred to use :443 so that the port suffix is not needed.

I don't know where else the guidelines could go, so including them in an api guidelines doc in this repo seems like as good a place as any.

/cc @smarterclayton @openshift/api-reviewers @adambkaplan

Does the operator support private and public DNS zones at the same time?

Hi,

we are trying to set up the following infrastructure.

An internal cluster in AWS, which means API and APPS routers using private subnets. Besides that, we want to provide a public router for running some services on the internet.

We tried the following:

Updating the cluster CRD:

spec:
  baseDomain: internal.foo.bar
  privateZone:
    tags:
      Name: cluster-n4sbw-int
      kubernetes.io/cluster/cluster-n4sbw: owned
  publicZone: public.foo.bar
    tags:
      Name: cluster-n4sbw-external
      kubernetes.io/cluster/cluster-n4sbw-external: owned

Create another ingress controller:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: external
  namespace: openshift-ingress-operator
spec:
  domain: public.foo.bar
  endpointPublishingStrategy:
    loadBalancer:
      scope: External
    type: LoadBalancerService

This seems to work and we are now able to attach public routes to our services, but the IngressController ends up in a degraded state with the following error:

Some ingresscontrollers are degraded: ingresscontroller "default" is degraded: DegradedConditions: One or more other status conditions indicate a degraded state: DNSReady=False (FailedZones: The record failed to provision in some zones: [{ map[Name:cluster-n4sbw-external kubernetes.io/cluster/luster-n4sbw-external:owned]}])

Removing the publicZone from the cluster CRD does not work because then the operator creates the wildcard DNS entry for the public Ingress in the private hosted zone of Route 53 which is not resolvable from outside the VPC.

Is there a way to solve that problem?

We are using Openshift 4.6.15.

Make mirror-by-digest-only option configurable through ImageContentSourcePolicy

When working with disconnected environments, sometimes you need to define multiple ImageContentSourcePolicies, some of which are used by apps / manifests that don't use digests when pulling images.

Currently, all configurations are added with the mirror-by-digest-only property set to true. It would be nice if this property could be configured using the ImageContentSourcePolicy.

Thanks,

Missing documentation for API schema policy

In slack today I was alerted to the possibility that an OpenShift component to which I contribute may not be in compliance with a policy requiring a schema for its APIs. I don't know the details of that policy, how to tell if it actually affects the component I'm working on, or what sort of code (or other) changes might be needed in order to comply. There does not appear to be any documentation about that here in this repo, so I'm filing this ticket to ask someone to write all of that down so I can ensure someone takes care of any gaps we have.

I don't think we link to slack conversations in GitHub issues, so ping me and I'll share a link privately if you would like more details.

Add shortNames for CRD resources

Currently, only clusteroperator has a short name, co; other CRD resources do not have short names. It would be better to add short names for CRDs in OCP to improve usability.

tools makefile version checks output git errors when vendored in another repo

The tools Makefile version checks output git errors when vendored / run from another repository via make -C, as described in the codegen README for inclusion in other repositories. The openshift/api vendor, go.mod, and go.sum files used by the check aren't there when vendored into a different project. The check ultimately works but outputs git errors when run.

Example output from including in the cloud credential operator repo:

➜  cloud-credential-operator ✗ make -C vendor/github.com/openshift/api/tools run-codegen BASE_DIR="${PWD}/pkg/apis" API_GROUP_VERSIONS="cloudcredential.openshift.io/v1" OPENSHIFT_REQUIRED_FEATURESETS="Default"
make: Entering directory '/home/abutcher/go/src/github.com/openshift/cloud-credential-operator/vendor/github.com/openshift/api/tools'
fatal: ambiguous argument 'vendor': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
fatal: ambiguous argument 'vendor': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
fatal: ambiguous argument 'vendor': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
fatal: vendor: no such path in the working tree.
Use 'git <command> -- <path>...' to specify paths that do not exist locally.
fatal: vendor: no such path in the working tree.
Use 'git <command> -- <path>...' to specify paths that do not exist locally.
fatal: vendor: no such path in the working tree.
Use 'git <command> -- <path>...' to specify paths that do not exist locally.
Building codegen version 0f37397c68ee97ff55ba80aba040fc84d4a65653-dirty
/home/abutcher/go/src/github.com/openshift/cloud-credential-operator/vendor/github.com/openshift/api/tools/_output/bin/linux/amd64/codegen --base-dir /home/abutcher/go/src/github.com/openshift/cloud-credential-operator/pkg/apis --api-group-versions cloudcredential.openshift.io/v1 --required-feature-sets Default
I0228 10:37:37.514558  613509 root.go:80] Running generators for cloudcredential.openshift.io
...

CRDs of historical Openshift resources (Route, ImageStream) for interop with envtest / operator-sdk

Hello.

This is a question regarding the context explained in kubernetes-sigs/controller-runtime#1191 (comment)

TL;DR of the above:
This repo (openshift/api) does not include CRDs for some historical OpenShift resources in config/v1 (e.g. Route, ImageStream), unless I've missed them somehow.

Those CRDs can be helpful when working on OpenShift operators using the operator-sdk (because envtest, the test framework coming with operator-sdk, applies CRDs to a pseudo K8s API to test the operators' control loops), if they interact with those resources.

Could those CRDs be added alongside the others? Is there a significant blocker which prevents that from happening, and can we help?

(The question and the need are motivated by thoth-station/meteor-operator#126)

Thanks

Add tags for go mod compatibility

API does not have tags, only release branches.

Using tags is friendlier for go mod-based projects.

fyi @openshift/api-reviewers @openshift/api-approvers

Incorrect validation of hostname type in IngressController.operator.openshift.io/v1

The Hostname type definition in ./config/v1/types_ingress.go defines Hostname as a string of "hostname" format. However, validation of the hostname format does not allow the top-level domain to include a digit.

Example:

my.domain.com works
my.domain.c4m does not

How did I test? Creating a componentRoute in Ingress. This definition works:

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  creationTimestamp: "2022-01-07T10:12:30Z"
  generation: 4
  name: cluster
  resourceVersion: "1239439"
  uid: 3460d6bf-8907-4b54-a920-3cc6a7e5ba18
spec:
  componentRoutes:
  - hostname: console.apps.cp.mbs.sk.vwgtr
    name: test
    namespace: openshift-console
  domain: apps.alfa.cp.mbs.sk.vwgtr
status: {}

This one returns an error:

ingresses.config.openshift.io "cluster" was not valid:

  • spec.componentRoutes.hostname: Invalid value: "console.apps.cp.mbs.sk.vw8tr": spec.componentRoutes.hostname in body must be of type hostname: "console.apps.cp.mbs.sk.vw8tr"

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  creationTimestamp: "2022-01-07T10:12:30Z"
  generation: 4
  name: cluster
  resourceVersion: "1239439"
  uid: 3460d6bf-8907-4b54-a920-3cc6a7e5ba18
spec:
  componentRoutes:
  - hostname: console.apps.cp.mbs.sk.vw8tr
    name: test
    namespace: openshift-console
  domain: apps.alfa.cp.mbs.sk.vwgtr
status: {}

Type definition in ./config/v1/types_ingress.go:

// Hostname is an alias for hostname string validation.
// +kubebuilder:validation:Format=hostname
type Hostname string

'make generate-with-container' is broken with latest master

make generate-with-container results in:

hack/update-swagger-docs.sh
Generating swagger type docs for apiserver/v1 at apiserver/v1
# sigs.k8s.io/json/internal/golang/encoding/json
vendor/sigs.k8s.io/json/internal/golang/encoding/json/encode.go:1249:12: sf.IsExported undefined (type reflect.StructField has no field or method IsExported)
vendor/sigs.k8s.io/json/internal/golang/encoding/json/encode.go:1255:18: sf.IsExported undefined (type reflect.StructField has no field or method IsExported)
make: *** [Makefile:65: update-scripts] Error 2
make: *** [Makefile:70: generate-with-container] Error 2

AllowPrivilegeEscalation field not exported for SecurityContextConstraints

When importing github.com/openshift/api/security/v1 in a Golang operator project, it came to my attention that one is not able to set AllowPrivilegeEscalation when creating a custom security context constraint.

This can be verified by looking at the SecurityContextConstraints struct shown in https://pkg.go.dev/github.com/openshift/api/security/v1#SecurityContextConstraints. However, I did notice the field is exported in the source code on GitHub; see line 97:

AllowPrivilegeEscalation *bool `json:"allowPrivilegeEscalation,omitempty" protobuf:"varint,23,rep,name=allowPrivilegeEscalation"`

At this time, I am not sure what would be the proper way to reconcile these two values to allow users to set a value for AllowPrivilegeEscalation.

Only one version of the module shows as available:

➜  git:(main) ✗ go list -u -m github.com/openshift/api
github.com/openshift/api v3.9.0+incompatible
➜  git:(main) ✗ 

Dependabot raised a PR to upgrade this dep to `v3.9.0+incompatible` but I don't see a tag with this name

Dependabot raised a PR in my project that upgrades the openshift/api dependency to v3.9.0+incompatible, but I don't see any tag in this repo with this name. go get github.com/openshift/api@v3.9.0+incompatible works fine though, so I am not able to understand where that tag is being pulled from. If I knew that, I could check whether it's really an upgrade over the pseudo-version that I already have for this module.

'make update-codegen-crds' generates a diff with latest master

Not sure if the file authorization/v1/generated.pb.go is up-to-date in the latest master.

After running make update-codegen-crds,

$ git status 
On branch check_make
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
        modified:   authorization/v1/generated.pb.go

no changes added to commit (use "git add" and/or "git commit -a")

AWS load balancer annotations aren't supported

Running Openshift on AWS.

Ultimately -- I think -- what comes out of the Openshift Ingress Operator is an AWS Application Load Balancer ingress controller. The Openshift Ingress Operator will drop an AWS Load Balancer Ingress Controller which supports either ELB Classic load balancers or NLBs; no ALBs.

The definition of type AWSNetworkLoadBalancerParameters struct suggests that annotations which could be passed down to the underlying ingress (and hence the created ELB/NLB resources) aren't supported?

Specifically, we would like to create NLBs with static IPs, and to add custom AWS resource tags to our load balancers, for cost reporting: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/

ImageContentPolicy Schema does not allow many valid hostnames for mirrors or sources

My reading of the regex below is that all hostname components must start with an alpha character. It seems like this was designed to prevent IP addresses. There are valid use cases for IP addresses, and RFC 1123 also allows hostname components to start with a digit.

pattern: ^(([a-zA-Z]|[a-zA-Z][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z]|[A-Za-z][A-Za-z0-9\-]*[A-Za-z0-9])(:[0-9]+)?(\/[^\/:\n]+)*(\/[^\/:\n]+((:[^\/:\n]+)|(@[^\n]+)))?$

RFC 1123, Section 2.1 ("Host Names and Numbers"), specifically calls out:

The syntax of a legal Internet host name was specified in [RFC-952](https://datatracker.ietf.org/doc/html/rfc952) [DNS:4]. One aspect of host name syntax is hereby changed: the restriction on the first character is relaxed to allow either a letter or a digit. Host software MUST support this more liberal syntax.

and later in the same section:

Whenever a user inputs the identity of an Internet host, it SHOULD be possible to enter either (1) a host domain name or (2) an IP address in dotted-decimal ("#.#.#.#") form.

no high level end user api for primary operations like application deployment

OpenShift's existing REST API is low level and doesn't allow performing primary operations like application deployment, status checks, etc.
If I'd like to deploy an application from a template through the API, I go through the following steps:

  1. create a new project (oc new-project).
    Issues: when I create a project through the API, I get a project without role bindings.
  2. create necessary persistent volumes (oc process/create)
    Issues: there is no API to process templates; nfs/hostPath PVs cannot create folders
  3. process and deploy the template (oc process/oc create/oc new-app)
    Issues: no template processing; code has to recognize the type of each template object and create it with the appropriate API call

So, it would be great to implement some end-user API like what can be done using the oc command.

Rename GenerationHistory

I'm thinking about an operator, and I'd like to store the last generation of the objects that the operator created/updated so it can detect updates in the .spec of those objects. GenerationHistory has all the information; however, I don't like the name. There is no history: it's the generation of a single object that the operator knows is correct.

Perhaps ChildGeneration (the objects that the operator creates could be called children)? Or ObjectGeneration?

type GenerationHistory struct {

Have the tags been removed recently?

Several other issues suggest that there at least used to be tags in this repository, yet the tag list is empty now.

I came here to investigate why my Go project, which depends on the OpenShift API, started complaining that a certain version (mentioned in my go.mod) is not found.

Were the tags completely removed from the repository? Is this an intentional change or an unintentional mistake?

Route status does not have `omitempty` set

The status field of the Route type does not have the omitempty json tag, thus marking it required.

Using the OpenShift API in other tools that validate the resources results in missing required field "status" in openshift.api.route.v1.Route. Other resources like DeploymentConfigs have their status marked as omitempty.

Openshift REST API - /webhooks/ returns wrong selflink (BUILDCONFIG instead of BUILD)

Trigger a pipeline-based build through a webhook:

curl -k -X POST APIHOST/oapi/v1/namespaces/NAMESPACE/buildconfigs/BUILDCONFIG/webhooks/SECRET/generic

The call returns
"selfLink":"/apis/build.openshift.io/v1/namespaces/NAMESPACE/buildconfigs/BUILDNAME/instantiate

which imho is wrong, because no new build config is created, but rather a build, so the selfLink should be

/apis/build.openshift.io/v1/namespaces/NAMESPACE/builds/BUILDNAME

imagecontentsourcepolicy has no shortname

Not sure I'm in the right repository, sorry if I'm wrong.

It is tedious to type "oc get imagecontentsourcepolicy" ten times a day; it would be so much better to have a shortname like "oc get icsp".

machineconfiguration API protobuf

We're working to move the MCO types from the machine-config-operator repo. Within our types we're using the Ignition types (https://github.com/openshift/machine-config-operator/blob/master/pkg/apis/machineconfiguration.openshift.io/v1/types.go#L233). We're already doing a hack to provide DeepCopy (https://github.com/openshift/machine-config-operator/blob/master/pkg/apis/machineconfiguration.openshift.io/v1/machineconfig.deepcopy.go), but we're now stuck because the Ignition types don't have protobuf annotations (nor does the generate target write them, of course). This results in make generate working just fine except for the protobuf case, because of Ignition. I can see a $PROTO_OPTIONAL env var but it doesn't seem to take effect.

Do we need the Ignition types to also have protobuf annotations? Can we port the API without protobuf?

We do have a WIP PR here (runcom#1) which fails at the bare make target because of protobuf, as said.

'generate' is not a supported value of mappingMethod

Per Openshift doc, generate is a supported mappingMethod.


However, on an OpenShift cluster, an error is reported when setting 'mappingMethod' to 'generate'.

It seems 'generate' is not listed as a MappingMethodType in https://github.com/openshift/api/blob/e34bc2276d2e91e2e4f37395349c253652511754/config/v1/types_oauth.go:

// MappingMethodType specifies how new identities should be mapped to users when they log in
type MappingMethodType string

const (
	// MappingMethodClaim provisions a user with the identity’s preferred user name. Fails if a user
	// with that user name is already mapped to another identity.
	// Default.
	MappingMethodClaim MappingMethodType = "claim"

	// MappingMethodLookup looks up existing users already mapped to an identity but does not
	// automatically provision users or identities. Requires identities and users be set up
	// manually or using an external process.
	MappingMethodLookup MappingMethodType = "lookup"

	// MappingMethodAdd provisions a user with the identity’s preferred user name. If a user with
	// that user name already exists, the identity is mapped to the existing user, adding to any
	// existing identity mappings for the user.
	MappingMethodAdd MappingMethodType = "add"
)

Errors running go mod vendor after pull request #937 merged

I am getting all kinds of errors trying to run go mod vendor in openshift/oc while trying to pull in the commit from pull request #937, or the one after it.

  • malformed file path
  • invalid char ':'

Example:

tls/docs/kube-apiserver Serving Certificates/subcert-openshift-kube-apiserver-operator_localhost-recovery-serving-signer@1622133567::2777012960471375622.png: malformed file path "tls/docs/kube-apiserver Serving Certificates/subcert-openshift-kube-apiserver-operator_localhost-recovery-serving-signer@1622133567::2777012960471375622.png": invalid char ':'

Can we get this either fixed or reverted?

route: new optional attribute subdomain lacks json:",omitempty"

The new Subdomain attribute does not have omitempty in the Go struct tag, unlike other fields like Path. Is this an accidental omission?

api/route/v1/types.go

Lines 86 to 90 in d297251

// +optional
Subdomain string `json:"subdomain" protobuf:"bytes,8,opt,name=subdomain"`
// path that the router watches for, to route traffic for to the service. Optional
Path string `json:"path,omitempty" protobuf:"bytes,2,opt,name=path"`

It affects the output of "oc get route" and client-go serializing to YAML using k8s.io/apimachinery/pkg/runtime/serializer/json.

No Release Tag Pushed since 4.1.0

Hi guys,

I'm from the @fabric8io team. We maintain the Kubernetes and OpenShift clients, which are used by a lot of people in the Java ecosystem. We generate the Kubernetes/OpenShift model from Go structs in api and origin. Right now our OpenShift Go struct model is pointing at v4.1.0 [0], but I have noticed that no other tag has been pushed since then, even though OpenShift 4.3.0 got released recently.

I'm just curious why OpenShift upstream maintainers are not pushing tags. It's not like we're blocked, as we can update the model to the latest master revision, but it would be nice if we could upgrade in step with OpenShift releases.

[0] https://github.com/fabric8io/kubernetes-client#compatibility-matrix

Depends on github.com/gogo/protobuf v1.3.2

See gogo/protobuf#691 for context/background

I'm writing as the maintainer of the package in Debian. We were discussing internally when we can move the distribution to the newer protobuf v2 APIs, and the dependency on gogo/protobuf seemed problematic.

Is gogo/protobuf a critical dependency of openshift/api? If not, maybe it'd be appropriate to move to google.golang.org/protobuf instead?

Restoring deploymentconfigs with history result in weird state

Consider this scenario:

# *(this is broken, you have to specify the command manually with oc edit...)
$ oc create deploymentconfig test --image=centos:7 -- /bin/sleep infinity
# run this 3 times:
$ oc rollout latest test
$ oc get rc
NAME      DESIRED   CURRENT   READY     AGE
test-1    0         0         0         1m
test-2    0         0         0         1m
test-3    1         1         1         1m

$ oc get rc,dc -o yaml > backup.yaml
$ oc delete all --all

# Now restore everything back:

$ oc create -f backup.yaml

What happens here is that basically all replication controllers are deleted right after they are created. Then a new RC is created with revision=1... This is because the replication controllers have ownerRefs set to the DC which was deleted, and the UID does not match the newly created DC.

If you edit the backup.yaml, remove all ownerRef fields from the RCs, and recreate everything, then the 3 RCs will stay, but the revision for the DC is set to 1 instead of 3...

That means, when you do oc rollout latest test, it will tell you that it successfully rolled out, but nothing will happen (just the DC revision is bumped) until you call that command three times. On the fourth time, it will actually trigger a new rollout.

*openshift/origin#20728

Support toleration for buildconfig

Hi

It would be useful to support tolerations for BuildConfig.
BuildConfig supports nodeSelector, but that's not always convenient.

In my case I had 3 nodes out of 72 that have GPUs, and I put a taint on them to make sure that other tenants' workloads won't be scheduled there. But then I failed to schedule a build on them, because BuildConfig does not allow tolerations.

Thanks.

Need generation verify scripts

We need generation verify scripts and something in the update-deps.sh script that removes those proto lines for now.

Also, @sttts, thanks for working on the generators. It made this a lot easier.

docstrings for sub-structs of CRDs are not reflected by 'oc explain'

Let me start off by saying: I'm not sure if this is the right place to report this.

The problem: docstrings written above sub-structs for CRDs are not reflected when doing an oc explain; see the example below.

in config/v1/types_network.go

type Network struct {
        metav1.TypeMeta `json:",inline"`
        // Standard object's metadata.
        metav1.ObjectMeta `json:"metadata,omitempty"`

        // spec holds user settable values for configuration.
        // +kubebuilder:validation:Required
        // +required
        Spec NetworkSpec `json:"spec"`
        // status holds observed values from the cluster. They may not be overridden.
        // +optional
        Status NetworkStatus `json:"status"`
}

// NetworkSpec is the desired network configuration.
// As a general rule, this SHOULD NOT be read directly. Instead, you should
// consume the NetworkStatus, as it indicates the currently deployed configuration.
// Currently, changing ClusterNetwork, ServiceNetwork, or NetworkType after
// installation is not supported.
type NetworkSpec struct {

Output by oc explain:

$ oc explain network.spec
KIND:     Network
VERSION:  config.openshift.io/v1

RESOURCE: spec <Object>

DESCRIPTION:
     spec holds user settable values for configuration.

FIELDS:

This example was taken from Network, but it seems to apply to all OpenShift CRDs defined in config/v1.

Let me know if you guys prefer that I file a Bugzilla report for this instead?

ImageStreamTags is missing watch verb

I see that ImageStreamTags does not implement the watch verb, which would be important for observing changes in this resource as well.

$ oc api-resources -o wide | grep ImageStream
imagestreamimages                     isimage          image.openshift.io                    true         ImageStreamImage                     [get]
imagestreamimports                                     image.openshift.io                    true         ImageStreamImport                    [create]
imagestreammappings                                    image.openshift.io                    true         ImageStreamMapping                   [create]
imagestreams                          is               image.openshift.io                    true         ImageStream                          [create delete deletecollection get list patch update watch]
imagestreamtags                       istag            image.openshift.io                    true         ImageStreamTag                       [create delete get list patch update]

OCP version: 4.3.1

Garbage collection does not seem to work

Hi,
we are building an operator where we have a higher-level object called Paas, which in turn creates Namespaces, ClusterResourceQuotas, Groups, etc. All works as expected:

  • Creating a Paas also creates namespaces, ClusterResourceQuotas and Groups
  • Removing a Paas also cleans namespaces (we use controllerutil.SetControllerReference(paas, ns, r.Scheme) for that)

But

  • Removing a Paas does not clean ClusterResourceQuotas and Groups

Which is weird because

  • we also use controllerutil.SetControllerReference(paas, quota, r.Scheme) for quotas, and the same for the group
  • The expected .metadata.ownerReferences items appear as they should.
    It is just that the garbage collector does not seem to work for objects that come from the openshift/api project

My questions:

  • Is this expected behavior, or considered a bug?
  • Is there any way to resolve this, and how?
    Note, btw, that the quota names are derived from the Paas name, so I was able to write a hack.
    But that hack will not work for Groups, and I don't like the hack (even for quotas)...
