operator-registry's Introduction

operator-registry

Operator Registry runs in a Kubernetes or OpenShift cluster to provide operator catalog data to Operator Lifecycle Manager.

Certificate of Origin

By contributing to this project you agree to the Developer Certificate of Origin (DCO). This document was created by the Linux Kernel community and is a simple statement that you, as a contributor, have the legal right to make the contribution. See the DCO file for details.

Overview

This project provides the following binaries:

  • opm, which generates and updates registry databases as well as the index images that encapsulate them.
  • initializer, which takes a directory of operator manifests as input and outputs a sqlite database containing the same data for querying.
    • Deprecated: use opm registry|index add instead (a sketch of the replacement commands appears at the end of this overview).
  • registry-server, which takes a sqlite database loaded with manifests and exposes a gRPC interface to it.
    • Deprecated: use opm registry serve instead.
  • configmap-server, which takes a kubeconfig and a configmap reference, parses the configmap into a sqlite database, and exposes it via the same gRPC interface as registry-server.

And libraries:

  • pkg/client - providing a high-level client interface for the gRPC API.
  • pkg/api - providing low-level client libraries for the gRPC interface exposed by registry-server.
  • pkg/registry - providing basic registry types like Packages, Channels, and Bundles.
  • pkg/sqlite - providing interfaces for building sqlite manifest databases from ConfigMaps or directories, and for querying an existing sqlite database.
  • pkg/lib - providing external interfaces for interacting with this project as an API, defining a set of standards for operator bundles and indexes.
  • pkg/containertools - providing an interface for interacting with and shelling out to common container tooling binaries (if installed in the environment).

NOTE: The opm tool is intended for those who need to manage index catalogs for OLM instances. If you are looking for a tool to help integrate your operator project with OLM, you should use the Operator-SDK instead.
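
For reference, here is a sketch of the opm commands that replace the deprecated initializer and registry-server flows. The flag names shown (--bundle-images, --database, --port) and the bundle image reference are assumptions that may differ between opm versions, so consult opm registry add --help and opm registry serve --help:

 # build a sqlite database from one or more bundle images
 opm registry add --bundle-images quay.io/my-namespace/my-bundle:v0.0.1 --database bundles.db

 # serve the resulting database over gRPC on port 50051
 opm registry serve --database bundles.db --port 50051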

Manifest format

We refer to a directory of files with one ClusterServiceVersion as a "bundle". A bundle typically includes a ClusterServiceVersion and the CRDs that define the owned APIs of the CSV in its manifest directory, though additional objects may be included. It also includes an annotations file in its metadata folder that defines higher-level aggregate data describing the bundle's format and the package information needed to add the bundle into an index of bundles.

 # example bundle
 etcd
 ├── manifests
 │   ├── etcdcluster.crd.yaml
 │   └── etcdoperator.clusterserviceversion.yaml
 └── metadata
     └── annotations.yaml
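
For illustration, the annotations.yaml in the example above might look like the following. The annotation keys mirror those reported by opm alpha bundle validate later in this document; the package, channel, and default-channel values here are assumptions for the etcd example:

 annotations:
   operators.operatorframework.io.bundle.mediatype.v1: registry+v1
   operators.operatorframework.io.bundle.manifests.v1: manifests/
   operators.operatorframework.io.bundle.metadata.v1: metadata/
   operators.operatorframework.io.bundle.package.v1: etcd
   operators.operatorframework.io.bundle.channels.v1: alpha
   operators.operatorframework.io.bundle.channel.default.v1: alpha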

When loading manifests into the database, the following invariants are validated:

  • The bundle must have at least one channel defined in the annotations.
  • Every bundle has exactly one ClusterServiceVersion.
  • If a ClusterServiceVersion owns a CRD, that CRD must exist in the bundle.

Bundle directories are identified solely by the fact that they contain a ClusterServiceVersion, which allows a fair amount of freedom in the layout of the manifests.

Check out the operator bundle design for more detail on the bundle format.

Bundle images

opm uses OCI spec container images to store the manifest and metadata contents of individual bundles, and it interacts directly with these images to generate and incrementally update the database. Once you have your manifests defined and have created a directory in the format described above, building the image is as simple as defining a Dockerfile and building that image:

podman build -t quay.io/my-container-registry-namespace/my-manifest-bundle:latest -f bundle.Dockerfile .
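
A minimal sketch of what such a bundle.Dockerfile might contain for the etcd layout above. The LABEL keys mirror the bundle annotations shown elsewhere in this document; the package and channel values are assumptions for this example, and a generated bundle.Dockerfile may include additional labels:

 FROM scratch

 LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
 LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
 LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
 LABEL operators.operatorframework.io.bundle.package.v1=etcd
 LABEL operators.operatorframework.io.bundle.channels.v1=alpha
 LABEL operators.operatorframework.io.bundle.channel.default.v1=alpha

 COPY manifests /manifests/
 COPY metadata /metadata/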

Once you have built the container, you can publish it like any other container image:

podman push quay.io/my-container-registry-namespace/my-manifest-bundle:latest

Of course, this build step can be done with any other OCI spec container tools like docker, buildah, libpod, etc.
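
For example, the equivalent build with docker is:

docker build -t quay.io/my-container-registry-namespace/my-manifest-bundle:latest -f bundle.Dockerfile .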

Note that you do not need to create your bundle manually. The Operator-SDK provides features and helpers to build, update, validate, and test bundles, whether or not your project follows the SDK layout. For more information, check its documentation on Integration with OLM.

Building an index of Operators using opm

Now that you have published the container image containing your manifests, how do you actually make that bundle available to other users' Kubernetes clusters so that the Operator Lifecycle Manager can install the operator? This is where the meat of the operator-registry project comes in. OLM has the concept of CatalogSources, which define a reference to the packages that are available to install onto a cluster. To make your bundle available, you can add it to a container image that the CatalogSource points to. This image contains a database of pointers to bundle images that OLM can pull and extract the manifests from in order to install an operator. So, to make your operator available to OLM, you can generate an index image via opm with your bundle reference included:

opm index add --bundles quay.io/my-container-registry-namespace/my-manifest-bundle:0.0.1 --tag quay.io/my-container-registry-namespace/my-index:1.0.0
podman push quay.io/my-container-registry-namespace/my-index:1.0.0

The resulting image is referred to as an "Index". It is an image that contains a database of pointers to operator manifest content that is easily queryable via an included API that is served when the container image is run.

Now that image is available for clusters to use and reference with CatalogSources on their cluster.

Index images are additive, so you can add a new version of your operator bundle when you publish a new version:

opm index add --bundles quay.io/my-container-registry-namespace/my-manifest-bundle:0.0.2 --from-index quay.io/my-container-registry-namespace/my-index:1.0.0 --tag quay.io/my-container-registry-namespace/my-index:1.0.1

For more detail on using opm to generate index images, take a look at the documentation.

Using the index with Operator Lifecycle Manager

To add an index packaged with operator-registry to your cluster for use with Operator Lifecycle Manager (OLM), create a CatalogSource referencing the image you created and pushed above:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-manifests
  namespace: default
spec:
  sourceType: grpc
  image: example-registry:latest
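
Apply the manifest to your cluster (assuming it is saved in a file named catalogsource.yaml):

kubectl apply -f catalogsource.yaml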

This will download the referenced image and start a pod in the designated namespace (default). Watch the catalog pod to verify that it starts its gRPC frontend correctly:

$ kubectl logs example-manifests-wfh5h -n default

time="2019-03-18T10:20:14Z" level=info msg="serving registry" database=bundles.db port=50051

Once the catalog has been loaded, the package definitions for your Operators are read by the package-server, a component of OLM. Watch your Operator packages become available:

$ watch kubectl get packagemanifests

[...]

NAME                     AGE
prometheus               13m
etcd                     27m

Once loaded, you can query a particular package for the Operators that it serves across multiple channels. To obtain the default channel, run:

$ kubectl get packagemanifests etcd -o jsonpath='{.status.defaultChannel}'

alpha
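
To list every channel that the package serves, query the channels list in the PackageManifest status in the same way:

$ kubectl get packagemanifests etcd -o jsonpath='{.status.channels[*].name}'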

With this information (the Operator's package name, the channel, and the name and namespace of your catalog) you can now subscribe to Operators with Operator Lifecycle Manager. A Subscription represents an intent to install an Operator and get subsequent updates from the catalog:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-subscription
  namespace: default
spec:
  channel: alpha
  name: etcd
  source: example-manifests
  sourceNamespace: default
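
Apply the Subscription to the cluster (assuming it is saved in a file named subscription.yaml) and OLM will resolve and install the operator from the catalog:

kubectl apply -f subscription.yaml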

Using the catalog locally

Start a catalog locally:

docker run --rm -p 50051:50051 <index image>

grpcurl is a useful tool for interacting with the API:

$ grpcurl -plaintext  localhost:50051 list api.Registry
GetBundle
GetBundleForChannel
GetBundleThatReplaces
GetChannelEntriesThatProvide
GetChannelEntriesThatReplace
GetDefaultBundleThatProvides
GetLatestChannelEntriesThatProvide
GetPackage
ListPackages
$ grpcurl -plaintext localhost:50051 api.Registry/ListPackages
{
  "name": "etcd"
}
{
  "name": "prometheus"
}
$ grpcurl -plaintext -d '{"name":"etcd"}' localhost:50051 api.Registry/GetPackage
{
  "name": "etcd",
  "channels": [
    {
      "name": "alpha",
      "csvName": "etcdoperator.v0.9.2"
    }
  ],
  "defaultChannelName": "alpha"
}
$ grpcurl localhost:50051 describe api.Registry.GetBundleForChannel
api.Registry.GetBundleForChannel is a method:
{
  "name": "GetBundleForChannel",
  "inputType": ".api.GetBundleInChannelRequest",
  "outputType": ".api.Bundle",
  "options": {
  }
}
$ grpcurl localhost:50051 describe api.GetBundleInChannelRequest
api.GetBundleInChannelRequest is a message:
{
  "name": "GetBundleInChannelRequest",
  "field": [
    {
      "name": "pkgName",
      "number": 1,
      "label": "LABEL_OPTIONAL",
      "type": "TYPE_STRING",
      "options": {

      },
      "jsonName": "pkgName"
    },
    {
      "name": "channelName",
      "number": 2,
      "label": "LABEL_OPTIONAL",
      "type": "TYPE_STRING",
      "options": {

      },
      "jsonName": "channelName"
    }
  ],
  "options": {

  }
}
$ grpcurl -plaintext -d '{"pkgName":"etcd","channelName":"alpha"}' localhost:50051 api.Registry/GetBundleForChannel
{
  "csvName": "etcdoperator.v0.9.2",
  "csvJson": "{\"apiVersion\":\"operators.coreos.com/v1alpha1\",\"kind\":\"ClusterServiceVersion\",\"metadata\":{\"annotations\":{\"alm-examples\":\"[{\\\"apiVersion\\\":\\\"etcd.database.coreos.com/v1beta2\\\",\\\"kind\\\":\\\"EtcdCluster\\\",\\\"metadata\\\":{\\\"name\\\":\\\"example\\\",\\\"namespace\\\":\\\"default\\\"},\\\"spec\\\":{\\\"size\\\":3,\\\"version\\\":\\\"3.2.13\\\"}},{\\\"apiVersion\\\":\\\"etcd.database.coreos.com/v1beta2\\\",\\\"kind\\\":\\\"EtcdRestore\\\",\\\"metadata\\\":{\\\"name\\\":\\\"example-etcd-cluster\\\"},\\\"spec\\\":{\\\"etcdCluster\\\":{\\\"name\\\":\\\"example-etcd-cluster\\\"},\\\"backupStorageType\\\":\\\"S3\\\",\\\"s3\\\":{\\\"path\\\":\\\"\\u003cfull-s3-path\\u003e\\\",\\\"awsSecret\\\":\\\"\\u003caws-secret\\u003e\\\"}}},{\\\"apiVersion\\\":\\\"etcd.database.coreos.com/v1beta2\\\",\\\"kind\\\":\\\"EtcdBackup\\\",\\\"metadata\\\":{\\\"name\\\":\\\"example-etcd-cluster-backup\\\"},\\\"spec\\\":{\\\"etcdEndpoints\\\":[\\\"\\u003cetcd-cluster-endpoints\\u003e\\\"],\\\"storageType\\\":\\\"S3\\\",\\\"s3\\\":{\\\"path\\\":\\\"\\u003cfull-s3-path\\u003e\\\",\\\"awsSecret\\\":\\\"\\u003caws-secret\\u003e\\\"}}}]\",\"tectonic-visibility\":\"ocs\"},\"name\":\"etcdoperator.v0.9.2\",\"namespace\":\"placeholder\"},\"spec\":{\"customresourcedefinitions\":{\"owned\":[{\"description\":\"Represents a cluster of etcd nodes.\",\"displayName\":\"etcd Cluster\",\"kind\":\"EtcdCluster\",\"name\":\"etcdclusters.etcd.database.coreos.com\",\"resources\":[{\"kind\":\"Service\",\"version\":\"v1\"},{\"kind\":\"Pod\",\"version\":\"v1\"}],\"specDescriptors\":[{\"description\":\"The desired number of member Pods for the etcd cluster.\",\"displayName\":\"Size\",\"path\":\"size\",\"x-descriptors\":[\"urn:alm:descriptor:com.tectonic.ui:podCount\"]},{\"description\":\"Limits describes the minimum/maximum amount of compute resources required/allowed\",\"displayName\":\"Resource Requirements\",\"path\":\"pod.resources\",\"x-descriptors\":[\"urn:alm:descriptor:com.tectonic.ui:resourceRequirements\"]}],\"statusDescriptors\":[{\"description\":\"The status of each of the member Pods for the etcd cluster.\",\"displayName\":\"Member Status\",\"path\":\"members\",\"x-descriptors\":[\"urn:alm:descriptor:com.tectonic.ui:podStatuses\"]},{\"description\":\"The service at which the running etcd cluster can be accessed.\",\"displayName\":\"Service\",\"path\":\"serviceName\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes:Service\"]},{\"description\":\"The current size of the etcd cluster.\",\"displayName\":\"Cluster Size\",\"path\":\"size\"},{\"description\":\"The current version of the etcd cluster.\",\"displayName\":\"Current Version\",\"path\":\"currentVersion\"},{\"description\":\"The target version of the etcd cluster, after upgrading.\",\"displayName\":\"Target Version\",\"path\":\"targetVersion\"},{\"description\":\"The current status of the etcd cluster.\",\"displayName\":\"Status\",\"path\":\"phase\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes.phase\"]},{\"description\":\"Explanation for the current status of the cluster.\",\"displayName\":\"Status Details\",\"path\":\"reason\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes.phase:reason\"]}],\"version\":\"v1beta2\"},{\"description\":\"Represents the intent to backup an etcd cluster.\",\"displayName\":\"etcd Backup\",\"kind\":\"EtcdBackup\",\"name\":\"etcdbackups.etcd.database.coreos.com\",\"specDescriptors\":[{\"description\":\"Specifies the endpoints of an etcd cluster.\",\"displayName\":\"etcd 
Endpoint(s)\",\"path\":\"etcdEndpoints\",\"x-descriptors\":[\"urn:alm:descriptor:etcd:endpoint\"]},{\"description\":\"The full AWS S3 path where the backup is saved.\",\"displayName\":\"S3 Path\",\"path\":\"s3.path\",\"x-descriptors\":[\"urn:alm:descriptor:aws:s3:path\"]},{\"description\":\"The name of the secret object that stores the AWS credential and config files.\",\"displayName\":\"AWS Secret\",\"path\":\"s3.awsSecret\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes:Secret\"]}],\"statusDescriptors\":[{\"description\":\"Indicates if the backup was successful.\",\"displayName\":\"Succeeded\",\"path\":\"succeeded\",\"x-descriptors\":[\"urn:alm:descriptor:text\"]},{\"description\":\"Indicates the reason for any backup related failures.\",\"displayName\":\"Reason\",\"path\":\"reason\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes.phase:reason\"]}],\"version\":\"v1beta2\"},{\"description\":\"Represents the intent to restore an etcd cluster from a backup.\",\"displayName\":\"etcd Restore\",\"kind\":\"EtcdRestore\",\"name\":\"etcdrestores.etcd.database.coreos.com\",\"specDescriptors\":[{\"description\":\"References the EtcdCluster which should be restored,\",\"displayName\":\"etcd Cluster\",\"path\":\"etcdCluster.name\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes:EtcdCluster\",\"urn:alm:descriptor:text\"]},{\"description\":\"The full AWS S3 path where the backup is saved.\",\"displayName\":\"S3 Path\",\"path\":\"s3.path\",\"x-descriptors\":[\"urn:alm:descriptor:aws:s3:path\"]},{\"description\":\"The name of the secret object that stores the AWS credential and config files.\",\"displayName\":\"AWS Secret\",\"path\":\"s3.awsSecret\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes:Secret\"]}],\"statusDescriptors\":[{\"description\":\"Indicates if the restore was successful.\",\"displayName\":\"Succeeded\",\"path\":\"succeeded\",\"x-descriptors\":[\"urn:alm:descriptor:text\"]},{\"description\":\"Indicates the reason for any restore related failures.\",\"displayName\":\"Reason\",\"path\":\"reason\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes.phase:reason\"]}],\"version\":\"v1beta2\"}]},\"description\":\"etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines. It’s open-source and available on GitHub. etcd gracefully handles leader elections during network partitions and will tolerate machine failure, including the leader. Your applications can read and write data into etcd.\\nA simple use-case is to store database connection details or feature flags within etcd as key value pairs. These values can be watched, allowing your app to reconfigure itself when they change. Advanced uses take advantage of the consistency guarantees to implement database leader elections or do distributed locking across a cluster of workers.\\n\\n_The etcd Open Cloud Service is Public Alpha. The goal before Beta is to fully implement backup features._\\n\\n### Reading and writing to etcd\\n\\nCommunicate with etcd though its command line utility `etcdctl` or with the API using the automatically generated Kubernetes Service.\\n\\n[Read the complete guide to using the etcd Open Cloud Service](https://coreos.com/tectonic/docs/latest/alm/etcd-ocs.html)\\n\\n### Supported Features\\n\\n\\n**High availability**\\n\\n\\nMultiple instances of etcd are networked together and secured. 
Individual failures or networking issues are transparently handled to keep your cluster up and running.\\n\\n\\n**Automated updates**\\n\\n\\nRolling out a new etcd version works like all Kubernetes rolling updates. Simply declare the desired version, and the etcd service starts a safe rolling update to the new version automatically.\\n\\n\\n**Backups included**\\n\\n\\nComing soon, the ability to schedule backups to happen on or off cluster.\\n\",\"displayName\":\"etcd\",\"icon\":[{\"base64data\":\"iVBORw0KGgoAAAANSUhEUgAAAOEAAADZCAYAAADWmle6AAAACXBIWXMAAAsTAAALEwEAmpwYAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAEKlJREFUeNrsndt1GzkShmEev4sTgeiHfRYdgVqbgOgITEVgOgLTEQydwIiKwFQCayoCU6+7DyYjsBiBFyVVz7RkXvqCSxXw/+f04XjGQ6IL+FBVuL769euXgZ7r39f/G9iP0X+u/jWDNZzZdGI/Ftama1jjuV4BwmcNpbAf1Fgu+V/9YRvNAyzT2a59+/GT/3hnn5m16wKWedJrmOCxkYztx9Q+py/+E0GJxtJdReWfz+mxNt+QzS2Mc0AI+HbBBwj9QViKbH5t64DsP2fvmGXUkWU4WgO+Uve2YQzBUGd7r+zH2ZG/tiUQc4QxKwgbwFfVGwwmdLL5wH78aPC/ZBem9jJpCAX3xtcNASSNgJLzUPSQyjB1zQNl8IQJ9MIU4lx2+Jo72ysXYKl1HSzN02BMa/vbZ5xyNJIshJzwf3L0dQhJw4Sih/SFw9Tk8sVeghVPoefaIYCkMZCKbrcP9lnZuk0uPUjGE/KE8JQry7W2tgfuC3vXgvNV+qSQbyFtAtyWk7zWiYevvuUQ9QEQCvJ+5mmu6dTjz1zFHLFj8Eb87MtxaZh/IQFIHom+9vgTWwZxAQjT9X4vtbEVPojwjiV471s00mhAckpwGuCn1HtFtRDaSh6y9zsL+LNBvCG/24ThcxHObdlWc1v+VQJe8LcO0jwtuF8BwnAAUgP9M8JPU2Me+Oh12auPGT6fHuTePE3bLDy+x9pTLnhMn+07TQGh//Bz1iI0c6kvtqInjvPZcYR3KsPVmUsPYt9nFig9SCY8VQNhpPBzn952bbgcsk2EvM89wzh3UEffBbyPqvBUBYQ8ODGPFOLsa7RF096WJ69L+E4EmnpjWu5o4ChlKaRTKT39RMMaVPEQRsz/nIWlDN80chjdJlSd1l0pJCAMVZsniobQVuxceMM9OFoaMd9zqZtjMEYYDW38Drb8Y0DYPLShxn0pvIFuOSxd7YCPet9zk452wsh54FJoeN05hcgSQoG5RR0Qh9Q4E4VvL4wcZq8UACgaRFEQKgSwWrkr5WFnGxiHSutqJGlXjBgIOayhwYBTA0ER0oisIVSUV0AAMT0IASCUO4hRIQSAEECMCCEPwqyQA0JCQBzEGjWNAqHiUVAoXUWbvggOIQCEAOJzxTjoaQ4AIaE64/aZridUsBYUgkhB15oGg1DBIl8IqirYwV6hPSGBSFteMCUBSVXwfYixBmamRubeMyjzMJQBDDowE3OesDD+zwqFoDqiEwXoXJpljB+PvWJGy75BKF1FPxhKygJuqUdYQGlLxNEXkrYyjQ0GbaAwEnUIlLRNvVjQDYUAsJB0HKLE4y0AIpQNgCIhBIhQTgCKhZBBpAN/v6LtQI50JfUgYOnnjmLUFHKhjxbAmdTCaTiBm3ovLPqG2urWAij6im0Nd9aTN9ygLUEt9LgSRnohxUPIKxlGaE+/6Y7znFf0yX+GnkvFFWmarkab2o9PmTeq8sbd2a7DaysXz7i64VeznN4jCQhN9gdDbRiuWrfrsq0mHIrlaq+hlotCtd3Um9u0BYWY8y5D67wccJoZjFca7iUs9VqZcfsZwTd1sbWGG+OcYaTnPAP7rTQVVlM4Sg3oGvB1tmNh0t/HKXZ1jFoIMwCQjtqbhNxUmkGYqgZEDZP11HN/S3gAYRozf0l8C5kKEKUvW0t1IfeWG/5MwgheZTT1E0AEhDkAePQO+Ig2H3DncAkQM4cwUQCD530dU4B5Yvmi2LlDqXfWrxMCcMth51RToRMNUXFnfc2KJ0+Ryl0VNOUwlhh6NoxK5gnViTgQpUG4SqSyt5z3zRJpuKmt3Q1614QaCBPaN6je+2XiFcWAKOXcUfIYKRyL/1lb7pe5VxSxxjQ6hImshqGRt5GWZVKO6q2wHwujfwDtIvaIdexj8Cm8+a68EqMfox6x/voMouZF4dHnEGNeCDMwT6vdNfekH1MafMk4PI06YtqLVGl95aEM9Z5vAeCTOA++YLtoVJRrsqNCaJ6WRmkdYaNec5BT/lcTRMqrhmwfjbpkj55+OKp8IEbU/JLgPJE6Wa3TTe9sHS+ShVD5QIyqIxMEwKh12olC6mHIed5ewEop80CNlfIOADYOT2nd6ZXCop+Ebqchc0JqxKcKASxChycJgUh1rnHA5ow9eTrhqNI7JWiAYYwBGGdpyNLoGw0Pkh96h1BpHihyywtATDM/7Hk2fN9EnH8BgKJCU4ooBkbXFMZJiPbrOyecGl3zgQDQL4hk10IZiOe+5w99Q/gBAEIJgPhJM4QAEEoFREAIAAEiIASAkD8Qt4AQAEIAERAGFlX4CACKAXGVM4ivMwWwCLFAlyeoaa70QePKm5Dlp+/n+ye/5dYgva6YsUaVeMa+tzNFeJtWwc+udbJ0Fg399kLielQJ5Ze61c2+7ytA6EZetiPxZC6tj22yJCv6jUwOyj/zcbqAxOMyAKEbfeHtNa7DtYXptjsk2kJxR+eIeim/tHNofUKYy8DMrQcAKWz6brpvzyIAlpwPhQ49l6b7skJf5Z+YTOYQc4FwLDxvoTDwaygQK+U/kVr+ytSFBG01Q3gnJJR4cNiAhx4HDub8/b5DULXlj6SVZghFiE+LdvE9vo/o8Lp1RmH5hzm0T6wdbZ6n+D6i44zDRc3ln6CpAEJfXiRU45oqLz8gFAThWsh7ughrRibc0QynHgZpNJa/ENJ+loCwu/qOGnFIjYR/n7TfgycULhcQhu6VC+HfF+L3BoAQ4WiZTw1M+FPCnA2gKC6/FAhXgDC+ojQGh3NuWsvfF1L/D5ohlCKtl1j2ldu9a/nPAKFwN56Bst10zCG0CPleXN/zXPgHQZXaZaBgrbzyY5V/mUA+6F0hwtGN9rwu5DVZPuwWqfxdFz1LWbJ2lwKEa+0Qsm4Dl3fp+Pu0lV97PgwIPfSsS+UQhj5Oo+vvFULazRIQyvGEcx
PuNLCth2MvFsrKn8UOilAQShkh7TTczYNMoS6OdP47msrPi82lXKGWhCdMZYS0bFy+vcnGAjP1CIfvgbKNA9glecEH9RD6Ol4wRuWyN/G9MHnksS6o/GPf5XcwNSUlHzQhDuAKtWJmkwKElU7lylP5rgIcsquh/FI8YZCDpkJBuE4FQm7Icw8N+SrUGaQKyi8FwiDt1ve5o+Vu7qYHy/psgK8cvh+FTYuO77bhEC7GuaPiys/L1X4IgXDL+e3M5+ovLxBy5VLuIebw1oqcHoPfoaMJUsHays878r8KbDc3xtPx/84gZPBG/JwaufrsY/SRG/OY3//8QMNdsvdZCFtbW6f8pFuf5bflILAlX7O+4fdfugKyFYS8T2zAsXthdG0VurPGKwI06oF5vkBgHWkNp6ry29+lsPZMU3vijnXFNmoclr+6+Ou/FIb8yb30sS8YGjmTqCLyQsi5N/6ZwKs0Yenj68pfPjF6N782Dp2FzV9CTyoSeY8mLK16qGxIkLI8oa1n8tz9juP40DlK0epxYEbojbq+9QfurBeVIlCO9D2396bxiV4lkYQ3hOAFw2pbhqMGISkkQOMcQ9EqhDmGZZdo92JC0YHRNTfoSg+5e0IT+opqCKHoIU+4ztQIgBD1EFNrQAgIpYSil9lDmPHqkROPt+JC6AgPquSuumJmg0YARVCuneDfvPVeJokZ6pIXDkNxQtGzTF9/BQjRG0tQznfb74RwCQghpALBtIQnfK4zhxdyQvVCUeknMIT3hLyY+T5jo0yABqKPQNpUNw/09tGZod5jgCaYFxyYvJcNPkv9eof+I3pnCFEHIETjSM8L9tHZHYCQT9PaZGycU6yg8S4akDnJ+P03L0+t23XGzCLzRgII/Wqa+fv/xlfvmKvMUOcOrlCDdoei1MGdZm6G5VEIfRzzjd4aQs69n699Rx7ewhvCGzr2gmTPs8zNsJOrXt24FbkhhOjCfT4ICA/rPbyhUy94Dks0gJCX1NzCZui9YUd3oei+c257TalFbgg19ILHrlrL2gvWgXAL26EX76gZTNASQnad8Ibwhl284NhgXpB0c+jKhWO3Ms1hP9ihJYB9eMF6qd1BCPk0qA1s+LimFIu7m4nsdQIzPK4VbQ8hYvrnuSH2G9b2ggP78QmWqBdF9Vx8SSY6QYdUW7BTA1schZATyhvY8lHvcRbNUS9YGFy2U+qmzh2YPVc0I7yAOFyHfRpyUwtCSzOdPXMHmz7qDIM0e0V2wZTEk+6Ym6N63eBLp/b5Bts+2cKCSJ/LuoZO3ANSiE5hKAZjnvNSS4931jcw9jpwT0feV/qSJ1pVtCyfHKDkvK8Ejx7pUxGh2xFNSwx8QTi2H9ceC0/nni64MS/5N5dG39pDqvRV+WgGk71c9VFXF9b+xYvOw/d61iv7m3MvEHryhvecwC52jSSx4VIIgwnMNT/UsTxIgpPt3K/ARj15CptwL3Zd/ceDSATj2DGQjbxgWwhdeMMte7zpy5On9vymRm/YxBYljGVjKWF9VJf7I1+sex3wY8w/V1QPTborW/72gkdsRDaZMJBdbdHIC7aCkAu9atlLbtnrzerMnyToDaGwelOnk3/hHSem/ZK7e/t7jeeR20LYBgqa8J80gS8jbwi5F02Uj1u2NYJxap8PLkJfLxA2hIJyvnHX/AfeEPLpBfe0uSFHbnXaea3Qd5d6HcpYZ8L6M7lnFwMQ3MNg+RxUR1+6AshtbsVgfXTEg1sIGax9UND2p7f270wdG3eK9gXVGHdw2k5sOyZv+Nbs39Z308XR9DqWb2J+PwKDhuKHPobfuXf7gnYGHdCs7bhDDadD4entDug7LWNsnRNW4mYqwJ9dk+GGSTPBiA2j0G8RWNM5upZtcG4/3vMfP7KnbK2egx6CCnDPhRn7NgD3cghLIad5WcM2SO38iqHvvMOosyeMpQ5zlVCaaj06GVs9xUbHdiKoqrHWgquFEFMWUEWfXUxJAML23hAHFOctmjZQffKD2pywkhtSGHKNtpitLroscAeE7kCkSsC60vxEl6yMtL9EL5HKGCMszU5bk8gdkklAyEn5FO0yK419rIxBOIqwFMooDE0tHEVYijAUECIshRCGIhxFWIowFJ5QkEYIS5PTJrUwNGlPyN6QQPyKtpuM1E/K5+YJDV/MiA3AaehzqgAm7QnZG9IGYKo8bHnSK7VblLL3hOwNHziPuEGOqE5brrdR6i+atCfckyeWD47HkAkepRGLY/e8A8J0gCwYSNypF08bBm+e6zVz2UL4AshhBUjML/rXLefqC82bcQFhGC9JDwZ1uuu+At0S5gCETYHsV4DUeD9fDN2Zfy5OXaW2zAwQygCzBLJ8cvaW5OXKC1FxfTggFAHmoAJnSiOw2wps9KwRWgJCLaEswaj5NqkLwAYIU4BxqTSXbHXpJdRMPZgAOiAMqABCNGYIEEJutEK5IUAIwYMDQgiCACEEAcJs1Vda7gGqDhCmoiEghAAhBAHCrKXVo2C1DCBMRlp37uMIEECoX7xrX3P5C9QiINSuIcoPAUI0YkAICLNWgfJDh4T9hH7zqYH9+JHAq7zBqWjwhPAicTVCVQJCNF50JghHocahKK0X/ZnQKyEkhSdUpzG8OgQI42qC94EQjsYLRSmH+pbgq73L6bYkeEJ4DYTYmeg1TOBFc/usTTp3V9DdEuXJ2xDCUbXhaXk0/kAYmBvuMB4qkC35E5e5AMKkwSQgyxufyuPy6fMMgAFCSI73LFXU/N8AmEL9X4ABACNSKMHAgb34AAAAAElFTkSuQmCC\",\"mediatype\":\"image/png\"}],\"install\":{\"spec\":{\"deployments\":[{\"name\":\"etcd-operator\",\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"name\":\"etcd-operator-alm-owned\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"etcd-operator-alm-owned\"},\"name\":\"etcd-operator-alm-owned\"},\"spec\":{\"containers\":[{\"command\":[\"etcd-operator\",\"--create-crd=false\"],\"env\":[{\"name\":\"MY_POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}},{\"name\":\"MY_POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}}],\"image\":\"quay.io/coreos/etcd-operator@sha256:c0301e4686c3ed4206e370b42de5a3bd2229b9fb4906cf85f3f30650424abec2\",\"name\":\"etcd-operator\"},{\"command\":[\"etcd-backup-
operator\",\"--create-crd=false\"],\"env\":[{\"name\":\"MY_POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}},{\"name\":\"MY_POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}}],\"image\":\"quay.io/coreos/etcd-operator@sha256:c0301e4686c3ed4206e370b42de5a3bd2229b9fb4906cf85f3f30650424abec2\",\"name\":\"etcd-backup-operator\"},{\"command\":[\"etcd-restore-operator\",\"--create-crd=false\"],\"env\":[{\"name\":\"MY_POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}},{\"name\":\"MY_POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}}],\"image\":\"quay.io/coreos/etcd-operator@sha256:c0301e4686c3ed4206e370b42de5a3bd2229b9fb4906cf85f3f30650424abec2\",\"name\":\"etcd-restore-operator\"}],\"serviceAccountName\":\"etcd-operator\"}}}}],\"permissions\":[{\"rules\":[{\"apiGroups\":[\"etcd.database.coreos.com\"],\"resources\":[\"etcdclusters\",\"etcdbackups\",\"etcdrestores\"],\"verbs\":[\"*\"]},{\"apiGroups\":[\"\"],\"resources\":[\"pods\",\"services\",\"endpoints\",\"persistentvolumeclaims\",\"events\"],\"verbs\":[\"*\"]},{\"apiGroups\":[\"apps\"],\"resources\":[\"deployments\"],\"verbs\":[\"*\"]},{\"apiGroups\":[\"\"],\"resources\":[\"secrets\"],\"verbs\":[\"get\"]}],\"serviceAccountName\":\"etcd-operator\"}]},\"strategy\":\"deployment\"},\"keywords\":[\"etcd\",\"key value\",\"database\",\"coreos\",\"open source\"],\"labels\":{\"alm-owner-etcd\":\"etcdoperator\",\"operated-by\":\"etcdoperator\"},\"links\":[{\"name\":\"Blog\",\"url\":\"https://coreos.com/etcd\"},{\"name\":\"Documentation\",\"url\":\"https://coreos.com/operators/etcd/docs/latest/\"},{\"name\":\"etcd Operator Source Code\",\"url\":\"https://github.com/coreos/etcd-operator\"}],\"maintainers\":[{\"email\":\"[email protected]\",\"name\":\"CoreOS, Inc\"}],\"maturity\":\"alpha\",\"provider\":{\"name\":\"CoreOS, Inc\"},\"replaces\":\"etcdoperator.v0.9.0\",\"selector\":{\"matchLabels\":{\"alm-owner-etcd\":\"etcdoperator\",\"operated-by\":\"etcdoperator\"}},\"version\":\"0.9.2\"}}",
  "object": [
    "{\"apiVersion\":\"apiextensions.k8s.io/v1beta1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"name\":\"etcdbackups.etcd.database.coreos.com\"},\"spec\":{\"group\":\"etcd.database.coreos.com\",\"names\":{\"kind\":\"EtcdBackup\",\"listKind\":\"EtcdBackupList\",\"plural\":\"etcdbackups\",\"singular\":\"etcdbackup\"},\"scope\":\"Namespaced\",\"version\":\"v1beta2\"}}",
    "{\"apiVersion\":\"apiextensions.k8s.io/v1beta1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"name\":\"etcdclusters.etcd.database.coreos.com\"},\"spec\":{\"group\":\"etcd.database.coreos.com\",\"names\":{\"kind\":\"EtcdCluster\",\"listKind\":\"EtcdClusterList\",\"plural\":\"etcdclusters\",\"shortNames\":[\"etcdclus\",\"etcd\"],\"singular\":\"etcdcluster\"},\"scope\":\"Namespaced\",\"version\":\"v1beta2\"}}",
    "{\"apiVersion\":\"operators.coreos.com/v1alpha1\",\"kind\":\"ClusterServiceVersion\",\"metadata\":{\"annotations\":{\"alm-examples\":\"[{\\\"apiVersion\\\":\\\"etcd.database.coreos.com/v1beta2\\\",\\\"kind\\\":\\\"EtcdCluster\\\",\\\"metadata\\\":{\\\"name\\\":\\\"example\\\",\\\"namespace\\\":\\\"default\\\"},\\\"spec\\\":{\\\"size\\\":3,\\\"version\\\":\\\"3.2.13\\\"}},{\\\"apiVersion\\\":\\\"etcd.database.coreos.com/v1beta2\\\",\\\"kind\\\":\\\"EtcdRestore\\\",\\\"metadata\\\":{\\\"name\\\":\\\"example-etcd-cluster\\\"},\\\"spec\\\":{\\\"etcdCluster\\\":{\\\"name\\\":\\\"example-etcd-cluster\\\"},\\\"backupStorageType\\\":\\\"S3\\\",\\\"s3\\\":{\\\"path\\\":\\\"\\u003cfull-s3-path\\u003e\\\",\\\"awsSecret\\\":\\\"\\u003caws-secret\\u003e\\\"}}},{\\\"apiVersion\\\":\\\"etcd.database.coreos.com/v1beta2\\\",\\\"kind\\\":\\\"EtcdBackup\\\",\\\"metadata\\\":{\\\"name\\\":\\\"example-etcd-cluster-backup\\\"},\\\"spec\\\":{\\\"etcdEndpoints\\\":[\\\"\\u003cetcd-cluster-endpoints\\u003e\\\"],\\\"storageType\\\":\\\"S3\\\",\\\"s3\\\":{\\\"path\\\":\\\"\\u003cfull-s3-path\\u003e\\\",\\\"awsSecret\\\":\\\"\\u003caws-secret\\u003e\\\"}}}]\",\"tectonic-visibility\":\"ocs\"},\"name\":\"etcdoperator.v0.9.2\",\"namespace\":\"placeholder\"},\"spec\":{\"customresourcedefinitions\":{\"owned\":[{\"description\":\"Represents a cluster of etcd nodes.\",\"displayName\":\"etcd Cluster\",\"kind\":\"EtcdCluster\",\"name\":\"etcdclusters.etcd.database.coreos.com\",\"resources\":[{\"kind\":\"Service\",\"version\":\"v1\"},{\"kind\":\"Pod\",\"version\":\"v1\"}],\"specDescriptors\":[{\"description\":\"The desired number of member Pods for the etcd cluster.\",\"displayName\":\"Size\",\"path\":\"size\",\"x-descriptors\":[\"urn:alm:descriptor:com.tectonic.ui:podCount\"]},{\"description\":\"Limits describes the minimum/maximum amount of compute resources required/allowed\",\"displayName\":\"Resource Requirements\",\"path\":\"pod.resources\",\"x-descriptors\":[\"urn:alm:descriptor:com.tectonic.ui:resourceRequirements\"]}],\"statusDescriptors\":[{\"description\":\"The status of each of the member Pods for the etcd cluster.\",\"displayName\":\"Member Status\",\"path\":\"members\",\"x-descriptors\":[\"urn:alm:descriptor:com.tectonic.ui:podStatuses\"]},{\"description\":\"The service at which the running etcd cluster can be accessed.\",\"displayName\":\"Service\",\"path\":\"serviceName\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes:Service\"]},{\"description\":\"The current size of the etcd cluster.\",\"displayName\":\"Cluster Size\",\"path\":\"size\"},{\"description\":\"The current version of the etcd cluster.\",\"displayName\":\"Current Version\",\"path\":\"currentVersion\"},{\"description\":\"The target version of the etcd cluster, after upgrading.\",\"displayName\":\"Target Version\",\"path\":\"targetVersion\"},{\"description\":\"The current status of the etcd cluster.\",\"displayName\":\"Status\",\"path\":\"phase\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes.phase\"]},{\"description\":\"Explanation for the current status of the cluster.\",\"displayName\":\"Status Details\",\"path\":\"reason\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes.phase:reason\"]}],\"version\":\"v1beta2\"},{\"description\":\"Represents the intent to backup an etcd cluster.\",\"displayName\":\"etcd Backup\",\"kind\":\"EtcdBackup\",\"name\":\"etcdbackups.etcd.database.coreos.com\",\"specDescriptors\":[{\"description\":\"Specifies the endpoints of an etcd cluster.\",\"displayName\":\"etcd 
Endpoint(s)\",\"path\":\"etcdEndpoints\",\"x-descriptors\":[\"urn:alm:descriptor:etcd:endpoint\"]},{\"description\":\"The full AWS S3 path where the backup is saved.\",\"displayName\":\"S3 Path\",\"path\":\"s3.path\",\"x-descriptors\":[\"urn:alm:descriptor:aws:s3:path\"]},{\"description\":\"The name of the secret object that stores the AWS credential and config files.\",\"displayName\":\"AWS Secret\",\"path\":\"s3.awsSecret\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes:Secret\"]}],\"statusDescriptors\":[{\"description\":\"Indicates if the backup was successful.\",\"displayName\":\"Succeeded\",\"path\":\"succeeded\",\"x-descriptors\":[\"urn:alm:descriptor:text\"]},{\"description\":\"Indicates the reason for any backup related failures.\",\"displayName\":\"Reason\",\"path\":\"reason\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes.phase:reason\"]}],\"version\":\"v1beta2\"},{\"description\":\"Represents the intent to restore an etcd cluster from a backup.\",\"displayName\":\"etcd Restore\",\"kind\":\"EtcdRestore\",\"name\":\"etcdrestores.etcd.database.coreos.com\",\"specDescriptors\":[{\"description\":\"References the EtcdCluster which should be restored,\",\"displayName\":\"etcd Cluster\",\"path\":\"etcdCluster.name\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes:EtcdCluster\",\"urn:alm:descriptor:text\"]},{\"description\":\"The full AWS S3 path where the backup is saved.\",\"displayName\":\"S3 Path\",\"path\":\"s3.path\",\"x-descriptors\":[\"urn:alm:descriptor:aws:s3:path\"]},{\"description\":\"The name of the secret object that stores the AWS credential and config files.\",\"displayName\":\"AWS Secret\",\"path\":\"s3.awsSecret\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes:Secret\"]}],\"statusDescriptors\":[{\"description\":\"Indicates if the restore was successful.\",\"displayName\":\"Succeeded\",\"path\":\"succeeded\",\"x-descriptors\":[\"urn:alm:descriptor:text\"]},{\"description\":\"Indicates the reason for any restore related failures.\",\"displayName\":\"Reason\",\"path\":\"reason\",\"x-descriptors\":[\"urn:alm:descriptor:io.kubernetes.phase:reason\"]}],\"version\":\"v1beta2\"}]},\"description\":\"etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines. It’s open-source and available on GitHub. etcd gracefully handles leader elections during network partitions and will tolerate machine failure, including the leader. Your applications can read and write data into etcd.\\nA simple use-case is to store database connection details or feature flags within etcd as key value pairs. These values can be watched, allowing your app to reconfigure itself when they change. Advanced uses take advantage of the consistency guarantees to implement database leader elections or do distributed locking across a cluster of workers.\\n\\n_The etcd Open Cloud Service is Public Alpha. The goal before Beta is to fully implement backup features._\\n\\n### Reading and writing to etcd\\n\\nCommunicate with etcd though its command line utility `etcdctl` or with the API using the automatically generated Kubernetes Service.\\n\\n[Read the complete guide to using the etcd Open Cloud Service](https://coreos.com/tectonic/docs/latest/alm/etcd-ocs.html)\\n\\n### Supported Features\\n\\n\\n**High availability**\\n\\n\\nMultiple instances of etcd are networked together and secured. 
Individual failures or networking issues are transparently handled to keep your cluster up and running.\\n\\n\\n**Automated updates**\\n\\n\\nRolling out a new etcd version works like all Kubernetes rolling updates. Simply declare the desired version, and the etcd service starts a safe rolling update to the new version automatically.\\n\\n\\n**Backups included**\\n\\n\\nComing soon, the ability to schedule backups to happen on or off cluster.\\n\",\"displayName\":\"etcd\",\"icon\":[{\"base64data\":\"iVBORw0KGgoAAAANSUhEUgAAAOEAAADZCAYAAADWmle6AAAACXBIWXMAAAsTAAALEwEAmpwYAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAEKlJREFUeNrsndt1GzkShmEev4sTgeiHfRYdgVqbgOgITEVgOgLTEQydwIiKwFQCayoCU6+7DyYjsBiBFyVVz7RkXvqCSxXw/+f04XjGQ6IL+FBVuL769euXgZ7r39f/G9iP0X+u/jWDNZzZdGI/Ftama1jjuV4BwmcNpbAf1Fgu+V/9YRvNAyzT2a59+/GT/3hnn5m16wKWedJrmOCxkYztx9Q+py/+E0GJxtJdReWfz+mxNt+QzS2Mc0AI+HbBBwj9QViKbH5t64DsP2fvmGXUkWU4WgO+Uve2YQzBUGd7r+zH2ZG/tiUQc4QxKwgbwFfVGwwmdLL5wH78aPC/ZBem9jJpCAX3xtcNASSNgJLzUPSQyjB1zQNl8IQJ9MIU4lx2+Jo72ysXYKl1HSzN02BMa/vbZ5xyNJIshJzwf3L0dQhJw4Sih/SFw9Tk8sVeghVPoefaIYCkMZCKbrcP9lnZuk0uPUjGE/KE8JQry7W2tgfuC3vXgvNV+qSQbyFtAtyWk7zWiYevvuUQ9QEQCvJ+5mmu6dTjz1zFHLFj8Eb87MtxaZh/IQFIHom+9vgTWwZxAQjT9X4vtbEVPojwjiV471s00mhAckpwGuCn1HtFtRDaSh6y9zsL+LNBvCG/24ThcxHObdlWc1v+VQJe8LcO0jwtuF8BwnAAUgP9M8JPU2Me+Oh12auPGT6fHuTePE3bLDy+x9pTLnhMn+07TQGh//Bz1iI0c6kvtqInjvPZcYR3KsPVmUsPYt9nFig9SCY8VQNhpPBzn952bbgcsk2EvM89wzh3UEffBbyPqvBUBYQ8ODGPFOLsa7RF096WJ69L+E4EmnpjWu5o4ChlKaRTKT39RMMaVPEQRsz/nIWlDN80chjdJlSd1l0pJCAMVZsniobQVuxceMM9OFoaMd9zqZtjMEYYDW38Drb8Y0DYPLShxn0pvIFuOSxd7YCPet9zk452wsh54FJoeN05hcgSQoG5RR0Qh9Q4E4VvL4wcZq8UACgaRFEQKgSwWrkr5WFnGxiHSutqJGlXjBgIOayhwYBTA0ER0oisIVSUV0AAMT0IASCUO4hRIQSAEECMCCEPwqyQA0JCQBzEGjWNAqHiUVAoXUWbvggOIQCEAOJzxTjoaQ4AIaE64/aZridUsBYUgkhB15oGg1DBIl8IqirYwV6hPSGBSFteMCUBSVXwfYixBmamRubeMyjzMJQBDDowE3OesDD+zwqFoDqiEwXoXJpljB+PvWJGy75BKF1FPxhKygJuqUdYQGlLxNEXkrYyjQ0GbaAwEnUIlLRNvVjQDYUAsJB0HKLE4y0AIpQNgCIhBIhQTgCKhZBBpAN/v6LtQI50JfUgYOnnjmLUFHKhjxbAmdTCaTiBm3ovLPqG2urWAij6im0Nd9aTN9ygLUEt9LgSRnohxUPIKxlGaE+/6Y7znFf0yX+GnkvFFWmarkab2o9PmTeq8sbd2a7DaysXz7i64VeznN4jCQhN9gdDbRiuWrfrsq0mHIrlaq+hlotCtd3Um9u0BYWY8y5D67wccJoZjFca7iUs9VqZcfsZwTd1sbWGG+OcYaTnPAP7rTQVVlM4Sg3oGvB1tmNh0t/HKXZ1jFoIMwCQjtqbhNxUmkGYqgZEDZP11HN/S3gAYRozf0l8C5kKEKUvW0t1IfeWG/5MwgheZTT1E0AEhDkAePQO+Ig2H3DncAkQM4cwUQCD530dU4B5Yvmi2LlDqXfWrxMCcMth51RToRMNUXFnfc2KJ0+Ryl0VNOUwlhh6NoxK5gnViTgQpUG4SqSyt5z3zRJpuKmt3Q1614QaCBPaN6je+2XiFcWAKOXcUfIYKRyL/1lb7pe5VxSxxjQ6hImshqGRt5GWZVKO6q2wHwujfwDtIvaIdexj8Cm8+a68EqMfox6x/voMouZF4dHnEGNeCDMwT6vdNfekH1MafMk4PI06YtqLVGl95aEM9Z5vAeCTOA++YLtoVJRrsqNCaJ6WRmkdYaNec5BT/lcTRMqrhmwfjbpkj55+OKp8IEbU/JLgPJE6Wa3TTe9sHS+ShVD5QIyqIxMEwKh12olC6mHIed5ewEop80CNlfIOADYOT2nd6ZXCop+Ebqchc0JqxKcKASxChycJgUh1rnHA5ow9eTrhqNI7JWiAYYwBGGdpyNLoGw0Pkh96h1BpHihyywtATDM/7Hk2fN9EnH8BgKJCU4ooBkbXFMZJiPbrOyecGl3zgQDQL4hk10IZiOe+5w99Q/gBAEIJgPhJM4QAEEoFREAIAAEiIASAkD8Qt4AQAEIAERAGFlX4CACKAXGVM4ivMwWwCLFAlyeoaa70QePKm5Dlp+/n+ye/5dYgva6YsUaVeMa+tzNFeJtWwc+udbJ0Fg399kLielQJ5Ze61c2+7ytA6EZetiPxZC6tj22yJCv6jUwOyj/zcbqAxOMyAKEbfeHtNa7DtYXptjsk2kJxR+eIeim/tHNofUKYy8DMrQcAKWz6brpvzyIAlpwPhQ49l6b7skJf5Z+YTOYQc4FwLDxvoTDwaygQK+U/kVr+ytSFBG01Q3gnJJR4cNiAhx4HDub8/b5DULXlj6SVZghFiE+LdvE9vo/o8Lp1RmH5hzm0T6wdbZ6n+D6i44zDRc3ln6CpAEJfXiRU45oqLz8gFAThWsh7ughrRibc0QynHgZpNJa/ENJ+loCwu/qOGnFIjYR/n7TfgycULhcQhu6VC+HfF+L3BoAQ4WiZTw1M+FPCnA2gKC6/FAhXgDC+ojQGh3NuWsvfF1L/D5ohlCKtl1j2ldu9a/nPAKFwN56Bst10zCG0CPleXN/zXPgHQZXaZaBgrbzyY5V/mUA+6F0hwtGN9rwu5DVZPuwWqfxdFz1LWbJ2lwKEa+0Qsm4Dl3fp+Pu0lV97PgwIPfSsS+UQhj5Oo+vvFULazRIQyvGEcx
PuNLCth2MvFsrKn8UOilAQShkh7TTczYNMoS6OdP47msrPi82lXKGWhCdMZYS0bFy+vcnGAjP1CIfvgbKNA9glecEH9RD6Ol4wRuWyN/G9MHnksS6o/GPf5XcwNSUlHzQhDuAKtWJmkwKElU7lylP5rgIcsquh/FI8YZCDpkJBuE4FQm7Icw8N+SrUGaQKyi8FwiDt1ve5o+Vu7qYHy/psgK8cvh+FTYuO77bhEC7GuaPiys/L1X4IgXDL+e3M5+ovLxBy5VLuIebw1oqcHoPfoaMJUsHays878r8KbDc3xtPx/84gZPBG/JwaufrsY/SRG/OY3//8QMNdsvdZCFtbW6f8pFuf5bflILAlX7O+4fdfugKyFYS8T2zAsXthdG0VurPGKwI06oF5vkBgHWkNp6ry29+lsPZMU3vijnXFNmoclr+6+Ou/FIb8yb30sS8YGjmTqCLyQsi5N/6ZwKs0Yenj68pfPjF6N782Dp2FzV9CTyoSeY8mLK16qGxIkLI8oa1n8tz9juP40DlK0epxYEbojbq+9QfurBeVIlCO9D2396bxiV4lkYQ3hOAFw2pbhqMGISkkQOMcQ9EqhDmGZZdo92JC0YHRNTfoSg+5e0IT+opqCKHoIU+4ztQIgBD1EFNrQAgIpYSil9lDmPHqkROPt+JC6AgPquSuumJmg0YARVCuneDfvPVeJokZ6pIXDkNxQtGzTF9/BQjRG0tQznfb74RwCQghpALBtIQnfK4zhxdyQvVCUeknMIT3hLyY+T5jo0yABqKPQNpUNw/09tGZod5jgCaYFxyYvJcNPkv9eof+I3pnCFEHIETjSM8L9tHZHYCQT9PaZGycU6yg8S4akDnJ+P03L0+t23XGzCLzRgII/Wqa+fv/xlfvmKvMUOcOrlCDdoei1MGdZm6G5VEIfRzzjd4aQs69n699Rx7ewhvCGzr2gmTPs8zNsJOrXt24FbkhhOjCfT4ICA/rPbyhUy94Dks0gJCX1NzCZui9YUd3oei+c257TalFbgg19ILHrlrL2gvWgXAL26EX76gZTNASQnad8Ibwhl284NhgXpB0c+jKhWO3Ms1hP9ihJYB9eMF6qd1BCPk0qA1s+LimFIu7m4nsdQIzPK4VbQ8hYvrnuSH2G9b2ggP78QmWqBdF9Vx8SSY6QYdUW7BTA1schZATyhvY8lHvcRbNUS9YGFy2U+qmzh2YPVc0I7yAOFyHfRpyUwtCSzOdPXMHmz7qDIM0e0V2wZTEk+6Ym6N63eBLp/b5Bts+2cKCSJ/LuoZO3ANSiE5hKAZjnvNSS4931jcw9jpwT0feV/qSJ1pVtCyfHKDkvK8Ejx7pUxGh2xFNSwx8QTi2H9ceC0/nni64MS/5N5dG39pDqvRV+WgGk71c9VFXF9b+xYvOw/d61iv7m3MvEHryhvecwC52jSSx4VIIgwnMNT/UsTxIgpPt3K/ARj15CptwL3Zd/ceDSATj2DGQjbxgWwhdeMMte7zpy5On9vymRm/YxBYljGVjKWF9VJf7I1+sex3wY8w/V1QPTborW/72gkdsRDaZMJBdbdHIC7aCkAu9atlLbtnrzerMnyToDaGwelOnk3/hHSem/ZK7e/t7jeeR20LYBgqa8J80gS8jbwi5F02Uj1u2NYJxap8PLkJfLxA2hIJyvnHX/AfeEPLpBfe0uSFHbnXaea3Qd5d6HcpYZ8L6M7lnFwMQ3MNg+RxUR1+6AshtbsVgfXTEg1sIGax9UND2p7f270wdG3eK9gXVGHdw2k5sOyZv+Nbs39Z308XR9DqWb2J+PwKDhuKHPobfuXf7gnYGHdCs7bhDDadD4entDug7LWNsnRNW4mYqwJ9dk+GGSTPBiA2j0G8RWNM5upZtcG4/3vMfP7KnbK2egx6CCnDPhRn7NgD3cghLIad5WcM2SO38iqHvvMOosyeMpQ5zlVCaaj06GVs9xUbHdiKoqrHWgquFEFMWUEWfXUxJAML23hAHFOctmjZQffKD2pywkhtSGHKNtpitLroscAeE7kCkSsC60vxEl6yMtL9EL5HKGCMszU5bk8gdkklAyEn5FO0yK419rIxBOIqwFMooDE0tHEVYijAUECIshRCGIhxFWIowFJ5QkEYIS5PTJrUwNGlPyN6QQPyKtpuM1E/K5+YJDV/MiA3AaehzqgAm7QnZG9IGYKo8bHnSK7VblLL3hOwNHziPuEGOqE5brrdR6i+atCfckyeWD47HkAkepRGLY/e8A8J0gCwYSNypF08bBm+e6zVz2UL4AshhBUjML/rXLefqC82bcQFhGC9JDwZ1uuu+At0S5gCETYHsV4DUeD9fDN2Zfy5OXaW2zAwQygCzBLJ8cvaW5OXKC1FxfTggFAHmoAJnSiOw2wps9KwRWgJCLaEswaj5NqkLwAYIU4BxqTSXbHXpJdRMPZgAOiAMqABCNGYIEEJutEK5IUAIwYMDQgiCACEEAcJs1Vda7gGqDhCmoiEghAAhBAHCrKXVo2C1DCBMRlp37uMIEECoX7xrX3P5C9QiINSuIcoPAUI0YkAICLNWgfJDh4T9hH7zqYH9+JHAq7zBqWjwhPAicTVCVQJCNF50JghHocahKK0X/ZnQKyEkhSdUpzG8OgQI42qC94EQjsYLRSmH+pbgq73L6bYkeEJ4DYTYmeg1TOBFc/usTTp3V9DdEuXJ2xDCUbXhaXk0/kAYmBvuMB4qkC35E5e5AMKkwSQgyxufyuPy6fMMgAFCSI73LFXU/N8AmEL9X4ABACNSKMHAgb34AAAAAElFTkSuQmCC\",\"mediatype\":\"image/png\"}],\"install\":{\"spec\":{\"deployments\":[{\"name\":\"etcd-operator\",\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"name\":\"etcd-operator-alm-owned\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"etcd-operator-alm-owned\"},\"name\":\"etcd-operator-alm-owned\"},\"spec\":{\"containers\":[{\"command\":[\"etcd-operator\",\"--create-crd=false\"],\"env\":[{\"name\":\"MY_POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}},{\"name\":\"MY_POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}}],\"image\":\"quay.io/coreos/etcd-operator@sha256:c0301e4686c3ed4206e370b42de5a3bd2229b9fb4906cf85f3f30650424abec2\",\"name\":\"etcd-operator\"},{\"command\":[\"etcd-backup-
operator\",\"--create-crd=false\"],\"env\":[{\"name\":\"MY_POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}},{\"name\":\"MY_POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}}],\"image\":\"quay.io/coreos/etcd-operator@sha256:c0301e4686c3ed4206e370b42de5a3bd2229b9fb4906cf85f3f30650424abec2\",\"name\":\"etcd-backup-operator\"},{\"command\":[\"etcd-restore-operator\",\"--create-crd=false\"],\"env\":[{\"name\":\"MY_POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}},{\"name\":\"MY_POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}}],\"image\":\"quay.io/coreos/etcd-operator@sha256:c0301e4686c3ed4206e370b42de5a3bd2229b9fb4906cf85f3f30650424abec2\",\"name\":\"etcd-restore-operator\"}],\"serviceAccountName\":\"etcd-operator\"}}}}],\"permissions\":[{\"rules\":[{\"apiGroups\":[\"etcd.database.coreos.com\"],\"resources\":[\"etcdclusters\",\"etcdbackups\",\"etcdrestores\"],\"verbs\":[\"*\"]},{\"apiGroups\":[\"\"],\"resources\":[\"pods\",\"services\",\"endpoints\",\"persistentvolumeclaims\",\"events\"],\"verbs\":[\"*\"]},{\"apiGroups\":[\"apps\"],\"resources\":[\"deployments\"],\"verbs\":[\"*\"]},{\"apiGroups\":[\"\"],\"resources\":[\"secrets\"],\"verbs\":[\"get\"]}],\"serviceAccountName\":\"etcd-operator\"}]},\"strategy\":\"deployment\"},\"keywords\":[\"etcd\",\"key value\",\"database\",\"coreos\",\"open source\"],\"labels\":{\"alm-owner-etcd\":\"etcdoperator\",\"operated-by\":\"etcdoperator\"},\"links\":[{\"name\":\"Blog\",\"url\":\"https://coreos.com/etcd\"},{\"name\":\"Documentation\",\"url\":\"https://coreos.com/operators/etcd/docs/latest/\"},{\"name\":\"etcd Operator Source Code\",\"url\":\"https://github.com/coreos/etcd-operator\"}],\"maintainers\":[{\"email\":\"[email protected]\",\"name\":\"CoreOS, Inc\"}],\"maturity\":\"alpha\",\"provider\":{\"name\":\"CoreOS, Inc\"},\"replaces\":\"etcdoperator.v0.9.0\",\"selector\":{\"matchLabels\":{\"alm-owner-etcd\":\"etcdoperator\",\"operated-by\":\"etcdoperator\"}},\"version\":\"0.9.2\"}}",
    "{\"apiVersion\":\"apiextensions.k8s.io/v1beta1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"name\":\"etcdrestores.etcd.database.coreos.com\"},\"spec\":{\"group\":\"etcd.database.coreos.com\",\"names\":{\"kind\":\"EtcdRestore\",\"listKind\":\"EtcdRestoreList\",\"plural\":\"etcdrestores\",\"singular\":\"etcdrestore\"},\"scope\":\"Namespaced\",\"version\":\"v1beta2\"}}"
  ]
}


operator-registry's Issues

Defaults and Flags inconsistent in opm container tool flag

The default container tool for bundle build is docker, whereas the default for index add/rm/export is podman. The command line flag is also different (-b vs. -c), which is confusing.

Usage:
  opm alpha bundle build [flags]

Flags:
  -b, --image-builder string   Tool to build container images. One of: [docker, podman, buildah] (default "docker")

vs

Usage:
  opm index add [flags]

Flags:
  -c, --container-tool string   tool to interact with container images (save, build, etc.). One of: [docker, podman] (default "podman")

Last version of origin-registry operator complains with could not decode contents of file /registry/bundles.db into package: error converting YAML to JSON: yaml: control characters are not allowed" dir=/registry file=bundles.db load=package

It wasn't an error before, but after operator-registry merged 065e51f it is an error.

error:

time="2019-08-08T10:42:49Z" level=info msg="could not decode contents of file /registry/bundles.db into package: error converting YAML to JSON: yaml: control characters are not allowed" dir=/registry file=bundles.db load=package

Before commit merged:
https://travis-ci.com/kubevirt/cluster-network-addons-operator/builds/122417674#L1343

After commit merged:
https://travis-ci.com/kubevirt/cluster-network-addons-operator/builds/122546641#L1347

Is this really fatal?

We are exercising this with the latest version of quay.io/openshift/origin-operator-registry.

opm: add version command

opm needs a version command

Example:

$ opm version 
opm-1.16.1-builddetails

Right now, to detect the version, one must compare the binary byte-for-byte with the one from the releases page to determine which version is in use.

Current version of opm fails to create index image ERRO[0000] permissive mode disabled bundles="[docker.sas.com/hoonea/mvd-operator-bundle:v1.0]" error="error loading bundle from image: application/vnd.docker.distribution.manifest.v1+prettyjws not supported"

While attempting to create an index image (https://github.com/operator-framework/operator-registry#building-an-index-of-operators-using-opm), opm produces a misleading error message, even though the bundle image passes validation and the scorecard, and appears to meet the needed spec.

When using the latest code base, opm produces an abnormal error that does not occur with v1.6.1:

Error: error loading bundle from image: application/vnd.docker.distribution.manifest.v1+prettyjws not supported
Running with current codebase in repo

$user@machine /c/workspace/operator-registry
 % docker manifest inspect  docker.registry.com/$user/mvd-operator-bundle:v1.0
{
        "schemaVersion": 2,
        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
        "config": {
                "mediaType": "application/vnd.docker.container.image.v1+json",
                "size": 3375,
                "digest": "sha256:4cc4e13124f3d7cad7202d4f87c3feccdbe949766bf20f70ff4b4af39823b29f"
        },
        "layers": [
                {
                        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
                        "size": 9741,
                        "digest": "sha256:7cfec0718755d5c30556589e67381271ae6adeb6210f76a5d501e505997694b1"
                },
                {
                        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
                        "size": 295,
                        "digest": "sha256:75511aec0adc9fc1308673dd784df4b9c61f586fb2b8e8f8fd6e0f739a9a480a"
                }
        ]
}
$user@machine /c/workspace/operator-registry
 % ./opm alpha bundle validate -t docker.registry.com/$user/mvd-operator-bundle:v1.0
INFO[0000] Create a temp directory at /tmp/bundle-113043196  container-tool=docker
DEBU[0000] Pulling and unpacking container image         container-tool=docker
INFO[0000] running docker pull                           container-tool=docker
DEBU[0000] [docker pull docker.registry.com/$user/mvd-operator-bundle:v1.0]  container-tool=docker
INFO[0000] running docker save                           container-tool=docker
DEBU[0000] [docker save docker.registry.com/$user/mvd-operator-bundle:v1.0 -o bundle_staging_722488363/bundle.tar]  container-tool=docker
INFO[0000] Unpacked image layers, validating bundle image format & contents  container-tool=docker
DEBU[0000] Found manifests directory                     container-tool=docker
DEBU[0000] Found metadata directory                      container-tool=docker
DEBU[0000] Getting mediaType info from manifests directory  container-tool=docker
DEBU[0000] Validating annotations.yaml                   container-tool=docker
DEBU[0000] Found annotation "operators.operatorframework.io.bundle.package.v1" with value "mvd-operator"  container-tool=docker
DEBU[0000] Found annotation "operators.operatorframework.io.bundle.channels.v1" with value "alpha"  container-tool=docker
DEBU[0000] Found annotation "operators.operatorframework.io.bundle.channel.default.v1" with value "alpha"  container-tool=docker
DEBU[0000] Found annotation "operators.operatorframework.io.bundle.mediatype.v1" with value "registry+v1"  container-tool=docker
DEBU[0000] Found annotation "operators.operatorframework.io.bundle.manifests.v1" with value "manifests/"  container-tool=docker
DEBU[0000] Found annotation "operators.operatorframework.io.bundle.metadata.v1" with value "metadata/"  container-tool=docker
DEBU[0000] Validating bundle contents                    container-tool=docker
DEBU[0000] Validating "operators.coreos.com/v1alpha1, Kind=ClusterServiceVersion" from file "mvd-operator.v0.0.1.clusterserviceversion.yaml"  container-tool=docker
DEBU[0000] Validating "apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition" from file "mvd.registry.com_minviyadeps_crd.yaml"  container-tool=docker
INFO[0000] All validation tests have been completed successfully  container-tool=docker
$user@machine /c/workspace/operator-registry
 % ./opm alpha bundle validate -t docker.registry.com/$user/mvd-operator-bundle:v1.0
$user@machine /c/workspace/operator-registry
 % ./opm index add --bundles docker.registry.com/$user/mvd-operator-bundle:v1.0 --tag docker.registry.com/$user/mvd-operator-index:v1.0 --container-tool docker
INFO[0000] building the index                            bundles="[docker.registry.com/$user/mvd-operator-bundle:v1.0]"
INFO[0000] resolved name: docker.registry.com/$user/mvd-operator-bundle:v1.0 
INFO[0000] fetched                                       digest="sha256:3a8bf379f5a8391459beef293022fc0ca88edf89a895d8f86e3c633cf18722c5"
ERRO[0000] permissive mode disabled                      bundles="[docker.registry.com/$user/mvd-operator-bundle:v1.0]" error="error loading bundle from image: application/vnd.docker.distribution.manifest.v1+prettyjws not supported"
Error: error loading bundle from image: application/vnd.docker.distribution.manifest.v1+prettyjws not supported

RUNNING WITH RELEASE v1.6.1
% ./opm index add --bundles docker.registry.com/$user/mvd-operator-bundle:v1.0 --tag docker.registry.com/$user/mvd-operator-index:v1.0 --container-tool docker
INFO[0000] building the index                            bundles="[docker.registry.com/$user/mvd-operator-bundle:v1.0]"
INFO[0000] running docker pull                           img="docker.registry.com/$user/mvd-operator-bundle:v1.0"
INFO[0000] running docker save                           img="docker.registry.com/$user/mvd-operator-bundle:v1.0"
INFO[0000] loading Bundle docker.registry.com/$user/mvd-operator-bundle:v1.0  img="docker.registry.com/$user/mvd-operator-bundle:v1.0"
INFO[0000] found annotations file searching for csv      dir=bundle_tmp142711949 file=bundle_tmp142711949/metadata load=annotations
INFO[0000] found csv, loading bundle                     dir=bundle_tmp142711949 file=bundle_tmp142711949/manifests load=bundle
INFO[0000] loading bundle file                           dir=bundle_tmp142711949/manifests file=mvd-operator.v0.0.1.clusterserviceversion.yaml load=bundle
INFO[0000] loading bundle file                           dir=bundle_tmp142711949/manifests file=mvd.registry.com_minviyadeps_crd.yaml load=bundle
INFO[0000] Generating dockerfile                         bundles="[docker.registry.com/$user/mvd-operator-bundle:v1.0]"
INFO[0000] writing dockerfile: index.Dockerfile660628551  bundles="[docker.registry.com/$user/mvd-operator-bundle:v1.0]"
INFO[0000] running docker build                          bundles="[docker.registry.com/$user/mvd-operator-bundle:v1.0]"
INFO[0000] [docker build -f index.Dockerfile660628551 -t docker.registry.com/$user/mvd-operator-index:v1.0 .]  bundles="[docker.registry.com/$user/mvd-operator-bundle:v1.0]"

OPM Index Add Error

operator-registry git:(master) ✗ opm index add --bundles quay.io/operator-framework/operator-bundle-prometheus@sha256:a3ee653ffa8a0d2bbb2fabb150a94da6e878b6e9eb07defd40dc884effde11a0 --tag skybig/test-operator-hub:0.0.1 -c docker
INFO[0000] building the index                            bundles="[quay.io/operator-framework/operator-bundle-prometheus@sha256:a3ee653ffa8a0d2bbb2fabb150a94da6e878b6e9eb07defd40dc884effde11a0]"
INFO[0000] running docker pull                           img="quay.io/operator-framework/operator-bundle-prometheus@sha256:a3ee653ffa8a0d2bbb2fabb150a94da6e878b6e9eb07defd40dc884effde11a0"
INFO[0004] running docker save                           img="quay.io/operator-framework/operator-bundle-prometheus@sha256:a3ee653ffa8a0d2bbb2fabb150a94da6e878b6e9eb07defd40dc884effde11a0"
INFO[0004] loading Bundle quay.io/operator-framework/operator-bundle-prometheus@sha256:a3ee653ffa8a0d2bbb2fabb150a94da6e878b6e9eb07defd40dc884effde11a0  img="quay.io/operator-framework/operator-bundle-prometheus@sha256:a3ee653ffa8a0d2bbb2fabb150a94da6e878b6e9eb07defd40dc884effde11a0"
INFO[0004] found annotations file searching for csv      dir=bundle_tmp216872198 file=bundle_tmp216872198/metadata load=annotations
INFO[0004] found csv, loading bundle                     dir=bundle_tmp216872198 file=bundle_tmp216872198/manifests load=bundle
INFO[0004] loading bundle file                           dir=bundle_tmp216872198/manifests file=alertmanager.crd.yaml load=bundle
INFO[0004] loading bundle file                           dir=bundle_tmp216872198/manifests file=prometheus.crd.yaml load=bundle
INFO[0004] loading bundle file                           dir=bundle_tmp216872198/manifests file=prometheusoperator.0.14.0.clusterserviceversion.yaml load=bundle
INFO[0004] loading bundle file                           dir=bundle_tmp216872198/manifests file=prometheusrule.crd.yaml load=bundle
INFO[0004] loading bundle file                           dir=bundle_tmp216872198/manifests file=servicemonitor.crd.yaml load=bundle
INFO[0004] Generating dockerfile                         bundles="[quay.io/operator-framework/operator-bundle-prometheus@sha256:a3ee653ffa8a0d2bbb2fabb150a94da6e878b6e9eb07defd40dc884effde11a0]"
INFO[0004] writing dockerfile: index.Dockerfile544399912  bundles="[quay.io/operator-framework/operator-bundle-prometheus@sha256:a3ee653ffa8a0d2bbb2fabb150a94da6e878b6e9eb07defd40dc884effde11a0]"
INFO[0004] running docker build                          bundles="[quay.io/operator-framework/operator-bundle-prometheus@sha256:a3ee653ffa8a0d2bbb2fabb150a94da6e878b6e9eb07defd40dc884effde11a0]"
INFO[0004] [docker build -f index.Dockerfile544399912 -t skybig/test-operator-hub:0.0.1 .]  bundles="[quay.io/operator-framework/operator-bundle-prometheus@sha256:a3ee653ffa8a0d2bbb2fabb150a94da6e878b6e9eb07defd40dc884effde11a0]"
ERRO[0012] Sending build context to Docker daemon  85.13MB
Step 1/9 : FROM quay.io/operator-framework/upstream-registry-builder AS builder
 ---> ce4635d54532
Step 2/9 : FROM scratch
 --->
Step 3/9 : LABEL operators.operatorframework.io.index.database.v1=./index.db
 ---> Using cache
 ---> 57ecd6dd5d26
Step 4/9 : COPY index_tmp479658371 ./
 ---> c51b6ee667e2
Step 5/9 : COPY --from=builder /build/bin/opm /opm
COPY failed: stat /var/lib/docker/overlay2/ddabf3540babc491ab255558097920492095e772cd1e741b379a5f0d7de1e4e0/merged/build/bin/opm: no such file or directory  bundles="[quay.io/operator-framework/operator-bundle-prometheus@sha256:a3ee653ffa8a0d2bbb2fabb150a94da6e878b6e9eb07defd40dc884effde11a0]"
Error: error building image: Sending build context to Docker daemon  85.13MB
Step 1/9 : FROM quay.io/operator-framework/upstream-registry-builder AS builder
 ---> ce4635d54532
Step 2/9 : FROM scratch
 --->
Step 3/9 : LABEL operators.operatorframework.io.index.database.v1=./index.db
 ---> Using cache
 ---> 57ecd6dd5d26
Step 4/9 : COPY index_tmp479658371 ./
 ---> c51b6ee667e2
Step 5/9 : COPY --from=builder /build/bin/opm /opm
COPY failed: stat /var/lib/docker/overlay2/ddabf3540babc491ab255558097920492095e772cd1e741b379a5f0d7de1e4e0/merged/build/bin/opm: no such file or directory
. exit status 1
Usage:
  opm index add [flags]

Examples:
  # Create an index image from scratch with a single bundle image
  opm index add --bundles quay.io/operator-framework/operator-bundle-prometheus@sha256:a3ee653ffa8a0d2bbb2fabb150a94da6e878b6e9eb07defd40dc884effde11a0 --tag quay.io/operator-framework/monitoring:1.0.0

  # Add a single bundle image to an index image
  opm index add --bundles quay.io/operator-framework/operator-bundle-prometheus:0.15.0 --from-index quay.io/operator-framework/monitoring:1.0.0 --tag quay.io/operator-framework/monitoring:1.0.1

  # Add multiple bundles to an index and generate a Dockerfile instead of an image
  opm index add --bundles quay.io/operator-framework/operator-bundle-prometheus:0.15.0,quay.io/operator-framework/operator-bundle-prometheus:0.22.2 --generate

Flags:
  -i, --binary-image opm        container image for on-image opm command
  -b, --bundles strings         comma separated list of bundles to add
  -c, --container-tool string   tool to interact with container images (save, build, etc.). One of: [docker, podman] (default "podman")
  -f, --from-index string       previous index to add to
      --generate                if enabled, just creates the dockerfile and saves it to local disk
  -h, --help                    help for add
  -d, --out-dockerfile string   if generating the dockerfile, this flag is used to (optionally) specify a dockerfile name
      --permissive              allow registry load errors
  -t, --tag string              custom tag for container image being built

opm index add --from-index results in "Error processing tar file"

[rwsu@mini-fedora node-maintenance-operator]$ opm index add --bundles quay.io/rwsu/node-maintenance-operator-bundle:0.4.0 --from-index quay.io/rwsu/my-index:1.0.0 --tag quay.io/rwsu/my-index:1.0.1
INFO[0000] building the index                            bundles="[quay.io/rwsu/node-maintenance-operator-bundle:0.4.0]"
INFO[0000] Pulling previous image quay.io/rwsu/my-index:1.0.0 to get metadata  bundles="[quay.io/rwsu/node-maintenance-operator-bundle:0.4.0]"
INFO[0000] running podman pull                           bundles="[quay.io/rwsu/node-maintenance-operator-bundle:0.4.0]"
INFO[0002] Getting label data from previous image        bundles="[quay.io/rwsu/node-maintenance-operator-bundle:0.4.0]"
INFO[0002] running podman inspect                        bundles="[quay.io/rwsu/node-maintenance-operator-bundle:0.4.0]"
INFO[0002] running podman pull                           bundles="[quay.io/rwsu/node-maintenance-operator-bundle:0.4.0]"
INFO[0005] running podman save                           bundles="[quay.io/rwsu/node-maintenance-operator-bundle:0.4.0]"
INFO[0034] running podman pull                           img="quay.io/rwsu/node-maintenance-operator-bundle:0.4.0"
INFO[0036] running podman save                           img="quay.io/rwsu/node-maintenance-operator-bundle:0.4.0"
INFO[0036] loading Bundle quay.io/rwsu/node-maintenance-operator-bundle:0.4.0  img="quay.io/rwsu/node-maintenance-operator-bundle:0.4.0"
INFO[0036] found annotations file searching for csv      dir=bundle_tmp931915972 file=bundle_tmp931915972/metadata load=annotations
INFO[0036] found csv, loading bundle                     dir=bundle_tmp931915972 file=bundle_tmp931915972/manifests load=bundle
INFO[0036] loading bundle file                           dir=bundle_tmp931915972/manifests file=node-maintenance-operator.v0.4.0.clusterserviceversion.yaml load=bundle
INFO[0036] loading bundle file                           dir=bundle_tmp931915972/manifests file=nodemaintenance_crd.yaml load=bundle
INFO[0037] Generating dockerfile                         bundles="[quay.io/rwsu/node-maintenance-operator-bundle:0.4.0]"
INFO[0037] writing dockerfile: index.Dockerfile241362070  bundles="[quay.io/rwsu/node-maintenance-operator-bundle:0.4.0]"
INFO[0037] running podman build                          bundles="[quay.io/rwsu/node-maintenance-operator-bundle:0.4.0]"
INFO[0037] [podman build -f index.Dockerfile241362070 -t quay.io/rwsu/my-index:1.0.1 .]  bundles="[quay.io/rwsu/node-maintenance-operator-bundle:0.4.0]"
ERRO[0082] STEP 1: FROM quay.io/operator-framework/upstream-registry-builder
STEP 2: LABEL operators.operatorframework.io.index.database.v1=/database/index.db
--> Using cache 7247f2c88dfb07dd13cf07654bfcc8a4d234ae5ccc0eb72413a3b73b2ba48ad6
STEP 3: ADD index_tmp904376898 /database
Error: error building at STEP "ADD index_tmp904376898 /database": error copying "/home/rwsu/go/src/github.com/kubevirt/node-maintenance-operator/index_tmp904376898" to "/home/rwsu/.local/share/containers/storage/overlay/111216cca9c9128ee007e3806efb2b6b667c62240563058e7d2762eb02a79639/merged/database": Error processing tar file(exit status 1): open /build/.wh..wh..opq: invalid argument  bundles="[quay.io/rwsu/node-maintenance-operator-bundle:0.4.0]"
Error: error building image: STEP 1: FROM quay.io/operator-framework/upstream-registry-builder
STEP 2: LABEL operators.operatorframework.io.index.database.v1=/database/index.db
--> Using cache 7247f2c88dfb07dd13cf07654bfcc8a4d234ae5ccc0eb72413a3b73b2ba48ad6
STEP 3: ADD index_tmp904376898 /database
Error: error building at STEP "ADD index_tmp904376898 /database": error copying "/home/rwsu/go/src/github.com/kubevirt/node-maintenance-operator/index_tmp904376898" to "/home/rwsu/.local/share/containers/storage/overlay/111216cca9c9128ee007e3806efb2b6b667c62240563058e7d2762eb02a79639/merged/database": Error processing tar file(exit status 1): open /build/.wh..wh..opq: invalid argument
. exit status 125

Steps to reproduce:

git clone https://github.com/rwsu/node-maintenance-operator.git -b bundle-index-images

cd node-maintenance-operator

podman build -f bundle.Dockerfile.v0.3.0 -t quay.io/rwsu/node-maintenance-operator-bundle:0.3.0

podman push quay.io/rwsu/node-maintenance-operator-bundle:0.3.0

opm index add --bundles quay.io/rwsu/node-maintenance-operator-bundle:0.3.0 --tag quay.io/rwsu/my-index:1.0.0

podman push quay.io/rwsu/my-index:1.0.0

podman build -f bundle.Dockerfile.v0.4.0 -t quay.io/rwsu/node-maintenance-operator-bundle:0.4.0

podman push quay.io/rwsu/node-maintenance-operator-bundle:0.4.0

opm alpha bundle validate --tag quay.io/rwsu/node-maintenance-operator-bundle:0.4.0 --image-builder podman

opm index add --bundles quay.io/rwsu/node-maintenance-operator-bundle:0.4.0 --from-index quay.io/rwsu/my-index:1.0.0 --tag quay.io/rwsu/my-index:1.0.1

replaces metadata for an operator in the CSV throws a fatal error when no previous version of the operator is found

(Moving this issue from OLM operator-framework/operator-lifecycle-manager#831)
If a replaced operator is mentioned in the CSV file without an older version of the operator actually being available in the marketplace, the CatalogSourceConfig fails to run due to a fatal error.

If it doesn't find the replacement, it should ignore it and continue instead of throwing a fatal error.

Message: time="2019-04-29T17:35:06Z" level=fatal msg="error loading manifest from remote registry - kubevirtoperator.0.17.0-alpha.0 specifies replacement that couldn't be found" port=50051 type=appregistry

In the example below, "replaces: kubevirtoperator.0.16.2" causes the fatal error since there is no operator to replace.

spec:
  displayName: KubeVirt
  description: |
    KubeVirt is a cluster add-on that provides a way to run virtualized workloads alongside containerized workload in a Kubernetes / OpenShift native way.
    Minor and patch level updates are supported.
  keywords:
    - KubeVirt
    - Virtualization
  version: 0.17.0-alpha.0
  maturity: alpha
  replaces: kubevirtoperator.0.16.2

kubectl get pods -n marketplace
NAME READY STATUS RESTARTS AGE
installed-upstream-community-operators-5cb468-2zfff 0/1 CrashLoopBackOff 8 17m
marketplace-operator-7bb459dd57-chq47 1/1 Running 0 3d2h
upstream-community-operators-7df5b67589-hfhcz 0/1 CrashLoopBackOff 8 19m

Code throwing error:
https://github.com/operator-framework/operator-lifecycle-manager/blob/master/vendor/github.com/operator-framework/operator-registry/pkg/sqlite/load.go#L176.

opm: sign release binaries

Provide a digest and signature for the built binaries:

  1. Clients can verify that the binary is unaltered
  2. Clients can verify that the binary is authentic
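For illustration, this would enable a verification flow along these lines (a sketch only; the checksum and signature file names are assumptions, since no release artifacts are defined yet):

sha256sum -c opm.sha256sum
gpg --verify opm.sha256sum.asc opm.sha256sum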

Remove replace directive in go.mod

Hi,
We are trying to import the operator-sdk in our repo to be able to run operator-sdk based tests.

Because of the replace directive in go.mod, this breaks our build and we are currently blocked.

Since k8s repos now offer a "native" way to pin versions (k8s 1.16.7 maps to a go.mod version of v0.16.7), we can pin to a particular Kubernetes version without using the replace directive.

I got the operator-registry to build without replace directive in go.mod on my local branch (pointing at the non-pinned k8s version for the api repo, on top of operator-framework/api#17).

Do you want me to create a PR for this work? Note that this would need the PR in the api repo to be merged and released as 0.1.1.

Note that my overall goal is to not have a replace directive for k8s in operator-sdk, so that it will be easier to import for every user.
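A rough sketch of the requested change in command form (the module path below is just an example of one replaced dependency, and the version follows the v0.16.7 mapping mentioned above):

go mod edit -dropreplace=k8s.io/client-go
go get k8s.io/client-go@v0.16.7
go mod tidy && go mod vendor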

initializer fails to load multi-document YAML manifests

Tested with quay.io/operator-framework/upstream-registry-builder:v1.6.1 with this:

FROM quay.io/operator-framework/upstream-registry-builder:v1.6.1 as builder
COPY upstream-community-operators manifests
RUN ./bin/initializer --permissive true -o ./bundles.db

This fails with:

time="2020-03-19T18:35:25Z" level=warning msg="permissive mode enabled" error="error loading manifests from directory: error checking provided apis in bundle : couldn't find monitoring.coreos.com/v1/PrometheusRule (prometheusrules) in bundle. found: map[]"

The culprit seems to be trailing whitespace and the multi-document split anchor: https://github.com/operator-framework/community-operators/blob/527c747c6b67cb376e088cf9a550b9a2e76aba4f/upstream-community-operators/prometheus/0.37.0/prometheuses.monitoring.coreos.com.crd.yaml#L1-L2

opm 1.7.0: All channels deleted except for latest with opm registry add

When attempting to add additional channels, any previous channels are deleted. Only the latest channel is preserved.

I believe this is a regression introduced in PR #236 as a result of issue #205

  1. Add a bundle to the BETA channel:
operator-sdk bundle create quay.io/cdjohnson/brokenoperator:v1.0.0 --directory 1.0.0 --package test-operator --channels BETA --default-channel BETA
docker push quay.io/cdjohnson/brokenoperator:v1.0.0
opm registry add -b quay.io/cdjohnson/brokenoperator:v1.0.0 -c docker

sqlite3 bundles.db "select * from channel;"
BETA|test-operator|testoperator.v1.0.0

sqlite3 bundles.db "select * from channel_entry;"
1|BETA|test-operator|testoperator.v1.0.0||0

sqlite3 bundles.db "select name from operatorbundle;"
testoperator.v1.0.0
  2. Add a second bundle to the STABLE channel where olm.skipRange: <1.0.1
operator-sdk bundle create quay.io/cdjohnson/brokenoperator:v1.0.1 --directory 1.0.1 --package test-operator --channels STABLE --default-channel STABLE
docker push quay.io/cdjohnson/brokenoperator:v1.0.1
opm registry add -b quay.io/cdjohnson/brokenoperator:v1.0.1 -c docker


sqlite3 bundles.db "select * from channel;"
STABLE|test-operator|testoperator.v1.0.1

sqlite3 bundles.db "select * from channel_entry;"
1|STABLE|test-operator|testoperator.v1.0.1||0

sqlite3 bundles.db "select name from operatorbundle;"
testoperator.v1.0.0
testoperator.v1.0.1

Result: the BETA channel has been removed from both the channel and channel_entry tables.

This SHOULD result in two separate channels and channel_entries.
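For reference, a sketch of what the channel table would be expected to contain after step 2 if both channels were preserved (constructed from the values above, not actual output):

sqlite3 bundles.db "select * from channel;"
BETA|test-operator|testoperator.v1.0.0
STABLE|test-operator|testoperator.v1.0.1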

bundles.zip

A cycle in 'replaces' attributes causes an infinite loop while adding channel packages

To reproduce:

  1. Create a collection of CSVs that contain some cycle in 'replaces' values. A single CSV which replaces itself is enough, but you can chain them together (a replaces b, b replaces c, c replaces a)
  2. Upload the bundle to quay
  3. Run the appregistry-server locally and watch the log output

Expected: an error should be detected, or worst case the server should start serving even if the content is invalid
Actual: the server will never begin serving, the last log message will show it loading packages

For example, using an intentionally broken bundle

$ grep replaces *
seldonoperator.v0.1.2.clusterserviceversion.yaml:  replaces: seldonoperator.v0.1.3
seldonoperator.v0.1.3.clusterserviceversion.yaml:  replaces: seldonoperator.v0.1.2
 sudo bin/appregistry-server -r 'https://quay.io/cnr|tmckayus' -o seldon-operator -k /home/tmckay/.crc/cache/crc_libvirt_4.1.6/kubeconfig --debug
[sudo] password for tmckay: 
time="2019-07-26T16:06:22-04:00" level=info msg="Loading kube client config from path \"/home/tmckay/.crc/cache/crc_libvirt_4.1.6/kubeconfig\"" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="operator source(s) specified are - [https://quay.io/cnr|tmckayus]" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="package(s) specified are - seldon-operator" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="input has been sanitized" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="sources: [https://quay.io/cnr/tmckayus]" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="packages: [seldon-operator]" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="resolved the following packages: [tmckayus/seldon-operator:0.1.3]" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="downloading repository: tmckayus/seldon-operator:0.1.3 from https://quay.io/cnr" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="download complete - 1 repositories have been downloaded" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="decoding the downloaded operator manifest(s)" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="manifest format is - flattened" port=50051 repository="tmckayus/seldon-operator:0.1.3" type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="decoded successfully" port=50051 repository="tmckayus/seldon-operator:0.1.3" type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="merging all flattened manifests into a single configmap 'data' section" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="decoded 1 flattened and 0 nested operator manifest(s)" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="loading flattened operator manifest(s) into sqlite" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="using configmap loader to build sqlite database" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="loading CRDs" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=debug msg="loading CRD" gvk="machinelearning.seldon.io/v1alpha2/SeldonDeployment (seldondeployments)" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="loading Bundles" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=debug msg="loading CSV" csv=seldonoperator.v0.1.2 port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=debug msg="loading CSV" csv=seldonoperator.v0.1.3 port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=info msg="loading Packages" port=50051 type=appregistry
time="2019-07-26T16:06:22-04:00" level=debug msg="loading package" package=seldon-operator port=50051 type=appregistry
... (hangs here forever)

How to get these registries' info?

I accessed the node on which the appregistry is running.

[core@ip-10-0-140-215 ~]$ ./grpcurl -plaintext  10.128.2.11:50051 list api.Registry
api.Registry.GetBundle
api.Registry.GetBundleForChannel
api.Registry.GetBundleThatReplaces
api.Registry.GetChannelEntriesThatProvide
api.Registry.GetChannelEntriesThatReplace
api.Registry.GetDefaultBundleThatProvides
api.Registry.GetLatestChannelEntriesThatProvide
api.Registry.GetPackage
api.Registry.ListPackages

But how can I check the details of api.Registry.ListPackages and grpc.health.v1.Health.Check? @ecordell @njhale

[core@ip-10-0-140-215 ~]$ ./grpcurl -plaintext  10.128.2.11:50051 list api.Registry.ListPackages
Failed to list methods for service "api.Registry.ListPackages": Service not found: api.Registry.ListPackages
[core@ip-10-0-140-215 ~]$ ./grpcurl -plaintext  10.128.2.11:50051 list api.Registry/ListPackages
Failed to list methods for service "api.Registry/ListPackages": Symbol not found: api.Registry/ListPackages
[core@ip-10-0-140-215 ~]$ ./grpcurl -plaintext  10.128.2.11:50051 list api.Registry/ListPackages
Failed to list methods for service "api.Registry/ListPackages": Symbol not found: api.Registry/ListPackages
[core@ip-10-0-140-215 ~]$ ./grpcurl -plaintext  10.128.2.11:50051 list api.Registry/api.Registry.ListPackages
Failed to list methods for service "api.Registry/api.Registry.ListPackages": Symbol not found: api.Registry/api.Registry.ListPackages
[core@ip-10-0-140-215 ~]$ ./grpcurl -plaintext  10.128.2.11:50051 list --help
Failed to list methods for service "--help": Symbol not found: --help
[core@ip-10-0-140-215 ~]$ ./grpcurl -plaintext  10.128.2.11:50051 list 
api.Registry
grpc.health.v1.Health
grpc.reflection.v1alpha.ServerReflection
[core@ip-10-0-140-215 ~]$ ./grpcurl -plaintext  10.128.2.11:50051 list grpc.health.v1.Health
grpc.health.v1.Health.Check
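A likely way to get those details (not part of the original report; the address is carried over from above, and it is assumed that ListPackages and Health.Check accept empty request messages): use grpcurl's describe verb to show a method's schema, and invoke a method with the service/method form instead of list:

./grpcurl -plaintext 10.128.2.11:50051 describe api.Registry.ListPackages
./grpcurl -plaintext 10.128.2.11:50051 api.Registry/ListPackages
./grpcurl -plaintext 10.128.2.11:50051 grpc.health.v1.Health/Check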

Binaries or containers are not available for different architectures

It would be convenient if either:

  1. The releases had binaries for s390x and ppc64le (linux platform only)
  2. There were architecture-specific container images for s390x and ppc64le. For example, if quay.io/operator-framework/upstream-registry-builder had tags for those two architectures.

Currently in order to produce a registry that I can use on another architecture, I have to compile operator-registry on that platform and then use the resulting executable to produce a registry / container. This is time-consuming and error-prone on a Travis build. On amd64 I can just use upstream-registry-builder to produce the registry, which is much easier and consistently works.

opm index database cannot be read using quay.io/operator-framework/upstream-community-operators

Issue

The index database shipped in the following image, which is currently packaged as a CatalogSource in OLM release 0.14.1, cannot be read: quay.io/operator-framework/upstream-community-operators

The following command returns this error

 ./bin/opm index export -i quay.io/operator-framework/upstream-community-operators:latest -f tmp/packages -c docker -o prometheus
INFO[0000] export from the index                         index="quay.io/operator-framework/upstream-community-operators:latest" package=prometheus
INFO[0000] Pulling previous image quay.io/operator-framework/upstream-community-operators:latest to get metadata  index="quay.io/operator-framework/upstream-community-operators:latest" package=prometheus
INFO[0000] running docker pull                           index="quay.io/operator-framework/upstream-community-operators:latest" package=prometheus
INFO[0001] Getting label data from previous image        index="quay.io/operator-framework/upstream-community-operators:latest" package=prometheus
INFO[0001] running docker inspect                        index="quay.io/operator-framework/upstream-community-operators:latest" package=prometheus
Error: no such table: operatorbundle

because the following SQL query cannot find the operatorbundle table:

func (s *SQLQuerier) GetBundlePathsForPackage(ctx context.Context, pkgName string) ([]string, error) {
query := `SELECT DISTINCT bundlepath FROM operatorbundle
INNER JOIN channel_entry ON operatorbundle.name=channel_entry.operatorbundle_name
WHERE channel_entry.package_name=?`
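One way to confirm whether the table is really absent (a sketch, not taken from the report; the in-image database path is an assumption based on the index label shown in other reports here):

docker pull quay.io/operator-framework/upstream-community-operators:latest
docker inspect --format '{{ index .Config.Labels "operators.operatorframework.io.index.database.v1" }}' quay.io/operator-framework/upstream-community-operators:latest
docker create --name tmp-index quay.io/operator-framework/upstream-community-operators:latest
docker cp tmp-index:/database/index.db ./index.db   # substitute the path printed by the inspect command
sqlite3 ./index.db ".tables"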

Question About the operator bundle design

This is the operator bundle manifest tree now

 etcd
 ├── manifests
 │   ├── etcdcluster.crd.yaml
 │   └── etcdoperator.clusterserviceversion.yaml
 └── metadata
     └── annotations.yaml

Before this version, we used a package.yaml file to declare metadata, and the fs tree looked like this:

olm-catalog
└── nzk-operator
    ├── 0.1.0
    │   ├── nzk-operator.v0.1.0.clusterserviceversion.yaml
    │   └── nzk_v1alpha1_nzkcluster_crd.yaml
    └── nzk-operator.package.yaml

My question is: what's the difference between these two, and what are the advantages of using annotations.yaml if I'm using OLM?
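For reference, the package/channel data that used to live in nzk-operator.package.yaml is carried by metadata/annotations.yaml in the bundle format; a sketch using the annotation keys that appear elsewhere in these reports (the channel name here is an assumption):

annotations:
  operators.operatorframework.io.bundle.mediatype.v1: registry+v1
  operators.operatorframework.io.bundle.manifests.v1: manifests/
  operators.operatorframework.io.bundle.metadata.v1: metadata/
  operators.operatorframework.io.bundle.package.v1: nzk-operator
  operators.operatorframework.io.bundle.channels.v1: alpha
  operators.operatorframework.io.bundle.channel.default.v1: alpha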

appregistry interprets whitespace followed by directives end mark differently than operator-courier

Consider a YAML file that begins with whitespace followed by a directives end mark, i.e. its first lines look like this:

(a line containing only whitespace)
---

(TL;DR: although this is an edge case, it really happened to a real company that was stymied trying to run an operator on OpenShift :) )

The YAML spec is not quite clear on how this should be interpreted. In the Python3 standard yaml library, this will be interpreted as a single document. However, in k8s.io/apimachinery/pkg/util/yaml/decoder.go this will be treated as an empty document followed by a second document beginning at the directives end mark (---).

This leads to a scenario where a bundle can be validated by operator-courier and successfully uploaded, but will fail when downloaded by the appregistry because the empty document will occlude the second document, resulting in a missing CRD in the bundle.

This in turn leads to a ClusterServiceVersion stuck in pending, with no clear error to the end user unless they are knowledgeable enough to look at pod logs in the openshift-marketplace and figure out what went wrong.

The solution is to reconcile the handling of the initial whitespace by operator-courier and appregistry. Either make operator-courier flag this as an error, even though standard Python3 treats it unambiguously, or make the appregistry handle the empty document without failing the bundle.

openshift/oc k8s bump to 1.18 requires bump in operator-registry as well

The Workloads team is in the middle of rebasing oc to Kubernetes 1.18. The oc command imports the github.com/operator-framework/operator-registry/pkg/appregistry package, which imports client-go. In 1.18, client-go changed the signature of Get, Update, Delete, and other methods to take context.Context as their first parameter. Thus, the following change in the operator-registry code is required by oc:

vendor/github.com/operator-framework/operator-registry/pkg/appregistry/downloader.go 
index 18142d2..191406c 100644
@@ -1,6 +1,7 @@
 package appregistry
 import (
+	"context"
 	"errors"
 	"fmt"
@@ -251,7 +252,7 @@ func (s *secretRegistryOptionsGetter) GetRegistryOptions(source *Source) (*apprc
 	token := ""
 	if source.IsSecretSpecified() {
-		secret, err := s.kubeClient.CoreV1().Secrets(source.Secret.Namespace).Get(source.Secret.Name, metav1.GetOptions{})
+		secret, err := s.kubeClient.CoreV1().Secrets(source.Secret.Namespace).Get(context.TODO(), source.Secret.Name, metav1.GetOptions{})
 		if err != nil {
 			return nil, err
 		}

This in turn requires bumping the k8s dependencies in operator-registry to 1.18.

Filing this issue to track the progress of its resolution; we will carry a patch with the change in the meantime.

opm 1.8.0: Adding a new channel incorrectly deletes csv and bundle json from other channels

This is related to issue #258

When attempting to add additional channels, the CSV and bundle JSON data in the csv and bundle fields of the operatorbundle table are deleted for all records except the head of the last entry added to the graph. This information must be retained for the head of each channel.

The side effect is that the packagemanifest is missing all channels except the most recently modified one.

I believe this is a regression introduced in PR #236 as a result of issue #205, and was partially fixed in issue #258

To recreate:

  1. Add a bundle to the BETA channel:
operator-sdk bundle create quay.io/cdjohnson/brokenoperator:v1.0.0 --directory 1.0.0 --package test-operator --channels BETA --default-channel BETA
docker push quay.io/cdjohnson/brokenoperator:v1.0.0
opm registry add -b quay.io/cdjohnson/brokenoperator:v1.0.0 -c docker

sqlite3 bundles.db "select * from channel;"
BETA|test-operator|testoperator.v1.0.0

sqlite3 bundles.db "select * from channel_entry;"
1|BETA|test-operator|testoperator.v1.0.0||0

sqlite3 bundles.db "select name,length(csv),length(bundle) from operatorbundle;"
testoperator.v1.0.0|1895|1895
  2. Add a second bundle to the STABLE channel where olm.skipRange: <1.0.1
operator-sdk bundle create quay.io/cdjohnson/brokenoperator:v1.0.1 --directory 1.0.1 --package test-operator --channels STABLE --default-channel STABLE
docker push quay.io/cdjohnson/brokenoperator:v1.0.1
opm registry add -b quay.io/cdjohnson/brokenoperator:v1.0.1 -c docker


sqlite3 bundles.db "select * from channel;"
STABLE|test-operator|testoperator.v1.0.1

sqlite3 bundles.db "select * from channel_entry;"
1|STABLE|test-operator|testoperator.v1.0.1||0

sqlite3 bundles.db "select name,length(csv),length(bundle) from operatorbundle;"
testoperator.v1.0.0||
testoperator.v1.0.1|1925|1925

Result: The length of the csv and bundle fields for bundle testoperator.v1.0.0 is now zero.

The csv and bundle json should be retained.
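A sketch of the expected output if the data were retained (constructed from the lengths shown above, not actual output):

sqlite3 bundles.db "select name,length(csv),length(bundle) from operatorbundle;"
testoperator.v1.0.0|1895|1895
testoperator.v1.0.1|1925|1925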

bundles.zip

image containing manifest and metadata cannot be validated

Issue

The image containing the manifest and metadata information cannot be validated
by the command opm alpha bundle validate, as the JSON parser complains
that the package YAML file doesn't include a Kind.

bundleImage=quay.io/cmoulliard/olm-prometheus:0.22.2
./bin/opm alpha bundle validate -t ${bundleImage} -b docker                                                   
INFO[0000] Create a temp directory at /var/folders/56/dtp67r4n1hv79q2hrh_dbwcc0000gn/T/bundle-614344163  container-tool=docker
DEBU[0000] Pulling and unpacking container image         container-tool=docker
INFO[0000] running docker pull                           container-tool=docker
DEBU[0000] [docker pull quay.io/cmoulliard/olm-prometheus:0.22.2]  container-tool=docker
INFO[0001] running docker save                           container-tool=docker
DEBU[0001] [docker save quay.io/cmoulliard/olm-prometheus:0.22.2 -o bundle_staging_468317158/bundle.tar]  container-tool=docker
INFO[0001] Unpacked image layers, validating bundle image format & contents  container-tool=docker
DEBU[0001] Found manifests directory                     container-tool=docker
DEBU[0001] Found metadata directory                      container-tool=docker
DEBU[0001] Getting mediaType info from manifests directory  container-tool=docker
DEBU[0001] Validating annotations.yaml                   container-tool=docker
DEBU[0001] Found annotation "operators.operatorframework.io.bundle.package.v1" with value "prometheus"  container-tool=docker
DEBU[0001] Found annotation "operators.operatorframework.io.bundle.channels.v1" with value "preview"  container-tool=docker
DEBU[0001] Found annotation "operators.operatorframework.io.bundle.channel.default.v1" with value "preview"  container-tool=docker
DEBU[0001] Found annotation "operators.operatorframework.io.bundle.mediatype.v1" with value "registry+v1"  container-tool=docker
DEBU[0001] Found annotation "operators.operatorframework.io.bundle.manifests.v1" with value "manifests/"  container-tool=docker
DEBU[0001] Found annotation "operators.operatorframework.io.bundle.metadata.v1" with value "metadata/"  container-tool=docker
DEBU[0001] Validating bundle contents                    container-tool=docker
DEBU[0001] Validating "apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition" from file "alertmanager.crd.yaml"  container-tool=docker
DEBU[0001] Validating "apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition" from file "podmonitor.crd.yaml"  container-tool=docker
DEBU[0001] Validating "apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition" from file "prometheus.crd.yaml"  container-tool=docker
DEBU[0001] Validating "operators.coreos.com/v1alpha1, Kind=ClusterServiceVersion" from file "prometheusoperator.0.32.0.clusterserviceversion.yaml"  container-tool=docker
DEBU[0001] Validating "apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition" from file "prometheusrule.crd.yaml"  container-tool=docker
DEBU[0001] Validating "apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition" from file "servicemonitor.crd.yaml"  container-tool=docker
Error: Bundle validation errors: error 
unmarshaling JSON: while decoding JSON: Object 'Kind' is missing in
 '{"channels [{"currentCSV":"prometheusoperator.0.32.0","name":"beta"}],"packageName":"prometheus"}'
Usage:
  opm alpha bundle validate [flags]

Examples:
$ opm alpha bundle validate --tag quay.io/test/test-operator:latest --image-builder docker

Info

Package definition

packageName: prometheus
channels:
- name: beta
  currentCSV: prometheusoperator.0.32.0
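A plausible reading of the failure (an interpretation, not stated in the report): the validator treats every file under manifests/ as a Kubernetes object, so a package file like the one above, which has no Kind, cannot live there; in the bundle format its information is expressed by metadata/annotations.yaml instead, giving a layout roughly like:

 olm-prometheus
 ├── manifests
 │   ├── alertmanager.crd.yaml
 │   ├── prometheus.crd.yaml
 │   ├── prometheusoperator.0.32.0.clusterserviceversion.yaml
 │   └── ...
 └── metadata
     └── annotations.yaml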

opm: latest build from master produces invalid index image

I pulled down the latest master today (4/2/2020), built it, and then tried to create a simple operator index using:

opm index add -c docker --bundles quay.io/huizenga/cpcoc-panamax-operator-bundle:0.0.1 --tag quay.io/huizenga/cpcoc-panamax-index:1.0.new

I pushed it to quay and noticed that there are now many more layers than in the prior version:

docker push quay.io/huizenga/cpcoc-panamax-index:1.0.new
The push refers to repository [quay.io/huizenga/cpcoc-panamax-index]
309a597e5c97: Pushed
3b2cf4ccaaf8: Layer already exists
7a22d624b2df: Layer already exists
da9eeb62e989: Layer already exists
9b88464cb333: Layer already exists
552771c7ce5f: Layer already exists
96d2ffb401b5: Layer already exists
732463f8eddf: Layer already exists
82056859c440: Layer already exists
b02509d51f2f: Layer already exists
fbeb8f1a0bb2: Layer already exists
525707600722: Layer already exists
eed8c158e67f: Layer already exists
2033402d2275: Layer already exists
77cae8ab23bf: Layer already exists
1.0.new: digest: sha256:ab77085f2374ceb231c5c5c9466c43ea59ba26df70eb6147049614130510379d size: 3458

I then created my catalogsource pointing to this new image and I get the following info from the pod:

container_linux.go:349: starting container process caused "exec: \"/bin/opm\": stat /bin/opm: no such file or directory"
$ oc get po cpcoc-panamax-catalog-jgrr4 -oyaml -nopenshift-marketplace
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "openshift-sdn",
          "interface": "eth0",
          "ips": [
              "10.128.0.81"
          ],
          "dns": {},
          "default-route": [
              "10.128.0.1"
          ]
      }]
    openshift.io/scc: anyuid
  creationTimestamp: "2020-04-02T11:46:51Z"
  generateName: cpcoc-panamax-catalog-
  labels:
    olm.catalogSource: cpcoc-panamax-catalog
  name: cpcoc-panamax-catalog-jgrr4
  namespace: openshift-marketplace
  ownerReferences:
  - apiVersion: operators.coreos.com/v1alpha1
    blockOwnerDeletion: false
    controller: false
    kind: CatalogSource
    name: cpcoc-panamax-catalog
    uid: d53e5b4e-f2f8-4823-9c29-ea05f42b556d
  resourceVersion: "10561226"
  selfLink: /api/v1/namespaces/openshift-marketplace/pods/cpcoc-panamax-catalog-jgrr4
  uid: 7c21eb14-db03-4a73-aee0-b21fadd88e31
spec:
  containers:
  - image: quay.io/huizenga/cpcoc-panamax-index@sha256:623a89fe96ebce856f6a865a022a3da02c95ace79aa8dee67ffe226cd201f5c3
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - grpc_health_probe
        - -addr=localhost:50051
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: registry-server
    ports:
    - containerPort: 50051
      name: grpc
      protocol: TCP
    readinessProbe:
      exec:
        command:
        - grpc_health_probe
        - -addr=localhost:50051
      failureThreshold: 3
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
      requests:
        cpu: 10m
        memory: 50Mi
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-84shq
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: default-dockercfg-ll2nt
  nodeName: ip-10-0-158-114.us-east-2.compute.internal
  nodeSelector:
    beta.kubernetes.io/os: linux
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    seLinuxOptions:
      level: s0:c12,c9
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - operator: Exists
  volumes:
  - name: default-token-84shq
    secret:
      defaultMode: 420
      secretName: default-token-84shq
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-04-02T11:46:51Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-04-02T11:46:51Z"
    message: 'containers with unready status: [registry-server]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-04-02T11:46:51Z"
    message: 'containers with unready status: [registry-server]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-04-02T11:46:51Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: quay.io/huizenga/cpcoc-panamax-index@sha256:623a89fe96ebce856f6a865a022a3da02c95ace79aa8dee67ffe226cd201f5c3
    imageID: ""
    lastState: {}
    name: registry-server
    ready: false
    restartCount: 0
    started: false
    state:
      waiting:
        message: |
          container create failed: time="2020-04-02T11:48:35Z" level=error msg="container_linux.go:349: starting container process caused \"exec: \\\"/bin/opm\\\": stat /bin/opm: no such file or directory\""
          container_linux.go:349: starting container process caused "exec: \"/bin/opm\": stat /bin/opm: no such file or directory"
        reason: CreateContainerError
  hostIP: 10.0.158.114
  phase: Pending
  podIP: 10.128.0.81
  podIPs:
  - ip: 10.128.0.81
  qosClass: Burstable
  startTime: "2020-04-02T11:46:51Z"

opm index image error="attempt to write a readonly database"

Using the v1.6.1 release of the opm binary, we were successfully building "index" images of our operator's bundle image with the opm index add command over the last 20+ days.

Example command:

opm index add --bundles quay.io/open-cluster-management/multiclusterhub-operator-bundle@sha256:0dbfaf6f23ffbfff4e3ed9b38e725dfcaaac9f2b371f299a08cc27063b40f61a --tag quay.io/open-cluster-management/multiclusterhub-operator-index:1.0.0-SNAPSHOT-2020-04-03-00-18-56 -c docker

Recently, as in the last 24 hours, we've seen some weird behavior that we cannot explain: when we run index images built within the last day on OCP 4.3 or 4.4.0-rc4, we see this error in the logs:

time="2020-04-03T02:51:17Z" level=warning msg="couldn't migrate db" database=/database/index.db error="attempt to write a readonly database" port=50051
time="2020-04-03T02:51:17Z" level=info msg="serving registry" database=/database/index.db port=50051

The index image reports as "RUNNING":

kind: Pod
apiVersion: v1
metadata:
  generateName: open-cluster-management-registry-6fc5c7ccd8-
  annotations:
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "openshift-sdn",
          "interface": "eth0",
          "ips": [
              "10.129.2.50"
          ],
          "dns": {},
          "default-route": [
              "10.129.2.1"
          ]
      }]
    openshift.io/scc: restricted
  selfLink: >-
    /api/v1/namespaces/open-cluster-management/pods/open-cluster-management-registry-6fc5c7ccd8-fknw8
  resourceVersion: '1545333'
  name: open-cluster-management-registry-6fc5c7ccd8-fknw8
  uid: 4398a1e1-d86f-4b0f-a4ca-4af785a9a6cd
  creationTimestamp: '2020-04-03T02:51:14Z'
  namespace: open-cluster-management
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: open-cluster-management-registry-6fc5c7ccd8
      uid: 595f6493-aff5-4e05-97ea-961f8fdc3325
      controller: true
      blockOwnerDeletion: true
  labels:
    app: open-cluster-management-registry
    pod-template-hash: 6fc5c7ccd8
spec:
  nodeSelector:
    computenode: 'true'
  restartPolicy: Always
  serviceAccountName: default
  imagePullSecrets:
    - name: multiclusterhub-operator-pull-secret
  priority: 0
  schedulerName: default-scheduler
  enableServiceLinks: true
  terminationGracePeriodSeconds: 30
  nodeName: ip-10-0-184-100.ec2.internal
  securityContext:
    seLinuxOptions:
      level: 's0:c30,c5'
    fsGroup: 1000880000
  containers:
    - resources: {}
      terminationMessagePath: /dev/termination-log
      name: multiclusterhub-operator-index
      securityContext:
        capabilities:
          drop:
            - KILL
            - MKNOD
            - SETGID
            - SETUID
        runAsUser: 1000880000
      ports:
        - containerPort: 50051
          protocol: TCP
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: default-token-n6gfz
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePolicy: File
      image: >-
        quay.io/open-cluster-management/multiclusterhub-operator-index:1.0.0-SNAPSHOT-2020-04-03-00-18-56
  serviceAccount: default
  volumes:
    - name: default-token-n6gfz
      secret:
        secretName: default-token-n6gfz
        defaultMode: 420
  dnsPolicy: ClusterFirst
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
status:
  phase: Running
  conditions:
    - type: Initialized
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-04-03T02:51:15Z'
    - type: Ready
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-04-03T02:51:17Z'
    - type: ContainersReady
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-04-03T02:51:17Z'
    - type: PodScheduled
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-04-03T02:51:14Z'
  hostIP: 10.0.184.100
  podIP: 10.129.2.50
  podIPs:
    - ip: 10.129.2.50
  startTime: '2020-04-03T02:51:15Z'
  containerStatuses:
    - restartCount: 0
      started: true
      ready: true
      name: multiclusterhub-operator-index
      state:
        running:
          startedAt: '2020-04-03T02:51:17Z'
      imageID: >-
        quay.io/open-cluster-management/multiclusterhub-operator-index@sha256:6d361694c46429473c58819cb5b72c7f6d2ce655c57f3368fc2a8608fc40e13e
      image: >-
        quay.io/open-cluster-management/multiclusterhub-operator-index:1.0.0-SNAPSHOT-2020-04-03-00-18-56
      lastState: {}
      containerID: 'cri-o://f16e4f3252202dda13da716bad1ef79aced298ae15ad26f5f2e7101b923e386d'
  qosClass: BestEffort

However our subscription reports that there are unhealthy catalog sources:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"operators.coreos.com/v1alpha1","kind":"Subscription","metadata":{"annotations":{},"name":"multiclusterhub-operator-bundle","namespace":"open-cluster-management"},"spec":{"channel":"alpha","installPlanApproval":"Automatic","name":"multiclusterhub-operator-bundle","source":"open-cluster-management","sourceNamespace":"open-cluster-management","startingCSV":"multiclusterhub-operator.v0.0.1"}}
  creationTimestamp: '2020-04-03T02:51:15Z'
  generation: 1
  name: multiclusterhub-operator-bundle
  namespace: open-cluster-management
  resourceVersion: '1545322'
  selfLink: >-
    /apis/operators.coreos.com/v1alpha1/namespaces/open-cluster-management/subscriptions/multiclusterhub-operator-bundle
  uid: 9458d203-7dd8-48d2-86b3-39f8692ef761
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: multiclusterhub-operator-bundle
  source: open-cluster-management
  sourceNamespace: open-cluster-management
  startingCSV: multiclusterhub-operator.v0.0.1
status:
  catalogHealth:
    - catalogSourceRef:
        apiVersion: operators.coreos.com/v1alpha1
        kind: CatalogSource
        name: open-cluster-management
        namespace: open-cluster-management
        resourceVersion: '1545305'
        uid: cc6869bf-04cd-4cb2-8185-ef684878df9a
      healthy: true
      lastUpdated: '2020-04-03T02:51:15Z'
    - catalogSourceRef:
        apiVersion: operators.coreos.com/v1alpha1
        kind: CatalogSource
        name: certified-operators
        namespace: openshift-marketplace
        resourceVersion: '1541832'
        uid: eee9f5ce-ba55-4f56-87d6-dc89e4a6b9d8
      healthy: true
      lastUpdated: '2020-04-03T02:51:15Z'
    - catalogSourceRef:
        apiVersion: operators.coreos.com/v1alpha1
        kind: CatalogSource
        name: community-operators
        namespace: openshift-marketplace
        resourceVersion: '1541850'
        uid: a6726b4d-1e92-46a3-84de-335710e76386
      healthy: true
      lastUpdated: '2020-04-03T02:51:15Z'
    - catalogSourceRef:
        apiVersion: operators.coreos.com/v1alpha1
        kind: CatalogSource
        name: redhat-marketplace
        namespace: openshift-marketplace
        resourceVersion: '1541857'
        uid: 16c507de-f1ea-4259-85cd-4172958998d4
      healthy: true
      lastUpdated: '2020-04-03T02:51:15Z'
    - catalogSourceRef:
        apiVersion: operators.coreos.com/v1alpha1
        kind: CatalogSource
        name: redhat-operators
        namespace: openshift-marketplace
        resourceVersion: '1541833'
        uid: 867da3ae-72cf-482d-af87-b2ab43632446
      healthy: true
      lastUpdated: '2020-04-03T02:51:15Z'
  conditions:
    - lastTransitionTime: '2020-04-03T02:51:15Z'
      message: all available catalogsources are healthy
      reason: AllCatalogSourcesHealthy
      status: 'False'
      type: CatalogSourcesUnhealthy
  lastUpdated: '2020-04-03T02:51:15Z'

We cannot make heads or tails of this situation and are looking for any help/advice we can get on how to resolve the problem.

opm should work with `/etc/containers/registries.conf`

Please note this is a specific example; the image repository locations can be substituted with any other locations.

If I configure a /etc/containers/registries.conf file like the following:

unqualified-search-registries = ["cp.icr.io"]

[[registry]]
prefix = "cp.icr.io/cp"
insecure = false
blocked = false
location = "cp.stg.icr.io/cp"

[[registry.mirror]]
location = "quay.io/nathanbrophy"
insecure = false

I expect client image mirroring to work. For example, the command # podman pull cp.icr.io/cp/testoperator:v1.0.0 should use the registries configuration file to override the image location with the mirror specified in the file. This is the expected and observed behavior for tools like podman and skopeo.

In releases 1.8 and 1.7 of opm, this is the observed behavior:

# ./bin/opm index add -b "cp.icr.io/cp/testoperator:v1.0.0" -c podman --tag "quay.io/nathanbrophy/testcatalog:latest"
INFO[0000] building the index                            bundles="[cp.icr.io/cp/testoperator:v1.0.0]"
ERRO[0000] permissive mode disabled                      bundles="[cp.icr.io/cp/testoperator:v1.0.0]" error="error loading bundle from image: cp.icr.io/cp/testoperator:v1.0.0: not found"

In release 1.6 of opm this is the observed behavior -- the image mirror is successfully located and the bundle image is pulled

# ./bin/opm index add -b "cp.icr.io/cp/testoperator:v1.0.0" -c podman --tag "quay.io/nathanbrophy/testcatalog:latest"
INFO[0000] building the index                            bundles="[cp.icr.io/cp/testoperator:v1.0.0]"
INFO[0000] running podman pull                           img="cp.icr.io/cp/testoperator:v1.0.0"
INFO[0002] running podman save                           img="cp.icr.io/cp/testoperator:v1.0.0"
INFO[0003] loading Bundle cp.icr.io/cp/testoperator:v1.0.0  img="cp.icr.io/cp/testoperator:v1.0.0"
INFO[0003] found annotations file searching for csv      dir=bundle_tmp552529018 file=bundle_tmp552529018/metadata load=annotations
INFO[0003] found csv, loading bundle                     dir=bundle_tmp552529018 file=bundle_tmp552529018/manifests load=bundle
INFO[0003] loading bundle file                           dir=bundle_tmp552529018/manifests file=testoperator-crd.yaml load=bundle
INFO[0003] loading bundle file                           dir=bundle_tmp552529018/manifests file=testoperator.v1.0.0.clusterserviceversion.yaml load=bundle
INFO[0003] Generating dockerfile                         bundles="[cp.icr.io/cp/testoperator:v1.0.0]"
INFO[0003] writing dockerfile: index.Dockerfile269233084  bundles="[cp.icr.io/cp/testoperator:v1.0.0]"
INFO[0003] running podman build                          bundles="[cp.icr.io/cp/testoperator:v1.0.0]"
INFO[0003] [podman build -f index.Dockerfile269233084 -t quay.io/nathanbrophy/testcatalog:latest .]  bundles="[cp.icr.io/cp/testoperator:v1.0.0]"

Release 1.8:

if err := populate(context.TODO(), dbLoader, graphLoader, dbQuerier, reg, image.SimpleReference(ref), request.Mode); err != nil {

Release 1.6:

if err := loader.Populate(); err != nil {

It appears the logic to pull the bundle image changed from using the command-line container engine tool specified by the -c flag to using the image.Registry.Pull method in the 1.7+ releases of opm.

The release 1.6 behavior, where image mirroring specifications in the /etc/containers/registries.conf file are honored, should be added back into opm 1.7+.

opm: Adding bundle to registry with new channel fails and deletes previous bundles

Using opm 1.5.11

I'm trying to create a scenario where I add a second channel to a registry:

Version   Replaces   Default Channel   Channels
1.0.0                BETA              BETA
1.0.1     1.0.0      STABLE            BETA,STABLE

Commands in order:

  1. operator-sdk bundle create quay.io/cdjohnson/testoperator:v1.0.0 --directory ../deploy/olm-catalog/testoperator/1.0.0 --package test-operator --channels BETA --default-channel BETA
  2. docker push quay.io/cdjohnson/testoperator:v1.0.0
  3. opm registry add -b quay.io/cdjohnson/testoperator:v1.0.0 -c docker

The bundles.db shows:
--channel-entry--
BETA|test-operator|testoperator.v1.0.0
1|BETA|test-operator|testoperator.v1.0.0||0
--operatorbundle--
testoperator.v1.0.0|quay.io/cdjohnson/testoperator:v1.0.0

  1. operator-sdk bundle create quay.io/cdjohnson/testoperator:v1.0.1 --directory ../deploy/olm-catalog/testoperator/1.0.1 --package test-operator --channels BETA,STABLE --default-channel BETA
  2. docker push quay.io/cdjohnson/testoperator:v1.0.1
  3. opm registry add -b quay.io/cdjohnson/testoperator:v1.0.1 -c docker
INFO[0000] adding to the registry                        bundles="[quay.io/cdjohnson/testoperator:v1.0.1]"
DEBU[0000] couldn't rollback - this is expected if the transaction committed  error="sql: transaction has already been committed or rolled back"
INFO[0000] running docker pull                           img="quay.io/cdjohnson/testoperator:v1.0.1"
DEBU[0000] [docker pull quay.io/cdjohnson/testoperator:v1.0.1]  img="quay.io/cdjohnson/testoperator:v1.0.1"
INFO[0001] running docker save                           img="quay.io/cdjohnson/testoperator:v1.0.1"
DEBU[0001] [docker save quay.io/cdjohnson/testoperator:v1.0.1 -o bundle_staging_537743893/bundle.tar]  img="quay.io/cdjohnson/testoperator:v1.0.1"
INFO[0001] loading Bundle quay.io/cdjohnson/testoperator:v1.0.1  img="quay.io/cdjohnson/testoperator:v1.0.1"
INFO[0001] found annotations file searching for csv      dir=bundle_tmp016250894 file=bundle_tmp016250894/metadata load=annotations
INFO[0001] found csv, loading bundle                     dir=bundle_tmp016250894 file=bundle_tmp016250894/manifests load=bundle
INFO[0001] loading bundle file                           dir=bundle_tmp016250894/manifests file=testoperator.v1.0.1.clusterserviceversion.yaml load=bundle
FATA[0001] permissive mode disabled                      bundles="[quay.io/cdjohnson/testoperator:v1.0.1]" error="error loading bundle from image: Error adding package error loading bundle into db: testoperator.v1.0.1 specifies a replacement testoperator.v1.0.0 that cannot be found"

The bundles.db shows:
--channel-entry--
--operatorbundle--
testoperator.v1.0.0|quay.io/cdjohnson/testoperator:v1.0.0

CSV Files:
bundles.tar.zip

Issue: Error adding package error loading bundle into db: prometheusoperator.0.22.2 specifies replacement that couldn't be found

Issue

I built the opm tool from the master branch (0652a72) of this project and created a bundle that I pushed to quay:

./opm alpha bundle build -d manifests/prometheus/0.22.2 --tag quay.io/cmoulliard/olm-prometheus:0.22.2 -p prometheus -c preview -e preview
INFO[0000] Building annotations.yaml                    
INFO[0000] An annotations.yaml already exists in directory 
INFO[0000] Validating existing annotations.yaml         
INFO[0000] Building Dockerfile                          
INFO[0000] Building bundle image                        
Sending build context to Docker daemon  359.9kB
Step 1/9 : FROM scratch
 ---> 
Step 2/9 : LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
 ---> Using cache
 ---> 9a1a43532027
Step 3/9 : LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
 ---> Using cache
 ---> 1e7a1f544437
Step 4/9 : LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
 ---> Using cache
 ---> e1d6fa13ad0f
Step 5/9 : LABEL operators.operatorframework.io.bundle.package.v1=prometheus
 ---> Using cache
 ---> 5dca8ee60ad4
Step 6/9 : LABEL operators.operatorframework.io.bundle.channels.v1=preview
 ---> Using cache
 ---> 2bc58968e68c
Step 7/9 : LABEL operators.operatorframework.io.bundle.channel.default.v1=preview
 ---> Using cache
 ---> fa74f3581eef
Step 8/9 : COPY /*.yaml /manifests/
 ---> Using cache
 ---> da1536f4ea60
Step 9/9 : COPY /metadata/annotations.yaml /metadata/annotations.yaml
 ---> Using cache
 ---> f0151117de1b
Successfully built f0151117de1b
Successfully tagged quay.io/cmoulliard/olm-prometheus:0.22.2

docker push  quay.io/cmoulliard/olm-prometheus:0.22.2
The push refers to repository [quay.io/cmoulliard/olm-prometheus]
3b4589df344b: Pushed 
2e27d9292f04: Pushed 
0.22.2: digest: sha256:55bb192fbec86d93b35059a4cb279961e32f58d1877ff306f433a0dcb9c9523a size: 733

but when I try to add the bundle to the index image quay.io/cmoulliard/olm-index, I get this error:

bundleImage=quay.io/cmoulliard/olm-prometheus
./opm index add -b="$bundleImage:0.22.2" -t quay.io/cmoulliard/olm-index:0.1.0 -c=docker
INFO[0000] building the index                            bundles="[quay.io/cmoulliard/olm-prometheus:0.22.2]"
INFO[0000] running docker pull                           img="quay.io/cmoulliard/olm-prometheus:0.22.2"
INFO[0001] running docker save                           img="quay.io/cmoulliard/olm-prometheus:0.22.2"
INFO[0002] loading Bundle quay.io/cmoulliard/olm-prometheus:0.22.2  img="quay.io/cmoulliard/olm-prometheus:0.22.2"
INFO[0002] found annotations file searching for csv      dir=bundle_tmp419156138 file=bundle_tmp419156138/metadata load=annotations
INFO[0002] found csv, loading bundle                     dir=bundle_tmp419156138 file=bundle_tmp419156138/manifests load=bundle
INFO[0002] loading bundle file                           dir=bundle_tmp419156138/manifests file=alertmanager.crd.yaml load=bundle
INFO[0002] loading bundle file                           dir=bundle_tmp419156138/manifests file=prometheus.crd.yaml load=bundle
INFO[0002] loading bundle file                           dir=bundle_tmp419156138/manifests file=prometheusoperator.0.22.2.clusterserviceversion.yaml load=bundle
INFO[0002] loading bundle file                           dir=bundle_tmp419156138/manifests file=prometheusrule.crd.yaml load=bundle
INFO[0002] loading bundle file                           dir=bundle_tmp419156138/manifests file=servicemonitor.crd.yaml load=bundle
FATA[0002] permissive mode disabled                      bundles="[quay.io/cmoulliard/olm-prometheus:0.22.2]" error="error loading bundle from image: Error adding package error loading bundle into db: prometheusoperator.0.22.2 specifies replacement that couldn't be found"

Info

OS: MacOS
Go version: 1.13.7

README error

Bundle images
Using OCI spec container images as a method of storing the manifest and metadata contents of individual bundles, opm interacts directly with these images to generate and incrementally update the database. Once you have your manifests defined and have created a directory in the format defined above, building the image is as simple as defining a Dockerfile and building that image:

podman build -t quay.io/my-container-registry-namespace/my-manifest-bundle:latest -f bundle.Dockerfile .
Once you have built the container, you can publish it like any other container image:

podman push quay.io/my-container-registry-namespace/my-manifest-bundle:latest
Of course, this build step can be done with any other OCI spec container tools like docker, buildah, libpod, etc.

The bundle.Dockerfile file does not exist!
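One way such a Dockerfile gets produced (a sketch; the generate subcommand and its flags here mirror the opm alpha bundle build invocation quoted elsewhere in these reports and are assumptions, as are the directory and package values):

opm alpha bundle generate -d ./manifests/my-operator/0.1.0 -p my-operator -c stable -e stable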

Document the setup of a ConfigMap Operator Registry

I'm looking through the code to figure out how to set up a ConfigMap Operator Registry. I figured I'd drop what I found here, since it seems like it would be good to document. I'd be thankful for any corrections.

Arguments to configmap-server

--debug Default: false, "enable debug logging"
--kubeconfig -k Required (or use ENV?) "absolute path to kubeconfig file"
--database -d Default: bundles.db "name of db to output"
--configMapName -c Required "name of a configmap"
--configMapNamespace -n Required "namespace of a configmap"
--port -p Default: 50051 "port number to serve on"

ENVs for configmap-server:

KUBERNETES_SERVICE_HOST
KUBERNETES_SERVICE_PORT

The config map needs to be in the following form

apiVersion: v1
kind: ConfigMap
metadata: 
  name: operator-registry
  namespace: default
data:
  customResourceDefinitions: |-
    Paste Your CRDs Here
  clusterServiceVersions: |-
    Paste Your CSVs Here
  packages: |-
    Paste Your Package YAML here
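Putting the pieces above together, a possible invocation looks like this (a sketch built only from the flags listed above; the kubeconfig path is an assumption):

./bin/configmap-server --kubeconfig ~/.kube/config --configMapName operator-registry --configMapNamespace default --database bundles.db --port 50051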

Relevant Documents:
https://github.com/operator-framework/operator-registry/blob/master/configmap-registry.Dockerfile
https://github.com/operator-framework/operator-registry/blob/master/configmap.example.yaml
https://github.com/operator-framework/operator-registry/tree/master/cmd/configmap-server

Issue: UNIQUE constraint failed: operatorbundle.name, error loading package into db

I've been trying to smoke test an update for my operator in OLM, but I keep hitting this issue in the origin-operator-registry pod:

time="2020-03-10T14:26:23Z" level=warning msg="strict mode disabled" error="error loading manifests from appregistry: error loading operator manifests: [error adding operator bundle : UNIQUE constraint failed: operatorbundle.name, error loading package into db: ibm-spectrum-scale-csi-operator.v1.1.0 specifies replacement that couldn't be found]" port=50051 type=appregistry

If I'm reading this error correctly, when attempting to specify spec.replaces for my new bundle, the pod can't find my old CSV version, even though the version names match 1:1.

# Old Package
---
channels:
  - currentCSV: ibm-spectrum-scale-csi-operator.v1.0.1
    name: stable
defaultChannel: stable
packageName: ibm-spectrum-scale-csi-operator

# New Package
---
channels:
  - currentCSV: ibm-spectrum-scale-csi-operator.v1.1.0
    name: stable
defaultChannel: stable
packageName: ibm-spectrum-scale-csi-operator

# CSV Replaces Field for v1.1.0
  maturity: alpha
  provider:
    name: IBM
  version: 1.1.0
  replaces: ibm-spectrum-scale-csi-operator.v1.0.1

I'm confused as to what I'm doing wrong with the replaces, because it looks identical to releases with upgrades in Community Operators.

Environment

k8s: 1.16.3

# origin-operator-registry image version
Image ID:      docker-pullable://quay.io/openshift/origin-operator-registry@sha256:965297925da5b7395718592fe84292bc61530f892087fe53d22a779b10a21772
# kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

Document how to start the registry locally, ...

Questions

  • Is it possible to document how to start the registry locally, as mentioned in this part of the README.md file? (See the sketch after these questions.)
Using the catalog locally
grpcurl is a useful tool for interacting with the example catalog server.

$ grpcurl -plaintext  localhost:50051 list api.Registry
  • Is it possible to include a how-to step within the README.md file?
  • Is the manifest format used to create a catalog image already supported by OLM? Since which version?
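
As a rough sketch of the first question, assuming opm has been built from this repo (make build) and a bundle image is available, something like the following should bring the registry up locally (the bundle reference is hypothetical):

./bin/opm registry add --bundles quay.io/example/my-operator-bundle:v0.1.0 --database test.db
./bin/opm registry serve --database test.db --port 50051

# in a second terminal
grpcurl -plaintext localhost:50051 list api.Registry
grpcurl -plaintext localhost:50051 api.Registry/ListPackages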

Failed to run `make build` on MacOS

When I run make build, I get the errors below:

mac:operator-registry jianzhang$ make build
go build -mod=vendor  -o bin/appregistry-server ./cmd/appregistry-server
build flag -mod=vendor only valid when using modules
make: *** [bin/appregistry-server] Error 1

I then disabled this line in the Makefile by commenting it out: # MOD_FLAGS := $(shell (go version | grep -q -E "1\.(11|12)") && echo -mod=vendor). The build then works, although there are some warnings.

mac:operator-registry jianzhang$ make build
go build   -o bin/appregistry-server ./cmd/appregistry-server
# github.com/operator-framework/operator-registry/cmd/appregistry-server
ld: warning: building for macOS, but linking in object file (/var/folders/2c/4whhm34n7892mf9l2j02cf480000gn/T/go-link-526286923/go.o) built for 
go build   -o bin/configmap-server ./cmd/configmap-server
# github.com/operator-framework/operator-registry/cmd/configmap-server
ld: warning: building for macOS, but linking in object file (/var/folders/2c/4whhm34n7892mf9l2j02cf480000gn/T/go-link-794575298/go.o) built for 
go build   -o bin/initializer ./cmd/initializer
# github.com/operator-framework/operator-registry/cmd/initializer
ld: warning: building for macOS, but linking in object file (/var/folders/2c/4whhm34n7892mf9l2j02cf480000gn/T/go-link-715865138/go.o) built for 
go build   -o bin/opm ./cmd/opm
# github.com/operator-framework/operator-registry/cmd/opm
ld: warning: building for macOS, but linking in object file (/var/folders/2c/4whhm34n7892mf9l2j02cf480000gn/T/go-link-739889228/go.o) built for 
go build   -o bin/registry-server ./cmd/registry-server
# github.com/operator-framework/operator-registry/cmd/registry-server
ld: warning: building for macOS, but linking in object file (/var/folders/2c/4whhm34n7892mf9l2j02cf480000gn/T/go-link-843786153/go.o) built for 

Generated binaries:

mac:operator-registry jianzhang$ ls -l ./bin/
total 511016
-rwxr-xr-x  1 jianzhang  staff  61171656 Nov  6 16:13 appregistry-server
-rwxr-xr-x  1 jianzhang  staff  60132808 Nov  6 16:14 configmap-server
-rwxr-xr-x  1 jianzhang  staff  37980232 Nov  6 16:14 initializer
-rwxr-xr-x  1 jianzhang  staff  60896576 Nov  6 16:14 opm
-rwxr-xr-x  1 jianzhang  staff  39089336 Nov  6 16:14 registry-server

Version:

mac:operator-registry jianzhang$ git log
commit 5d342b2dfb015f682eb7b563790c6126e453099c (HEAD -> master, origin/master, origin/HEAD)
Merge: f30cfde b9e0f7d
Author: OpenShift Merge Robot <[email protected]>
Date:   Tue Nov 5 21:47:03 2019 +0100

    Merge pull request #118 from ecordell/fix-upstream-configmap
    
    fix(configmap): bump base image for configmap registry image

bundle validate fails w/ openapi field minimum set

opm alpha bundle validate fails if the CRD contains field validation.

Error:

opm alpha bundle validate --tag quay.io/johnstrunk/snapscheduler.v1.1.0:latest
INFO[0000] Create a temp directory at /tmp/bundle-248706423  container-tool=docker
DEBU[0000] Pulling and unpacking container image         container-tool=docker
INFO[0000] running docker pull                           container-tool=docker
DEBU[0000] [docker pull quay.io/johnstrunk/snapscheduler.v1.1.0:latest]  container-tool=docker
INFO[0001] running docker save                           container-tool=docker
DEBU[0001] [docker save quay.io/johnstrunk/snapscheduler.v1.1.0:latest -o bundle_staging_059888746/bundle.tar]  container-tool=docker
INFO[0001] Unpacked image layers, validating bundle image format & contents  container-tool=docker
DEBU[0001] Found manifests directory                     container-tool=docker
DEBU[0001] Found metadata directory                      container-tool=docker
DEBU[0001] Getting mediaType info from manifests directory  container-tool=docker
DEBU[0001] Validating annotations.yaml                   container-tool=docker
DEBU[0001] Found annotation "operators.operatorframework.io.bundle.mediatype.v1" with value "registry+v1"  container-tool=docker
DEBU[0001] Found annotation "operators.operatorframework.io.bundle.manifests.v1" with value "manifests/"  container-tool=docker
DEBU[0001] Found annotation "operators.operatorframework.io.bundle.metadata.v1" with value "metadata/"  container-tool=docker
DEBU[0001] Found annotation "operators.operatorframework.io.bundle.package.v1" with value "snapscheduler"  container-tool=docker
DEBU[0001] Found annotation "operators.operatorframework.io.bundle.channels.v1" with value "stable"  container-tool=docker
DEBU[0001] Found annotation "operators.operatorframework.io.bundle.channel.default.v1" with value "stable"  container-tool=docker
DEBU[0001] Validating bundle contents                    container-tool=docker
DEBU[0001] Validating "operators.coreos.com/v1alpha1, Kind=ClusterServiceVersion" from file "snapscheduler.v1.1.0.clusterserviceversion.yaml"  container-tool=docker
DEBU[0001] Validating "apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition" from file "snapshotschedules.snapscheduler.backube.crd.yaml"  container-tool=docker
Error: Bundle validation errors: cannot convert int64 to float64

The offending portion of the CRD seems to be the "minimum" line in the snippet below. If minimum: 1 is removed from the CRD, validation passes.

                maxCount:
                  format: int32
                  minimum: 1
                  type: integer

Full CRD:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: snapshotschedules.snapscheduler.backube
spec:
  additionalPrinterColumns:
    - JSONPath: .spec.schedule
      name: Schedule
      type: string
    - JSONPath: .spec.retention.expires
      name: Max age
      type: string
    - JSONPath: .spec.retention.maxCount
      name: Max num
      type: integer
    - JSONPath: .spec.disabled
      name: Disabled
      type: boolean
    - JSONPath: .status.nextSnapshotTime
      name: Next snapshot
      type: string
  group: snapscheduler.backube
  names:
    kind: SnapshotSchedule
    listKind: SnapshotScheduleList
    plural: snapshotschedules
    singular: snapshotschedule
  scope: Namespaced
  subresources:
    status: {}
  validation:
    openAPIV3Schema:
      description: SnapshotSchedule is the Schema for the snapshotschedules API
      properties:
        apiVersion:
          description: >-
            APIVersion defines the versioned schema of this representation of an
            object. Servers should convert recognized schemas to the latest
            internal value, and may reject unrecognized values. More info:
            https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
          type: string
        kind:
          description: >-
            Kind is a string value representing the REST resource this object
            represents. Servers may infer this from the endpoint the client
            submits requests to. Cannot be updated. In CamelCase. More info:
            https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
          type: string
        metadata:
          type: object
        spec:
          description: SnapshotScheduleSpec defines the desired state of SnapshotSchedule
          properties:
            claimSelector:
              description: >-
                ClaimSelector selects which PVCs will be snapshotted according
                to this schedule.
              properties:
                matchExpressions:
                  description: >-
                    matchExpressions is a list of label selector requirements.
                    The requirements are ANDed.
                  items:
                    description: >-
                      A label selector requirement is a selector that contains
                      values, a key, and an operator that relates the key and
                      values.
                    properties:
                      key:
                        description: key is the label key that the selector applies to.
                        type: string
                      operator:
                        description: >-
                          operator represents a key's relationship to a set of
                          values. Valid operators are In, NotIn, Exists and
                          DoesNotExist.
                        type: string
                      values:
                        description: >-
                          values is an array of string values. If the operator
                          is In or NotIn, the values array must be non-empty. If
                          the operator is Exists or DoesNotExist, the values
                          array must be empty. This array is replaced during a
                          strategic merge patch.
                        items:
                          type: string
                        type: array
                    required:
                      - key
                      - operator
                    type: object
                  type: array
                matchLabels:
                  additionalProperties:
                    type: string
                  description: >-
                    matchLabels is a map of {key,value} pairs. A single
                    {key,value} in the matchLabels map is equivalent to an
                    element of matchExpressions, whose key field is "key", the
                    operator is "In", and the values array contains only
                    "value". The requirements are ANDed.
                  type: object
              type: object
            disabled:
              description: Disabled determines whether this schedule is currently disabled.
              type: boolean
            retention:
              description: >-
                Retention determines how long this schedule's snapshots will be
                kept.
              properties:
                expires:
                  description: >-
                    Expires is the length of time (time.Duration) after which a
                    given Snapshot will be deleted.
                  type: string
                maxCount:
                  format: int32
                  minimum: 1
                  type: integer
              type: object
            schedule:
              description: >-
                Schedule is a Cronspec specifying when snapshots should be
                taken. See https://en.wikipedia.org/wiki/Cron for a description
                of the format.
              type: string
            snapshotTemplate:
              description: >-
                SnapshotTemplate is a template description of the Snapshots to
                be created.
              properties:
                labels:
                  additionalProperties:
                    type: string
                  description: >-
                    Labels is a list of labels that should be added to each
                    Snapshot created by this schedule.
                  type: object
                snapshotClassName:
                  description: >-
                    SnapshotClassName is the name of the VSC to be used when
                    creating Snapshots.
                  type: string
              type: object
          type: object
        status:
          description: >-
            SnapshotScheduleStatus defines the observed state of
            SnapshotSchedule
          properties:
            conditions:
              description: >-
                Conditions is a list of conditions related to operator
                reconciliation.
              items:
                description: >-
                  Condition represents the state of the operator's
                  reconciliation functionality.
                properties:
                  lastHeartbeatTime:
                    format: date-time
                    type: string
                  lastTransitionTime:
                    format: date-time
                    type: string
                  message:
                    type: string
                  reason:
                    type: string
                  status:
                    type: string
                  type:
                    description: >-
                      ConditionType is the state of the operator's
                      reconciliation functionality.
                    type: string
                required:
                  - status
                  - type
                type: object
              type: array
            lastSnapshotTime:
              description: >-
                LastSnapshotTime is the time of the most recent set of snapshots
                generated by this schedule.
              type: string
            nextSnapshotTime:
              description: >-
                NextSnapshotTime is the time when this schedule should create
                the next set of snapshots.
              type: string
          type: object
      type: object
  version: v1
  versions:
    - name: v1
      served: true
      storage: true

Unable to pull from insecure registry

Since upgrading to v1.9.0 of the CLI, I am unable to add images to an index because it complains about an insecure registry. It looks like the break occurred in v1.7.0, because versions prior to that worked without issue.

[Robs-MBP]$ opm index add --container-tool docker --permissive --bundles myInsecureRegistry.com/openshift-marketplace/myproduct-operator-bundle:0.0.5 --tag myInsecureRegistry.com/openshift-marketplace/ibm-myproduct-index:0.0.5
INFO[0000] building the index                             bundles="[myInsecureRegistry.com/openshift-marketplace/myproduct-operator-bundle:0.0.5]"
ERRO[0001] permissive mode disabled                       bundles="[myInsecureRegistry.com/openshift-marketplace/myproduct-operator-bundle:0.0.5]" error="error loading bundle from image: failed to do request: Head https://myInsecureRegistry.com/v2/openshift-marketplace/myproduct-operator-bundle/manifests/0.0.5: x509: certificate signed by unknown authority"

I have already updated the docker configuration to allow pulling from and pushing to the insecure registry, and I can manually pull the image without issue.

Using v1.6.1 produces the following

[Robs-MBP]$ opm index add --container-tool docker --bundles myInsecureRegistry.com/openshift-marketplace/myproduct-operator-bundle:0.0.5 --tag myInsecureRegistry.com/openshift-marketplace/ibm-myproduct-index:0.0.5
INFO[0000] building the index                            bundles="[myInsecureRegistry.com/openshift-marketplace/myproduct-operator-bundle:0.0.5]"
INFO[0000] running docker pull                           img="myInsecureRegistry.com/openshift-marketplace/myproduct-operator-bundle:0.0.5"
INFO[0002] running docker save                           img="myInsecureRegistry.com/openshift-marketplace/myproduct-operator-bundle:0.0.5"
INFO[0002] loading Bundle myInsecureRegistry.com/openshift-marketplace/myproduct-operator-bundle:0.0.5  img="myInsecureRegistry.com/openshift-marketplace/myproduct-operator-bundle:0.0.5"
INFO[0002] found annotations file searching for csv      dir=bundle_tmp011253059 file=bundle_tmp011253059/metadata load=annotations
INFO[0002] found csv, loading bundle                     dir=bundle_tmp011253059 file=bundle_tmp011253059/manifests load=bundle
INFO[0002] loading bundle file                           dir=bundle_tmp011253059/manifests file=oidc.security.ibm.com_clients_crd.yaml load=bundle
INFO[0002] Generating dockerfile                         bundles="[myInsecureRegistry.com/openshift-marketplace/myproduct-operator-bundle:0.0.5]"
INFO[0002] writing dockerfile: index.Dockerfile230944877  bundles="[myInsecureRegistry.com/openshift-marketplace/myproduct-operator-bundle:0.0.5]"
INFO[0002] running docker build                          bundles="[myInsecureRegistry.com/openshift-marketplace/myproduct-operator-bundle:0.0.5]"
INFO[0002] [docker build -f index.Dockerfile230944877 -t myInsecureRegistry.com/openshift-marketplace/ibm-myproduct-index:0.0.5 .]  bundles="[myInsecureRegistry.com/openshift-marketplace/myproduct-operator-bundle:0.0.5]"

opm: index catalog size doubles when adding bundles from index

Related to issue #218
When running the following commands it looks like the size of the index catalog image is doubling:

opm index add --bundles quay.io/jondockter/memcached-operator-bundle:0.0.1 --tag quay.io/jondockter/test-catalog-index:1.0.0
opm index add --bundles quay.io/jondockter/memcached-operator-bundle:0.0.2 --from-index quay.io/jondockter/test-catalog-index:1.0.0 --tag quay.io/jondockter/test-catalog-index:1.0.1

podman images

REPOSITORY                                             TAG      IMAGE ID       CREATED         SIZE
<none>                                                 <none>   4ffe9f324d60   4 minutes ago   2.52 GB
quay.io/jondockter/test-catalog-index                  1.0.1    228e1b258e23   4 minutes ago   2.52 GB
<none>                                                 <none>   427d23d38fe5   6 minutes ago   1.26 GB
quay.io/jondockter/test-catalog-index                  1.0.0    345aec45d18f   6 minutes ago   1.26 GB
quay.io/jondockter/memcached-operator-bundle           0.0.3    46f7586bbaaa   6 minutes ago   17.5 kB
quay.io/jondockter/memcached-operator-bundle           0.0.2    799af2276c92   6 minutes ago   17.5 kB
quay.io/jondockter/memcached-operator-bundle           0.0.1    76f66e95a108   7 minutes ago   17.5 kB
quay.io/operator-framework/upstream-registry-builder   latest   53136ea017f3   23 hours ago    1.26 GB

fyi... @lcarva

RFE: mirror an index subset based on Subscriptions

I'm seeking initial feedback on this idea and am happy to write up a more formal proposal if it seems like a reasonable direction.

Use case

I am a customer with a disconnected environment. I need to make specific OLM-based operators, which I receive from a vendor and have chosen in advance, available for installation inside my network. I have many clusters inside my disconnected network, and I run my own container registry inside that network.

When managing many clusters, GitOps or some other management approach and tooling will be used to define which Subscriptions exist on which clusters. Assuming I want a specific operator installed on all my clusters, I will define a Subscription for it that references, at a minimum:

  • package name
  • channel
  • reference to my customized CatalogSource, which of course is an index image present in my internal container registry.

This Subscription will be stored in a git repo or some other place from which it can be applied to many clusters.
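
For concreteness, such a Subscription might look something like the sketch below (the operator, catalog, and namespace names are hypothetical):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amazing-operator
  namespace: operators
spec:
  name: amazing-operator          # package name
  channel: stable
  source: my-mirrored-catalog     # CatalogSource backed by the mirrored index image
  sourceNamespace: olm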

Proposal

Subscriptions can be used as the source of truth for curating a subset of the vendor's index image.

I propose an opm command that will take as input:

  • list of Subscriptions, either as yaml files on disk, or piped to stdin
  • reference to a vendor's index image

The command will parse the Subscriptions and for each:

  • find the head of that package's channel in the vendor's index image
  • add that bundle to a new registry db

Perhaps something like this:

opm registry add --from-subscriptions=./ --index=quay.io/somevendor/amazingoperators:latest

The resulting database file should enable me to build an index image that I can mirror with oc adm catalog mirror, and thus provide updates to my disconnected clusters.

single-cluster use

For a small number of clusters, or single cluster, generating an index of updates could be as simple as:

oc get subscriptions -o yaml | opm registry add --from-subscriptions - --index=quay.io/somevendor/amazingoperators:latest

Alternatives

We could make the customer define in a simpler format which operators they want to mirror, avoiding the need to make Subscriptions just to mirror content. That would involve listing pairs of package name and channel, perhaps in an index file of some kind.

But on day 2, when they just want updates, the source of truth for which operators they need to update is their Subscriptions. Even on day 1, in order to install operators, they will need to make Subscriptions anyway. Using a Subscription on day 1 to define desired state enables a series of tooling, including the mirroring of content and creation of Subscriptions in clusters, to make the desired state a reality.

Pros:

  • removes the need for opm or other mirroring-related tooling to understand what a Subscription is.

Cons:

  • requires the customer to duplicate the same data in both their Subscriptions and their definition of their curated index image.

Run into the error "not a valid repository/tag: invalid reference format" when building a catalog of operators

I ran the master branch on an OCP 3.11 node and hit the error below:

$ docker build -t example-registry:latest -f upstream-example.Dockerfile .
Sending build context to Docker daemon 83.92 MB
Step 1/10 : FROM quay.io/operator-framework/upstream-registry-builder as builder
Error parsing reference: "quay.io/operator-framework/upstream-registry-builder as builder" is not a valid repository/tag: invalid reference format

docker version:

$ docker version
Client:
 Version:         1.13.1
 API version:     1.26
 Package version: docker-1.13.1-109.gitcccb291.el7_7.x86_64
 Go version:      go1.10.3
 Git commit:      cccb291/1.13.1
 Built:           Thu Jan 30 06:20:45 2020
 OS/Arch:         linux/amd64

Server:
 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: docker-1.13.1-109.gitcccb291.el7_7.x86_64
 Go version:      go1.10.3
 Git commit:      cccb291/1.13.1
 Built:           Thu Jan 30 06:20:45 2020
 OS/Arch:         linux/amd64
 Experimental:    false

misleading error message when replaces chain is broken

Using quay.io/openshift/origin-operator-registry:latest
When I have a chain of CSVs, for example 0.0.2, 0.0.3, 0.0.4, ..., 1.0.0, where 0.0.2 is the first version, 1.0.0 is the latest, and each version replaces the previous one, the error message is misleading if the replaces chain is broken somewhere in the middle: it tells me that the latest version specifies a replacement that doesn't exist, even though the latest one is fine and the broken replaces is much earlier in the chain.

time="2020-04-09T23:53:49Z" level=fatal msg="permissive mode disabled" error="error loading manifests from directory: error loading package into db: my-operator.v1.0.0 specifies replacement that couldn't be found"

It should indicate the one that is actually missing its replacement; for example, if 0.0.3 references 0.0.1 but that doesn't exist, the message should say that 0.0.3 specifies a replacement that couldn't be found.

Here:

errs = append(errs, fmt.Errorf("%s specifies replacement that couldn't be found", c.CurrentCSVName))

Maybe CurrentCSVName is the one referenced in the package file, which in my case is the latest? I think that should be changed to print the first broken link in the replaces chain.
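
A rough sketch of the kind of check I have in mind (a hypothetical helper, not the project's actual code):

package main

import "fmt"

// findBrokenReplaces reports every CSV whose "replaces" target is not present
// in the set of loaded CSVs, instead of blaming only the channel head.
func findBrokenReplaces(replaces map[string]string) []string {
	var broken []string
	for csv, target := range replaces {
		if target == "" {
			continue // the first version in the chain replaces nothing
		}
		if _, ok := replaces[target]; !ok {
			broken = append(broken, fmt.Sprintf("%s specifies replacement %q that couldn't be found", csv, target))
		}
	}
	return broken
}

func main() {
	chain := map[string]string{
		"my-operator.v0.0.2": "",
		"my-operator.v0.0.3": "my-operator.v0.0.1", // broken: v0.0.1 was never published
		"my-operator.v1.0.0": "my-operator.v0.0.3",
	}
	for _, msg := range findBrokenReplaces(chain) {
		fmt.Println(msg)
	}
}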

appregistry always reports a warning when downloading packages

I created three CSVs for one application and pushed them to quay.io. After the OperatorSource was created, appregistry always reports the warning below, but I can still create the Subscription successfully, and the 0.0.3 CSV and its operators are created successfully.

time="2019-12-05T09:04:35Z" level=info msg=directory dir=downloaded file=0.0.1 load=package
time="2019-12-05T09:04:35Z" level=info msg=directory dir=downloaded file=0.0.2 load=package
time="2019-12-05T09:04:35Z" level=info msg=directory dir=downloaded file=0.0.3 load=package
time="2019-12-05T09:04:35Z" level=info msg="serving registry" port=50051 type=appregistry
time="2019-12-05T09:04:35Z" level=warning msg="strict mode disabled" error="error loading manifests from appregistry: error loading operator manifests: error loading package into db: memcached-operator.v0.0.3 specifies replacement that couldn't be found" port=50051 type=appregistry

The attachment contains my CSV bundles.
memcache.tar.gz

CSV uniqueness error is unclear

The error returned when loading bundles that contain duplicate CSVs is unclear:

error adding operator bundle : UNIQUE constraint failed: operatorbundle.csv

The error should identify the file path, resource type, and possibly the package of the duplicate resource. Adding this info will help users identify and resolve duplicate manifest issues more quickly.

upstream-registry-builder docker image size very large

I was running opm index add for a couple of operator bundles and noticed that pushing the image to quay was taking a long time. The docker image for upstream-registry-builder seems very large; see below. Is that expected? Can it be reduced?

Using opm version 1.6.1 (it would also be nice to have an opm version command).

$ docker images | grep upstream-registry-builder
quay.io/operator-framework/upstream-registry-builder                                                              latest              3daa00e6293c        2 days ago          1.19GB
$ docker history 3daa00e6293c
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
3daa00e6293c        2 days ago          /bin/sh -c cp /build/bin/opm /bin/opm &&    …   170MB               
<missing>           2 days ago          /bin/sh -c GRPC_HEALTH_PROBE_VERSION=v0.2.1 …   9.16MB              
<missing>           2 days ago          /bin/sh -c make static                          339MB               
<missing>           2 days ago          /bin/sh -c #(nop) COPY file:d75227caa89fd08c…   1.1kB               
<missing>           2 days ago          /bin/sh -c #(nop) COPY file:0bc9fbe4f1539ad5…   1.27kB              
<missing>           2 days ago          /bin/sh -c #(nop) COPY dir:b4a193c23165519e0…   5.06MB              
<missing>           2 days ago          /bin/sh -c #(nop) COPY dir:7a9ddf40b76c90e4f…   47.6kB              
<missing>           2 days ago          /bin/sh -c #(nop) COPY dir:9129d08e7a54b3cec…   45.5MB              
<missing>           3 weeks ago         /bin/sh -c #(nop) WORKDIR /build                0B                  
<missing>           3 weeks ago         /bin/sh -c apk update && apk add sqlite buil…   264MB               
<missing>           4 weeks ago         /bin/sh -c #(nop) WORKDIR /go                   0B                  
<missing>           4 weeks ago         /bin/sh -c mkdir -p "$GOPATH/src" "$GOPATH/b…   0B                  
<missing>           4 weeks ago         /bin/sh -c #(nop)  ENV PATH=/go/bin:/usr/loc…   0B                  
<missing>           4 weeks ago         /bin/sh -c #(nop)  ENV GOPATH=/go               0B                  
<missing>           4 weeks ago         /bin/sh -c set -eux;  apk add --no-cache --v…   353MB               
<missing>           4 weeks ago         /bin/sh -c #(nop)  ENV GOLANG_VERSION=1.13.8    0B                  
<missing>           8 weeks ago         /bin/sh -c [ ! -e /etc/nsswitch.conf ] && ec…   17B                 
<missing>           8 weeks ago         /bin/sh -c apk add --no-cache   ca-certifica…   553kB               
<missing>           8 weeks ago         /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B                  
<missing>           8 weeks ago         /bin/sh -c #(nop) ADD file:e69d441d729412d24…   5.59MB     

The generated Dockerfile from opm index add --generate seems to have changed from what is in the documentation here: https://github.com/operator-framework/operator-registry/blob/master/docs/design/opm-tooling.md#add-1

FROM quay.io/operator-framework/upstream-registry-builder
LABEL operators.operatorframework.io.index.database.v1=/database/index.db
ADD database /database
EXPOSE 50051
ENTRYPOINT ["/bin/opm"]
CMD ["registry", "serve", "--database", "/database/index.db"]

Vendor directory is checked into github

Hi, I'm currently working on removing the unnecessary pinned versions from go.mod (see #201).

I have the code working on my local branch, but because the vendor directory is checked in, the size of my PR would be 3k lines.

I believe best practice is to keep the vendor directory out of the repo, since go.mod and go.sum provide the locked versions without having to maintain the vendor directory in the repo.

opm: `index add` requires image be present remotely

Building an index image from a bundle image that exists only locally results in an authorization error, because the image is not present remotely to pull. The container tool used does not matter.

To reproduce:

$ operator-sdk new memcached-operator
$ operator-sdk add api --api-version example.com/v1 --kind Memcached
$ operator-sdk generate csv --csv-version 0.0.1
$ opm alpha bundle build \
    --tag quay.io/example/memcached-operator:v0.1.0 \
    --directory ./deploy/olm-catalog/memcached-operator \
    --package memcached-operator \
    --channels stable \
    --default stable
$ opm index add \
    --bundles quay.io/example/memcached-operator:v0.1.0 \
    --tag quay.io/example/memcached-operator-index:v1.0.0 \
    --container-tool docker
INFO[0000] building the index                            bundles="[quay.io/example/memcached-operator:v0.1.0]"
INFO[0000] running docker pull                           img="quay.io/example/memcached-operator:v0.1.0"
ERRO[0001] Error response from daemon: unauthorized: access to the requested resource is not authorized  img="quay.io/example/memcached-operator:v0.1.0"
FATA[0001] permissive mode disabled                      bundles="[quay.io/example/memcached-operator:v0.1.0]" error="error loading bundle from image: error pulling image: Error response from daemon: unauthorized: access to the requested resource is not authorized\n. exit status 1"

/kind bug

Latest upstream-registry-builder contains a bad opm executable

Recently our operator builds that are generated by operator-sdk (image/bundle) and opm (index) have been failing due to the following error: panic: generate flag redefined: package

I was able to trace the problem down to the opm binary contained within the quay.io/operator-framework/upstream-registry-builder:latest image that was pushed a day ago. If you pull that image and run the opm command, you will see the failure. The failure doesn't occur with the 1.6.1 version of the builder.

https://quay.io/repository/operator-framework/upstream-registry-builder/manifest/sha256:9c3fabac7d8813d9c9df9988bb08d703846ed3ddb964244bbe9e156a73ba0a86

docker run -it quay.io/operator-framework/upstream-registry-builder:latest sh
/build # ls
Makefile  bin       cmd       go.mod    pkg       vendor
/build # cd bin
/build/bin # ls
appregistry-server  configmap-server    initializer         opm                 registry-server
/build/bin # ./opm
panic: generate flag redefined: package

goroutine 1 [running]:
github.com/spf13/pflag.(*FlagSet).AddFlag(0xc0000cb400, 0xc000122f00)
	/build/vendor/github.com/spf13/pflag/flag.go:848 +0x807
github.com/spf13/pflag.(*FlagSet).VarPF(0xc0000cb400, 0x1b05b40, 0x26e4000, 0x18aff2d, 0x7, 0x18aac4e, 0x1, 0x1920db1, 0x86, 0xc0000c9cc0)
	/build/vendor/github.com/spf13/pflag/flag.go:831 +0x10b
github.com/spf13/pflag.(*FlagSet).VarP(...)
	/build/vendor/github.com/spf13/pflag/flag.go:837
github.com/spf13/pflag.(*FlagSet).StringVarP(0xc0000cb400, 0x26e4000, 0x18aff2d, 0x7, 0x18aac4e, 0x1, 0x0, 0x0, 0x1920db1, 0x86)
	/build/vendor/github.com/spf13/pflag/string.go:42 +0xad
github.com/operator-framework/operator-registry/cmd/opm/alpha/bundle.newBundleGenerateCmd(0x18a8aa0)
	/build/cmd/opm/alpha/bundle/generate.go:36 +0x21f
github.com/operator-framework/operator-registry/cmd/opm/alpha/bundle.NewCmd(0x18a8aa0)
	/build/cmd/opm/alpha/bundle/cmd.go:14 +0x9f
github.com/operator-framework/operator-registry/cmd/opm/alpha.NewCmd(0xc000449400)
	/build/cmd/opm/alpha/cmd.go:15 +0x82
main.main()
	/build/cmd/opm/main.go:27 +0xbb
/build/bin # ls -l
total 168272
-rwxr-xr-x    1 root     root      38997816 Mar 27 15:59 appregistry-server
-rwxr-xr-x    1 root     root      38230080 Mar 27 15:59 configmap-server
-rwxr-xr-x    1 root     root      25554128 Mar 27 15:59 initializer
-rwxr-xr-x    1 root     root      43325392 Mar 27 15:59 opm
-rwxr-xr-x    1 root     root      26195824 Mar 27 15:59 registry-server
/build/bin # 

Latest version of operator-registry image complains about v1alpha1 API version in CRDs

After pulling the latest operator-registry image and pointing it at a previously-good directory, the log contains tons of messages of this form:

time="2019-05-22T13:23:35Z" level=info msg="could not decode contents of file manifests/cluster-federation/0.0.10/cluster-registry.crd.yaml into package: error unmarshaling JSON: v1alpha1 is not in dotted-tri fo
rmat" dir=manifests file=cluster-registry.crd.yaml load=bundles  

At a minimum, this is a confusing red herring. The docker build this was part of completed fine, adding to the confusion.
