3scale-operator's People

Contributors

adam-cattermole, akostadinov, austincunningham, bmozaffa, carlkyrillos, davidor, dependabot[bot], eguzki, guicassolato, hallelujah, jjkiely, jmprusi, kevfan, krishvoor, kryanbeane, macejmic, mayorova, miguelsorianod, mikz, mledoze, mstokluska, patryk-stefanski, poojachandak, raelga, roivaz, sergioifg94, slopezz, thomasmaas, valerymo, wengkee


3scale-operator's Issues

[system] Add PROMETHEUS_EXPORTER_PORT environment

We are adding more Prometheus exporter endpoints to the System component.

  • system-app
  • system-sidekiq

Each container of each pod should have a different PROMETHEUS_EXPORTER_PORT value:

Container          Port   Service
system-developer   9394   web server
system-master      9395   web server
system-provider    9396   web server
system-sidekiq     9394   sidekiq

The endpoints are accessible at http://{server}:{port}/metrics, where server is normally 0.0.0.0 locally.
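As a sketch, the per-container wiring could look like the following DeploymentConfig fragment (container names and ports from the table above; the exact manifest structure is an assumption, not the operator's actual output):

```yaml
# Illustrative fragment only: each container of system-app
# carries its own PROMETHEUS_EXPORTER_PORT value.
spec:
  template:
    spec:
      containers:
        - name: system-developer
          env:
            - name: PROMETHEUS_EXPORTER_PORT
              value: "9394"
        - name: system-master
          env:
            - name: PROMETHEUS_EXPORTER_PORT
              value: "9395"
        - name: system-provider
          env:
            - name: PROMETHEUS_EXPORTER_PORT
              value: "9396"
```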

Cannot Specify Route Certificates

My OpenShift install does not have a wildcard cert defined, and I need the ability to set up custom certificates for each of the routes. There is currently no way to do this, and the operator resets any changes I make to the routes.
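For reference, OpenShift Routes do support per-route certificates via spec.tls, so the operator would need a way to set or at least preserve fields like these (route and service names below are hypothetical, and the certificate material is elided):

```yaml
# Sketch of a Route carrying its own certificate instead of
# relying on the router's wildcard cert.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: system-provider-admin   # hypothetical route name
spec:
  host: admin.example.com
  to:
    kind: Service
    name: system-provider       # hypothetical service name
  tls:
    termination: edge
    certificate: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----
```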

The private base URL is used incorrectly for the public production base URL

Current behavior

When an API is created with the following definition:

$ oc get api -o yaml
apiVersion: v1
items:
- apiVersion: capabilities.3scale.net/v1alpha1
  kind: API
  metadata:
    creationTimestamp: 2019-03-15T14:55:54Z
    generation: 1
    labels:
      environment: prod
    name: echo-api
    namespace: 3scale-operator
    resourceVersion: "102115862"
    selfLink: /apis/capabilities.3scale.net/v1alpha1/namespaces/3scale-operator/apis/echo-api
    uid: 6ec8bdc3-4732-11e9-a489-b28be4f0161b
  spec:
    description: Echo API
    integrationMethod:
      apicastOnPrem:
        apiTestGetRequest: /
        authenticationSettings:
          credentials:
            apiKey:
              authParameterName: api-key
              credentialsLocation: headers
          errors:
            authenticationFailed:
              contentType: text/plain; charset=us-ascii
              responseBody: Authentication failed
              responseCode: 403
            authenticationMissing:
              contentType: text/plain; charset=us-ascii
              responseBody: Authentication Missing
              responseCode: 401
          hostHeader: ""
          secretToken: MySecretTokenBetweenApicastAndMyBackend_1237120312
        mappingRulesSelector:
          matchLabels:
            api: echo-api
        privateBaseURL: https://echo-api.3scale.net:443
        productionPublicBaseURL: http://apicast-production-3scale-operator.app.itix.fr
        stagingPublicBaseURL: http://apicast-staging-3scale-operator.app.itix.fr
    metricSelector:
      matchLabels:
        api: echo-api
    planSelector:
      matchLabels:
        api: echo-apo
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

The service created in 3scale has a wrong public production base URL:

[Screenshot, 2019-03-15: the 3scale admin portal showing the wrong public production base URL for the service]

Expected behavior

In the example above, the public production base URL should be set to http://apicast-production-3scale-operator.app.itix.fr.

DeploymentConfigs do not rollout

OpenShift 4.1.11
Kubernetes v1.13.4+df9cebc

Steps to reproduce:

  • Create project (3scale)
  • Install 3scale operator via OperatorHub
    • OpenShift > Catalog > OperatorHub > 3scale Operator > Install
  • Create APIManager instance
    • OpenShift > Catalog > Installed Operators > 3scale Operator > API Manager > Create APIManager

API Manager created with the following details:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimgr
  namespace: 3scale
spec:
  wildcardDomain: <>

No pods are ever actually created. Checking the status of the ImageStream referenced by one of the DeploymentConfigs shows:

status:
  dockerImageRepository: 'image-registry.openshift-image-registry.svc:5000/3scale/amp-apicast'
  tags:
    - tag: '2.6'
      items: null
      conditions:
        - type: ImportSuccess
          status: 'False'
          lastTransitionTime: '2019-08-26T16:18:14Z'
          reason: InternalError
          message: >-
            Internal error occurred: Get
            https://registry.redhat.io/v2/3scale-amp26/apicast-gateway/manifests/latest:
            unauthorized: Please login to the Red Hat Registry using your
            Customer Portal credentials. Further instructions can be found here:
            https://access.redhat.com/articles/3399531
          generation: 2

To resolve this, the sample-registry-credentials secret from the openshift namespace was copied into the 3scale namespace.

A potential solution would be for the operator to grab the secret during install and store it as a secret or ConfigMap in the namespace the operator runs in, for use when the operator's resources (the APIManager in this case) are instantiated.
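A sketch of the manual workaround, assuming the target namespace is 3scale (jq is used here to strip namespace-specific metadata before re-applying; the secret name is the one from this report):

```shell
# Copy the pull secret from the openshift namespace into the
# namespace where the APIManager is deployed (assumed: 3scale).
oc get secret sample-registry-credentials -n openshift -o json \
  | jq 'del(.metadata.namespace, .metadata.uid,
            .metadata.resourceVersion, .metadata.creationTimestamp)' \
  | oc apply -n 3scale -f -
```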

Document 2.5 to 2.6 upgrade path for templates

Write documentation to upgrade from 2.5 to 2.6 release in templates.

For the moment the following should be documented:

  • Migrate all DCs to use the new ServiceAccounts and migrate to be able to use the Red Hat registry #124

  • Update existing ImageStream images

  • How to move the DB DCs to ImageStreams, also including the modification of the "postgresql" ImageStream to use the tag formatting that we have in all the other ones #122 and #155

  • Maybe some PostgreSQL template addition related change? #129 --> No specific changes needed for upgrade

  • Remove WildcardRouter #135

  • Add/modify new Zync-related changes #145. The template changes related to this commit are the addition of the new elements and the removal of the default system routes (system-master, system-provider, system-developer, apicast-production, apicast-staging). The addition of the new elements should be documented in the upgrade procedure, but the removal of the old routes probably should not be, because the user might want to keep them

  • Redis Enterprise Support #114 related changes

  • Redis Sentinel Support #137 related changes

  • Apply Redis Sentinel environment variable fixes in backend-cron and backend-worker #133

Cannot configure custom storage class for backend-redis-storage , mysql-storage , system-redis-storage

TL;DR:
How can I change the storage class for all PVCs created by the APIManager without changing the default storage class? If this is possible, please add documentation for it; if it's not, please add the necessary code to make it configurable.



When creating the APIManager in step 2.6.1 ("Deploying the APIManager custom resource") of the 3scale installation guide [0], I add the following to the spec:

  system:
    fileStorage:
      persistentVolumeClaim:
        storageClassName: '<storage class name>'

For example, if the wildcardDomain is example.com and the storage class name as seen in oc get storageclass is aws-efs, create the APIManager with the following definition:

$ cat apimanager.yaml 
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  productVersion: "2.7"
  wildcardDomain: example.com
  resourceRequirementsEnabled: true
  system:
    fileStorage:
      persistentVolumeClaim:
        storageClassName: 'aws-efs'

However, this only updates the storage class for system-storage:

$ oc get pvc
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
backend-redis-storage   Pending                                                                        gp2            14s
mysql-storage           Pending                                                                        gp2            14s
system-redis-storage    Pending                                                                        gp2            14s
system-storage          Bound     pvc-d59027a0-7d8c-4427-b2fe-1800af13aed8   100Mi      RWX            aws-efs        14s

At the moment, I work around this by changing the default storage class:

Mark the current default storage class as non-default:

$ oc patch sc gp2 -p '{"metadata":{"annotations": {"storageclass.kubernetes.io/is-default-class":"false"}}}'

Set the desired storage class as the default class:

$ oc patch sc aws-efs -p '{"metadata":{"annotations": {"storageclass.kubernetes.io/is-default-class":"true"}}}'

Verify:

$ oc get sc
NAME                PROVISIONER             AGE
aws-efs (default)   openshift.org/aws-efs   24h
gp2                 kubernetes.io/aws-ebs   28h

Create ApiManager and verify:

$ cat apimanager.yaml 
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  productVersion: "2.7"
  wildcardDomain: example.net
  resourceRequirementsEnabled: true
$ oc create -f apimanager.yaml 
$ oc get pvc
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
backend-redis-storage   Bound    pvc-d8e75d6c-8814-4ac7-9ff8-604ca56116a8   1Gi        RWO            aws-efs        2s
mysql-storage           Bound    pvc-cc8d0f37-bf62-4897-8bf1-95ef77bb1e0c   1Gi        RWO            aws-efs        2s
system-redis-storage    Bound    pvc-bc67130c-4213-4214-bdba-fbd143b03154   1Gi        RWO            aws-efs        2s
system-storage          Bound    pvc-fc4a68e9-2fe8-4c02-b848-2883c4d704e6   100Mi      RWX            aws-efs        2s

[0] https://access.redhat.com/documentation/en-us/red_hat_3scale_api_management/2.7/html/installing_3scale/install-threescale-on-openshift-guide#deploying-apimanager-custom-resource


Thanks,

Andreas

Installation: Functionality definition, characteristics and scenarios

The installation part of the 3scale-operator is meant to deploy a functional AMP platform.

The operator should also be able to "maintain" the platform configuration, respecting the contents defined in the operator, and will provide some configurability options to users.

The initial idea I have is to have a single Controller that will manage the AMP platform. Doing this has the consequence that all changes to the AMP platform will go inside the same reconciliation loop.

If the installation operator should also be able to deploy apicast standalone, I would create a separate CRD.

CRDs

ThreeScale: Represents a 3scale AMP deployment
ApicastStandalone (in case we want operator to manage this): Represents a 3scale Apicast Standalone deployment

Each CRD will deploy all the elements that form it.

A "standard" ThreeScale AMP deployment will deploy what's currently defined
in the AMP template

Requirements

  • The operator should be able to deploy a standard AMP platform with sane defaults and allow some customization scenarios
  • The initial desired customization scenarios, once the standard AMP platform is working, are to provide the same functionality as the evaluation, HA, and S3 AMP templates.
  • Ideally it should be able to react to changes in the CRD/s definition/s with the minimum possible downtime or no downtime (if possible), making those changes effective in the deployed resources.

Possible desired functionalities

  • Ability to have an HA scenario where application pods have increased replica redundancy and critical databases are externalized
  • Ability to have an Evaluation scenario where resource limits and requests are removed
  • There will be an option to choose the main system database software; the
    allowed values are "mysql" and "oracle", with "mysql" as the default
  • There will be an option to choose the shared storage class for system,
    automatically selected by default
  • There will be an option to select whether the images should be built from
    source code or obtained as Docker images, with Docker images as the default
  • There will be an option to select whether we want resource limits or not
    (evaluation version)
  • There will be an option to select whether we want the critical databases
    externalized or not (HA version)

Possible AMP CRD representations

Here are some rough ideas on what an AMP CRD might look like:

Specifying the scenario names as a key of the 'spec' of the CRD. Each scenario will have its own options. There will also be "general" options:

apiVersion: amp.3scale.net/v1alpha1
kind: ThreeScale
metadata:
  name: ApiManagementPlatform
spec:
  ampVersion: # this would control the AMP images in case Docker Images are used (also source code tags???) 
  includeResourceLimits:
  ha:
    externalDatabaseConnectionDefinitions:
      system:
      apicast:
      backend:
      zync:
  evaluation:
    <scenarioOptions>
  <otherPossibleScenarios>

Another possible scenario is defining keys for each "subsystem" on the AMP CRD:

apiVersion: amp.3scale.net/v1alpha1
kind: ThreeScale
metadata:
  name: ApiManagementPlatform
spec:
  ampVersion: # this would control the AMP images in case Docker Images are used (also source code tags???) 
  includeResourceLimits:
  apicast:
  backend:
  system:
    sharedStorageClass:
    mainDatabaseType:
    applicationsPodReplicas: # does not modify database pods
    wildcardDomain:
    wildcardPolicy:
  zync:
  imageOrigin:
    docker:
      <options>
    sourceCode:
      <options>
    allowInsecureTransport:

By looking at the previous ways of organizing the CRDs I see several levels of configurability that can exist (some of them might not exist depending on how we decide to organize the CRDs):

  • GlobalModifiers
  • ScenarioModifiers
  • SubsystemModifiers
  • ComponentModifiers

Current open questions

  • How are we going to manage the Operator at the versioning level, and the 3scale releases with it?
  • What's the desired way to structure the CRD so that it is not too complex but flexible enough to add new future functionality?
  • Will apicast standalone have a separate operator, different from the "installation" operator? Notice that here I have only talked about the installation operator
  • How are we going to manage the customizability of the images used in the infrastructure? The options are adding customization options in the CRDs or handling it at the versioning level
  • At which level do we want to apply modifications? Global level, subsystem level,
    component level? We have to be careful with this, because too much configurability
    would be hard to maintain, whereas too little configurability will not be prepared
    for new needs
  • When specifying sensitive data, parameters can no longer be used because we don't
    have templates; how would we handle the specification of secrets?
    • Directly in the CRD definition
    • By referencing a previously existing Secret that has to be created permanently
      and before deploying the Operator
  • Is it worth exploring the option of having a CRD for each subsystem in AMP instead of a single CRD (a 'system' CRD, a 'backend' CRD, etc.)? This would make them independent of each other but would require some coordination
  • How are we going to manage boot dependencies between elements? Will we control it through the Operator via fields in the status map?
  • How are we going to manage changes that imply a redeployment of the resource? There might be service loss or data loss depending on the change.
  • Are there changes that might not make sense with the Operator? For example, what should be done if a standard deployment already deployed with the operator is changed to an HA deployment? In the current context this means that the databases are externalized, so the data would be lost (unless migrated by the user, who should make sure it is consistent, up to date, etc.)
  • Are we going to convert all the current template Parameters to CRD fields?
  • How are we going to manage deletions/addition of elements when scenarios/configuration change?

First steps

  • Migrate, and try to adapt, the current code we have to autogenerate OpenShift templates into this repository (or a standalone repo?) and make the needed refactors to reuse it for the installation operator. We need to keep the ability to generate templates from the data model, because the operator and the templates will coexist for a time.
  • Deploy a standard AMP platform using the installation operator. As a first step it will not maintain changes. It will just perform an initial deployment if the elements do not previously exist.

Port 443 is automatically appended to the Admin Portal URL, even when there is already one

Current behavior

When I create a secret as explained in the documentation:

oc create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: prod-tenant
type: Opaque
stringData:
  adminURL: https://nmasse-redhat-admin.3scale.net:443
  token: "[REDACTED]"
EOF

and then create a Binding and an API, the operator fails with:

{"level":"error","ts":1552661970.1553664,"logger":"controller_binding","msg":"Error getting current state from binding status","Request.Namespace":"3scale-operator","Request.Name":"prod-binding","error":"Get https://nmasse-redhat-admin.3scale.net:443:443/admin/api/services.xml:  invalid URL port \"443:443\"","stacktrace":"github.com/3scale/3scale-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/3scale/3scale-operator/pkg/controller/binding.ReconcileBindingFunc\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/pkg/controller/binding/binding_controller.go:168\ngithub.com/3scale/3scale-operator/pkg/controller/binding.(*ReconcileBinding).Reconcile\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/pkg/controller/binding/binding_controller.go:129\ngithub.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215\ngithub.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

{"level":"error","ts":1552661970.1554487,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"binding-controller","request":"3scale-operator/prod-binding","error":"Get https://nmasse-redhat-admin.3scale.net:443:443/admin/api/services.xml:  invalid URL port \"443:443\"","stacktrace":"github.com/3scale/3scale-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\ngithub.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

Expected behavior

The API is provisioned in Porta and the port is taken from the adminURL field of the secret.
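In other words, the default port should be appended only when the adminURL does not already carry one. A minimal illustrative sketch in Python (the operator itself is written in Go; this only shows the intended logic, not the actual fix):

```python
from urllib.parse import urlparse

def admin_url_with_port(admin_url: str) -> str:
    """Append the scheme's default port only when the URL
    does not already carry an explicit one."""
    parsed = urlparse(admin_url)
    if parsed.port is not None:
        # Port already present: appending again would produce
        # the invalid "443:443" seen in the error above.
        return admin_url
    default = 443 if parsed.scheme == "https" else 80
    return f"{parsed.scheme}://{parsed.hostname}:{default}"
```

With this logic, both https://nmasse-redhat-admin.3scale.net and https://nmasse-redhat-admin.3scale.net:443 normalize to the same URL.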

Consider documenting CRDs for kubectl explain

I see you've already got docs that explain all fields of your CRDs in the docs folder here:
https://github.com/3scale/3scale-operator/tree/master/doc

Did you know that you can add these field explanations as code comments in your types? The CRD generation provided through the operator-sdk will then add those descriptions to the CRD itself, which means users would be able to use kubectl explain to get the docs on the CRDs.

e.g. from another operator (with some but not all fields documented...):

$ kubectl explain unifiedpushserver.spec
KIND:     UnifiedPushServer
VERSION:  push.aerogear.org/v1alpha1

RESOURCE: spec <Object>

DESCRIPTION:

FIELDS:
   backups	<[]Object>
     Backups is an array of configs that will be used to create CronJob resource
     instances

   database	<Object>

   externalDB	<boolean>
     ExternalDB can be set to true to use details from Database and connect to
     external db

   oAuthResourceRequirements	<map[string]>

   postgresPVCSize	<string>
     PVC size for Postgres service

   postgresResourceRequirements	<map[string]>

   unifiedPushResourceRequirements	<map[string]>

   useMessageBroker	<boolean>
     UseMessageBroker can be set to true to use managed queues, if you are using
     enmasse. Defaults to false.

If you think this would be useful here, I'm happy to create a PR for it if you want! 🙂
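For example, the mechanism looks roughly like this in the Go types (field names borrowed from the APIManager examples elsewhere on this page; the exact type definitions are an assumption):

```go
// APIManagerSpec defines the desired state of APIManager.
// Each field comment below is picked up by the operator-sdk /
// controller-gen CRD generation and becomes the description
// shown by `kubectl explain apimanager.spec`.
type APIManagerSpec struct {
	// WildcardDomain is the root domain from which the external
	// hostnames of all 3scale components are derived.
	WildcardDomain string `json:"wildcardDomain"`

	// ResourceRequirementsEnabled, when set to false, removes the
	// resource requests and limits (evaluation deployments).
	// +optional
	ResourceRequirementsEnabled *bool `json:"resourceRequirementsEnabled,omitempty"`
}
```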

Capabilities: CRDS and relationships

Proposal

Let's try to define CRDs and how to handle relationships between them.

CRD Model

Let's use those primary domain objects as our base:

  • Tenant: Represents a tenant account. After this object is processed successfully, a new credentials secret is created in the same namespace, containing the accessToken and admin URL.
  • Credentials: A secret containing the accessToken and admin URL needed to access a 3scale account.
  • Binding: References a Credentials secret and, using a selector, points to API objects. Those APIs will use the credentials secret.
  • API: An API created inside a tenant account
  • Plan: Plan refers to Limits and has some configuration around cost
  • Limit: Defines a limit for a metric given a max value and a period
  • Metrics: A single metric, name and a unit (hit, bytes, request...)
  • IntegrationMethod: How the API is enforced, via Apicast, CodePlugin...
  • MappingRules: Mapping rules tell the integration method how to increment metrics based on the request path

This diagram explains the hierarchy between them, and 1-to-1 or 1-to-many relations :

                               ┌───────────────┐          ┌ ─ ─ ─ ─ ─ ─ ─ ─                     
                               │               │                           │                    
                               │               │ Creates  │  Credentials                        
                               │    Tenant     │─ ─ ─ ─ ─▶      Secret     │                    
                               │               │          │                                     
                               │               │           ─ ─ ─ ─ ─ ─ ─ ─ ┘                    
                               └───────────────┘                   ▲                            
                                                                   │                            
                                                                   │                            
                                                               ObjectRef                        
                                                                   │                            
                                                           ┌───────────────┐                    
                                                           │               │                    
                                                           │    Binding    │                    
                                                           │               │                    
                                                           └───────────────┘                    
                                                                Selector                        
                                                                   │ └───────────────────┐      
                                                                   ▼                     ▼      
                       ┌─────────────────┐                  ┌────────────┐        ┌────────────┐
                       │                 │                  │            │        │            │
       ┌────Selector───│IntegrationMethod│◀───ObjectRef─────│    API     │        │    API     │
       │               │                 │                  │            │        │            │
       │               └─────────────────┘                  └────────────┘        └────────────┘
       │                        │                               │  │ │                   │      
       │           ┌────────────┘                               Selector             Selector   
       │           │   Selector                ┌────────────────┘  │ └─────────────────┐ │      
       │           ▼                           │                   │                   │ │      
       │   ┌──────────────┐                    ▼                   ▼                   ▼ ▼      
       │   │ ┌────────────┴─┐           ┌─────────────┐     ┌─────────────┐     ┌─────────────┐ 
       │   │ │ ┌────────────┴─┐         │ ┌───────────┴─┐   │             │     │             │ 
       │   │ │ │              ├────────▶│ │             │   │    Plan     │     │    Plan     │ 
       │   └─┤ │ MappingRules │─────────┼▶│   Metric    │   │             │     │             │ 
       │     └─┤              │         └─┤             │   └─────────────┘     └─────────────┘ 
       │       └──────────────┘ObjectRef  └─────────────┘          │                   │        
       │               ▲                       ▲ ▲             Selector             Selector    
       │               │                         │                 │                   │        
       │                                       │ │                 ▼                   ▼        
       │               │                         │        ┌─────────────┐       ┌─────────────┐ 
       ▼                                       │ │        │ ┌───────────┴─┐     │             │ 
┌─────────────┐        │                         │        │ │ ┌───────────┴─┐   │             │ 
│ ┌───────────┴─┐       ─ ─ ─ ─                │ │        │ │ │             │   │             │ 
│ │ ┌───────────┴─┐            │                 │        └─┤ │    Limit    │   │             │ 
│ │ │             │                            │ │          └─┤             │   │     ...     │ 
└─┤ │  Policies   │            │                 │            └─────────────┘   │             │ 
  └─┤             │                            │ │                   │          │             │ 
    └─────────────┘         Creates              │                   │          │             │ 
                                               │ └────ObjectRef──────┘          │             │ 
                               │                                                └─────────────┘ 
                     ┌───────────────────┐     │                                                
                     │                   │  Creates                                             
                     │                   │     │                                                
                     │  OpenAPIDocument  │─ ─ ─                                                 
                     │                   │                                                      
                     │                   │                                                      
                     └───────────────────┘                                                                                                                                               

So, we need to find a way to correctly reference one CRD from another, make it easy for the user to create overrides for multiple environments, and add some discoverability to new objects.

Proposal 1: Hierarchy model, with selectors and ObjectReference

Let's try to respect the initial hierarchy model, top-down, and use available Kubernetes types such as Selectors and ObjectReferences.

Using IntegrationMethod, bindings, metrics, limits, plans and APIs:

apiVersion: api.3scale.net/v1alpha1
kind: BINDING
metadata:
  name: mySaaSAccount
spec:
  SecretRef:
    name: MySecret
  APISelector:
    matchLabels:
       environment: production
----
apiVersion: api.3scale.net/v1alpha1
kind: API
metadata:
  name: myapi
  labels:
     environment: production
spec:
  description: "3scale service for my awesome api"
  integrationMethodRef: apicastIntegration
  PlanSelector:
    matchLabel: 
      environment: production
      api: myapi
  metricSelector:
    matchLabel: 
      api: myapi
  # This object requires more fields for defining the integration method and other settings.
----
apiVersion: api.3scale.net/v1alpha1
kind: INTEGRATIONMETHOD
metadata:
  name: apicastIntegration
  labels:
    api: myapi
spec:
  apicastOnPrem:
    privateBaseURL: https://echo-api.3scale.net:443
    stagingPublicBaseURL: https://api.testing.com
    productionPublicBaseURL: https://api.testing.com
    apiTestGetRequest: /
    authenticationSettings:
      hostHeader: testing.com
      secretToken: Shared_secret_sent_from_proxy_to_API_backend_9603f637ca51ccfe
      credentials:
        apiKey:
          authParameterName: user_key
          credentialsLocation: header / query
      errors:
        authenticationFailed:
          responseCode: 403
          contentType: "text/plain; charset=us-ascii"
          responseBody: "Authentication failed"
        authenticationMissing:
          responseCode: 404
          contentType: "text/plain; charset=us-ascii"
          responseBody: "No Mapping Rule matched"          
    mappingRulesSelector:
      matchLabels:
        api: myapi
#    policiesSelector:
#      matchLabels:
#       api: myapi
----
apiVersion: api.3scale.net/v1alpha1
kind: MAPPINGRULE
metadata:
  name: list_apps
  labels:
    api: myapi
spec:
  path: /apps/
  method: GET
  increment: 1
  metricRef:
    name: list_apps
----
apiVersion: api.3scale.net/v1alpha1
kind: PLAN
metadata:
  name: basic
  labels:
    environment: production
    api: myapi
spec:
  trialPeriod: 10
  approvalRequired: true
  costs:
    setupFee: 0
    costMonth: 10
  limitSelector:
    matchLabels:
      plan: basic
----
apiVersion: api.3scale.net/v1alpha1
kind: METRIC
metadata:
  name: list_apps
  labels:
    api: myapi
spec:
  description: "Example method" 
  unit: hit
  incrementHits: true
---
apiVersion: api.3scale.net/v1alpha1
kind: METRIC
metadata:
  name: create_app
  labels:
    api: myapi
spec:
  unit: hit 
  description: "Example metric" 
---
apiVersion: api.3scale.net/v1alpha1
kind: LIMIT
metadata:
  name: list_apps_eternity_100
  labels:
    plan: basic
spec:
  metric:
    name: list_apps
  description: "Limit for list_apps in basic plan"
  period: eternity
  maxValue: 100
---
apiVersion: api.3scale.net/v1alpha1
kind: LIMIT
metadata:
  name: list_apps_month_1
  labels:
    api: myapi
    plan: basic
spec:
  metric:
    name: list_apps
  description: "Limit for list_apps in basic plan"
  period: month
  maxValue: 1

The important bits are:

How limits relate to metrics, a 1-to-1 relationship:

  metric:
    name: list_apps

This uses the ObjectReference type from Kubernetes.

How the application plan includes all the desired limits:

  limitSelector:
    matchLabels:
      plan: basic

This one, on the other hand, uses a label selector: it will include as limits all the objects that match the desired labels. In this case, the plan label with a value of basic.

We are doing the same to relate applicationPlans and Metrics to a Service:

  applicationPlanSelector:
    matchLabel: 
      environment: production
      api: myapi
  metricSelector:
    matchLabel: 
      api: myapi

Controllers:

Deciding on how many controllers we want to use will change how the relationships between objects are handled...

One Controller for each CRD

One controller for each CRD has a small problem: each controller only sees a piece of the global configuration and does not know who references the controlled objects. One way to avoid that is to add proper OwnerReferences on each controller run, for example:

  • Tenant Controller: Tenant Object will add itself as the Owner of the Service object based on a tenant selector.
  • API Controller: API will add itself as the owner of metrics, and application plans based on proper selectors.
  • Metric controller: Will trigger and create the metric using the OwnerReference field, to find the required information (Tenant and Service).
  • All the other CRDs ...

This way, each controller can trace up to the tenant level and get the required info.

Two Controllers with an internal "consolidated" CRD.

Trying to simplify relationships between CRDs, and overall complexity, using only two controllers:

  • Consolidating controller: reads the current state from those multiple CRDs and persists all of them into an internal state CRD, accessible only by the operator, that integrates all the small CRDs into a single one. (We can use OwnerReferences so that the Kubernetes tooling can show orphaned objects, or even GC them.)

  • Reconciliation controller: reads the internal consolidated CRD and reconciles directly with the 3scale API.

This model allows us to control and reduce the number of times we reconcile with the 3scale API.

A consolidated object could look like:

apiVersion: api.internal.3scale.net/v1alpha1
kind: THREESCALECONFIG
metadata:
  name: threescaleConfig
spec:
    tenants:
    - name: myTenant
    APIs:
    - name: myAPI
      description: Example API
      credentials:
      - accessToken: AAAAAAAAAAAAAAAAAAAAAAAA
        adminURL: https://my-admin.3scale.net:443/
      integration:
          apicastOnPrem:
            privateBaseURL: https://echo-api.3scale.net:443
            stagingPublicBaseURL: https://api.testing.com
            productionPublicBaseURL: https://api.testing.com
            apiTestGetRequest: /
            authenticationSettings:
              hostHeader: testing.com
              secretToken: Shared_secret_sent_from_proxy_to_API_backend_9603f637ca51ccfe
              credentials:
                  apiKey:
                    authParameterName: user_key
                    credentialsLocation: header / query
            errors:
              authenticationFailed:
                responseCode: 403
                contentType: "text/plain; charset=us-ascii"
                responseBody: "Authentication failed"
              authenticationMissing:
                responseCode: 404
                contentType: "text/plain; charset=us-ascii"
                responseBody: "No Mapping Rule matched"
            mappingRules:
              - name: list_apps
                path: /apps/
                method: GET
                increment: 1
                metric: list_apps
      metrics:
      - name: list_apps
        unit: hit
        description: list apps metrics
      - name: get_apps
        unit: hit
        description: get apps
      Plans:
      - name: basic
        trialPeriodDays: 10
        approvalRequired: true
        costs:
          setupFee: 0
          costMonth: 10
        limits:
          - name: list_apps_month_1
            description: "Limit for list_apps in basic plan"
            period: month
            maxValue: 1
            metric: list_apps
          - name: get_apps_day
            description: "Limit for get_apps in basic plan"
            period: day
            maxValue: 100
            metric: get_apps

Examples:

OpenAPIDocument

apiVersion: api.3scale.net/v1alpha1
kind: OPENAPIDOCUMENT
metadata:
  name: myOpenAPISpec
spec:
  metricLabels:    # Labels to add to the generated Metric objects
    api: myapi
  mappingRuleLabels:  # Labels to add to the generated MappingRule objects
    api: myapi
  Definition: |
    openapi: "3.0.0"
    info:
      title: Simple API overview
      version: 2.0.0
    paths:
      /test:
        get:
          operationId: metricsA
          summary: List Objects
        post:
          operationId: metricsB
          summary: Create Object

IntegrationMethod

The integration Method object is "complex", here you can see 3 full examples of the possible 3scale integrations, "Apicast On Prem", "Apicast Hosted" and "Code Plugin":

apiVersion: api.3scale.net/v1alpha1
kind: INTEGRATIONMETHOD
metadata:
  name: apicastIntegration
  labels:
    api: myapi
spec:
  apicastOnPrem:
    privateBaseURL: https://echo-api.3scale.net:443
    stagingPublicBaseURL: https://api.testing.com
    productionPublicBaseURL: https://api.testing.com
    apiTestGetRequest: /
    authenticationSettings:
      hostHeader: testing.com
      secretToken: Shared_secret_sent_from_proxy_to_API_backend_9603f637ca51ccfe
      credentials:
        apiKey:
          authParameterName: user_key
          credentialsLocation: header / query
        appID:
          appIDParameterName: "app_id"
          appKeyParameterName: "app_key"
          credentialsLocation: header / query
        openIDConnector:
          issuer: http://sso.example.com/auth/realms/gateway
          credentialsLocation: header / query
      errors:
        authenticationFailed:
          responseCode: 403
          contentType: "text/plain; charset=us-ascii"
          responseBody: "Authentication failed"
        authenticationMissing:
          responseCode: 404
          contentType: "text/plain; charset=us-ascii"
          responseBody: "No Mapping Rule matched"          
    mappingRulesSelector:
      matchLabels:
        api: myapi
    policiesSelector:
      matchLabels:
        api: myapi
---
apiVersion: api.3scale.net/v1alpha1
kind: INTEGRATIONMETHOD
metadata:
  name: apicastIntegration
  labels:
    api: myapi
spec:
  apicastHosted:
    privateBaseURL: https://echo-api.3scale.net:443
    apiTestGetRequest: /
    authenticationSettings:
      hostHeader: testing.com
      secretToken: Shared_secret_sent_from_proxy_to_API_backend_9603f637ca51ccfe
      credentials:
        apiKey:
          authParameterName: user_key
          credentialsLocation: header / query
        appID:
          appIDParameterName: "app_id"
          appKeyParameterName: "app_key"
          credentialsLocation: header / query
      errors:
        authenticationFailed:
          responseCode: 403
          contentType: "text/plain; charset=us-ascii"
          responseBody: "Authentication failed"
        authenticationMissing:
          responseCode: 404
          contentType: "text/plain; charset=us-ascii"
          responseBody: "No Mapping Rule matched"          
    mappingRulesSelector:
      matchLabels:
        api: myapi
    policiesSelector:
      matchLabels:
        api: myapi
---
apiVersion: api.3scale.net/v1alpha1
kind: INTEGRATIONMETHOD
metadata:
  name: codePlugin
  labels:
    api: myapi
spec:
  codePlugin:
    authenticationSettings:
      credentials:
        apiKey:
        appID:
        openIDConnector:

Changelog:

23/11/18 - Updated to use ObjectReferences instead of a custom field for limits -> metrics.
         - Added a new object, Service, and how it should relate to other components.
         - Updated the diagram to fix a mistake: metrics are created at the service level.
26/11/18 - Added new proposals around controllers.
27/11/18 - Added new objects to represent credentials, `Binding` and `Credentials`; changed the relationship with the `tenant` object. Now this design can work with a 3scale SaaS account.
28/11/18 - Added new objects: Mapping Rules and Integration Method.
29/11/18 - Added a new CRD object, "OpenAPIDocument".
03/12/18 - Consolidated object: credentials are now part of the API.

PVC too small in eval template for cloud deployment

The RWX PersistentVolumeClaim in the eval template is currently set to 100Mi.

While the right choice for local deployment, it's a bit of a problem when trying to deploy the template on a cloud provider where the minimum size may be 1 GiB (e.g. see AWS EBS).

Do you think we could change it to 1 GiB by default?

Improve 3Scale Documentation: OperatorHub

When you install the operator from OpenShift, it does not clearly state the prerequisites, and the install may happen before the prerequisites are completed:

Prerequisites

OpenShift Container Platform 4.1
Deploying 3scale using the operator first requires that you follow the steps in quickstart guide about Install the 3scale operator
Some deployment configuration options require the OpenShift infrastructure to provide availability for the following persistent volumes (PVs):
    3 RWO (ReadWriteOnce) persistent volumes
    1 RWX (ReadWriteMany) persistent volume

The RWX persistent volume must be configured to be group writable. For a list of persistent volume types that support the required access modes, see the OpenShift documentation

You can see from this screenshot that this crucial information is not visible:


Capabilities: Initial functionality scenarios

The capabilities part of the 3scale-operator is meant to configure an already existing deployment of 3scale.

The main focus of this part of the operator is to better integrate 3scale in the CI/CD flows, and allow the user to configure and operate the 3scale account in a more cloud-native way.

Requirements

  • Using this operator shouldn't require any tool other than the OpenShift CLI and some editor

Scenarios

Scenario 1

I'm a hobbyist, and I have an API in a production environment that I want to deploy using continuous integration to OpenShift Online using 3scale SaaS.
My Git repository includes the source code and the API management definition for the production environment.

Option Jenkins

If Jenkins is available, we can provide a pipeline that builds the app using a BuildConfig and updates the API management objects in k8s. The developer can customise the order of those steps.

Option BuildConfig

A BuildConfig can take API management definitions from the source code and push them to k8s, or push to an ImageStream that deploys a DeploymentConfig, which in turn pushes those objects to k8s.

Scenario 2

I'm a development team that has several APIs in several GitHub repositories, deployed in several OpenShift projects, using one 3scale instance deployed in its own project.

Each of those projects is going to need a Service in 3scale, as they are publicly exposed APIs.

Scenario 3

Same as scenario 2, but I want to expose them as one facade service. Each API would configure its limits, but the shared configuration would be managed in the 3scale project.

Scenario 4

I'm a development team that has both production and staging environment deployed in one OpenShift project, because of some shared resource. I'd like to have API management for both environments in one project using two different tenants.

Scenario 5

I'm driving my application through several stages (development, staging, production) and want to have API management configuration versioned with it.

The workflow looks like:

  • Open a PR
    • Deploy a "review branch" to a new, isolated OpenShift project
    • Configure API management for that deployed project
  • Merge PR
    • Deploy the app master branch to the staging environment project
    • Configure API management for that deployed project
  • Deploy to production
    • Deploy the last staging revision to the production environment project
    • Configure API management for that deployed project
  • Rollback production deployment
    • Rollback the last deployment to production
    • Rollback the last API management change

The challenge is two-fold:

  • Keep the Gateway configuration up-to-date with changes to the backend between environments
  • Rollback API management configuration (for example removing plan, etc.)

Scenario 6

Like scenario 5, but with different objects in some stages:

  • Development environment has different limits and plans.

  • Staging environment has signup disabled.

Scenario 7

I have a complex plan structure that has many common limits. I'd like to share those limits between application plans.

Option override

Plans A, B and C share a common limit of 100 hits/day, but Plan C has a limit of 1000 a day.

Scenario 8

I'm taking a design-first approach and want to use an OAI spec to drive the metrics definition.
By creating the OAI spec in OpenShift, I expect not to have to create Metrics by hand, only Limits.

Scenario 9

I'm deploying my app to staging and production environments on different OpenShift clusters, using different 3scale tenants and the same upstream host.
I'd like those two Services to be managed by one custom resource.

Option

The staging and production environments differ only in their upstream; otherwise all settings are the same, while living in different tenants and clusters.

Tenant Controller can allow user to create tenants without proper permissions

The tenant object allows the user to control the namespace of the masterCredentialsRef:

    masterCredentialsRef:
      name: system-seed
      namespace: operator-test

By doing so, a user without access to the OpenShift project where 3scale is deployed can create a new tenant.

Ideally, the masterCredentialsRef namespace should default to the namespace where the Tenant object was created, so that only users with access to the namespace holding the master credentials can create tenants.

"Problematic" code:

// FetchMasterCredentials gets the master credentials secret using the k8s client
func FetchMasterCredentials(k8sClient client.Client, tenantR *apiv1alpha1.Tenant) (string, string, error) {
	masterCredentialsSecret := &v1.Secret{}
	err := k8sClient.Get(context.TODO(),
		types.NamespacedName{
			Name: tenantR.Spec.MasterCredentialsRef.Name,
			// taken verbatim from the CR, so any namespace can be targeted
			Namespace: tenantR.Spec.MasterCredentialsRef.Namespace,
		},
		masterCredentialsSecret)
	// ...
Configure SSO/OAuth integration for API Manager

It would be useful to be able to configure authentication (Authn/Authz) providers from the operator to fully enable Infrastructure-as-Code. For example, being able to deploy Keycloak and then provision the APIManager to use that Keycloak instance based on the contents of the CR. Perhaps a new/separate CRD for Auth providers?

Update the operator-sdk version to the latest one

Hi,

Currently the 3scale-operator is using operator-sdk v0.8.0. Since then there have been a few new releases of the operator-sdk, introducing a bunch of new features and bug fixes.

There is one particular new feature we (Integreatly) would like to use: get the metrics for each custom resource.

However, this means we need to upgrade the version of the operator-sdk for 3scale-operator. We have done this already for some other operators and we would like to help do the same for 3scale.

If I can get agreement that this is something we want to have for 3scale-operator, I can start working on this and create a PR once it's ready.

Thanks.

Mistake when creating multiple products / backends from same yaml file

Multiple products and backends cannot be declared for the same tenant through the operator. If you do, it looks like the operator controller updates all the products' backends with the last one configured. We have tried different configurations, but it keeps changing the BackendUsage when it interprets the product YAML.

With the following example, the product dev-svc2 can end up referencing dev-svc1-backend:

apiVersion: capabilities.3scale.net/v1beta1
kind: Backend
metadata:
  name: dev-svc1-backend
  namespace: 3scale
spec:
  hits:
    description: Number of API hits
    friendlyName: Hits
    unit: hit
  metrics:
    hits:
      description: Number of API hits
      friendlyName: Hits
      unit: hit
  name: dev-svc1-backend
  privateBaseURL: http://svc1.dev-app1.svc.cluster.local:8880
  providerAccountRef:
    name: dev-admin-secret
  systemName: dev-svc1-backend
---
apiVersion: capabilities.3scale.net/v1beta1
kind: Backend
metadata:
  name: dev-svc2-backend
  namespace: 3scale
spec:
  hits:
    description: Number of API hits
    friendlyName: Hits
    unit: hit
  metrics:
    hits:
      description: Number of API hits
      friendlyName: Hits
      unit: hit
  name: dev-svc2-backend
  privateBaseURL: http://svc2.dev-app1.svc.cluster.local:8880
  providerAccountRef:
    name: dev-admin-secret
  systemName: dev-svc2-backend
---
apiVersion: capabilities.3scale.net/v1beta1
kind: Product
metadata:
  name: dev-svc1
  namespace: 3scale
spec:
  applicationPlans:
    plan01:
      name: unlimited-dev-svc1
      setupFee: "00.00"
      trialPeriod: 0
  backendUsages:
    dev-svc1-backend:
      path: /
  deployment:
    apicastSelfManaged:
      productionPublicBaseURL: https://api-dev-production-svc1.example.com
      stagingPublicBaseURL: https://api-dev-staging-svc1.example.com
  mappingRules:
  - httpMethod: GET
    increment: 1
    metricMethodRef: hits
    pattern: /
  metrics:
    hits:
      description: Number of API hits
      friendlyName: Hits
      unit: hit
  name: dev-svc1
  providerAccountRef:
    name: dev-admin-secret
  systemName: dev-svc1
---
apiVersion: capabilities.3scale.net/v1beta1
kind: Product
metadata:
  name: dev-svc2
  namespace: 3scale
spec:
  applicationPlans:
    plan01:
      name: unlimited-dev-svc2
      setupFee: "00.00"
      trialPeriod: 0
  backendUsages:
    dev-svc2-backend:
      path: /
  deployment:
    apicastSelfManaged:
      productionPublicBaseURL: https://api-dev-production-svc2.example.com
      stagingPublicBaseURL: https://api-dev-staging-svc2.example.com
  mappingRules:
  - httpMethod: GET
    increment: 1
    metricMethodRef: hits
    pattern: /
  metrics:
    hits:
      description: Number of API hits
      friendlyName: Hits
      unit: hit
  name: dev-svc2
  providerAccountRef:
    name: dev-admin-secret
  systemName: dev-svc2


tenant cr creation fails

When trying to create a new tenant on OpenShift 4.5 using the Operator capabilities I see the following in the operator log:

{"level":"error","ts":1599494078.7106006,"logger":"controller_tenant","msg":"Error in tenant reconciliation","error":"secrets is forbidden: User \"system:serviceaccount:appdev-api-management:3scale-operator\" cannot create resource \"secrets\" in API group \"\" in the namespace \"operator-test\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/remote-source/deps/gomod/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/3scale/3scale-operator/pkg/controller/tenant.(*ReconcileTenant).Reconcile\n\t/remote-source/app/pkg/controller/tenant/tenant_controller.go:133\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88"}

It looks like an operator-test namespace was left hardcoded somewhere?

Allow specifying s3 url in CR

I wish to deploy 3scale using an S3-compliant API as the S3 storage solution. Unfortunately, at the moment I cannot specify a URL for 3scale to use for S3:

S3SecretAWSAccessKeyIdFieldName: s3.Options.awsAccessKeyId,
S3SecretAWSSecretAccessKeyFieldName: s3.Options.awsSecretAccessKey,

I would like to open a PR to add this functionality so that I can proceed with using minio as the backend. Is this something you would be likely to approve and merge?

Tenant controller cannot create/get admin access token from an existing tenant

As part of the tenant controller workflow, it needs to create the tenant (if it does not exist) and the tenant's admin user access-token secret (if it does not exist). In the scenario where the tenant is already created but the admin access token secret still needs to be created, there is currently no way to get a valid access token for that admin user. The available credentials are the global 3scale master access token and the admin's user/password. The 3scale API does not provide a way to create or fetch an access token for that tenant's admin user. Issue opened on Porta: https://github.com/3scale/porta/issues/582

Once the 3scale API allows getting/creating an access token for the admin user, it needs to be implemented in the tenant controller workflow for the scenario where the tenant was already created and the admin access token secret does not exist.

v0.0.2 does not sync mapping rules and plans

Hello,

I tested the v0.0.2 and master images on quay.io and could not get them to provision a full service in 3scale: the plans and mapping rules are not synced.

I created a secret, a binding and then:

oc create -f - <<EOF
apiVersion: capabilities.3scale.net/v1alpha1
kind: API
metadata:
  labels:
    environment: prod
  name: echo-api
spec:
  planSelector:
    matchLabels:
      api: echo-apo
  description: Echo API
  integrationMethod:
    apicastOnPrem:
      apiTestGetRequest: /
      authenticationSettings:
        credentials:
          apiKey:
            authParameterName: api-key
            credentialsLocation: headers
        errors:
          authenticationFailed:
            contentType: text/plain; charset=us-ascii
            responseBody: Authentication failed
            responseCode: 403
          authenticationMissing:
            contentType: text/plain; charset=us-ascii
            responseBody: Authentication Missing
            responseCode: 401
        hostHeader: ""
        secretToken: MySecretTokenBetweenApicastAndMyBackend_1237120312
      mappingRulesSelector:
        matchLabels:
          api: echo-api
      privateBaseURL: https://echo-api.3scale.net:443
      productionPublicBaseURL: http://$public_production_hostname
      stagingPublicBaseURL: http://$public_staging_hostname
  metricSelector:
    matchLabels:
      api: echo-api
EOF

oc create -f - <<EOF
apiVersion: capabilities.3scale.net/v1alpha1
kind: Metric
metadata:
  labels:
    api: echo-api
  name: get-slash
spec:
  description: GET /
  unit: hit
  incrementHits: true
EOF

oc create -f - <<EOF
apiVersion: capabilities.3scale.net/v1alpha1
kind: Metric
metadata:
  labels:
    api: echo-api
  name: post-slash
spec:
  description: POST /
  unit: hit
  incrementHits: true
EOF

oc create -f - <<EOF
apiVersion: capabilities.3scale.net/v1alpha1
kind: MappingRule
metadata:
  labels:
    api: echo-api
  name: get-slash
spec:
  increment: 1
  method: GET
  metricRef:
    name: get-slash
  path: /
EOF

oc create -f - <<EOF
apiVersion: capabilities.3scale.net/v1alpha1
kind: MappingRule
metadata:
  labels:
    api: echo-api
  name: post-slash
spec:
  increment: 1
  method: POST
  metricRef:
    name: post-slash
  path: /
EOF

oc create -f - <<EOF
apiVersion: capabilities.3scale.net/v1alpha1
kind: Plan
metadata:
  labels:
    api: echo-api
  name: silver
spec:
  default: true
  aprovalRequired: false
  costs:
    costMonth: 0
    setupFee: 0
  limitSelector:
    matchLabels:
      plan: silver
  trialPeriod: 0
EOF

oc create -f - <<EOF
apiVersion: capabilities.3scale.net/v1alpha1
kind: Plan
metadata:
  labels:
    api: echo-api
  name: gold
spec:
  default: true
  aprovalRequired: false
  costs:
    costMonth: 0
    setupFee: 0
  limitSelector:
    matchLabels:
      plan: gold
  trialPeriod: 0
EOF

oc create -f - <<EOF
apiVersion: capabilities.3scale.net/v1alpha1
kind: Limit
metadata:
  labels:
    api: echo-api
    plan: silver
  name: get-slash-silver
spec:
  description: Limit for get-slash in silver
  maxValue: 10
  metricRef:
    name: get-slash
  period: day
EOF

oc create -f - <<EOF
apiVersion: capabilities.3scale.net/v1alpha1
kind: Limit
metadata:
  labels:
    api: echo-api
    plan: gold
  name: get-slash-gold
spec:
  description: Limit for get-slash in gold
  maxValue: 5
  metricRef:
    name: get-slash
  period: day
EOF

current behavior

The service is created in 3scale as well as the metrics. The plans and mapping rules are left out.

In the operator logs, I have:

{"level":"error","ts":1552666067.1225028,"logger":"controller_binding","msg":"Error Reconciling APIs","Request.Namespace":"3scale-operator","Request.Name":"prod-binding","error":"error calling 3scale system - reason: decoding error - expected element type <error> but have <errors> - code: 422","stacktrace":"github.com/3scale/3scale-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/3scale/3scale-operator/pkg/controller/binding.ReconcileBindingFunc\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/pkg/controller/binding/binding_controller.go:235\ngithub.com/3scale/3scale-operator/pkg/controller/binding.(*ReconcileBinding).Reconcile\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/pkg/controller/binding/binding_controller.go:129\ngithub.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215\ngithub.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/home/msoriano/go/src/github.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

expected behavior

The service, metrics, mapping rules and plans are created.

Incorrect permissions on zync-que service

I have followed the getting started guide using the quay.io/3scale/3scale-operator:latest image.

The expected routes do not appear; only the backend route is created.

The zync-que logs have this error:

W, [2019-07-10T13:16:50.882844 #1]  WARN -- K8s::Transport<https://172.30.0.1:443>: GET /apis/apps.3scale.net/v1alpha1/namespaces/openshift-3scale/apimanagers/3scale => HTTP 403 Forbidden in 0.003s
D, [2019-07-10T13:16:50.882887 #1] DEBUG -- K8s::Transport<https://172.30.0.1:443>: Response: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"apimanagers.apps.3scale.net \"3scale\" is forbidden: User \"system:serviceaccount:openshift-3scale:zync-que-sa\" cannot get resource \"apimanagers\" in API group \"apps.3scale.net\" in the namespace \"openshift-3scale\"","reason":"Forbidden","details":{"name":"3scale","group":"apps.3scale.net","kind":"apimanagers"},"code":403}

If I add permissions to the zync-que-role that allow access (before the services are ready), I do see the correct routes:

  - verbs:
      - '*'
    apiGroups:
      - apps.3scale.net
    resources:
      - '*'

The operator doesn't process the API capability CR

I have the 3scale-operator installed on OpenShift and I would like to create a new API using the capabilities.3scale.net/v1alpha1 API CRD, but after creating the CR nothing happens: the 3scale-operator doesn't report any errors or logs related to it, the CR isn't updated with a Status, and I can't see the API when I open the 3scale console.

CR

apiVersion: capabilities.3scale.net/v1alpha1
kind: API
metadata:
  creationTimestamp: '2020-05-22T10:01:53Z'
  generation: 1
  labels:
    environment: testing
  name: example-api
  namespace: redhat-rhmi-3scale
  resourceVersion: '69884'
  selfLink: >-
    /apis/capabilities.3scale.net/v1alpha1/namespaces/redhat-rhmi-3scale/apis/example-api
  uid: 43e992c3-df76-4f43-b381-14a95406f91b
spec:
  description: api01
  integrationMethod:
    apicastHosted:
      apiTestGetRequest: /
      authenticationSettings:
        credentials:
          apiKey:
            authParameterName: user-key
            credentialsLocation: headers
        errors:
          authenticationFailed:
            contentType: text/plain; charset=us-ascii
            responseBody: Authentication failed
            responseCode: 403
          authenticationMissing:
            contentType: text/plain; charset=us-ascii
            responseBody: Authentication Missing
            responseCode: 403
        hostHeader: ''
        secretToken: MySecretTokenBetweenApicastAndMyBackend_1237120312
      mappingRulesSelector:
        matchLabels:
          api: api01
      privateBaseURL: 'https://echo-api.3scale.net:443'

Version
3scale-operator: 0.5.0

Operator Logs:
3scale-operator-75b858757c-qdzgk-3scale-operator.log

Am I doing something wrong? Any idea how I could debug this problem?

MySQL database schema for HA deployments not automatically created

Description

When deploying 3scale using the operators HA custom resource, an external MySQL database connection string must be provided, in the following format:

mysql2://root:<password>@<host-name>:<port>/<database-name>

Request

For external database sources, the database schema must be created manually before the operator is run. The connection URL must then be updated to include the newly created database name before the operator can be deployed.

Perhaps it might be possible for the 3scale operator to create the database schema automatically?

Rename Binding CR

The name "binding" shouldn't be used as it clashes with other resources.

Failed on multiple backend declaration

Cannot declare multiple backendUsages in Product definition as the next doc: https://github.com/3scale/3scale-operator/blob/master/doc/operator-capabilities.md#product-backend-usages

When i declare 2 or more backends in the same Product, it failed with:

reconcile product spec: Task failed SyncBackendUsage: Error sync product [backendX] backendusages: product [backendX] update backendusage: error calling 3scale system - reason: {"backend_api_id":["has already been taken"]} - code: 422

It looks like it tries to assign the same ID to all the backends in backendUsages.

OpenShift 4.1.3 Operator Installation Failures

OpenShift 4.1.3 via OpenShift Installer running on AWS
6 node cluster - 3 master 3 worker
Kubernetes v1.13.4+

In file pkg/3scale/amp/component/system.go at line 907 there is an attempt to use ReadWriteMany for the system-storage PVC, which the default StorageClass does not support; the system-app pod fails, but changing it to ReadWriteOnce allows the pod to complete.

Backend pods (backend-cron|redis|worker) fail with the error: "error: update acceptor rejected backend-worker-1: pods for rc '3scale-operator/backend-worker-1' took longer than 1200 seconds to become available".

OpenAPI Support & Discovery

OpenAPI Support

We should be able to relate one API with its OpenAPI Document, so we can create several objects and extract some useful information:

  • Base Path (path routing?): extracted from the annotation discovery.3scale.net/path: /v1/api
  • Metrics: extracted from operationIDs or, if empty, by joining the "Path+Verb".
  • Mapping Rules: based on path, HTTP verb, operation ID and an increment of 1.

Example path definition extracted from Example OpenAPI Document:

/v2:  <-- HTTP Path
    get:   <-- Verb
      operationId: getVersionDetailsv2 <-- Metric
      summary: Show API version details <-- Metric Description? 
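The fallback rule above (use the operationId if present, otherwise join path and verb) can be sketched as follows; the exact joining format is an assumption, not something the proposal fixes:

```go
package main

import (
	"fmt"
	"strings"
)

// metricName derives a metric system name from an OpenAPI operation.
// If operationID is present it is used directly; otherwise the path and
// HTTP verb are joined. The joining scheme here is illustrative.
func metricName(operationID, path, verb string) string {
	if operationID != "" {
		return operationID
	}
	cleaned := strings.Trim(path, "/")
	cleaned = strings.ReplaceAll(cleaned, "/", "_")
	if cleaned == "" {
		cleaned = "root" // a placeholder name for the "/" path
	}
	return cleaned + "_" + strings.ToLower(verb)
}

func main() {
	fmt.Println(metricName("getVersionDetailsv2", "/v2", "GET")) // operationId wins
	fmt.Println(metricName("", "/v2", "GET"))                    // falls back to path+verb
}
```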

Options

Let's see some of the options, on how to relate the API object with the OpenAPI Document:

1. Using the "Service Discovery Spec"

The operator should be able to detect whether there is an OpenShift Service with the same name as the value of privateBaseURL. If there is, it should read that Service and look for the label discovery.3scale.net: "true"; if that label is set, it should proceed to extract all the possible annotations defined in the "Service Discovery Spec", such as:

    discovery.3scale.net/scheme: http
    discovery.3scale.net/port: 80
    discovery.3scale.net/path: /v1/api
    discovery.3scale.net/description-path: http://example.svc:8080/doc/swagger.json

This option doesn't require any change to the OpenAPI objects.

2. Extending the API Object with: OpenAPIRef

This proposal extends the actual API Object. Adding a new OpenAPIRef to the integration Methods that apply (Apicast, Istio integration):

      openAPIRef:
           URL: http://example.svc:8080/doc/swagger.json
           ConfigMapRef:
             name: MyOpenAPIDocument
           Document: |
             openapi: "3.0.0"
             info:
               title: Simple API overview
               version: 2.0.0
             paths:
               /:
                 get:
                   operationId: listVersionsv2
                   summary: List API versions
                    responses:
                      '200':
                        description: |-
                          200 response
                        content:
                          application/json:
                  (..)

This supports:

  • URL: A remote URL from where to fetch the OpenAPI Document
  • ConfigMapRef: A k8s ObjectReference that points to a ConfigMap that contains the OpenAPI Document (defaulting to openapi.yaml or openapi.json)
  • Document: Inline OpenAPI Document.

Giving the user several ways of specifying the source of the OpenAPI Document.

Example:

apiVersion: capabilities.3scale.net/v1alpha1
kind: API
metadata:
  labels:
    environment: staging
  name: api01
spec:
  planSelector:
    matchLabels:
      api: api01
  description: api01
  integrationMethod:
    apicastOnPrem:
      apiTestGetRequest: /
      authenticationSettings:
        credentials:
          apiKey:
            authParameterName: user-key
            credentialsLocation: headers
        errors:
          authenticationFailed:
            contentType: text/plain; charset=us-ascii
            responseBody: Authentication failed
            responseCode: 403
          authenticationMissing:
            contentType: text/plain; charset=us-ascii
            responseBody: Authentication Missing
            responseCode: 403
        hostHeader: ""
        secretToken: MySecretTokenBetweenApicastAndMyBackend_1237120312
      mappingRulesSelector:
        matchLabels:
          api: api01
      privateBaseURL: https://echo-api.3scale.net:443
      productionPublicBaseURL: https://api.testing.com:443
      stagingPublicBaseURL: https://api.testing.com:443
      openAPIRef:
           Document: |
             openapi: "3.0.0"
             info:
               title: Simple API overview
               version: 2.0.0
             paths:
               /:
                 get:
                   operationId: listVersionsv2
                   summary: List API versions
                    responses:
                      '200':
                        description: |-
                          200 response
                        content:
                          application/json:
                  (..)
  metricSelector:
    matchLabels:
      api: api01

3. A combination of both approaches, allowing to override the discovered OpenAPI Doc.

This third option combines the previous two: discover whether the target privateBaseURL is an OpenShift service and, if it carries the label discovery.3scale.net: "true", check for the OpenAPI Document; but also check whether the user has specified another OpenAPI Document using the openAPIRef extension of the API object.

The user specified openAPIRef takes precedence over the auto-discovery of the privateBaseURL service, so openAPIRef overrides the values of the service annotations.
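That precedence rule could be resolved along these lines; all type and field names below are hypothetical, for illustration only:

```go
package main

import "fmt"

// apiSpec models the two possible origins of the OpenAPI Document.
// Field names are hypothetical, not part of the API CRD.
type apiSpec struct {
	OpenAPIRefDocument string // user-provided via openAPIRef (inline, URL or ConfigMap, already fetched)
	DiscoveredDocument string // fetched from the annotated OpenShift Service, if any
}

// resolveOpenAPIDocument applies the precedence described above:
// a user-specified openAPIRef overrides the auto-discovered document.
func resolveOpenAPIDocument(s apiSpec) (string, bool) {
	if s.OpenAPIRefDocument != "" {
		return s.OpenAPIRefDocument, true
	}
	if s.DiscoveredDocument != "" {
		return s.DiscoveredDocument, true
	}
	return "", false
}

func main() {
	doc, ok := resolveOpenAPIDocument(apiSpec{OpenAPIRefDocument: "user-doc", DiscoveredDocument: "svc-doc"})
	fmt.Println(doc, ok) // the user-provided document wins
}
```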

Proposal

I think the combination of both options is the best way to go, as it gives the user more freedom in how to import the OpenAPI document.

We should implement option number 2 first and then, option number 1.

Update 3scale labels used on AMP templates to be prometheus compliant

At the moment we are using the following labels on OpenShift objects created by 3scale AMP templates (example at https://github.com/3scale/3scale-amp-openshift-templates/blob/master/amp/amp-eval-tech-preview.yml#L3200):

    labels:
      3scale.component: apicast
      3scale.component-element: staging

Which are correct from kubernetes point of view: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set

Labels are key/value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (/). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character ([a-z0-9A-Z]) with dashes (-), underscores (_), dots (.), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (.), not longer than 253 characters in total, followed by a slash (/).

But are not valid from prometheus point of view: https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels

The metric name specifies the general feature of a system that is measured (e.g. http_requests_total - the total number of HTTP requests received). It may contain ASCII letters and digits, as well as underscores and colons. It must match the regex [a-zA-Z_:][a-zA-Z0-9_:]*.

Label names may contain ASCII letters, numbers, as well as underscores. They must match the regex [a-zA-Z_][a-zA-Z0-9_]*. Label names beginning with __ are reserved for internal use.

It is advisable to use Kubernetes labels that comply with both Kubernetes and Prometheus syntax.

What test I have done to validate it

I added a service with the label "3scale.component: system" (starting with the digit "3" and containing a dot "."):

apiVersion: v1
kind: Service
metadata:
  name: system-sphinx-test-regex-exporter
  namespace: prometheus-exporters
  labels:
    3scale.component: system
    app: sphinx-exporter
    environment: test
    name: system-sphinx-test-regex-exporter
    role: service
    template: sphinx-exporter
    tier: sphinx
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
    prometheus.io/path: /metrics
    prometheus.io/port: '9247'
    prometheus.io/scrape: 'true'
spec:
  ports:
    - protocol: TCP
      port: 9247
      targetPort: 9247
  selector:
    app: system-sphinx-exporter
  clusterIP: 172.30.205.220
  type: ClusterIP
  sessionAffinity: None

This service points to a Sphinx Prometheus exporter that monitors the system Sphinx instance, and it is scraped by the internal cluster Prometheus server using the official job to auto-discover services with Prometheus metrics: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/prometheus/prometheus-configmap.yaml#L60

- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
      action: replace
      target_label: __scheme__
      regex: (https?)
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: kubernetes_name

This relabels the scraped metrics with all the OpenShift service labels (3scale.component among others). If I check that metric in the internal Prometheus PromQL console:

sphinx_up{3scale_component="system",app="sphinx-exporter",environment="test",instance="10.1.7.110:9247",job="kubernetes-service-endpoints",kubernetes_name="system-sphinx-test-regex-exporter",kubernetes_namespace="prometheus-exporters",name="system-sphinx-test-regex-exporter",role="service",template="sphinx-exporter",tier="sphinx"}

Among other labels, the Sphinx metrics are being relabelled with 3scale_component="system" (replacing "." by "_").

The problem comes with Prometheus federation. In this specific case, there is a main Prometheus server federated with the internal production Prometheus service running on the production cluster (the one scraping OpenShift services), using the following job:

- job_name: k8s_prod
  honor_labels: true
  params:
    match[]:
    - '{job="kubernetes-nodes"}'
    - '{job="kubernetes-apiservers"}'
    - '{job="kubernetes-service-endpoints"}'
    - '{job="kubernetes-pods"}'
  scrape_interval: 2m
  scrape_timeout: 1m
  metrics_path: /federate
  scheme: http
  static_configs:
  - targets:
    - production-cluster-internal-prometheus-service.3scale.net:9090
  relabel_configs:
  - separator: ;
    regex: (.*)
    target_label: cluster
    replacement: prod

This job receives all metrics from the internal production Prometheus service "production-cluster-internal-prometheus-service.3scale.net:9090" and adds a new label "cluster=prod" to all obtained metrics (to identify which cluster they come from).

Once these labelled "3scale_component" metrics arrive at the main Prometheus server, it marks the whole job as NodeDown (so none of the metrics that come from that internal Prometheus service are received on the main Prometheus server).

NodeDown production-cluster-internal-prometheus-service.3scale.net:9090 (prod k8s_prod http://main-prometheus-service.3scale.net:9093 critical)

And we see the following errors on prometheus logs:

level=warn ts=2019-02-11T16:15:15.684940964Z caller=scrape.go:686 component="scrape manager" scrape_pool=k8s_prod target="http://production-cluster-internal-prometheus-service.3scale.net:9090/federate?match%5B%5D=%7Bjob%3D%22kubernetes-nodes%22%7D&match%5B%5D=%7Bjob%3D%22kubernetes-apiservers%22%7D&match%5B%5D=%7Bjob%3D%22kubernetes-service-endpoints%22%7D&match%5B%5D=%7Bjob%3D%22kubernetes-pods%22%7D" msg="append failed" err="no token found"

If we investigate that error (msg="append failed" err="no token found", for example in this Google group thread https://groups.google.com/forum/#!msg/prometheus-users/5aGq7STP8TA/jn8QiCtmBwAJ), we will see that the problem is having metrics with invalid Prometheus label names, which is confirmed by the official documentation https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels

My advice would be to use labels on OpenShift that are both Kubernetes and Prometheus compliant, so instead of using labels:

  • 3scale.component
  • 3scale.component-element

Use:

  • threescale_component
  • threescale_component_element
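The difference between the two naming schemes can be checked mechanically against the Prometheus label-name grammar quoted above. A minimal sketch (the variable name is ours):

```go
package main

import (
	"fmt"
	"regexp"
)

// validPromLabel is the Prometheus label-name grammar quoted above:
// labels must match [a-zA-Z_][a-zA-Z0-9_]*.
var validPromLabel = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_]*$`)

func main() {
	// Even after Prometheus relabelling turns "3scale.component" into
	// "3scale_component" ("." -> "_"), the name still starts with a digit
	// and remains invalid; the proposed names are valid.
	for _, l := range []string{"3scale_component", "threescale_component", "threescale_component_element"} {
		fmt.Printf("%s valid=%v\n", l, validPromLabel.MatchString(l))
	}
}
```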

Refactor cross-referenced secrets

Some secrets/secret field names created in one component are referenced from other components. Currently these references between components are done via hardcoded strings.
This should be refactored; at a minimum, they should reference the existing defined constants for those secrets/secret field names.

Add PodDisruptionBudget when HA is enabled

Why?

When 3scale is deployed in HA mode and the underlying Kubernetes or OpenShift cluster is being upgraded, depending on how the upgrade is done, there is a chance that all replicas of the 3scale deployments will be evicted from their current node and moved to a new node. This will cause downtime.

PodDisruptionBudget is a feature provided by Kubernetes to help solve this problem. It can help limit the number of pods that are down simultaneously. However, it will not work if there is only 1 replica deployed.

How?

  1. When HA mode is enabled, the operator will automatically create corresponding PDB objects for all the deployments that have more than 1 replica.
  2. We also need to watch the ReplicaSets and if a deployment is scaled down to 1, we will have to delete the PDB automatically to prevent any possible problems, and create the PDB again if it's scaled up.
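A sketch of the kind of PDB the operator might create for a deployment with more than one replica. All names here are illustrative, and the API version depends on the cluster (policy/v1 replaces policy/v1beta1 on newer Kubernetes):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: backend-worker
spec:
  # Allow at most one replica to be voluntarily evicted at a time.
  maxUnavailable: 1
  selector:
    matchLabels:
      deploymentConfig: backend-worker
```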

We are happy to help progress the work. We would appreciate some thoughts and feedback before we get started.

Thanks.

Investigate possibility to use master access token between installation and capabilities

Currently the installation operator provides/generates a password that is what is created and configured in System (named MASTER_PASSWORD in the generator/templates code).
This seems not to be revokable (is it a provider key?)

Investigate whether a master access token could be provided. There's a parameter named MASTER_ACCESS_TOKEN. Find out what exactly it does in System (Porta) and whether it could be used by the Capabilities operator and also set into a Secret (it is currently used directly as an OpenShift parameter but not configured inside a Secret). It is directly referenced in an OpenShift pre-hook pod, and we have to find out whether the pre-hook pod could reference an environment variable that is created from a secret.

Add SOPs/runbooks for all Prometheus critical alerts

Enabling monitoring in 2.9 adds a number of new Prometheus alerts, many of which are of critical severity. Before we (the RHMI team) can enable monitoring, we have to ensure that these critical alerts meet the acceptance criteria outlined by CS-SRE, one of which states that all critical alerts must have an accompanying SOP/run-book (linked via an annotation in the alert) that includes detailed steps on how to resolve/investigate the alert.

We have many examples of SOP's that were created for the critical alerts within RHMI. If needed, these could be shared to show the kind of detail each one contains, and how they are structured.

Document 2.4 to 2.5 upgrade path for templates

Write documentation to upgrade from 2.4 to 2.5 release in templates.

For the moment the following should be documented:

  • How to manually upgrade installation to use the new fields in the system-database secret from system-mysql: #43
  • How to upgrade PostgreSQL 9 to 10 on the templates side. Coordination with the Zync owners should be done: they should document how to perform the data migration, whereas the templates side should document how to create the new elements.

Capabilitites: Error on API (Service) reconciliation

Steps to reproduce

  • Create api
apiVersion: capabilities.3scale.net/v1alpha1
kind: API
metadata:
  creationTimestamp: 2019-01-25T13:28:41Z
  generation: 1
  labels:
    environment: testing
  name: api01
spec:
  planSelector:
    matchLabels:
      api: api01
  description: api01
  integrationMethod:
    apicastHosted:
      apiTestGetRequest: /
      authenticationSettings:
        credentials:
          apiKey:
            authParameterName: user-key
            credentialsLocation: headers
        errors:
          authenticationFailed:
            contentType: text/plain; charset=us-ascii
            responseBody: Authentication failed
            responseCode: 403
          authenticationMissing:
            contentType: text/plain; charset=us-ascii
            responseBody: Authentication Missing
            responseCode: 403
        hostHeader: ""
        secretToken: Shared_secret_sent_from_proxy_to_API_backend_9603f637ca51ccfe
      mappingRulesSelector:
        matchLabels:
          api: api01
      privateBaseURL: https://echo-api.3scale.net:443
  metricSelector:
    matchLabels:
      api: api01

The new service should be created successfully.

  • Delete service from dashboard

Then the controller cannot reconcile and throws an error:

{"level":"info","ts":1557219779.4535804,"logger":"controller_binding","msg":"Reconciling Binding","Request.Namespace":"operator-test2","Request.Name":"_NonBinding"}
2019/05/07 09:02:59 API is missing from 3scale: api01
{"level":"info","ts":1557219779.487402,"logger":"controller_binding","msg":"State is not in sync, reconciling APIs","Request.Namespace":"operator-test2","Request.Name":"_NonBinding"}
{"level":"error","ts":1557219779.517338,"logger":"controller_binding","msg":"Error Reconciling APIs","Request.Namespace":"operator-test2","Request.Name":"_NonBinding","error":"error calling 3scale system - reason: decoding error - expected element type <error> but have <errors> - code: 422","stacktrace":"github.com/3scale/3scale-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/3scale/3scale-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/3scale/3scale-operator/pkg/controller/binding.ReconcileBindingFunc\n\t/go/src/github.com/3scale/3scale-operator/pkg/controller/binding/binding_controller.go:237\ngithub.com/3scale/3scale-operator/pkg/controller/binding.(*ReconcileBinding).Reconcile\n\t/go/src/github.com/3scale/3scale-operator/pkg/controller/binding/binding_controller.go:111\ngithub.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215\ngithub.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/3scale/3scale-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/3scale/3scale-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
2019/05/07 09:02:59 API is missing from 3scale: api01

The service is not re-created by the controller.
