redhat-cop / group-sync-operator

Synchronizes groups from external providers into OpenShift

License: Apache License 2.0

Dockerfile 0.47% Shell 3.83% Go 87.90% Makefile 6.05% Smarty 0.53% Mustache 1.23%
k8s-operator container-cop

group-sync-operator's Issues

`controller-gen` syntax error during deploy

I am attempting to deploy this operator in a 4.6.9 disconnected cluster in AWS GovCloud. I am getting the following error during the make deploy step.

oc new-project group-sync-operator

Now using project "group-sync-operator" on server "https://api.ocp4.example.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

➜  ~ oc project
Using project "group-sync-operator" on server "https://api.ocp4.example.com:6443".
➜  ~ git clone https://github.com/redhat-cop/group-sync-operator.git

Cloning into 'group-sync-operator'...
remote: Enumerating objects: 34, done.
remote: Counting objects: 100% (34/34), done.
remote: Compressing objects: 100% (29/29), done.
remote: Total 808 (delta 5), reused 21 (delta 4), pack-reused 774
Receiving objects: 100% (808/808), 419.25 KiB | 9.75 MiB/s, done.
Resolving deltas: 100% (365/365), done.
➜  ~ cd group-sync-operator
➜  group-sync-operator git:(master) make deploy
which: no controller-gen in (/home/etsmith/.zinit/polaris/bin:/home/etsmith/google-cloud-sdk/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/etsmith/bin:/home/etsmith/.composer/vendor/bin:/var/lib/snapd/snap/bin)
which: no kustomize in (/home/etsmith/.zinit/polaris/bin:/home/etsmith/google-cloud-sdk/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/etsmith/bin:/home/etsmith/.composer/vendor/bin:/var/lib/snapd/snap/bin)
go: creating new go.mod: module tmp
go: downloading sigs.k8s.io/controller-tools v0.3.0
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.3.0
go: downloading github.com/spf13/cobra v0.0.5
go: downloading sigs.k8s.io/yaml v1.2.0
go: downloading golang.org/x/tools v0.0.0-20190920225731-5eefd052ad72
go: downloading k8s.io/apimachinery v0.18.2
go: downloading github.com/gobuffalo/flect v0.2.0
go: downloading k8s.io/apiextensions-apiserver v0.18.2
go: downloading k8s.io/api v0.18.2
go: downloading github.com/fatih/color v1.7.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/inconshreveable/mousetrap v1.0.0
go: downloading gopkg.in/yaml.v2 v2.2.8
go: downloading github.com/gogo/protobuf v1.3.1
go: downloading k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89
go: downloading github.com/google/gofuzz v1.1.0
go: downloading gopkg.in/yaml.v3 v3.0.0-20190905181640-827449938966
go: downloading github.com/mattn/go-colorable v0.1.2
go: downloading k8s.io/klog v1.0.0
go: downloading sigs.k8s.io/structured-merge-diff/v3 v3.0.0
go: downloading gopkg.in/inf.v0 v0.9.1
go: downloading github.com/mattn/go-isatty v0.0.8
go: downloading golang.org/x/net v0.0.0-20191004110552-13f9640d40b9
go: downloading golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7
go: downloading github.com/json-iterator/go v1.1.8
go: downloading github.com/modern-go/reflect2 v1.0.1
go: downloading github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd
go: downloading golang.org/x/text v0.3.2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   620  100   620    0     0   6391      0 --:--:-- --:--:-- --:--:--  6391
100 6113k  100 6113k    0     0  5993k      0  0:00:01  0:00:01 --:--:-- 7521k
/home/etsmith/go/bin/controller-gen "crd:trivialVersions=true,crdVersions=v1beta1,preserveUnknownFields=false" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
Error: go [list -e -json -compiled=true -test=false -export=false -deps=true -find=false -tags ignore_autogenerated -- ./...]: exit status 1: go: github.com/openshift/[email protected]+incompatible: invalid pseudo-version: preceding tag (v3.9.0) not found

Usage:
  controller-gen [flags]

Examples:
        # Generate RBAC manifests and crds for all types under apis/,
        # outputting crds to /tmp/crds and everything else to stdout
        controller-gen rbac:roleName=<role name> crd paths=./apis/... output:crd:dir=/tmp/crds output:stdout

        # Generate deepcopy/runtime.Object implementations for a particular file
        controller-gen object paths=./apis/v1beta1/some_types.go

        # Generate OpenAPI v3 schemas for API packages and merge them into existing CRD manifests
        controller-gen schemapatch:manifests=./manifests output:dir=./manifests paths=./pkg/apis/...

        # Run all the generators for a given project
        controller-gen paths=./apis/...

        # Explain the markers for generating CRDs, and their arguments
        controller-gen crd -ww


Flags:
  -h, --detailed-help count   print out more detailed help
                              (up to -hhh for the most detailed output, or -hhhh for json output)
      --help                  print out usage and a summary of options
      --version               show version
  -w, --which-markers count   print out all markers available with the requested generators
                              (up to -www for the most detailed output, or -wwww for json output)


Options


generators

+webhook                                                                                                  package  generates (partial) {Mutating,Validating}WebhookConfiguration objects.
+schemapatch:manifests=<string>[,maxDescLen=<int>]                                                        package  patches existing CRDs with new schemata.
+rbac:roleName=<string>                                                                                   package  generates ClusterRole objects.
+object[:headerFile=<string>][,year=<string>]                                                             package  generates code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
+crd[:crdVersions=<[]string>][,maxDescLen=<int>][,preserveUnknownFields=<bool>][,trivialVersions=<bool>]  package  generates CustomResourceDefinition objects.


generic

+paths=<[]string>  package  represents paths and go-style path patterns to use as package roots.


output rules (optionally as output:<generator>:...)

+output:artifacts[:code=<string>],config=<string>  package  outputs artifacts to different locations, depending on whether they're package-associated or not.
+output:dir=<string>                               package  outputs each artifact to the given directory, regardless of if it's package-associated or not.
+output:none                                       package  skips outputting anything.
+output:stdout                                     package  outputs everything to standard-out, with no separation.

run `controller-gen crd:trivialVersions=true,crdVersions=v1beta1,preserveUnknownFields=false rbac:roleName=manager-role webhook paths=./... output:crd:artifacts:config=config/crd/bases -w` to see all available markers, or `controller-gen crd:trivialVersions=true,crdVersions=v1beta1,preserveUnknownFields=false rbac:roleName=manager-role webhook paths=./... output:crd:artifacts:config=config/crd/bases -h` for usage
make: *** [Makefile:91: manifests] Error 1

AD/LDAP upgrade from 0.0.6 to.. ?

Hi,

I'm using v0.0.6 to add AD to my OCP 4.x clusters. I'm struggling to figure out how to adapt the instructions to do the same on upstream without using a 'make deploy'.

On 0.0.6, this was as simple as:

cd ${PATH_SCRIPT}/group-sync-operator
oc ${OC_ARGS} apply -f deploy/crds/redhatcop.redhat.io_groupsyncs_crd.yaml
oc ${OC_ARGS} apply -n group-sync-operator -f deploy/service_account.yaml
oc ${OC_ARGS} apply -n group-sync-operator -f deploy/clusterrole.yaml
oc ${OC_ARGS} apply -n group-sync-operator -f deploy/clusterrole_binding.yaml
oc ${OC_ARGS} apply -n group-sync-operator -f deploy/role.yaml
oc ${OC_ARGS} apply -n group-sync-operator -f deploy/role_binding.yaml
oc ${OC_ARGS} apply -n group-sync-operator -f deploy/operator.yaml

oc ${OC_ARGS} apply -f ${PATH_SCRIPT}/krynn-ad-oauth-group-sync-operator.yaml

Is 'make deploy' the only supported way to do this?

Empty displayName in Azure AD group causes operator to crash

Group records in Azure AD with an empty displayName cause the operator to crash with panic: "invalid memory address or nil pointer dereference".

Steps to reproduce:

  1. Create a group in Azure AD with an empty displayName
  2. Deploy the operator with a 'groups' list to limit the set of groups to be synchronized.

Example yaml file:

kind: GroupSync
metadata:
  name: test-azure-groupsync
  namespace: group-sync-operator
spec:
  providers:
  - name: azure
    azure:
      groups:
      - Operate-OP
      - Operate-SF
      credentialsSecret:
        name: azure-group-sync
        namespace: group-sync-operator

Logs:

I0804 09:40:30.919240       1 request.go:655] Throttling request took 1.024563386s, request: GET:https://172.30.0.1:443/apis/networking.k8s.io/v1?timeout=32s
2021-08-04T09:40:32.735Z    INFO    controller-runtime.metrics  metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
2021-08-04T09:40:32.735Z    INFO    setup   starting manager
I0804 09:40:32.736303       1 leaderelection.go:243] attempting to acquire leader lease group-sync-operator/085c249a.redhat.io...
2021-08-04T09:40:32.736Z    INFO    controller-runtime.manager  starting metrics server {"path": "/metrics"}
I0804 09:40:50.499031       1 leaderelection.go:253] successfully acquired lease group-sync-operator/085c249a.redhat.io
2021-08-04T09:40:50.499Z    DEBUG   controller-runtime.manager.events   Normal  {"object": {"kind":"ConfigMap","namespace":"group-sync-operator","name":"085c249a.redhat.io","uid":"f453e8b1-bd90-4432-8e25-320ff4e04b16","apiVersion":"v1","resourceVersion":"35832254"}, "reason": "LeaderElection", "message": "group-sync-operator-controller-manager-7bfc577b89-x52dc_b11e872b-eca1-409e-a54b-bfdaa5595bd4 became leader"}
2021-08-04T09:40:50.499Z    INFO    controller-runtime.manager.controller.groupsync Starting EventSource    {"reconciler group": "redhatcop.redhat.io", "reconciler kind": "GroupSync", "source": "kind source: /, Kind="}
2021-08-04T09:40:50.600Z    INFO    controller-runtime.manager.controller.groupsync Starting Controller {"reconciler group": "redhatcop.redhat.io", "reconciler kind": "GroupSync"}
2021-08-04T09:40:50.600Z    INFO    controller-runtime.manager.controller.groupsync Starting workers    {"reconciler group": "redhatcop.redhat.io", "reconciler kind": "GroupSync", "worker count": 1}
2021-08-04T09:41:10.233Z    INFO    controllers.GroupSync   Beginning Sync  {"groupsync": "group-sync-operator/test-azure-groupsync", "Provider": "azure"}
E0804 09:41:30.160042       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 332 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x2250a00, 0x3d58750)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x86
panic(0x2250a00, 0x3d58750)
    /usr/local/go/src/runtime/panic.go:965 +0x1b9
github.com/redhat-cop/group-sync-operator/pkg/syncer.(*AzureSyncer).Sync(0xc0007c9710, 0x0, 0x0, 0x2, 0x2, 0xc000978000)
    /workspace/pkg/syncer/azure.go:194 +0x5ae
github.com/redhat-cop/group-sync-operator/controllers.(*GroupSyncReconciler).Reconcile(0xc0004b6000, 0x29cb758, 0xc000c95410, 0xc0000e3f08, 0x13, 0xc0000e3ef0, 0x14, 0xc000c95410, 0xc000030000, 0x247c060, ...)
    /workspace/controllers/groupsync_controller.go:111 +0xf37
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0005b3c20, 0x29cb6b0, 0xc00059a000, 0x2400bc0, 0xc000c96820)
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:298 +0x30d
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0005b3c20, 0x29cb6b0, 0xc00059a000, 0x656242c600000000)
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2(0x29cb6b0, 0xc00059a000)
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216 +0x4a
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185 +0x37
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00066c750)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000e9ff50, 0x2991b80, 0xc0005bf2c0, 0xc00059a001, 0xc0005a82a0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00066c750, 0x3b9aca00, 0x0, 0x1, 0xc0005a82a0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext(0x29cb6b0, 0xc00059a000, 0xc00056f6e0, 0x3b9aca00, 0x0, 0x1)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185 +0xa6
k8s.io/apimachinery/pkg/util/wait.UntilWithContext(0x29cb6b0, 0xc00059a000, 0xc00056f6e0, 0x3b9aca00)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99 +0x57
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:213 +0x40d
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
    panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1ee9a0e]
goroutine 332 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x109
panic(0x2250a00, 0x3d58750)
    /usr/local/go/src/runtime/panic.go:965 +0x1b9
github.com/redhat-cop/group-sync-operator/pkg/syncer.(*AzureSyncer).Sync(0xc0007c9710, 0x0, 0x0, 0x2, 0x2, 0xc000978000)
    /workspace/pkg/syncer/azure.go:194 +0x5ae
github.com/redhat-cop/group-sync-operator/controllers.(*GroupSyncReconciler).Reconcile(0xc0004b6000, 0x29cb758, 0xc000c95410, 0xc0000e3f08, 0x13, 0xc0000e3ef0, 0x14, 0xc000c95410, 0xc000030000, 0x247c060, ...)
    /workspace/controllers/groupsync_controller.go:111 +0xf37
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0005b3c20, 0x29cb6b0, 0xc00059a000, 0x2400bc0, 0xc000c96820)
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:298 +0x30d
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0005b3c20, 0x29cb6b0, 0xc00059a000, 0x656242c600000000)
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2(0x29cb6b0, 0xc00059a000)
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216 +0x4a
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185 +0x37
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00066c750)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000e9ff50, 0x2991b80, 0xc0005bf2c0, 0xc00059a001, 0xc0005a82a0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00066c750, 0x3b9aca00, 0x0, 0x1, 0xc0005a82a0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext(0x29cb6b0, 0xc00059a000, 0xc00056f6e0, 0x3b9aca00, 0x0, 0x1)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185 +0xa6
k8s.io/apimachinery/pkg/util/wait.UntilWithContext(0x29cb6b0, 0xc00059a000, 0xc00056f6e0, 0x3b9aca00)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99 +0x57
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:213 +0x40d
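
The panic originates in the Azure syncer when a group record without a displayName is dereferenced (azure.go:194 in the trace above). Below is a minimal, self-contained Go sketch of the kind of guard that would avoid the nil pointer dereference; the Group type and field names are illustrative, not the operator's actual code:

package main

import "fmt"

// Group models only the fields relevant here. In the Azure AD response the
// display name is optional, so a pointer is used to mirror that.
type Group struct {
    ID          string
    DisplayName *string
}

func main() {
    name := "Operate-OP"
    groups := []Group{
        {ID: "1", DisplayName: &name},
        {ID: "2", DisplayName: nil}, // group with an empty displayName, as in this issue
    }

    for _, g := range groups {
        // Skip groups without a display name instead of dereferencing nil.
        if g.DisplayName == nil || *g.DisplayName == "" {
            fmt.Printf("skipping group %s: empty displayName\n", g.ID)
            continue
        }
        fmt.Printf("syncing group %s (%s)\n", *g.DisplayName, g.ID)
    }
}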

metadata.labels: Invalid value: \"github-groupsync_\"

I get this error [*] when syncing from gh.
v0.0.12

[*]

Group.user.openshift.io \"architecture\" is invalid: metadata.labels: Invalid value: \"github-groupsync_\": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue',  or 'my_value',  or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')Failed to Create or Update OpenShift Group" source="groupsync_controller.go:169"
2021-08-06T12:20:03.811Z        DEBUG   controller-runtime.manager.events       Warning {"object": {"kind":"GroupSync","namespace":"group-sync-operator","name":"github-groupsync","uid":"e5fa947a-8b9c-4b47-b54a-06f27b6b9258","apiVersion":"redhatcop.redhat.io/v1alpha1","resourceVersion":"1082750161"}, "reason": "ProcessingError", "message": "Group.user.openshift.io \"architecture\" is invalid: metadata.labels: Invalid value: \"github-groupsync_\": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue',  or 'my_value',  or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')"}
2021-08-06T12:20:03.817Z        ERROR   controller-runtime.manager.controller.groupsync Reconciler error        {"reconciler group": "redhatcop.redhat.io", "reconciler kind": "GroupSync", "name": "github-groupsync", "namespace": "group-sync-operator", "error": "Group.user.openshift.io \"architecture\" is invalid: metadata.labels: Invalid value: \"github-groupsync_\": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue',  or 'my_value',  or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')"}
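
The rejected value "github-groupsync_" ends with an underscore, and Kubernetes label values must start and end with an alphanumeric character; an empty provider name is the likely reason the concatenated value ends up invalid (this is an inference, not confirmed by the issue). A small, self-contained sketch that reproduces the rejection with the standard apimachinery validation helper:

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/util/validation"
)

func main() {
    // "<GroupSync name>_<provider name>": with an empty provider name the
    // value ends in "_", which fails label-value validation.
    for _, v := range []string{"github-groupsync_", "github-groupsync_github"} {
        if errs := validation.IsValidLabelValue(v); len(errs) > 0 {
            fmt.Printf("%q rejected: %v\n", v, errs)
        } else {
            fmt.Printf("%q accepted\n", v)
        }
    }
}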

Secret permissions too wide?

It seems the operator has "get, watch, list" on secrets in all namespaces. I assume this is because it is possible to reference secrets from any namespace, but is that necessary? Wouldn't it be wiser to only allow reading secrets from its own namespace?

Azure - groups filter being ignored

When deploying group-sync-operator 0.0.5 with the Azure provider per the config below, the 'groups' specification is ignored and all groups are synced from Azure AD rather than only the one(s) specified.

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: azure-groupsync
spec:
  providers:
  - name: azure
    azure:
      credentialsSecret:
        name: azure-group-sync
        namespace: group-sync-operator
      groups:
      - testgroupname
  schedule: '* * * * *'

Regex filter on groups.

Is it possible, when using a groups filter, to also use a regex as the filter?
When you have a large number of test groups that always begin with "test-", it would be nice to have a regex filter so that
a single entry such as "test-*" includes all of the test- groups,
rather than listing every test- group individually.

If a regex can be used, it would also be nice to "negate" a single group so that it can be excluded from the regex.

We are currently using Keycloak as our identity provider.
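
For reference, a minimal Go sketch of what such a filter could look like (this is not an existing operator feature; the function names are illustrative): a list of include regexes plus a list of exclude regexes, evaluated per group name.

package main

import (
    "fmt"
    "regexp"
)

// filterGroups keeps groups matching any include pattern and then drops
// groups matching any exclude pattern.
func filterGroups(groups, include, exclude []string) []string {
    var out []string
    for _, g := range groups {
        if !matchesAny(g, include) || matchesAny(g, exclude) {
            continue
        }
        out = append(out, g)
    }
    return out
}

func matchesAny(s string, patterns []string) bool {
    for _, p := range patterns {
        if regexp.MustCompile(p).MatchString(s) {
            return true
        }
    }
    return false
}

func main() {
    groups := []string{"test-a", "test-b", "test-skipme", "prod-a"}
    // Include everything starting with "test-", but negate one specific group.
    fmt.Println(filterGroups(groups, []string{"^test-"}, []string{"^test-skipme$"}))
    // Output: [test-a test-b]
}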

An error occurred "Not Found" on 0.0.9

Installed the operator 0.0.9.
When trying to configure Keycloak I always get the message: "an error occurred: Not Found".
It does not matter if I use the Form view or YAML.

My groupsync is defined as:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: groupsync
spec:
  providers:
    - keycloak:
        credentialsSecret:
          name: keycloak-group-sync
          namespace: group-sync-operator
        realm: <realm>
        loginRealm: <login-realm>
        scope: sub
        url: https://xxx.xxx.xxx

When trying LDAP rfc2307 synchronization with the whitelist parameter, I get the following error when syncing: "Expected Label":"<metadata.name>_<spec_providers_ldap:name>", "Found Label":""

I am trying to run the operator for ldap, specifically the rfc2307 configuration, and using the whitelist parameter for an AD group. However, I receive the following error in the log from the resulting pod:

{"level":"info","ts":1606189207.9799204,"logger":"controller_groupsync","msg":"Group Provider Label Did Not Match Expected Provider Label","Group Name":"<AD_GROUP>","Expected Label":"ldap-groupsync_ldap","Found Label":""}

The underlying YAML used to deploy this would be:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: ldap-groupsync
spec:
  providers:
  - name: ldap
    ldap:
      credentialsSecret:
        name: ldap-group-sync
        namespace: group-sync-operator
      insecure: true
      whitelist:
      - <AD_GROUP>
      rfc2307:
        groupMembershipAttributes: [ member ]
        groupNameAttributes: [ cn ]
        groupUIDAttribute: cn
        groupsQuery:
          baseDN: "OU=,OU=,OU=,OU=,OU=,DC=,DC=,DC=,DC=net"
          derefAliases: never
          filter: (objectClass=group)
          scope: sub
          pageSize: 1000
        tolerateMemberNotFoundErrors: true
        tolerateMemberOutOfScopeErrors: true
        userNameAttributes: [ sAMAccountName ]
        userUIDAttribute: dn
        usersQuery:
          baseDN: "DC=,DC=,DC=,DC=net"
          derefAliases: never
          scope: sub
          pageSize: 1000
      url: <ldap_url>

In the error above, I can see that when it says "Expected Label":"ldap-groupsync_ldap", this value is a concatenation of

<metadata.name>_<spec_providers_ldap:name>

from the YAML above, which I verified by adjusting the values and observing the results. I am unsure where these "label" values come from, as I don't know whether this type of parameter exists in LDAP. I'm wondering if it is an underlying issue with the operator itself?

Please let me know if more information is required.

Thanks,

Michael

Delete groups in openshift

If a group gets deleted in one of the external providers (in my case Azure), the group currently keeps existing in OpenShift.

Would it be possible to also check whether a group has been deleted and then delete it in OpenShift as well?

Add documentation to setup RHSSO user with minimum required permissions

For security purposes, it is better to use a permission-limited user in RHSSO for the group-sync-operator.

To do so (mostly from https://stackoverflow.com/questions/56743109/keycloak-create-admin-user-in-a-realm):

  1. Open RHSSO admin console and select realm of your choice (realm selection box on top left side)
  2. Go to users (sidebar) -> add user (button on the right side)
  3. Fill in required fields and press save button.
  4. Open the Credentials tab and set a password; be sure to disable the "Temporary" switch
  5. Open Role Mapping tab
  6. Select realm-management under Client Roles.
  7. Select query-groups and query-users roles

You can then use this account by creating the associated secret in OCP and referencing it in the GroupSync CR:

$ oc create secret generic keycloak-group-sync --from-literal=username=group-sync-user --from-literal=password=group-sync-password
$ cat groupsync.yaml
  [...]
  - keycloak:
      credentialsSecret:
        name: keycloak-group-sync
        namespace: < groupsync-operator-namespace >
      loginRealm: < the selected realm >
      realm: < the selected realm >
   [...]

Cannot upgrade from 0.0.6 to 0.0.7 because ServiceAccount does not have RBAC policy rules

Hi Andrew,

Thank you very much for your help earlier in troubleshooting my problems with this. However, we are now experiencing a problem during the automatic upgrade from 0.0.6 to 0.0.7.

oc get csv yields

NAME                         DISPLAY               VERSION   REPLACES                     PHASE
group-sync-operator.v0.0.6   Group Sync Operator   0.0.6                                  Replacing
group-sync-operator.v0.0.7   Group Sync Operator   0.0.7     group-sync-operator.v0.0.6   Pending

When issuing the command

oc describe csv group-sync-operator.v0.0.7

we receive the following error:

Requirement Status:
Group: apiextensions.k8s.io
Kind: CustomResourceDefinition
Message: CRD is present and Established condition is true
Name: groupsyncs.redhatcop.redhat.io
Status: Present
Uuid: aba35d5c-3d49-4f2d-8bac-433b3e238ace
Version: v1beta1
Dependents:
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: namespaced rule:{"verbs":["get","list","watch","create","update","patch","delete"],"apiGroups":[""],"resources":["configmaps"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: namespaced rule:{"verbs":["get","update","patch"],"apiGroups":[""],"resources":["configmaps/status"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: namespaced rule:{"verbs":["create","patch"],"apiGroups":[""],"resources":["events"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: cluster rule:{"verbs":["get","list","watch"],"apiGroups":[""],"resources":["secrets"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["redhatcop.redhat.io"],"resources":["groupsyncs"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: cluster rule:{"verbs":["get","patch","update"],"apiGroups":["redhatcop.redhat.io"],"resources":["groupsyncs/status"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["user.openshift.io"],"resources":["groups"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: cluster rule:{"verbs":["create"],"apiGroups":["authentication.k8s.io"],"resources":["tokenreviews"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: cluster rule:{"verbs":["create"],"apiGroups":["authorization.k8s.io"],"resources":["subjectaccessreviews"]}
Status: NotSatisfied
Version: v1beta1
Group:
Kind: ServiceAccount
Message: Policy rule not satisfied for service account
Name: default
Status: PresentNotSatisfied
Version: v1
Events:

Can you suggest a possible resolution to this?

Thank you very much,

Michael

Distribute via hub?

It would be nice to have versions distributed via OperatorHub instead of users having to compile it themselves. WDYT?

Document required graph-api permissions

What graph-api permissions are required for the operator to work correctly with AAD?
Version 0.0.7.

I have these grants:

email
Group.Read.All
GroupMember.Read.All
openid
profile
User.Read

Getting:

ger 2020-12-11T19:33:38.869Z        DEBUG   controller-runtime.manager.events       Warning {"object": {"kind":"GroupSync","namespace":"group-sync-operator","name":"azure-groupsync","uid":"42ca321c-9d87-404e-a8f5-17ad28315530","apiVersion":"redhatcop.redhat.io/v1alpha1","resourceVersion":"94637800"}, "reason": "GroupSyncError", "message": "403 Forbidden: {\"error\":{\"code\":\"Authorization_RequestDenied\",\"message\":\"Insufficient privileges to complete the operation.\",\"innerError\":{\"client-request-id\":\"20d8fa8f-f796-4f33-b2ad-81f242a3a45f\",\"date\":\"2020-12-11T19:33:38\",\"request-id\":\"20d8fa8f-f796-4f33-b2ad-81f242a3a45f\"}}}"}
group-sync-operator-controller-manager-7db84b854f-hccw7 manager time="2020-12-11T19:33:38Z" level=error msg="<nil>unable to update status" source="groupsync_controller.go:231"
group-sync-operator-controller-manager-7db84b854f-hccw7 manager 2020-12-11T19:33:38.876Z        ERROR   controller      Reconciler error        {"reconcilerGroup": "redhatcop.redhat.io", "reconcilerKind": "GroupSync", "controller": "groupsync", "name": "azure-groupsync", "namespace": "group-sync-operator", "error": "403 Forbidden: {\"error\":{\"code\":\"Authorization_RequestDenied\",\"message\":\"Insufficient privileges to complete the operation.\",\"innerError\":{\"client-request-id\":\"20d8fa8f-f796-4f33-b2ad-81f242a3a45f\",\"date\":\"2020-12-11T19:33:38\",\"request-id\":\"20d8fa8f-f796-4f33-b2ad-81f242a3a45f\"}}}"}
group-sync-operator-controller-manager-7db84b854f-hccw7 manager github.com/go-logr/zapr.(*zapLogger).Error
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
group-sync-operator-controller-manager-7db84b854f-hccw7 manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:237
group-sync-operator-controller-manager-7db84b854f-hccw7 manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209
group-sync-operator-controller-manager-7db84b854f-hccw7 manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188
group-sync-operator-controller-manager-7db84b854f-hccw7 manager k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
group-sync-operator-controller-manager-7db84b854f-hccw7 manager k8s.io/apimachinery/pkg/util/wait.BackoffUntil
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
group-sync-operator-controller-manager-7db84b854f-hccw7 manager k8s.io/apimachinery/pkg/util/wait.JitterUntil
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
group-sync-operator-controller-manager-7db84b854f-hccw7 manager k8s.io/apimachinery/pkg/util/wait.Until
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90

Also: did/do you consider supporting the use of client certificates for authenticating towards AAD?

[keycloak] users synchronization

Hi,

I have encountered a bug:

Info
I have installed this operator (v0.0.12) and the Keycloak/RHSSO operators on OCP 4.x.
Keycloak syncs with an AD server and gets the needed users and groups.
The group-sync operator syncs all the groups from Keycloak into OCP.

Bug
Only the first 16-22 groups are populated with users.
I have run the same manifest many times. On the first run, it synced 22 groups with users. Then I deleted everything and reran the manifest; it imported fewer than 20 groups with users. I have more than 100 groups, each with users (no empty groups). I would like to point out that the groups themselves were created on every run/test I made.
In addition, the object status was marked as successful on every run/test.

Option to leave group metadata unchanged on update

I've been testing the group-sync-operator in a PoC with Azure AD integration.
The requirement I've been working to requires the groups created by the group-sync-operator to be augmented with metadata, e.g. labels. Once these groups are augmented, we are then using the namespace-configuration-operator to create namespaces and RBAC based on a GroupConfig object.

The problem is, when the group-sync-operator runs again (I have this running on a cron schedule), it removes any metadata added.
I've found a workaround, which is to change the "group-sync-operator.redhat-cop.io/sync-provider" label; this causes the group-sync-operator to ignore this group when updating. The problem with this, though, is that any user additions/removals for this group are also ignored.

So I'm proposing a change to the group-sync-operator to leave the group untouched if it already exists, and just update the users associated with the group.
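
A rough sketch of the proposed update semantics (not current operator behaviour; the types are illustrative stand-ins for user.openshift.io Groups): when a group already exists, only its membership is replaced and externally added metadata is left alone.

package main

import "fmt"

// openshiftGroup is an illustrative stand-in for a user.openshift.io/v1 Group.
type openshiftGroup struct {
    Labels      map[string]string
    Annotations map[string]string
    Users       []string
}

// mergeSync replaces only the user list from the provider; labels and
// annotations added by other tooling are preserved.
func mergeSync(existing *openshiftGroup, providerUsers []string) {
    existing.Users = providerUsers
}

func main() {
    g := &openshiftGroup{
        Labels:      map[string]string{"team": "platform"}, // added after the last sync
        Annotations: map[string]string{"owner": "poc"},
        Users:       []string{"alice"},
    }
    mergeSync(g, []string{"alice", "bob"})
    fmt.Printf("%+v\n", *g) // metadata preserved, users updated
}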

Okta provider only supports 20 groups

The Okta provider will not sync more than 20 groups at a time. Due to the way the OktaClient is configured, the request to the groups API returns a paginated list. This causes only the first page of groups to be added, which by default is 20 entries.
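
The fix is to keep following the pagination until every page has been consumed rather than stopping after the first page of 20. A generic sketch of that loop (the page type and field names are hypothetical, not the Okta SDK's actual API):

package main

import "fmt"

// page is an illustrative stand-in for a paginated API response.
type page struct {
    Groups []string
    Next   *page
}

// listAllGroups walks every page instead of returning only the first one.
func listAllGroups(first *page) []string {
    var all []string
    for p := first; p != nil; p = p.Next {
        all = append(all, p.Groups...)
    }
    return all
}

func main() {
    p2 := &page{Groups: []string{"group-21", "group-22"}}
    p1 := &page{Groups: []string{"group-1", "group-20"}, Next: p2}
    fmt.Println(len(listAllGroups(p1)), "groups collected")
}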

[Keycloak] Groups not populated with users

EDIT: of course, shortly after the creation of this issue, I stumbled upon the probable solution: I added realm-admin to the sync user's role mappings in RH-SSO, and users were added to groups in OpenShift. From what little playing I have done, it appears the sync user also requires view-users besides query-users and query-groups. Maybe this is something recent?


Using the Keycloak provider to link with Red Hat SSO, groups within the configured realm are synchronised as expected; however, group membership is not.

We have users and groups in RH-SSO. Users are part of one or more groups. The user used for synchronisation has query-groups and query-users as assigned roles.

From what I understand from people we consulted, users that login through RH-SSO should automatically be added to a group by the operator (I assume when it runs depending on its schedule, or when does this happen exactly?)

However, the groups that get synchronised by the operator stay empty.
Even when manually adding a user to the group within OpenShift with oc adm groups add-users $group $user1, whenever the operator synchronises again, it will just remove the user, even though the user is part of that group within RH-SSO.

Looking at the relevant code, and confirming via gocloak, users should be added to groups automatically within OpenShift.

The operator logs do not appear to be very verbose on this, and there doesn't appear to be a debug mode or verbosity option.

You can find our GroupSync and OAuth objects here. This is what we use with oc apply. We have tried all the available scopes; they don't make a difference with regard to this problem.

What are we missing?

add support for synching group labels/annotations

Currently, groups created by the sync operator do not carry metadata from the source IDP.
Add support for carrying over additional metadata in the shape of annotations and/or labels.
This is probably going to be a provider-dependent mapping.

Upgrade fails

error validating existing CRs against new CRD's schema: groupsyncs.redhatcop.redhat.io: error validating custom resource against new schema &apiextensions.CustomResourceValidation{OpenAPIV3Schema:(*apiextensions.JSONSchemaProps)(0xc00365e200)}: [].status.conditions.reason: Invalid value: "": status.conditions.reason in body should match '^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$'

from 0.0.7 to 0.0.8

Error with LDAP group sync

The error below, "util unable to update status", is always reported by the group sync operator running version 0.0.11:

2021-03-22T14:22:35.060Z INFO controllers.GroupSync Beginning Sync {"groupsync": "group-sync-operator/ldap-groupsync", "Provider": "ldap"}

2021-03-22T14:30:05.664Z INFO controllers.GroupSync Sync Completed Successfully {"groupsync": "group-sync-operator/ldap-groupsync", "Provider": "ldap", "Groups Created or Updated": 7274}

2021-03-22T14:30:05.668Z ERROR util unable to update status {"error": "GroupSync.redhatcop.redhat.io "ldap-groupsync" is invalid: status.conditions.reason: Invalid value: "": status.conditions.reason in body should match '^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$'"}
github.com/go-logr/zapr.(*zapLogger).Error
    /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132
github.com/redhat-cop/operator-utils/pkg/util.(*ReconcilerBase).ManageSuccess
    /go/pkg/mod/github.com/redhat-cop/[email protected]/pkg/util/reconciler.go:398
github.com/redhat-cop/group-sync-operator/controllers.(*GroupSyncReconciler).Reconcile
    /workspace/controllers/groupsync_controller.go:179
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:198
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.UntilWithContext
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99

2021-03-22T14:30:05.668Z ERROR controller-runtime.manager.controller.groupsync Reconciler error {"reconciler group": "redhatcop.redhat.io", "reconciler kind": "GroupSync", "name": "ldap-groupsync", "namespace": "group-sync-operator", "error": "GroupSync.redhatcop.redhat.io "ldap-groupsync" is invalid: status.conditions.reason: Invalid value: "": status.conditions.reason in body should match '^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$'"}
github.com/go-logr/zapr.(*zapLogger).Error
    /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:267
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:198
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.UntilWithContext
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99

add support for nested groups

When groups are nested, add a standard annotation to the child group indicating the parent group, for example: parent: <parent-group>.
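
A tiny Go sketch of the request (the annotation key is hypothetical, not an existing operator convention): for each child group, record its parent in a standard annotation.

package main

import "fmt"

// annotateNestedGroups returns, for each child group, the annotation that
// records its parent group.
func annotateNestedGroups(parentToChildren map[string][]string) map[string]map[string]string {
    annotations := map[string]map[string]string{}
    for parent, children := range parentToChildren {
        for _, child := range children {
            annotations[child] = map[string]string{
                "redhatcop.redhat.io/parent-group": parent, // hypothetical key
            }
        }
    }
    return annotations
}

func main() {
    fmt.Println(annotateNestedGroups(map[string][]string{
        "platform": {"platform-dev", "platform-ops"},
    }))
}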

Provide a clear single status when the operator is installed (automation)

For automation purposes, when the operator is installed and ready to create instances of GroupSyncs, provide an easily fetchable status of the operator.

For example, you create and install the operator in an automated process. Before creating GroupSync instances, you want to make sure the operator is cleanly installed and ready to be used. Therefore, you fetch the status from the YAML, looking for it to be 'Installed' or something similarly meaningful.

[azure-ad] user members show as nil

When I sync the groups, the members are all nil:

users:
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>

Add unit testing

To improve the overall quality of the operator, add unit testing

Add support for GitHub oauth

For clusters where there is an identity provider tied to a GitHub organization, it would be nice to be able to sync the teams under that organization to groups in OpenShift.

[Bug Keycloak/RHSSO] No. users > 100

We have been testing group-sync-operator v0.0.6 with OpenShift 4.5, and it seems there is a bug with the Keycloak provider. When the number of users is higher than 100, only the first 100 are shown in OpenShift.
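
This looks like the same class of problem as the Okta pagination issue above: the Keycloak admin API pages user and member listings (commonly 100 entries per page), so only the first page is read. A generic sketch of the offset-based paging loop that would collect everything (the fetch function is hypothetical, not the gocloak API):

package main

import "fmt"

// fetchMembersPage is a hypothetical stand-in for a paged "get group members"
// call that accepts first/max parameters.
func fetchMembersPage(all []string, first, max int) []string {
    if first >= len(all) {
        return nil
    }
    end := first + max
    if end > len(all) {
        end = len(all)
    }
    return all[first:end]
}

// listAllMembers keeps requesting pages, advancing the offset, until an empty
// page is returned, instead of taking only the first 100 users.
func listAllMembers(all []string, pageSize int) []string {
    var members []string
    for first := 0; ; first += pageSize {
        page := fetchMembersPage(all, first, pageSize)
        if len(page) == 0 {
            break
        }
        members = append(members, page...)
    }
    return members
}

func main() {
    users := make([]string, 250)
    for i := range users {
        users[i] = fmt.Sprintf("user-%d", i)
    }
    fmt.Println(len(listAllMembers(users, 100)), "members collected")
}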

Expose Runtime Prometheus Metrics

The operator currently exposes a default set of Prometheus metrics provided by kubebuilder/Operator SDK. Introduce metrics pertaining to runtime operation to provide insights into the current state of the operator.

Possible metrics worth exposing

  • Number of groups synchronized per provider
  • Time of next scheduled synchronization
  • Time of last success per provider
  • Time of last failure per provider
  • Number of successes
  • Number of failures

List of existing related issues:

#72
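
For context, custom metrics like the ones listed above can be registered with the metrics registry that kubebuilder/Operator SDK projects already serve on /metrics. The sketch below is illustrative only; the metric names, labels, and helper function are suggestions, not metrics the operator currently exposes.

package syncmetrics

import (
    "github.com/prometheus/client_golang/prometheus"
    "sigs.k8s.io/controller-runtime/pkg/metrics"
)

var (
    groupsSynchronized = prometheus.NewGaugeVec(prometheus.GaugeOpts{
        Name: "group_sync_groups_synchronized",
        Help: "Number of groups synchronized in the last run, per provider.",
    }, []string{"provider"})

    syncFailures = prometheus.NewCounterVec(prometheus.CounterOpts{
        Name: "group_sync_failures_total",
        Help: "Total number of failed synchronization attempts, per provider.",
    }, []string{"provider"})
)

func init() {
    // Anything registered here is exposed by controller-runtime on /metrics.
    metrics.Registry.MustRegister(groupsSynchronized, syncFailures)
}

// RecordSync would be called at the end of each provider sync.
func RecordSync(provider string, groups int, err error) {
    if err != nil {
        syncFailures.WithLabelValues(provider).Inc()
        return
    }
    groupsSynchronized.WithLabelValues(provider).Set(float64(groups))
}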

[Azure] Change from Azure AD Graph API to Microsoft Graph API

The Azure Active Directory Graph API has been deprecated with the Microsoft Graph API as its replacement. The new API also provides more granular permissions (i.e. GroupMember.Read.All instead of requiring Directory.Read.All)

This issue is to track changing the Azure syncer to use the new API.

gh provider, map users via scim username

Our use case:

  1. we use AAD OIDC for user auth in OCP (using upn as userid)
  2. we do not auth with GH as OIDC provider, as we want users to have singular identities, and the user-audience of OCP is wider than GH.
  3. we have various teams in GH, which would map nicely over to groups in OCP and hence roles

However, syncing GH teams as groups with the current impl. would use the gh login id, while we'd like to list team-members, find their linked SCIM-identity, and then use the Username attribute, which would match upn.

Would you accept such an implementation? Could fit into the same package, and be toggled via a flag?

sync from openshift

I'm trying to sync groups to OpenShift from RHSSO.
I deployed the operator from the Helm chart.
Here is the configuration file:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: keycloak-groupsync
spec:
  providers:
  - name: keycloak
    keycloak:
      realm: ocp
      credentialsSecret:
        name: keycloak-group-sync
        namespace: group-sync-operator
      caSecret:
        name: rhsso-group-sync
        namespace: group-sync-operator
        key: root-ca.crt
      url: https://rhsso.apps.cluster.fqdn
      loginRealm: ocp
      scope: sub
  schedule: "*/15 * * * *"

From the logs of the "group-sync-operator" container I get the error:

ERROR          controller-runtime.manager.controller.groupsync Reconciler error      {"reconciler group":  "redhatcop.redhat.io", "reconciler kind": "GroupSync",  "name":   "rhsso-groupsync",  "namespace":  "group-sync-operator",  "error":  "could not get token"} 
github.com/go-logr/zapr.(*zapLogger).Error
               /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
              /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:267
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
              /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1
              /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1
              /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
              /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
              /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
              /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext
              /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.UntilWithContext
              /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99

Also, my environment is restricted.

Support pruning

Add support for pruning of groups when they are removed from the provider

  • Should be provider specific
  • Pruning should be disabled by default
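
As a rough illustration (a sketch under the assumption that the operator can identify the groups it previously created for a provider, for example via its provider label), pruning amounts to a set difference between the groups currently in OpenShift and the groups returned by the provider, gated behind an opt-in setting:

package main

import "fmt"

// pruneCandidates returns groups that exist in OpenShift for this provider but
// are no longer returned by the provider.
func pruneCandidates(openshiftGroups, providerGroups []string) []string {
    current := map[string]bool{}
    for _, g := range providerGroups {
        current[g] = true
    }
    var prune []string
    for _, g := range openshiftGroups {
        if !current[g] {
            prune = append(prune, g)
        }
    }
    return prune
}

func main() {
    existing := []string{"team-a", "team-b", "team-removed"}
    fromProvider := []string{"team-a", "team-b"}
    fmt.Println("would prune:", pruneCandidates(existing, fromProvider))
}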

Authorization management!

First of all, thanks for this great tool; it makes Red Hat admins' lives easier. I just completed a PoC with this operator to sync LDAP groups from AD and all was good. However, I'm not sure if something is missing, or maybe I missed this part: after the sync completed I got the correct groups on the OpenShift side with the right users inside, which was great. However, I'm not sure how we can manage authorization here, i.e. binding OCP roles such as admin, view, or read to the groups that were just created by the sync. Without this step, users can log in and be in the right group, but they cannot do anything because they do not have any permissions on the cluster. Thanks in advance, I really appreciate the great work done to bring this operator to OCP admins.

Console -- Providers->LDAP Provider->"Secret Containing the Credential" improper label class of blacklist

In the web console for LDAP -> Provider, the input box presented as "Secret Containing the Credentials" is actually the input for the blacklist, resulting in a secret name being placed into the blacklist config.

<label class="" for="root_spec_providers_0_ldap_blacklist_accordion-content">Secret Containing the Credentials</label>
<input class="pf-c-form-control" id="root_spec_providers_0_ldap_blacklist_0" required="" type="text" value="">

Resulting in:

spec:
  providers:
    - ldap:
        blacklist:
          - testbadconfig

I'm not sure where the console code is, so I'm opening the issue here.

error retrieving resource lock

The following error occurs when deploying v0.0.8

E1231 06:31:47.702125       1 leaderelection.go:325] error retrieving resource lock group-sync-operator/085c249a.redhat.io: leases.coordination.k8s.io "085c249a.redhat.io" is forbidden: User "system:serviceaccount:group-sync-operator:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "group-sync-operator"
