group-sync-operator's Introduction

Group Sync Operator


Synchronizes groups from external providers into OpenShift

Overview

The OpenShift Container Platform contains functionality to synchronize groups found in external identity providers into the platform. Currently, the functionality included in OpenShift is limited to synchronizing LDAP groups only. This operator integrates with additional external providers to extend that capability.

Group Synchronization is facilitated by creating a GroupSync resource. The following describes the high level schema for this resource:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: example-groupsync
spec:
  providers:
    - <One or more providers to synchronize>

Deploying the Operator

Use the following steps to deploy the operator to an OpenShift cluster:

  1. Clone the project locally and change into the project directory:
git clone https://github.com/redhat-cop/group-sync-operator.git
cd group-sync-operator
  2. Deploy the Operator:
make deploy IMG=quay.io/redhat-cop/group-sync-operator:latest

Note: The make deploy command will execute the manifests target, which requires additional build tools to be available. This target can be skipped by including -o manifests in the command above.
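
For example, assuming the required build tools are not installed locally, the manifests target can be skipped with the variant described in the note above:

make deploy -o manifests IMG=quay.io/redhat-cop/group-sync-operator:latest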

Authentication

In most cases, authentication details must be provided in order to communicate with providers. The required values are specific to each provider. For supported providers, the secret can be referenced in credentialsSecret by the name and namespace where it has been created, as shown below:

credentialsSecret:
  name: <secret_name>
  namespace: <secret_namespace>

Providers

Integration with external systems is made possible through a set of pluggable external providers. The following providers are currently supported:

  • Azure Active Directory
  • GitHub
  • GitLab
  • Keycloak
  • LDAP
  • Okta

The following sections describe the configuration options available for each provider.

Azure

Groups contained within Azure Active Directory can be synchronized into OpenShift. The following table describes the set of configuration options for the Azure provider:

Name | Description | Defaults | Required
authorityHost | Azure Active Directory endpoint | https://login.microsoftonline.com | No
baseGroups | List of groups to start searching from instead of listing all groups in the directory | | No
credentialsSecret | Name of the secret containing authentication details (see below) | | Yes
filter | Graph API filter | | No
groups | List of groups to filter against | | No
userNameAttributes | Fields on a user record to use as the user name | userPrincipalName | No
prune | Whether to prune groups that are no longer in Azure | false | No

The following is an example of a minimal configuration that can be applied to integrate with an Azure provider:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: azure-groupsync
spec:
  providers:
  - name: azure
    azure:
      credentialsSecret:
        name: azure-group-sync
        namespace: group-sync-operator
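
A fuller sketch using the optional fields from the table above. The group name, Graph API filter, and attribute values are illustrative only and should be adapted (and verified against the CRD) for your directory:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: azure-groupsync
spec:
  providers:
  - name: azure
    azure:
      credentialsSecret:
        name: azure-group-sync
        namespace: group-sync-operator
      baseGroups:
        - ocp-admins
      filter: "startswith(displayName,'ocp-')"
      userNameAttributes:
        - userPrincipalName
      prune: true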

Authenticating to Azure

Authentication to Azure can be performed using an App Registration with access to query group information in Azure Active Directory.

The App Registration must be granted the following Microsoft Graph API permissions:

  • Group.Read.All
  • GroupMember.Read.All
  • User.Read.All

A secret must be created in the same namespace that contains the GroupSync resource. The following keys must be defined in the secret:

  • AZURE_TENANT_ID - Tenant ID
  • AZURE_CLIENT_ID - Client ID
  • AZURE_CLIENT_SECRET - Client Secret

The secret can be created by executing the following command:

oc create secret generic azure-group-sync --from-literal=AZURE_TENANT_ID=<AZURE_TENANT_ID> --from-literal=AZURE_CLIENT_ID=<AZURE_CLIENT_ID> --from-literal=AZURE_CLIENT_SECRET=<AZURE_CLIENT_SECRET>

GitHub

Teams stored within a GitHub organization can be synchronized into OpenShift. The following table describes the set of configuration options for the GitHub provider:

Name | Description | Defaults | Required
ca | Reference to a resource containing an SSL certificate to use for communication (see below) | | No
caSecret | DEPRECATED Reference to a secret containing an SSL certificate to use for communication (see below) | | No
credentialsSecret | Reference to a secret containing authentication details (see below) | | Yes
insecure | Ignore SSL verification | false | No
organization | Organization to synchronize against | | Yes
teams | List of teams to filter against | | No
url | Base URL for the GitHub or GitHub Enterprise host (must contain a trailing slash) | | No
prune | Whether to prune groups that are no longer in GitHub | false | No

The following is an example of a minimal configuration that can be applied to integrate with a GitHub provider:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: github-groupsync
spec:
  providers:
  - name: github
    github:
      organization: ocp
      credentialsSecret:
        name: github-group-sync
        namespace: group-sync-operator
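
A fuller sketch for a GitHub Enterprise host using the optional teams, url, and prune fields from the table above. The hostname and team name are illustrative, and note the trailing slash required on url:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: github-groupsync
spec:
  providers:
  - name: github
    github:
      organization: ocp
      teams:
        - platform-admins
      url: https://github.example.com/
      prune: true
      credentialsSecret:
        name: github-group-sync
        namespace: group-sync-operator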

Authenticating to GitHub

Authentication to GitHub can be performed using an OAuth Personal Access Token or as a GitHub App using a private key and an appId. The OAuth Personal Access Token needs the admin:org/read:org scope. A secret must be created in the same namespace that contains the GroupSync resource.

OAuth

When using an OAuth token, the following key is required:

  • token - OAuth token

The secret can be created by executing the following command:

oc create secret generic github-group-sync --from-literal=token=<token>
As a GitHub app

When authenticating as a GitHub App, the following keys are required:

  • privateKey - Private key generated for the GitHub App
  • appId - App ID of the GitHub App
First create a GitHub app

In GitHub, go to Developer settings -> GitHub Apps.

  • Create a new app; it does not need webhook callbacks.
  • Generate a private key and download it.
  • Under "Permissions and events", the app needs read-only access to the "Members" permission in the "Organization" section. NOTE: If you enable mapByScimId, this permission needs to be Read & Write, even though the operator only performs read-only operations. The reason for this is the use of the v4 GraphQL API endpoint.
  • Take note of the "App ID" as you will need it later.
  • Install the app to your organization.
Create the secret

The secret can be created by executing the following command:

oc create secret generic github-group-sync --from-literal=appId=<theAppId> --from-file=privateKey=</path/to/thefile>

GitLab

Groups stored within GitLab can be synchronized into OpenShift. The following table describes the set of configuration options for the GitLab provider:

Name | Description | Defaults | Required
ca | Reference to a resource containing an SSL certificate to use for communication (see below) | | No
caSecret | DEPRECATED Reference to a secret containing an SSL certificate to use for communication (see below) | | No
credentialsSecret | Reference to a secret containing authentication details (see below) | | Yes
insecure | Ignore SSL verification | false | No
groups | List of groups to filter against | | No
prune | Whether to prune groups that are no longer in GitLab | false | No
scope | Scope for group synchronization. Options are one for one level or sub to include subgroups | sub | No
url | Base URL for the GitLab instance | https://gitlab.com | No

The following is an example of a minimal configuration that can be applied to integrate with a GitLab provider:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: gitlab-groupsync
spec:
  providers:
  - name: gitlab
    gitlab:
      credentialsSecret:
        name: gitlab-group-sync
        namespace: group-sync-operator
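
A sketch for a self-hosted GitLab instance using the optional fields from the table above; the hostname and group name are illustrative:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: gitlab-groupsync
spec:
  providers:
  - name: gitlab
    gitlab:
      credentialsSecret:
        name: gitlab-group-sync
        namespace: group-sync-operator
      url: https://gitlab.example.com
      groups:
        - platform-team
      scope: one
      prune: true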

Authenticating to GitLab

Authentication to GitLab can be performed using a token or a username and password (Note: 2FA is not supported). A secret must be created in the same namespace that contains the GroupSync resource.

When using a token, the following token types are supported:

  • Personal Access Token
  • OAuth Token
  • Job Token

The following key is required:

  • token - OAuth token

Optionally, the tokenType key can be specified to indicate the type of token being provided from the following values:

  • OAuth - oauth
  • Personal Access Token - personal
  • Job Token - job

If no tokenType is provided, oauth is used by default.

The secret can be created by executing the following command:

oc create secret generic gitlab-group-sync --from-literal=token=<token>

To specify a token type, such as a Personal Access Token, the following command can be executed:

oc create secret generic gitlab-group-sync --from-literal=token=<token> --from-literal=tokenType=personal

The following keys are required for username and password:

  • username - Username for authenticating with GitLab
  • password - Password for authenticating with GitLab

The secret can be created by executing the following command:

oc create secret generic gitlab-group-sync --from-literal=username=<username> --from-literal=password=<password>

LDAP

Groups stored within an LDAP server can be synchronized into OpenShift. The LDAP provider implements the same functionality as OpenShift's built-in Syncing LDAP groups feature and makes use of libraries from the OpenShift command line tool to streamline migration to this operator-based implementation.

The configurations of the three primary schemas (rfc2307, activeDirectory and augmentedActiveDirectory) can be directly migrated as is without any modification.

Name | Description | Defaults | Required
ca | Reference to a resource containing an SSL certificate to use for communication (see below) | | No
caSecret | DEPRECATED Reference to a secret containing an SSL certificate to use for communication (see below) | | No
credentialsSecret | Reference to a secret containing authentication details (see below) | | No
insecure | Ignore SSL verification | false | No
groupUIDNameMapping | User defined name mapping | | No
rfc2307 | Configuration using the rfc2307 schema | | No
activeDirectory | Configuration using the activeDirectory schema | | No
augmentedActiveDirectory | Configuration using the augmentedActiveDirectory schema | | No
url | Connection URL for the LDAP server | ldap://ldapserver:389 | No
whitelist | Explicit list of groups to synchronize | | No
blacklist | Explicit list of groups to not synchronize | | No
prune | Whether to prune groups that are no longer in LDAP | false | No

The following is an example using the rfc2307 schema:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: ldap-groupsync
spec:
  providers:
  - ldap:
      credentialsSecret:
        name: ldap-group-sync
        namespace: group-sync-operator
      insecure: true
      rfc2307:
        groupMembershipAttributes:
        - member
        groupNameAttributes:
        - cn
        groupUIDAttribute: dn
        groupsQuery:
          baseDN: ou=Groups,dc=example,dc=com
          derefAliases: never
          filter: (objectClass=groupofnames)
          scope: sub
        tolerateMemberNotFoundErrors: true
        tolerateMemberOutOfScopeErrors: true
        userNameAttributes:
        - cn
        userUIDAttribute: dn
        usersQuery:
          baseDN: ou=Users,dc=example,dc=com
          derefAliases: never
          scope: sub
      url: ldap://ldapserver:389
    name: ldap

The examples provided in the OpenShift documentation referenced previously can be used to construct the schemas for the other LDAP synchronization types.
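
As a sketch, an activeDirectory configuration adapted from the OpenShift LDAP sync documentation might look like the following; the attribute names and queries are illustrative and should be validated against your directory and the CRD schema:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: ldap-ad-groupsync
spec:
  providers:
  - ldap:
      credentialsSecret:
        name: ldap-group-sync
        namespace: group-sync-operator
      activeDirectory:
        usersQuery:
          baseDN: ou=Users,dc=example,dc=com
          derefAliases: never
          scope: sub
          filter: (objectClass=person)
        userNameAttributes:
        - sAMAccountName
        groupMembershipAttributes:
        - memberOf
      url: ldap://ldapserver:389
    name: ldap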

Authenticating to LDAP

If authentication is required in order to communicate with the LDAP server, a secret should be created in the same namespace that contains the GroupSync resource. The following keys can be defined:

  • username - Username (Bind DN) for authenticating with the LDAP server
  • password - Password for authenticating with the LDAP server

The secret can be created by executing the following command:

oc create secret generic ldap-group-sync --from-literal=username=<username> --from-literal=password=<password>

Whitelists and Blacklists

Groups can be explicitly whitelisted or blacklisted in order to control the groups that are eligible to be synchronized into OpenShift. When running LDAP group synchronization using the command line, this configuration is referenced via separate files, but these are instead specified in the blacklist and whitelist properties as shown below:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: ldap-groupsync
spec:
  providers:
  - ldap:
...
      whitelist:
      - cn=Online Corporate Banking,ou=Groups,dc=example,dc=com
...
    name: ldap

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: ldap-groupsync
spec:
  providers:
  - ldap:
...
      blacklist:
      - cn=Finance,ou=Groups,dc=example,dc=com
...
    name: ldap

Keycloak

Groups stored within Keycloak can be synchronized into OpenShift. The following table describes the set of configuration options for the Keycloak provider:

Name | Description | Defaults | Required
ca | Reference to a resource containing an SSL certificate to use for communication (see below) | | No
caSecret | DEPRECATED Reference to a secret containing an SSL certificate to use for communication (see below) | | No
credentialsSecret | Reference to a secret containing authentication details (see below) | | Yes
groups | List of groups to filter against | | No
insecure | Ignore SSL verification | false | No
loginRealm | Realm to authenticate against | master | No
realm | Realm to synchronize | | Yes
scope | Scope for group synchronization. Options are one for one level or sub to include subgroups | sub | No
url | Base URL for the Keycloak server. Older versions (<17.0.0), including Red Hat SSO, should include the context path /auth appended to the hostname | | Yes
prune | Whether to prune groups that are no longer in Keycloak | false | No

The following is an example of a minimal configuration that can be applied to integrate with a Keycloak provider:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: keycloak-groupsync
spec:
  providers:
  - name: keycloak
    keycloak:
      realm: ocp
      credentialsSecret:
        name: keycloak-group-sync
        namespace: group-sync-operator
      url: https://keycloak-keycloak-operator.apps.openshift.com/auth
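
For Keycloak 17.0.0 or newer (which no longer uses the /auth context path by default), the same configuration applies with the bare hostname; the URL below is illustrative:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: keycloak-groupsync
spec:
  providers:
  - name: keycloak
    keycloak:
      realm: ocp
      credentialsSecret:
        name: keycloak-group-sync
        namespace: group-sync-operator
      url: https://keycloak.apps.openshift.com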

Authenticating to Keycloak

A user with rights to query Keycloak groups must be available. The following must be configured for the user:

  • Password must be set (Temporary option unselected) on the Credentials tab
  • On the Role Mappings tab, select master-realm or realm-management next to the Client Roles dropdown and then select query-groups, query-users, and view-users.

A secret must be created in the same namespace that contains the GroupSync resource. It must contain the following keys for the user previously created:

  • username - Username for authenticating with Keycloak
  • password - Password for authenticating with Keycloak

The secret can be created by executing the following command:

oc create secret generic keycloak-group-sync --from-literal=username=<username> --from-literal=password=<password>

Okta

Okta Groups assigned to Okta Applications can be synchronized into OpenShift. The developer docs for the Okta API that the Okta Syncer uses can be found here. The following table describes the set of configuration options for the Okta provider:

Name | Description | Defaults | Required
credentialsSecret | Reference to a secret containing authentication details (see below) | '' | Yes
groups | List of groups to filter against | nil | No
url | Okta URL, found under "Okta Domain" in your application settings (must contain the scheme and a trailing slash) | '' | Yes
appId | Okta Application (Client) ID that is attached to the application groups you wish to sync | '' | Yes
extractLoginUsername | Whether to extract the username from the Okta login | false | No
profileKey | Attribute field on the Okta user profile to use as the identity | 'login' | No
groupLimit | Maximum number of groups to retrieve from Okta per request | 1000 | No
prune | Whether to prune groups that are no longer in Okta | false | No

The following is an example of a minimal configuration that can be applied to integrate with an Okta provider:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: okta-sync
spec:
  providers:
    - name: okta
      okta:
        credentialsSecret:
          name: okta-api-token
          namespace: group-sync-operator
        url: "https://example.okta.com/"
        appId: xxxxxxxxxxxxxxxxxxxx
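
A fuller sketch using the optional fields from the table above; the group names and field values are illustrative:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: okta-sync
spec:
  providers:
    - name: okta
      okta:
        credentialsSecret:
          name: okta-api-token
          namespace: group-sync-operator
        url: "https://example.okta.com/"
        appId: xxxxxxxxxxxxxxxxxxxx
        groups:
          - ocp-admins
          - ocp-developers
        profileKey: email
        groupLimit: 500
        prune: true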

Authenticating to Okta

A secret must be created in the same namespace as the group-sync-operator pod. It must contain the following key:

  • okta-api-token - Okta API Token for interacting with Okta

The secret can be created by executing the following command:

oc create secret generic okta-api-token --from-literal=okta-api-token=<OKTA_API_TOKEN> -n group-sync-operator

Support for Additional Metadata (Beta)

Additional metadata based on the Keycloak group is also added to the OpenShift groups as annotations (see the example query after this list), including:

  • Parent/child relationship between groups and their subgroups
  • Group attributes
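
After a sync has run, the annotations applied to a synchronized group can be inspected with a standard query (the group name is a placeholder):

oc get group <group_name> -o yaml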

CA Certificates

Several providers allow certificates to be provided in either a ConfigMap or a Secret in order to communicate securely with the target host, through the use of a property called ca.

The certificate can be added to a Secret called keycloak-certs under the key ca.crt using the following command:

oc create secret generic keycloak-certs --from-file=ca.crt=<file>

An example of how the CA certificate can be added to the Keycloak provider is shown below:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: keycloak-groupsync
spec:
  providers:
  - name: keycloak
    keycloak:
      realm: ocp
      credentialsSecret:
        name: keycloak-group-sync
        namespace: group-sync-operator
      ca:
        kind: Secret
        name: keycloak-certs
        namespace: group-sync-operator
        key: ca.crt
      url: https://keycloak-keycloak-operator.apps.openshift.com

Alternatively, a ConfigMap can be used instead of a Secret. This is useful when using the Certificate injection using Operators feature.

The following command can be used to create a ConfigMap containing the certificate:

oc create configmap keycloak-certs --from-file=ca.crt=<file>

An example of how the CA certificate can be added to the Keycloak provider is shown below:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: keycloak-groupsync
spec:
  providers:
  - name: keycloak
    keycloak:
      realm: ocp
      credentialsSecret:
        name: keycloak-group-sync
        namespace: group-sync-operator
      ca:
        kind: ConfigMap
        name: keycloak-certs
        namespace: group-sync-operator
        key: ca.crt
      url: https://keycloak-keycloak-operator.apps.openshift.com

Scheduled Execution

A cron-style expression can be specified to control when a synchronization event will occur. The following specifies that a synchronization should occur nightly at 3 AM:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: keycloak-groupsync
spec:
  schedule: "0 3 * * *"
  providers:
  - ...

If a schedule is not provided, synchronization will occur only when the object is reconciled by the platform.
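
Any standard five-field cron expression should work; for example, this sketch synchronizes every 15 minutes:

schedule: "*/15 * * * *"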

Accessing Secrets and ConfigMaps in Other Namespaces

By default, the operator monitors resources in the namespace that it has been deployed within. This is defined by setting the WATCH_NAMESPACE environment variable. Support is available for accessing ConfigMaps and Secrets in other namespaces so that existing resources may be utilized as desired.

To enable the operator to access resources across multiple namespaces, set the environment variable to a comma-separated list of namespaces that includes the namespace the operator is deployed within and any additional namespaces that are desired.

To make use of this feature when deploying through the Operator Lifecycle Manager, set the following configuration on the Subscription resource:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: group-sync-operator
  namespace: group-sync-operator
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: group-sync-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
      - name: WATCH_NAMESPACE
        value: "<comma separated list of namespaces>"

Deploying the Operator

This is a namespace-level operator that can be deployed in any namespace; however, the group-sync-operator namespace is recommended.

It is recommended to deploy this operator via OperatorHub, but you can also deploy it using Helm.

Deploying from OperatorHub

If you want to utilize the Operator Lifecycle Manager (OLM) to install this operator, you can do so in two ways: from the UI or the CLI.

Multiarch Support

Arch | Support
amd64 | ✅
arm64 | ✅
ppc64le | ✅
s390x | ✅

Deploying from OperatorHub UI

  • If you would like to launch this operator from the UI, you'll need to navigate to the OperatorHub tab in the console.
  • Search for this operator by name: group sync operator. This will then return an item for our operator and you can select it to get started. Once you've arrived here, you'll be presented with an option to install, which will begin the process.
  • After clicking the install button, you are presented with the namespace that the operator will be installed in. A suggested name of group-sync-operator is presented and can be created automatically at installation time.
  • Select the installation strategy you would like to proceed with (Automatic or Manual).
  • Once you've made your selection, you can select Subscribe and the installation will begin. After a few moments you can go ahead and check your namespace and you should see the operator running.

Deploying from OperatorHub using CLI

If you'd like to launch this operator from the command line, you can use the manifests contained in this repository by running the following:

oc new-project group-sync-operator
oc apply -f config/operatorhub -n group-sync-operator

This will create the appropriate OperatorGroup and Subscription and will trigger OLM to launch the operator in the specified namespace.

Deploying with Helm

Here are the instructions to install the latest release with Helm.

oc new-project group-sync-operator
helm repo add group-sync-operator https://redhat-cop.github.io/group-sync-operator
helm repo update
helm install group-sync-operator group-sync-operator/group-sync-operator

This can later be updated with the following commands:

helm repo update
helm upgrade group-sync-operator group-sync-operator/group-sync-operator

Metrics

Prometheus compatible metrics are exposed by the Operator and can be integrated into OpenShift's default cluster monitoring. To enable OpenShift cluster monitoring, label the namespace the operator is deployed in with the label openshift.io/cluster-monitoring="true".

oc label namespace <namespace> openshift.io/cluster-monitoring="true"

Test metrics

export operatorNamespace=group-sync-operator-local # or group-sync-operator
oc label namespace ${operatorNamespace} openshift.io/cluster-monitoring="true"
oc rsh -n openshift-monitoring -c prometheus prometheus-k8s-0 /bin/bash
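# The remaining commands run inside the prometheus-k8s-0 pod shell opened above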
export operatorNamespace=group-sync-operator-local # or group-sync-operator
curl -v -s -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://group-sync-operator-controller-manager-metrics-service.${operatorNamespace}.svc.cluster.local:8443/metrics
exit

Development

Running the operator locally

make install
export repo=redhatcopuser #replace with yours
docker login quay.io/$repo/group-sync-operator
make docker-build IMG=quay.io/$repo/group-sync-operator:latest
make docker-push IMG=quay.io/$repo/group-sync-operator:latest
oc new-project group-sync-operator-local
kustomize build ./config/local-development | oc apply -f - -n group-sync-operator-local
export token=$(oc serviceaccounts get-token 'group-sync-operator-controller-manager' -n group-sync-operator-local)
oc login --token ${token}
make run ENABLE_WEBHOOKS=false

Test helm chart locally

Define an image and tag. For example...

export imageRepository="quay.io/redhat-cop/group-sync-operator"
export imageTag="$(git -c 'versionsort.suffix=-' ls-remote --exit-code --refs --sort='version:refname' --tags https://github.com/redhat-cop/group-sync-operator.git '*.*.*' | tail --lines=1 | cut --delimiter='/' --fields=3)"

Deploy chart...

make helmchart IMG=${imageRepository} VERSION=${imageTag}
helm upgrade -i group-sync-operator-local charts/group-sync-operator -n group-sync-operator-local --create-namespace

Delete...

helm delete group-sync-operator-local -n group-sync-operator-local
kubectl delete -f charts/group-sync-operator/crds/crds.yaml

Building/Pushing the operator image

export repo=redhatcopuser #replace with yours
docker login quay.io/$repo/group-sync-operator
make docker-build IMG=quay.io/$repo/group-sync-operator:latest
make docker-push IMG=quay.io/$repo/group-sync-operator:latest

Deploy to OLM via bundle

make manifests
make bundle IMG=quay.io/$repo/group-sync-operator:latest
operator-sdk bundle validate ./bundle --select-optional name=operatorhub
make bundle-build BUNDLE_IMG=quay.io/$repo/group-sync-operator-bundle:latest
docker login quay.io/$repo/group-sync-operator-bundle
docker push quay.io/$repo/group-sync-operator-bundle:latest
operator-sdk bundle validate quay.io/$repo/group-sync-operator-bundle:latest --select-optional name=operatorhub
oc new-project group-sync-operator
oc label namespace group-sync-operator openshift.io/cluster-monitoring="true"
operator-sdk cleanup group-sync-operator -n group-sync-operator
operator-sdk run bundle -n group-sync-operator quay.io/$repo/group-sync-operator-bundle:latest

Releasing

git tag -a "<tagname>" -m "<commit message>"
git push upstream <tagname>

If you need to remove a release:

git tag -d <tagname>
git push upstream --delete <tagname>

If you need to "move" a release to the current master

git tag -f <tagname>
git push upstream -f <tagname>

Cleaning up

operator-sdk cleanup group-sync-operator -n group-sync-operator
oc delete operatorgroup operator-sdk-og
oc delete catalogsource group-sync-operator-catalog

group-sync-operator's People

Contributors

abelk-bw, davgordo, davidkarlsen, dependabot[bot], dweebo, erzhan46, garethahealy, itewk, joharder, jolbax, klofton-bw, lbac-redhat, liamkinne, mathianasj, michaelryanmcneill, pendagtp, raffaelespazzoli, rbaumgar, renovate[bot], sabre1041, tanalam2411, thekoma, trevorbox, tylerauerbeck, vianuevm


group-sync-operator's Issues

[keycloak] users synchronization

Hi,

I have encountered a bug:

Info
I have installed this operator v0.0.12 and the Keycloak/RHSSO operators on OCP v4.x.
The keycloak syncs with an AD server and gets the needed users and groups.
group-sync operator syncs all the groups from keycloak on the OCP.

Bug
Only the first 16-22 groups are populated with users.
I have run the same manifest many times. On the first run, it synced 22 groups with users. Then I deleted everything and reran the manifest, and it imported fewer than 20 groups with users. I have more than 100 groups, each with users (no empty groups). I would like to point out, though, that the groups themselves were created on every run/test I made.
In addition, the object status was marked as successful on every run/test.

`controller-gen` syntax error during deploy

I am attempting to deploy this operator in a 4.6.9 disconnected cluster in AWS GovCloud. I am getting the following error during the make deploy step.

oc new-project group-sync-operator

Now using project "group-sync-operator" on server "https://api.ocp4.example.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

➜  ~ oc project
Using project "group-sync-operator" on server "https://api.ocp4.example.com:6443".
➜  ~ git clone https://github.com/redhat-cop/group-sync-operator.git

Cloning into 'group-sync-operator'...
remote: Enumerating objects: 34, done.
remote: Counting objects: 100% (34/34), done.
remote: Compressing objects: 100% (29/29), done.
remote: Total 808 (delta 5), reused 21 (delta 4), pack-reused 774
Receiving objects: 100% (808/808), 419.25 KiB | 9.75 MiB/s, done.
Resolving deltas: 100% (365/365), done.
➜  ~ cd group-sync-operator
➜  group-sync-operator git:(master) make deploy
which: no controller-gen in (/home/etsmith/.zinit/polaris/bin:/home/etsmith/google-cloud-sdk/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/etsmith/bin:/home/etsmith/.composer/vendor/bin:/var/lib/snapd/snap/bin)
which: no kustomize in (/home/etsmith/.zinit/polaris/bin:/home/etsmith/google-cloud-sdk/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/etsmith/bin:/home/etsmith/.composer/vendor/bin:/var/lib/snapd/snap/bin)
go: creating new go.mod: module tmp
go: downloading sigs.k8s.io/controller-tools v0.3.0
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.3.0
go: downloading github.com/spf13/cobra v0.0.5
go: downloading sigs.k8s.io/yaml v1.2.0
go: downloading golang.org/x/tools v0.0.0-20190920225731-5eefd052ad72
go: downloading k8s.io/apimachinery v0.18.2
go: downloading github.com/gobuffalo/flect v0.2.0
go: downloading k8s.io/apiextensions-apiserver v0.18.2
go: downloading k8s.io/api v0.18.2
go: downloading github.com/fatih/color v1.7.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/inconshreveable/mousetrap v1.0.0
go: downloading gopkg.in/yaml.v2 v2.2.8
go: downloading github.com/gogo/protobuf v1.3.1
go: downloading k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89
go: downloading github.com/google/gofuzz v1.1.0
go: downloading gopkg.in/yaml.v3 v3.0.0-20190905181640-827449938966
go: downloading github.com/mattn/go-colorable v0.1.2
go: downloading k8s.io/klog v1.0.0
go: downloading sigs.k8s.io/structured-merge-diff/v3 v3.0.0
go: downloading gopkg.in/inf.v0 v0.9.1
go: downloading github.com/mattn/go-isatty v0.0.8
go: downloading golang.org/x/net v0.0.0-20191004110552-13f9640d40b9
go: downloading golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7
go: downloading github.com/json-iterator/go v1.1.8
go: downloading github.com/modern-go/reflect2 v1.0.1
go: downloading github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd
go: downloading golang.org/x/text v0.3.2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   620  100   620    0     0   6391      0 --:--:-- --:--:-- --:--:--  6391
100 6113k  100 6113k    0     0  5993k      0  0:00:01  0:00:01 --:--:-- 7521k
/home/etsmith/go/bin/controller-gen "crd:trivialVersions=true,crdVersions=v1beta1,preserveUnknownFields=false" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
Error: go [list -e -json -compiled=true -test=false -export=false -deps=true -find=false -tags ignore_autogenerated -- ./...]: exit status 1: go: github.com/openshift/[email protected]+incompatible: invalid pseudo-version: preceding tag (v3.9.0) not found

Usage:
  controller-gen [flags]

Examples:
        # Generate RBAC manifests and crds for all types under apis/,
        # outputting crds to /tmp/crds and everything else to stdout
        controller-gen rbac:roleName=<role name> crd paths=./apis/... output:crd:dir=/tmp/crds output:stdout

        # Generate deepcopy/runtime.Object implementations for a particular file
        controller-gen object paths=./apis/v1beta1/some_types.go

        # Generate OpenAPI v3 schemas for API packages and merge them into existing CRD manifests
        controller-gen schemapatch:manifests=./manifests output:dir=./manifests paths=./pkg/apis/...

        # Run all the generators for a given project
        controller-gen paths=./apis/...

        # Explain the markers for generating CRDs, and their arguments
        controller-gen crd -ww


Flags:
  -h, --detailed-help count   print out more detailed help
                              (up to -hhh for the most detailed output, or -hhhh for json output)
      --help                  print out usage and a summary of options
      --version               show version
  -w, --which-markers count   print out all markers available with the requested generators
                              (up to -www for the most detailed output, or -wwww for json output)


Options


generators

+webhook                                                                                                  package  generates (partial) {Mutating,Validating}WebhookConfiguration objects.
+schemapatch:manifests=<string>[,maxDescLen=<int>]                                                        package  patches existing CRDs with new schemata.
+rbac:roleName=<string>                                                                                   package  generates ClusterRole objects.
+object[:headerFile=<string>][,year=<string>]                                                             package  generates code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
+crd[:crdVersions=<[]string>][,maxDescLen=<int>][,preserveUnknownFields=<bool>][,trivialVersions=<bool>]  package  generates CustomResourceDefinition objects.


generic

+paths=<[]string>  package  represents paths and go-style path patterns to use as package roots.


output rules (optionally as output:<generator>:...)

+output:artifacts[:code=<string>],config=<string>  package  outputs artifacts to different locations, depending on whether they're package-associated or not.
+output:dir=<string>                               package  outputs each artifact to the given directory, regardless of if it's package-associated or not.
+output:none                                       package  skips outputting anything.
+output:stdout                                     package  outputs everything to standard-out, with no separation.

run `controller-gen crd:trivialVersions=true,crdVersions=v1beta1,preserveUnknownFields=false rbac:roleName=manager-role webhook paths=./... output:crd:artifacts:config=config/crd/bases -w` to see all available markers, or `controller-gen crd:trivialVersions=true,crdVersions=v1beta1,preserveUnknownFields=false rbac:roleName=manager-role webhook paths=./... output:crd:artifacts:config=config/crd/bases -h` for usage
make: *** [Makefile:91: manifests] Error 1

Upgrade fails

error validating existing CRs agains new CRD's schema: groupsyncs.redhatcop.redhat.io: error validating custom resource against new schema &apiextensions.CustomResourceValidation{OpenAPIV3Schema:(*apiextensions.JSONSchemaProps)(0xc00365e200)}: [].status.conditions.reason: Invalid value: "": status.conditions.reason in body should match '^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$'

from 0.0.7 to 0.0.8

[Bug Keycloak/RHSSO] No. users > 100

We have been testing Group Sync Operator v0.0.6 with OpenShift 4.5, and it seems there is a bug with the Keycloak provider. When the number of users is higher than 100, only the first 100 are shown in OpenShift.

Cannot upgrade from 0.0.6 to 0.0.7 because ServiceAccount does not have RBAC policy rules

Hi Andrew,

Thank you very much for your help earlier in troubleshooting my problems with this. However, we are now experiencing a problem during the automatic upgrade from 0.0.6 to 0.0.7.

oc get csv yields

NAME DISPLAY VERSION REPLACES PHASE
group-sync-operator.v0.0.6 Group Sync Operator 0.0.6 Replacing
group-sync-operator.v0.0.7 Group Sync Operator 0.0.7 group-sync-operator.v0.0.6 Pending

When issuing the command

oc describe csv group-sync-operator.v0.0.7

we receive the following error:

Requirement Status:
Group: apiextensions.k8s.io
Kind: CustomResourceDefinition
Message: CRD is present and Established condition is true
Name: groupsyncs.redhatcop.redhat.io
Status: Present
Uuid: aba35d5c-3d49-4f2d-8bac-433b3e238ace
Version: v1beta1
Dependents:
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: namespaced rule:{"verbs":["get","list","watch","create","update","patch","delete"],"apiGroups":[""],"resources":["configmaps"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: namespaced rule:{"verbs":["get","update","patch"],"apiGroups":[""],"resources":["configmaps/status"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: namespaced rule:{"verbs":["create","patch"],"apiGroups":[""],"resources":["events"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: cluster rule:{"verbs":["get","list","watch"],"apiGroups":[""],"resources":["secrets"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["redhatcop.redhat.io"],"resources":["groupsyncs"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: cluster rule:{"verbs":["get","patch","update"],"apiGroups":["redhatcop.redhat.io"],"resources":["groupsyncs/status"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["user.openshift.io"],"resources":["groups"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: cluster rule:{"verbs":["create"],"apiGroups":["authentication.k8s.io"],"resources":["tokenreviews"]}
Status: NotSatisfied
Version: v1beta1
Group: rbac.authorization.k8s.io
Kind: PolicyRule
Message: cluster rule:{"verbs":["create"],"apiGroups":["authorization.k8s.io"],"resources":["subjectaccessreviews"]}
Status: NotSatisfied
Version: v1beta1
Group:
Kind: ServiceAccount
Message: Policy rule not satisfied for service account
Name: default
Status: PresentNotSatisfied
Version: v1
Events:

Can you suggest a possible resolution to this?

Thank you very much,

Michael

Secret permissions too wide?

It seems the operator has "get, watch, list" on the secrets of all namespaces. I assume that is because it is possible to use secrets from any namespace. But is that necessary? Wouldn't it be wiser to only allow reading secrets from its own namespace?

Empty displayName in Azure AD group causes operator to crash

Group records in Azure AD with empty displayName cause operator to crash with panic: "invalid memory address or nil pointer dereference".

Step to reproduce:

  1. Create a group in Azure AD with empty displayName
  2. Deploy operator with 'groups' list to limit set of groups to be synchronized.

Example yaml file:

kind: GroupSync
metadata:
  name: test-azure-groupsync
  namespace: group-sync-operator
spec:
  providers:
  - name: azure
    azure:
      groups:
      - Operate-OP
      - Operate-SF
      credentialsSecret:
        name: azure-group-sync
        namespace: group-sync-operator

Logs:

I0804 09:40:30.919240       1 request.go:655] Throttling request took 1.024563386s, request: GET:https://172.30.0.1:443/apis/networking.k8s.io/v1?timeout=32s
2021-08-04T09:40:32.735Z    INFO    controller-runtime.metrics  metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
2021-08-04T09:40:32.735Z    INFO    setup   starting manager
I0804 09:40:32.736303       1 leaderelection.go:243] attempting to acquire leader lease group-sync-operator/085c249a.redhat.io...
2021-08-04T09:40:32.736Z    INFO    controller-runtime.manager  starting metrics server {"path": "/metrics"}
I0804 09:40:50.499031       1 leaderelection.go:253] successfully acquired lease group-sync-operator/085c249a.redhat.io
2021-08-04T09:40:50.499Z    DEBUG   controller-runtime.manager.events   Normal  {"object": {"kind":"ConfigMap","namespace":"group-sync-operator","name":"085c249a.redhat.io","uid":"f453e8b1-bd90-4432-8e25-320ff4e04b16","apiVersion":"v1","resourceVersion":"35832254"}, "reason": "LeaderElection", "message": "group-sync-operator-controller-manager-7bfc577b89-x52dc_b11e872b-eca1-409e-a54b-bfdaa5595bd4 became leader"}
2021-08-04T09:40:50.499Z    INFO    controller-runtime.manager.controller.groupsync Starting EventSource    {"reconciler group": "redhatcop.redhat.io", "reconciler kind": "GroupSync", "source": "kind source: /, Kind="}
2021-08-04T09:40:50.600Z    INFO    controller-runtime.manager.controller.groupsync Starting Controller {"reconciler group": "redhatcop.redhat.io", "reconciler kind": "GroupSync"}
2021-08-04T09:40:50.600Z    INFO    controller-runtime.manager.controller.groupsync Starting workers    {"reconciler group": "redhatcop.redhat.io", "reconciler kind": "GroupSync", "worker count": 1}
2021-08-04T09:41:10.233Z    INFO    controllers.GroupSync   Beginning Sync  {"groupsync": "group-sync-operator/test-azure-groupsync", "Provider": "azure"}
E0804 09:41:30.160042       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 332 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x2250a00, 0x3d58750)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x86
panic(0x2250a00, 0x3d58750)
    /usr/local/go/src/runtime/panic.go:965 +0x1b9
github.com/redhat-cop/group-sync-operator/pkg/syncer.(*AzureSyncer).Sync(0xc0007c9710, 0x0, 0x0, 0x2, 0x2, 0xc000978000)
    /workspace/pkg/syncer/azure.go:194 +0x5ae
github.com/redhat-cop/group-sync-operator/controllers.(*GroupSyncReconciler).Reconcile(0xc0004b6000, 0x29cb758, 0xc000c95410, 0xc0000e3f08, 0x13, 0xc0000e3ef0, 0x14, 0xc000c95410, 0xc000030000, 0x247c060, ...)
    /workspace/controllers/groupsync_controller.go:111 +0xf37
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0005b3c20, 0x29cb6b0, 0xc00059a000, 0x2400bc0, 0xc000c96820)
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:298 +0x30d
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0005b3c20, 0x29cb6b0, 0xc00059a000, 0x656242c600000000)
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2(0x29cb6b0, 0xc00059a000)
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216 +0x4a
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185 +0x37
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00066c750)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000e9ff50, 0x2991b80, 0xc0005bf2c0, 0xc00059a001, 0xc0005a82a0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00066c750, 0x3b9aca00, 0x0, 0x1, 0xc0005a82a0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext(0x29cb6b0, 0xc00059a000, 0xc00056f6e0, 0x3b9aca00, 0x0, 0x1)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185 +0xa6
k8s.io/apimachinery/pkg/util/wait.UntilWithContext(0x29cb6b0, 0xc00059a000, 0xc00056f6e0, 0x3b9aca00)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99 +0x57
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:213 +0x40d
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
    panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1ee9a0e]
goroutine 332 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x109
panic(0x2250a00, 0x3d58750)
    /usr/local/go/src/runtime/panic.go:965 +0x1b9
github.com/redhat-cop/group-sync-operator/pkg/syncer.(*AzureSyncer).Sync(0xc0007c9710, 0x0, 0x0, 0x2, 0x2, 0xc000978000)
    /workspace/pkg/syncer/azure.go:194 +0x5ae
github.com/redhat-cop/group-sync-operator/controllers.(*GroupSyncReconciler).Reconcile(0xc0004b6000, 0x29cb758, 0xc000c95410, 0xc0000e3f08, 0x13, 0xc0000e3ef0, 0x14, 0xc000c95410, 0xc000030000, 0x247c060, ...)
    /workspace/controllers/groupsync_controller.go:111 +0xf37
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0005b3c20, 0x29cb6b0, 0xc00059a000, 0x2400bc0, 0xc000c96820)
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:298 +0x30d
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0005b3c20, 0x29cb6b0, 0xc00059a000, 0x656242c600000000)
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2(0x29cb6b0, 0xc00059a000)
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216 +0x4a
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185 +0x37
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00066c750)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000e9ff50, 0x2991b80, 0xc0005bf2c0, 0xc00059a001, 0xc0005a82a0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00066c750, 0x3b9aca00, 0x0, 0x1, 0xc0005a82a0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext(0x29cb6b0, 0xc00059a000, 0xc00056f6e0, 0x3b9aca00, 0x0, 0x1)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185 +0xa6
k8s.io/apimachinery/pkg/util/wait.UntilWithContext(0x29cb6b0, 0xc00059a000, 0xc00056f6e0, 0x3b9aca00)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99 +0x57
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:213 +0x40d

gh provider, map users via scim username

Our use case:

  1. we use AAD OIDC for user auth in OCP (using upn as userid)
  2. we do not auth with GH as OIDC provider, as we want users to have singular identities, and the user-audience of OCP is wider than GH.
  3. we have various teams in GH, which would map nicely over to groups in OCP and hence roles

However, syncing GH teams as groups with the current implementation would use the GH login id, while we'd like to list team members, find their linked SCIM identity, and then use the Username attribute, which would match the upn.

Would you accept such an implementation? Could fit into the same package, and be toggled via a flag?

Option to leave group metadata unchanged on update

I've been testing the group-sync-operator in a PoC with Azure AD integration.
The requirement I've been working to requires the groups created by the group-sync-operator to be augmented with metadata, e.g. labels. Once these groups are augmented, we are then using the namespace-configuration-operator to create namespaces and RBAC based on a GroupConfig object.

The problem is, when the group-sync-operator runs again (I have this running on a cron schedule) it removes any metadata added.
I've found a workaround, which is to change the "group-sync-operator.redhat-cop.io/sync-provider" label; this causes the group-sync-operator to ignore the group when updating. The problem with this, though, is that any user additions or removals for this group are also ignored.

So I'm proposing a change to the group-sync-operator to leave the group untouched if it already exists, and just update the users associated with the group.

Okta provider only supports 20 groups

The Okta provider will not sync more than 20 groups at a time. Due to the way the OktaClient is configured, the request to the groups API returns a paginated list. This causes only the first page of groups to be added, which by default is 20.

When trying LDAP rfc2307 synchronization with whitelist parameter, I get the error when syncing "Expected Label":"<metadata.name>_<spec_providers_ldap:name>,"Found Label":""}

I am trying to run the operator for ldap, specifically the rfc2307 configuration, and using the whitelist parameter for an AD group. However, I receive the following error in the log from the resulting pod:

{"level":"info","ts":1606189207.9799204,"logger":"controller_groupsync","msg":"Group Provider Label Did Not Match Expected Provider Label","Group Name":"<AD_GROUP>","Expected Label":"ldap-groupsync_ldap","Found Label":""}

The underlying YML code to deploy the operator would be:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: ldap-groupsync
spec:
  providers:
  - ldap:
      credentialsSecret:
        name: ldap-group-sync
        namespace: group-sync-operator
      insecure: true
      whitelist:
      - <AD_GROUP>
      rfc2307:
        groupMembershipAttributes: [ member ]
        groupNameAttributes: [ cn ]
        groupUIDAttribute: cn
        groupsQuery:
          baseDN: "OU=,OU=,OU=,OU=,OU=,DC=,DC=,DC=,DC=net"
          derefAliases: never
          filter: (objectClass=group)
          scope: sub
          pageSize: 1000
        tolerateMemberNotFoundErrors: true
        tolerateMemberOutOfScopeErrors: true
        userNameAttributes: [ sAMAccountName ]
        userUIDAttribute: dn
        usersQuery:
          baseDN: "DC=,DC=,DC=,DC=net"
          derefAliases: never
          scope: sub
          pageSize: 1000
      url: <ldap_url>
    name: ldap

In the error above, I can see that when it says "Expected Label":"ldap-groupsync_ldap", this value is a concatenation of

<metadata.name>_<spec_providers_ldap:name>

from the YML above, which I even proved before by adjusting their values and seeing their results. I am unsure where these "label" values come from, as I don't know if this type of parameter exists in LDAP. I'm wondering if it is an underlying issue with the operator itself?

Please let me know if more information is required.

Thanks,

Michael

[Azure] Change from Azure AD Graph API to Microsoft Graph API

The Azure Active Directory Graph API has been deprecated with the Microsoft Graph API as its replacement. The new API also provides more granular permissions (i.e. GroupMember.Read.All instead of requiring Directory.Read.All)

This issue is to track changing the Azure syncer to use the new API.

metadata.labels: Invalid value: \"github-groupsync_\"

I get this error [*] when syncing from gh.
v0.0.12

[*]

Group.user.openshift.io \"architecture\" is invalid: metadata.labels: Invalid value: \"github-groupsync_\": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue',  or 'my_value',  or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')Failed to Create or Update OpenShift Group" source="groupsync_controller.go:169"
2021-08-06T12:20:03.811Z        DEBUG   controller-runtime.manager.events       Warning {"object": {"kind":"GroupSync","namespace":"group-sync-operator","name":"github-groupsync","uid":"e5fa947a-8b9c-4b47-b54a-06f27b6b9258","apiVersion":"redhatcop.redhat.io/v1alpha1","resourceVersion":"1082750161"}, "reason": "ProcessingError", "message": "Group.user.openshift.io \"architecture\" is invalid: metadata.labels: Invalid value: \"github-groupsync_\": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue',  or 'my_value',  or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')"}
2021-08-06T12:20:03.817Z        ERROR   controller-runtime.manager.controller.groupsync Reconciler error        {"reconciler group": "redhatcop.redhat.io", "reconciler kind": "GroupSync", "name": "github-groupsync", "namespace": "group-sync-operator", "error": "Group.user.openshift.io \"architecture\" is invalid: metadata.labels: Invalid value: \"github-groupsync_\": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue',  or 'my_value',  or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')"}

Authorization management !

First of all, thanks for this great tool; it makes Red Hat admins' lives easier. I just completed a PoC with this operator to sync LDAP groups from AD, and everything worked well: after the sync completed I got the correct groups on the OpenShift side with the right users inside. However, I'm not sure if something is missing here, or maybe I missed this part: how can authorization be managed, i.e. binding OCP roles like admin, view, or read to the groups that were just created by the sync? Without this step, users can log in and be in the right group, but they cannot do anything because they do not have any permissions on the cluster. Thanks in advance; I really appreciate the great work done to bring this operator to OCP admins.

Provide a clear single status when the operator is installed (automation)

For automation purposes, when the operator is installed and ready to create instances of GroupSyncs, provide an easily fetchable status of the operator.

For example, you create and install the operator in an automated process. Before creating GroupSync instances, you want to make sure the operator is clean, installed, and ready to be used. Therefore, you fetch the status in the YAML, looking for it to be 'Installed' or something more meaningful.

Support pruning

Add support for pruning of groups when they are removed from the provider

  • Should be provider specific
  • Pruning should be disabled by default

error retrieving resource lock

The following error occurs when deploying v0.0.8

E1231 06:31:47.702125       1 leaderelection.go:325] error retrieving resource lock group-sync-operator/085c249a.redhat.io: leases.coordination.k8s.io "085c249a.redhat.io" is forbidden: User "system:serviceaccount:group-sync-operator:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "group-sync-operator"

sync from openshift

I'm trying to sync groups to OpenShift from RH-SSO.
I deployed the operator from the Helm chart.
Here is the configuration file:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: keycloak-groupsync
spec:
  providers:
  - name: keycloak
    keycloak:
      realm: ocp
      credentialsSecret:
        name: keycloak-group-sync
        namespace: group-sync-operator
      caSecret:
        name: rhsso-group-sync
        namespace: group-sync-operator
        key: root-ca.crt
      url: https://rhsso.apps.cluster.fqdn
      loginRealm: ocp
      scope: sub
  schedule: "*/15 * * * *"

From the logs of the "group-sync-operator" container I get the error:

ERROR          controller-runtime.manager.controller.groupsync Reconciler error      {"reconciler group":  "redhatcop.redhat.io", "reconciler kind": "GroupSync",  "name":   "rhsso-groupsync",  "namespace":  "group-sync-operator",  "error":  "could not get token"} 
github.com/go-logr/zapr.(*zapLogger).Error
               /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
               /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:267
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
              /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1
              /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1
            /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
           /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
           /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
           /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext
          /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.UntilWithContext
           /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99

Also, my environment is a restricted network.
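"could not get token" points at authentication against RH-SSO failing before any group data is read. One way to narrow it down from inside the restricted network is to request a token manually with the same credentials and CA the secret provides; a sketch assuming an RH-SSO 7.x URL layout and a username/password secret:

curl --cacert root-ca.crt \
  -d "client_id=admin-cli" \
  -d "grant_type=password" \
  -d "username=<sync-user>" \
  -d "password=<sync-password>" \
  https://rhsso.apps.cluster.fqdn/auth/realms/ocp/protocol/openid-connect/token

If this fails, the problem is reachability, the CA chain, or the credentials rather than the GroupSync configuration.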

AD/LDAP upgrade from 0.0.6 to.. ?

Hi,

I'm using v0.0.6 to add AD to my OCP 4.x clusters. I'm struggling to figure out how to adapt the instructions to do the same on upstream without using a 'make deploy'.

On 0.0.6, this was as simple as:

cd ${PATH_SCRIPT}/group-sync-operator
oc ${OC_ARGS} apply -f deploy/crds/redhatcop.redhat.io_groupsyncs_crd.yaml
oc ${OC_ARGS} apply -n group-sync-operator -f deploy/service_account.yaml
oc ${OC_ARGS} apply -n group-sync-operator -f deploy/clusterrole.yaml
oc ${OC_ARGS} apply -n group-sync-operator -f deploy/clusterrole_binding.yaml
oc ${OC_ARGS} apply -n group-sync-operator -f deploy/role.yaml
oc ${OC_ARGS} apply -n group-sync-operator -f deploy/role_binding.yaml
oc ${OC_ARGS} apply -n group-sync-operator -f deploy/operator.yaml

oc ${OC_ARGS} apply -f ${PATH_SCRIPT}/krynn-ad-oauth-group-sync-operator.yaml

Is 'make deploy' the only supported way to do this?
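Since the project moved to the operator-sdk/kubebuilder layout, make deploy essentially renders the kustomize manifests and applies them, so the same result should be achievable without make; a sketch assuming the standard config/default kustomization directory exists in the repo (the image may still need to be overridden, as make deploy does with IMG):

kustomize build config/default | oc ${OC_ARGS} apply -f -
# or, with a recent oc/kubectl that embeds kustomize:
oc ${OC_ARGS} apply -k config/default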

[azure-ad] user members show as nil

When I sync the groups, the members are all nil:

users:
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
  - <nil>
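One thing worth checking is the attribute used for the user name: it is taken from userNameAttributes, which defaults to userPrincipalName for the Azure provider. If that field is empty on the directory records, overriding it may help; a sketch assuming the mail attribute is populated (check the CRD for the exact field type):

    azure:
      userNameAttributes:
      - mail              # assumption: mail is filled in for the affected users
      credentialsSecret:
        name: azure-group-sync
        namespace: group-sync-operator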

[Keycloak] Groups not populated with users

EDIT: of course, shortly after creating this issue I stumbled upon the probable solution: I added realm-admin to the sync user's role mappings in RH-SSO, and users were added to groups in OpenShift. From what little testing I have done since, it appears the sync user also requires view-users in addition to query-users and query-groups. Maybe this is a recent change?


Using the keycloak provider to link with Red Hat SSO, groups within the configured realm are synchronised as expected; however, group membership is not.

We have users and groups in RH-SSO. Users are part of one or more groups. The user used for synchronisation has query-groups and query-users as assigned roles.

From what I understand from the people we consulted, users that log in through RH-SSO should automatically be added to a group by the operator (I assume when it runs according to its schedule, or when exactly does this happen?).

However, the groups that get synchronised by the operator stay empty.
Even when a user is manually added to the group within OpenShift with oc adm groups add-users $group $user1, the next time the operator synchronises it simply removes the user again, even though the user is part of that group within RH-SSO.

Looking at the relevant code, and confirming via gocloak, users should be added to groups automatically within OpenShift.

The operator logs do not appear to be very verbose on this, and there doesn't appear to be a debug mode or verbosity option.

You can find our GroupSync and OAuth objects here; this is what we oc apply. We have tried all the available scope values, and they make no difference with regard to this problem.

What are we missing?

Distribute via hub?

It would be nice to have releases distributed via OperatorHub instead of users having to compile the operator themselves. WDYT?

Azure - groups filter being ignored

When deploying group-sync-operator 0.0.5 using the Azure provider with the configuration below, the 'groups' specification is ignored and all groups are synced from Azure AD rather than only the one(s) specified.

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: azure-groupsync
spec:
  providers:
  - name: azure
    azure:
      credentialsSecret:
        name: azure-group-sync
        namespace: group-sync-operator
      groups:
      - testgroupname
  schedule: '* * * * *'

Delete groups in openshift

If a group gets deleted in one of the external providers (in my case, Azure), the group currently keeps existing in OpenShift.

Could it be possible to also check whether a group was deleted and then delete it in OpenShift as well?

add support for syncing group labels/annotations

Currently, groups created by the sync operator do not carry metadata from the source IDP.
Add support for carrying over additional metadata in the shape of annotations and/or labels.
This is probably going to be a provider-dependent mapping.

Add documentation to setup RHSSO user with minimum required permissions

For security purposes, it is better to use a permission-limited user in RHSSO for the group-sync-operator.

To do so (mostly from https://stackoverflow.com/questions/56743109/keycloak-create-admin-user-in-a-realm):

  1. Open RHSSO admin console and select realm of your choice (realm selection box on top left side)
  2. Go to users (sidebar) -> add user (button on the right side)
  3. Fill in required fields and press save button.
  4. Open the Credentials tab and set a password; be sure to disable the "Temporary" switch
  5. Open Role Mapping tab
  6. Select realm-management under Client Roles.
  7. Select query-groups and query-users roles

You can then use this account by creating the associated secret in OCP and referencing it in the GroupSync CR:

$ oc create secret generic keycloak-group-sync --from-literal=username=group-sync-user --from-literal=password=group-sync-password
$ cat groupsync.yaml
  [...]
  - keycloak:
      credentialsSecret:
        name: keycloak-group-sync
        namespace: < groupsync-operator-namespace >
      loginRealm: < the selected realm >
      realm: < the selected realm >
   [...]

Error with LDAP group sync

The error below, "util unable to update status", is always reported by the group sync operator running version 0.0.11.

2021-03-22T14:22:35.060Z INFO controllers.GroupSync Beginning Sync {"groupsync": "group-sync-operator/ldap-groupsync", "Provider": "ldap"}

2021-03-22T14:30:05.664Z INFO controllers.GroupSync Sync Completed Successfully {"groupsync": "group-sync-operator/ldap-groupsync", "Provider": "ldap", "Groups Created or Updated": 7274}

2021-03-22T14:30:05.668Z ERROR util unable to update status {"error": "GroupSync.redhatcop.redhat.io "ldap-groupsync" is invalid: status.conditions.reason: Invalid value: "": status.conditions.reason in body should match 'A-Za-z?$'"}
github.com/go-logr/zapr.(*zapLogger).Error
    /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132
github.com/redhat-cop/operator-utils/pkg/util.(*ReconcilerBase).ManageSuccess
    /go/pkg/mod/github.com/redhat-cop/[email protected]/pkg/util/reconciler.go:398
github.com/redhat-cop/group-sync-operator/controllers.(*GroupSyncReconciler).Reconcile
    /workspace/controllers/groupsync_controller.go:179
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:198
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.UntilWithContext
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99
2021-03-22T14:30:05.668Z ERROR controller-runtime.manager.controller.groupsync Reconciler error {"reconciler group": "redhatcop.redhat.io", "reconciler kind": "GroupSync", "name": "ldap-groupsync", "namespace": "group-sync-operator", "error": "GroupSync.redhatcop.redhat.io "ldap-groupsync" is invalid: status.conditions.reason: Invalid value: "": status.conditions.reason in body should match 'A-Za-z?$'"}
github.com/go-logr/zapr.(*zapLogger).Error
    /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:267
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:198
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.UntilWithContext
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99

Regex filter on groups.

Is it possible, when using a groups filter, to also use a regex as the filter?
When you have a large number of test groups that always begin with "test-", it would be nice to have a regex filter so that
a single entry such as "test-*" includes all of the test- groups,
instead of listing every test- group individually.

And if a regex can be used, it would also be nice to be able to negate a single group so that it can be excluded from a regex.

We are currently using Keycloak as our identity provider.

An error occurred "Not Found" on 0.0.9

Installed the operator 0.0.9.
When trying to configure Keycloak I always get the message: an error occurred, Not Found.
It does not matter whether I use the Form view or YAML.

My groupsync is defined as:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: groupsync
spec:
  providers:
    - keycloak:
        credentialsSecret:
          name: keycloak-group-sync
          namespace: group-sync-operator
        realm: <realm>
        loginRealm: <login-realm>
        scope: sub
        url: https://xxx.xxx.xxx

Add unit testing

To improve the overall quality of the operator, add unit testing

Console -- Providers->LDAP Provider->"Secret Containing the Credential" improper label class of blacklist

In the web console, for LDAP -> Provider, the input box presented as "Secret Containing the Credentials" is actually the input for the blacklist, resulting in the secret name being placed into the blacklist config.

<label class="" for="root_spec_providers_0_ldap_blacklist_accordion-content">Secret Containing the Credentials</label>
<input class="pf-c-form-control" id="root_spec_providers_0_ldap_blacklist_0" required="" type="text" value="">

Resulting in:

spec:
  providers:
    - ldap:
        blacklist:
          - testbadconfig

I'm not sure where the console code lives, so I am opening the issue here.

Document required graph-api permissions

What graph-api permissions are required for the operator to work correctly with AAD?
Version 0.0.7.

I have these grants:

email
Group.Read.All
GroupMember.Read.All
openid
profile
User.Read

Getting:

group-sync-operator-controller-manager-7db84b854f-hccw7 manager 2020-12-11T19:33:38.869Z        DEBUG   controller-runtime.manager.events       Warning {"object": {"kind":"GroupSync","namespace":"group-sync-operator","name":"azure-groupsync","uid":"42ca321c-9d87-404e-a8f5-17ad28315530","apiVersion":"redhatcop.redhat.io/v1alpha1","resourceVersion":"94637800"}, "reason": "GroupSyncError", "message": "403 Forbidden: {\"error\":{\"code\":\"Authorization_RequestDenied\",\"message\":\"Insufficient privileges to complete the operation.\",\"innerError\":{\"client-request-id\":\"20d8fa8f-f796-4f33-b2ad-81f242a3a45f\",\"date\":\"2020-12-11T19:33:38\",\"request-id\":\"20d8fa8f-f796-4f33-b2ad-81f242a3a45f\"}}}"}
group-sync-operator-controller-manager-7db84b854f-hccw7 manager time="2020-12-11T19:33:38Z" level=error msg="<nil>unable to update status" source="groupsync_controller.go:231"
group-sync-operator-controller-manager-7db84b854f-hccw7 manager 2020-12-11T19:33:38.876Z        ERROR   controller      Reconciler error        {"reconcilerGroup": "redhatcop.redhat.io", "reconcilerKind": "GroupSync", "controller": "groupsync", "name": "azure-groupsync", "namespace": "group-sync-operator", "error": "403 Forbidden: {\"error\":{\"code\":\"Authorization_RequestDenied\",\"message\":\"Insufficient privileges to complete the operation.\",\"innerError\":{\"client-request-id\":\"20d8fa8f-f796-4f33-b2ad-81f242a3a45f\",\"date\":\"2020-12-11T19:33:38\",\"request-id\":\"20d8fa8f-f796-4f33-b2ad-81f242a3a45f\"}}}"}
group-sync-operator-controller-manager-7db84b854f-hccw7 manager github.com/go-logr/zapr.(*zapLogger).Error
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
group-sync-operator-controller-manager-7db84b854f-hccw7 manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:237
group-sync-operator-controller-manager-7db84b854f-hccw7 manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209
group-sync-operator-controller-manager-7db84b854f-hccw7 manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188
group-sync-operator-controller-manager-7db84b854f-hccw7 manager k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
group-sync-operator-controller-manager-7db84b854f-hccw7 manager k8s.io/apimachinery/pkg/util/wait.BackoffUntil
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
group-sync-operator-controller-manager-7db84b854f-hccw7 manager k8s.io/apimachinery/pkg/util/wait.JitterUntil
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
group-sync-operator-controller-manager-7db84b854f-hccw7 manager k8s.io/apimachinery/pkg/util/wait.Until
group-sync-operator-controller-manager-7db84b854f-hccw7 manager         /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90

Also, have you considered supporting client certificates for authenticating towards AAD?

Add support for GitHub oauth

For clusters where the identity provider is tied to a GitHub organization, it would be nice to be able to sync the teams under that organization to groups in OpenShift.

Expose Runtime Prometheus Metrics

The operator currently exposes a default set of Prometheus metrics provided by kubebuilder/Operator SDK. Introduce metrics pertaining to runtime operation to provide insight into the current state of the operator.

Possible metrics worth exposing

  • Number of groups synchronized per provider
  • Time of next scheduled synchronization
  • Time of last success per provider
  • Time of last failure per provider
  • Number of successes
  • Number of failures

List of existing related issues:

#72

add support for nested groups

When groups are nested, add a standard annotation to the child group indicating the parent group, for example parent: <parent-group> (a sketch follows below).
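A hypothetical sketch of what such an annotated child group could look like; the annotation key and the group names are illustrative only, not an existing operator convention:

apiVersion: user.openshift.io/v1
kind: Group
metadata:
  name: child-team
  annotations:
    parent: parent-team   # hypothetical annotation naming the parent group
users:
- alice
- bob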
