operator-test-playbooks's Introduction

Operator testing

Running operator testing playbooks locally

Operator testing can be run locally using the Ansible playbook local-test-operator.yml. The playbook roles that run the operator tests here are identical to the ones used in the CVP operator testing pipeline.

One notable exception is the package name uniqueness test, which requires access to an internal database.

The playbook finishes successfully if all enabled tests pass. By default, the logs and files for the test run can be found in the /tmp/operator-test directory.

Prerequisites

The testing playbook has a number of prerequisites for a successful run. Most of these need to be supplied as Ansible parameters when running the playbook.

1. Installed operator-courier

The instructions for installing operator-courier can be found at this link.
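The upstream project distributes operator-courier on PyPI, so a minimal install sketch looks like the following (the virtualenv name courier-venv is arbitrary; check the operator-courier README for the authoritative instructions):

```shell
# Install operator-courier into an isolated virtual environment.
python3 -m venv courier-venv
. courier-venv/bin/activate
pip install operator-courier
```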

2. Kubeconfig file for a working OCP cluster (for ISV and Community operators only)

For the tests to work, the user needs to be logged in to the working OCP cluster used for testing as kubeadmin.

The path to the kubeconfig file for the OCP cluster needs to be supplied as the kubeconfig_path parameter, for example: -e kubeconfig_path=~/testing/kubeconfig

For rapid prototyping, you can spin up a local OCP cluster using Red Hat CodeReady Containers (crc)

You can then specify your kubeconfig as follows: -e kubeconfig_path=~/.crc/cache/crc_libvirt_4.2.14/kubeconfig

3. A valid quay.io namespace (for ISV and Community operators only)

A valid quay.io namespace to which the user has access needs to be supplied as the quay_namespace parameter, for example: -e quay_namespace="${QUAY_NAMESPACE}"

The testing process includes creating a private repository, so be aware of the private repository limits on the account that owns the namespace.

4. A quay.io access token (for ISV and Community operators only)

The token for the quay.io account that owns the namespace used for testing can be obtained by using the following command:

QUAY_TOKEN=$(curl -sH "Content-Type: application/json" -XPOST https://quay.io/cnr/api/v1/users/login -d '
{
    "user": {
        "username": "'"${QUAY_USERNAME}"'",
        "password": "'"${QUAY_PASSWORD}"'"
    }
}' | jq -r '.token' | cut -d' ' -f2)
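The cut step above suggests the login endpoint returns a token with a space-separated prefix (e.g. "basic <base64>"), and the pipeline keeps only the part after the space. The extraction can be checked on a canned response without touching quay.io; the sample token value below is made up:

```shell
# Demonstrate the jq/cut extraction on a canned login response (no network).
SAMPLE='{"token": "basic YWJjZGVm"}'
echo "$SAMPLE" | jq -r '.token' | cut -d' ' -f2   # prints YWJjZGVm
```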

The token can then be supplied as the quay_token parameter, for example: -e quay_token="${QUAY_TOKEN}"

5. The operator metadata directory

The operator's metadata, in either flattened or nested format, must be placed in its own directory for testing.

The path to the directory containing the operator's metadata must be supplied to the operator_dir parameter, for example: -e operator_dir=~/testing/operator-metadata
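As a rough illustration (all file names below are placeholders), the nested format keeps each version in its own subdirectory alongside a package file, while the flattened format puts all manifests in one directory:

```text
# Nested format
my-operator/
├── my-operator.package.yaml
└── 1.0.0/
    ├── my-operator.v1.0.0.clusterserviceversion.yaml
    └── my-crd.crd.yaml

# Flattened format
my-operator/
├── my-operator.package.yaml
├── my-operator.v1.0.0.clusterserviceversion.yaml
└── my-crd.crd.yaml
```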

6. Other required binaries

The rest of the required binaries are downloaded by the playbook to a temporary directory in /tmp/operator-test/bin and don't need to be installed manually.

If we want to skip the download for some reason (for example, if we already have the required binaries at that location from a previous playbook run), we can set the run_prereqs parameter like this: -e run_prereqs=false

7. The parameters required to support image pull secrets

1. kube_objects (a Kubernetes resource)

kube_objects is a Kubernetes resource that needs to be injected into the OpenShift cluster.

The GPG-encoded kube_objects can be passed as a parameter to the playbook, for example: -e kube_objects=kube_objects
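As an illustration, a kube_objects payload might contain an image pull secret for the registry hosting the operator's images. The secret below is a hypothetical placeholder (its .dockerconfigjson is the base64 of an empty {"auths":{}}), shown before any GPG encoding:

```shell
# Write a placeholder image pull secret; name and contents are hypothetical.
cat > kube_objects.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: test-pull-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6e319
EOF
# The data value is base64 of {"auths":{}} - replace it with real registry credentials.
```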

Selecting operator tests

If we want to enable or disable individual tests, we can use these parameters and set them to true or false:

  • run_lint for running operator-courier linting, default true
  • run_catalog_init for running the catalog initialization test, default true
  • run_deploy for deploying the operator to the testing cluster - this test is required for the subsequent tests, default true
  • run_scorecard for running the operator scorecard tests on the operator that's deployed to the testing cluster, default true
  • run_imagesource for checking the image sources of the tested operator - this applies to Red Hat and ISV operators; for other operators, disable it with -e run_imagesource=false
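For example, to run only the linting test, the other tests can be disabled together (assuming OPERATOR_DIR points at the operator metadata directory):

```shell
ansible-playbook -vv -i "localhost," --connection=local local-test-operator.yml \
    -e operator_dir="${OPERATOR_DIR}" \
    -e run_catalog_init=false \
    -e run_deploy=false \
    -e run_scorecard=false \
    -e run_imagesource=false
```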

Resource cleanup

The created resources and namespace are cleaned up after the playbook run by default. If we want to leave the resources after the run, we can set the run_cleanup parameter like this: -e run_cleanup=false

Example usages

1. Testing Red Hat (optional) operators

If we want to run the Red Hat operator tests, we invoke the playbook with the following command:

ansible-playbook -vv -i "localhost," --connection=local local-test-operator.yml \
    -e run_deploy=false \
    -e production_quay_namespace="redhat-operators" \
    -e operator_dir="${OPERATOR_DIR}"

Currently, access to an OCP cluster or a quay.io account is not required.

2. Full operator testing for ISV (Certified) operators

If we want to run the full ISV operator testing, we invoke the playbook with the following command (inserting the aforementioned prerequisites):

ansible-playbook -vv -i "localhost," --connection=local local-test-operator.yml \
    -e kubeconfig_path="${KUBECONFIG_PATH}" \
    -e quay_token="${QUAY_TOKEN}" \
    -e quay_namespace="${QUAY_NAMESPACE}" \
    -e production_quay_namespace="certified-operators" \
    -e operator_dir="${OPERATOR_DIR}"

If we want to run the full operator testing with image pull secrets for certified-operators, we invoke the playbook with the following command:

ansible-playbook -vv -i "localhost," --connection=local local-test-operator.yml \
    -e kubeconfig_path="${KUBECONFIG_PATH}" \
    -e quay_token="${QUAY_TOKEN}" \
    -e quay_namespace="${QUAY_NAMESPACE}" \
    -e production_quay_namespace="certified-operators" \
    -e operator_dir="${OPERATOR_DIR}" \
    -e kube_objects="${KUBE_OBJECTS}"

3. Testing community operators

If we want to run the operator testing for a community operator without running the imagesource test, we invoke the playbook with the following command:

ansible-playbook -vv -i "localhost," --connection=local local-test-operator.yml \
    -e kubeconfig_path="${KUBECONFIG_PATH}" \
    -e quay_token="${QUAY_TOKEN}" \
    -e quay_namespace="${QUAY_NAMESPACE}" \
    -e production_quay_namespace="community-operators" \
    -e operator_dir="${OPERATOR_DIR}" \
    -e run_imagesource=false

4. Running optional-operators-subscribe using playbooks

If we would like to run the optional operator subscription against the pre-built operator indices, we can invoke the optional-operators-subscribe.yml playbook as follows:

ansible-playbook -vvvv -i "localhost," --connection=local optional-operators-subscribe.yml \
    -e kubeconfig_path="${KUBECONFIG_PATH}" \
    -e "OO_INDEX=${OPERATOR_INDEX}" \
    -e "OO_PACKAGE=${OPERATOR_PACKAGE}" \
    -e "OO_CHANNEL=${OPERATOR_CHANNEL}" \
    -e "ARTIFACT_DIR=${ARTIFACT_DIRECTORY}"
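The variables referenced above might be populated as follows; every value here is a hypothetical placeholder, so substitute a real index image, package name, and channel:

```shell
# Hypothetical values; replace with a real index image, package, and channel.
export KUBECONFIG_PATH=~/.kube/config
export OPERATOR_INDEX="registry.example.com/my-catalog/my-index:latest"
export OPERATOR_PACKAGE="my-operator"
export OPERATOR_CHANNEL="stable"
export ARTIFACT_DIRECTORY="/tmp/operator-test/artifacts"
mkdir -p "${ARTIFACT_DIRECTORY}"
```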

operator-test-playbooks's People

Contributors

14rcole, asergienk, dependabot[bot], dirgim, j0zi, jsztuka, likhithaeda, mvalarh, nmars, p3ck, samvarankashyap, scoheb, sonam1412, wheelerlaw, yashvardhannanavati

operator-test-playbooks's Issues

operator pod does not start up

When I run the tests I see the subscription created, but it's not creating any deployment, and hence no pod.

$ ls -l /tmp/operator-test
total 188
drwxrwxr-x 3 jey jey     98 Mar  4 12:22 bin
-rw-rw-r-- 1 jey jey 136103 Mar  4 14:22 catalog-init-debug-release-4.2.txt
-rw-rw-r-- 1 jey jey      1 Mar  4 12:33 catalog-init-rc-release-4.2.txt
-rw-rw-r-- 1 jey jey   4028 Mar  9 15:37 linting-errors.txt
-rw-rw-r-- 1 jey jey      0 Mar  9 15:37 linting-output.txt
-rw-rw-r-- 1 jey jey      1 Mar  4 12:31 linting-rc.txt
-rw-rw-r-- 1 jey jey   2563 Mar  9 15:37 linting-results.json
-rw-rw-r-- 1 jey jey      5 Mar  4 12:31 linting-version.txt
-rw-rw-r-- 1 jey jey  20471 Mar  9 15:54 olm-catalog-operator-debug.txt
-rw-rw-r-- 1 jey jey   3652 Mar  9 15:54 olm-catalog-source-debug.txt
-rw-rw-r-- 1 jey jey      0 Mar  4 13:03 olm-installplan-debug.txt
drwxrwxr-x 2 jey jey    130 Mar  4 12:47 olm-operator-files
drwxrwxr-x 4 jey jey     70 Mar  9 15:37 operator-files
-rw-rw-r-- 1 jey jey    648 Mar  4 12:31 parsed_operator_data.yml
drwxrwxr-x 2 jey jey     27 Mar  9 15:37 scorecard-cr-files
-rw-rw-r-- 1 jey jey    935 Mar  9 15:37 scorecard.secret.yaml

The tests fail with

TASK [deploy_olm_operator : Wait for the operator pingpong-operator pod to start up] *******************************************************************************************************************************************************************************************
task path: /home/jey/workspace/src/github.com/redhat-operator-ecosystem/operator-test-playbooks/roles/deploy_olm_operator/tasks/main.yml:99
FAILED - RETRYING: Wait for the operator pingpong-operator pod to start up (90 retries left).
FAILED - RETRYING: Wait for the operator pingpong-operator pod to start up (89 retries left).

command run:

$ ansible-playbook \
-vv -i "localhost,"     \
--connection=local     \
-e run_catalog_init=false     \
-e quay_token=<FIXME>    \
-e quay_namespace=jeyaramashok     \
-e kubeconfig_path=~/.kube/config     \
-e operator_dir="~/pingpong-operator"     \
local-test-operator.yml

full log: run-2020-03-10_104940.log

operator manifest: pingpong-operator-2.zip

EDIT:

fixed to add correct zip file

operator catalog initialization test fails

I am trying to run the playbooks locally and am having some issues. One of the failures happens in operator_catalog_initialization_test:

"error loading manifests from directory: error loading package into db: UNIQUE constraint failed: package.name"

Command run:

ansible-playbook -vv -i "localhost," \
	--connection=local \
	-e run_catalog_init=true \
	-e quay_token=<FIXME> \
	-e quay_namespace=jeyaramashok \
        -e kubeconfig_path=~/.kube/config \
        -e operator_dir="~/pingpong-operator" \
	local-test-operator.yml

here is the full log of catalog-init-debug-release-4.2.txt

I am able to get past this by setting -e run_catalog_init=false, but wanted to bring this up in case it affects any of the following steps.

Scorecard Fails with empty alm-examples

When trying to test an operator that does not create any CRDs, the scorecard test fails with:

fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/tmp/operator-test/bin/oc delete -f /tmp/operator-test/scorecard-cr-files/first.cr.yaml --ignore-not-found=true --grace-period=60 --timeout=120s", "delta": "0:00:00.096581", "end": "2020-04-14 12:30:22.673926", "msg": "non-zero return code", "rc": 1, "start": "2020-04-14 12:30:22.577345", "stderr": "error: unable to decode "/tmp/operator-test/scorecard-cr-files/first.cr.yaml": Object 'Kind' is missing in '{"metadata":{"namespace":"test-operator"}}'", "stderr_lines": ["error: unable to decode "/tmp/operator-test/scorecard-cr-files/first.cr.yaml": Object 'Kind' is missing in '{"metadata":{"namespace":"test-operator"}}'"], "stdout": "", "stdout_lines": []}

Expected to pass, as the operator does not require creating resources.

This is also blocking the automated process from testing operators when PRs are created to the community-operators.

OLM test failed locally

I am facing the error below while running the OLM test on the Kogito operator in my local setup.
Error logs:

TASK [operator_info : Adding operator version label to 'oi_auto_labels'] *******
fatal: [localhost]: FAILED! => 
  msg: |-
    The conditional check 'cluster_type == "ocp"' failed. The error was: error while evaluating conditional (cluster_type == "ocp"): 'cluster_type' is undefined
  
    The error appears to be in '/playbooks/upstream/roles/operator_info/tasks/op_info_ver.yml': line 157, column 3, but may
    be elsewhere in the file depending on the exact syntax problem.
  
    The offending line appears to be:

I am running the OLM test using the command below:

SCRIPT_URL="https://raw.githubusercontent.com/redhat-openshift-ecosystem/operator-test-playbooks/master/upstream/test/test.sh"
export OP_TEST_ANSIBLE_PULL_REPO=https://github.com/redhat-openshift-ecosystem/operator-test-playbooks
bash <(curl -sL "${SCRIPT_URL}") kiwi  community-operators/kogito-operator/"${version}"

pull policy is always but image has been referred to by ID

Hi Team,

I am facing the issue below while running operator-test-playbooks using this script:
https://raw.githubusercontent.com/operator-framework/operator-test-playbooks/master/upstream/test/test.sh

Previously it was working fine.

TASK [validate_operator_bundle : Pull index image 'kind-registry:5000/test-operator/kogito-operator:v2.0.0-snapshot' (operator_sdk_none fix)] ******************************************************************************************
fatal: [localhost]: FAILED! => changed=true 
  cmd: podman pull kind-registry:5000/test-operator/kogito-operator:v2.0.0-snapshot
  delta: '0:00:00.061042'
  end: '2021-05-11 12:24:14.337491'
  msg: non-zero return code
  rc: 125
  start: '2021-05-11 12:24:14.276449'
  stderr: 'Error: pull policy is always but image has been referred to by ID (kind-registry:5000/test-operator/kogito-operator:v2.0.0-snapshot)'
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>

Tags in 'OpenShift Project Configuration' objects cause operator certification to fail

Additional object YAML with tags results in operator certification failure (see the exact message in the example below). Objects created to enable verification testing shouldn't be considered during operator checks (i.e. the requirement that operator images use no tags shouldn't apply to configuration data used to set up the test).

===== Test: operator-image-source =====
Container images used by the operator must be Red Hat certified and available
in a published registry or available from one of the following registries:
registry.connect.redhat.com
registry.redhat.io
registry.access.redhat.com
Images detected in Pods after operator installation:
[FAIL] docker.io/ibmcom/ibm-common-service-catalog:latest (tags not permitted)
....
One or more container images are either from an incorrect registry or, if the image uses a Universal Base Image (UBI), derive from an unpublished repository, or are not certified, or is using a tag when tags are not permitted. See test results for details.
-------------------
Execution Reference:
-> /cvp/cvp-isv-operator-metadata-validation-test/ospid-a4f5710e-f8bb-4e92-8574-03377201a9f9-b51cec77-f269-455a-8b7a-cbf55cfccd58/b51cec77-f269-455a-8b7a-cbf55cfccd58/
